Jaish came close to striking Delhi last December, deterred by IED leak

TNN

ISKCON temple in South Delhi was among the spots recced by JeM operatives for terror strikes.
NEW DELHI: In the winter of 2015, when Indian agencies were busy tracking down al-Qaida modules, two Jaish-e-Muhammed terrorists had quietly sneaked into the capital. Intelligence sources said the duo rented a room in Lajpat Nagar, assembled six improvised explosive devices (IEDs) and recced at least four places for a strike, including the Taj Mahal and two city spots: the ISKCON temple and the Select Citywalk mall in Saket.

The IEDs were specially prepared, using ingredients such as shampoo, a highly placed source told TOI. By mid-December, the two were ready to carry out the strike. A control room was set up in Khyber Pakhtunkhwa on the Afghanistan-Pakistan border and instructions were being passed by the handler, who was in touch with the mastermind, codenamed MAR.

Fortunately, things went wrong. During a dry run a day before the attack, an IED "leaked" while being detonated in the bathroom. This filled the building with a thick cloud of smoke and caused the Jaish duo to panic and hurriedly flush the IEDs and all the other material down the toilet.

Still off the Indian agencies' radar, the two men took a return flight to Kabul.

Their act may have remained under wraps forever. But things changed in 2016: since January this year, the noose around Jaish tightened after the attack on the Indian consulate in Mazar-i-Sharif. Four terrorists neutralized in that attack had written about the planned 2015 strike as revenge for Afzal Guru, who was executed for his role in the Parliament attack of December 2001.

Two months later, around Holi, the Kabul police arrested two Jaish operatives, Ahmad Khan Durrani, an Afghan national, and Abdul Qadri, a Pakistani, and recovered explosives and ammunition from them. During interrogation, they spilled the beans on their plot to attack the Indian capital, and their travel details corroborated their claims. The Indian consulate was informed and the intelligence establishment kept in the loop.

A six-member team comprising sleuths of the Research and Analysis Wing (RAW), the Intelligence Bureau, Military Intelligence and the Special Cell of Delhi Police was dispatched to Kabul. The joint interrogation left the Indian sleuths shocked and alarmed.

The matter was taken up at the highest level in the government and a detailed probe launched. Sleuths of the Special Cell sprang into action and thoroughly scanned the Delhi hideout: M3, Kasturba Niketan complex in Lajpat Nagar.

Details from the Foreigners Regional Registration Office (FRRO) corroborated that the two men had indeed come to Delhi on November 24, 2015. Qadri, it turned out, had come to Delhi under the assumed identity of Shoib Abbas.


According to their interrogation report, the duo had been asked to prepare special IEDs that would spread fire rapidly along with the explosion to cause maximum damage. For explosives, they bought ordinary firecrackers from Jama Masjid, and picked up Pantene shampoo (to be used as a chemical), wire, a watch and other items from Lajpat Rai Market on December 5.


The Special Cell tracked their route, including the taxi driver they had hired for their travels and the middleman they had engaged to rent the house. It was found that on November 26 they had even got themselves verified at the local police station in Lajpat Nagar. There was no other material evidence.


All the Indian agencies could take away was a lesson to be better prepared.

13, right now

This is what it's like to grow up in the age of likes, lols and longing

She slides into the car, and even before she buckles her seat belt, her phone is alight in her hands. A 13-year-old girl after a day of eighth grade.
She says hello. Her au pair asks, “Ready to go?”
She doesn’t respond, her thumb on Instagram. A Barbara Walters meme is on the screen. She scrolls, and another meme appears. Then another meme, and she closes the app. She opens BuzzFeed. There’s a story about Florida Gov. Rick Scott, which she scrolls past to get to a story about Janet Jackson, then “28 Things You’ll Understand If You’re Both British and American.” She closes it. She opens Instagram. She opens the NBA app. She shuts the screen off. She turns it back on. She opens Spotify. Opens Fitbit. She has 7,427 steps. Opens Instagram again. Opens Snapchat. She watches a sparkly rainbow flow from her friend’s mouth. She watches a YouTube star make pouty faces at the camera. She watches a tutorial on nail art. She feels the bump of the driveway and looks up. They’re home. Twelve minutes have passed.

Katherine Pommerening in the front seat of her family’s station wagon. Katherine was born in 2002, meaning she is a member of what’s being called Generation Z.
Katherine Pommerening’s iPhone is the place where all of her friends are always hanging out. So it’s the place where she is, too. She’s on it after it rings to wake her up in the mornings. She’s on it at school, when she can sneak it. She’s on it while her 8-year-old sister, Lila, is building crafts out of beads. She sets it down to play basketball, to skateboard, to watch PG-13 comedies and sometimes to eat dinner, but when she picks it back up, she might have 64 unread messages.
Now she’s on it in the living room of her big house in McLean, Va., while she explains what it’s like to be a 13-year-old today.
“Over 100 likes is good, for me. And comments. You just comment to make a joke or tag someone.”
The best thing is the little notification box, which means someone liked, tagged or followed her on Instagram. She has 604 followers. There are only 25 photos on her page because she deletes most of what she posts. The ones that don’t get enough likes, don’t have good enough lighting or don’t show the coolest moments in her life must be deleted.
“I decide the pictures that look good,” she says. “Ones with my friends, ones that are a really nice-looking picture.”
Somewhere, maybe at this very moment, neurologists are trying to figure out what all this screen time is doing to the still-forming brains of people Katherine’s age, members of what’s known as Generation Z. Educators are trying to teach them that not all answers are Googleable. Counselors are prying them out of Internet addictions. Parents are trying to catch up by friending their kids on Facebook. (P.S. Facebook is obsolete.) Sociologists, advertisers, stock market analysts – everyone wants to know what happens when the generation born glued to screens has to look up and interact with the world.
Katherine at her house, playing Xbox (top left), in the kitchen with her au pair Rachel and 8-year-old sister Lila (top right) and outside with her skateboard (bottom). Katherine got her first phone in the 5th grade.
Right now, Katherine is still looking down.
“See this girl,” she says, “she gets so many likes on her pictures because she’s posted over nine pictures saying, ‘Like all my pictures for a tbh, comment when done.’ So everyone will like her pictures, and she’ll just give them a simple tbh.”
A tbh is a compliment. It stands for “to be honest.”
Katherine tosses her long brown hair behind her shoulder and ignores her black lab, Lucy, who is barking to be let out.
“It kind of, almost, promotes you as a good person. If someone says, ‘tbh you’re nice and pretty,’ that kind of, like, validates you in the comments. Then people can look at it and say, ‘Oh, she’s nice and pretty.’”
Tbh, Katherine is both nice and pretty. She has the cheeks of a middle schooler and the vocabulary of a high schooler. She has light brown eyes, which she only paints with makeup for dances, where there are boys from other schools. Her family is wealthier than most and has seen more sorrow. She is 5-foot-1 but will have a growth spurt soon, or so said her dad, Dave, in a very awkward talk he had with her about puberty even after she told him, “Please, don’t.” She is not sure how Converse shoes became cool, but it’s what happened, so she is almost always wearing them. Black leggings, too, except at her private school, where she has to wear uncomfortable dress pants.
School is where she thrives: She is beloved by her teachers, will soon star as young Simba in the eighth-grade performance of “The Lion King” musical, and gets straight A’s. Her school doesn’t offer a math course challenging enough for her, so she takes honors algebra online through Johns Hopkins University.
Now she’s on her own page, checking the comments beneath a photo of her friend Aisha, which she posted for Aisha’s birthday.
“Happy birthday posts are a pretty big deal,” she says. “It really shows who cares enough to put you on their page.”
Katherine is the point guard on her basketball team.
Rachel, Katherine’s au pair, comes in the room and tells her it’s time to get ready for basketball practice. Katherine nods, scrolling a few more times, her thumb like a high-speed pendulum. She watches Vines — six-second video clips — of NCAA basketball games while climbing the stairs to her room, which is painted cobalt blue. Blue is her favorite color. She describes most of her favorite things using “we,” meaning they are approved by both herself and her friends: Jennifer Lawrence, Gigi Hadid, Sprite, quesadillas from Chipotle filled only with cheese.
Her floor is a tangle of clothes, and her bed is a tangle of cords. One for her phone, one for an iPod, one for her school laptop, and one for the laptop that used to belong to her mom, Alicia.
A pink blanket with Alicia’s name on it lies across her comforter. A black and white photo of her mom on her wedding day sits on her desk. In a frame on her nightstand, handprint art they made together one Mother’s Day. Now, Katherine’s handprints are almost as big as her mom’s were.
The breast cancer appeared right after Katherine was born. It went away, then came back when Katherine was in third grade. In fifth grade, Alicia and Dave bought Katherine a cellphone, in case things took a turn. She was one of the first in her class to own one.
She signed up for Snapchat and Instagram, Twitter and VSCO. She stopped inviting friends to the house, because her mom was there, sick.
Last year, on a cloudy Thursday in March, Alicia died. Katherine won’t talk about it, today or any day. Not talking about it means she doesn’t need to think about it, except when the house is quiet and the thinking just seeps in. She doesn’t tell her friends how it feels. When she’s asked about it, she crumples. Her shoulders hunch, her eyes well, but no tears fall on her cheeks. Please, she would say if she were reading this, go back to talking about her phone.
A photo of Katherine and her mom, Alicia, that sits in the Pommerenings’ living room. Alicia died after a long battle with breast cancer in March of 2015, when Katherine was in 7th grade.
Lila can’t find her tap shoes, Rachel is sick, the dogs are waiting for breakfast, and Katherine is heading straight to the garage.
“Don’t you think you should eat something?” her dad asks, rummaging through a cabinet. “A breakfast bar?”
Katherine’s arms are crossed with her pastel pink phone case in her hand.
“I feel like you should eat something before —”
“I’m fine,” she says.
Lila comes down the stairs, wearing shorts and complaining she’s cold.
“It’s 45 degrees out,” her dad tells her. “Do you think it’s a good idea to wear shorts today?”
He turns back to Katherine, but she’s already gone, somewhere in the house, doing something, he’s not sure what, on her phone.
Dave Pommerening wants to figure out how to get her to use it less. One month, she ate up 18 gigabytes of data. Most large plans max out at 10. He intervened and capped her at four GB.
“I don’t want to crimp it too much,” he says. “That’s something, from my perspective, I’m going to have to figure out, how to get my arms around that.”
He says that a lot. He’s a 56-year-old corporate lawyer who doesn’t know how to upload photos to his Facebook page. When he was 13, he lived only two miles away. He didn’t have a cellphone, of course, and home phones were reserved for adults. When he wanted to talk to his friends, he rode his bike to their houses. His parents expected him to play outside all day and be back by dinnertime.
Some of Katherine’s very best friends have never been to her house, or she to theirs. To Dave, it seems like they rarely hang out, but he knows that to her, it seems like they’re together all the time. He tries to watch what she sends them — pictures of their family skiing, pictures of their cat Bo — but he’s not sure what her friends, or whomever she follows, are sending back.
He checks the phone bill to see who she’s called and how much she’s been texting, but she barely calls anyone and chats mostly through Snapchat, where her messages disappear. Another dad recommended that Dave use parental controls to stop Katherine from using her phone at night. He put that in place, but it seemed like as soon as he did, there was some reason he needed to switch it off.
He finds Katherine waiting in the car with two backpacks, one for her books and one for her laptop.
“What jacket are you going to wear?” he asks.
“I’m going to grab a sweater,” she says, as if she already had this plan before he asked. She heads back into the house, phone in hand, protecting it from prying eyes.
Even if her dad tried snooping around her apps, the true dramas of teenage girl life are not written in the comments.
Like how sometimes, Katherine’s friends will borrow her phone just to un-like all the Instagram photos of girls they don’t like. Katherine can’t go back to those girls’ pages and re-like the photos because that would be stalking, which is forbidden.

(Left) Dave and Katherine at a park near their house. (Right) Lila, 8, stands among her crafts. Dave grew up just two miles away from where he is raising his children, but technology has made their childhoods vastly different than his own.
Or how last week, at the middle school dance, her friends got the phone numbers of 10 boys, but then they had to delete five of them because they were seventh-graders. And before she could add the boys on Snapchat, she realized she had to change her username because it was her childhood nickname and that was totally embarrassing.
Then, because she changed her username, her Snapchat score reverted to zero. The app awards about one point for every snap you send and receive. It’s also totally embarrassing and stressful to have a low Snapchat score. So in one day, she sent enough snaps to earn 1,000 points.
Snapchat is where flirting happens. She doesn’t know anyone who has sent a naked picture to a boy, but she knows it happens with older girls, who know they have met the right guy.
Nothing her dad could find on her phone shows that, for as good as Katherine is at math, basketball and singing, she wants to get better at her phone. To be one of the girls who knows what to post, how to caption it, when to like, what to comment.
She gets back in the car with a navy blue sweater. One small parenting win for Dave. He needs to figure out what Snapchat is about. And how to be a Washington lawyer and a single parent. And how to get them to eat breakfast and brush their hair and get to school on time.
He clicks on the car’s satellite radio and changes the channel from “60s on 6” to “Hits 1,” the station he thinks Katherine and Lila like. It’s playing Justin Bieber. He pulls out of the driveway and glances over at the passenger seat. Katherine is looking out the window, headphones on.
Katherine works on her homework. All of her 8th grade classes have an online homepage where she can access notes and homework.
One afternoon, Katherine accidentally leaves her phone in her dad’s car. She shouldn’t need it while she does her homework, but she reaches for it, momentarily forgetting it’s not next to her on the U-shaped couch.
Her feet are kicked up onto a coffee table, and her mom’s old MacBook is on her stomach. She’s working on her capstone project, a 12-page essay and presentation on a topic of her choice. At the beginning of the year, she chose “Photoshop and the media,” an examination of how women are portrayed in magazines.
She types into Google, “How to change Chrome icon.” She finds what she needs in seconds. The icon becomes pink. She flips back to the essay and copies a line into the PowerPoint presentation she will give in front of her classmates.
Photoshop affects women of all ages ranging as young as six and even to women older than 40.
Her mom used to have People magazines around the house, but now there’s only junk mail with her name still on it.
Katherine doesn’t need magazines or billboards to see computer-perfect women. They’re right on her phone, all the time, in between photos of her normal-looking friends. There’s Aisha, there’s Kendall Jenner’s butt. There’s Olivia, there’s YouTube star Jenna Marbles in lingerie.
The whole world is at her fingertips and has been for years. This, Katherine offers as a theory one day, is why she doesn’t feel like she’s 13 years old at all. She’s probably, like, 16.
“I don’t feel like a child anymore,” she says. “I’m not doing anything childish. At the end of sixth grade” — when all her friends got phones and downloaded Snapchat, Instagram and Twitter — “I just stopped doing everything I normally did. Playing games at recess, playing with toys, all of it, done.”
Her scooter sat in the garage, covered in dust. Her stuffed animals were passed down to Lila. The wooden playground in the back yard stood empty. She kept her skateboard with neon yellow wheels, because riding it is still cool to her friends.
Katherine switches from her essay to Instagram, which she opens in a new tab. There’s a photo of a girl who will go to Katherine’s high school climbing out of a pool. A photo of clouds above a parking lot. A poorly lit selfie. She flips back to her essay. There’s a section about how unrealistic portrayals of women lead to teenage eating disorders.
If you aren’t thin, you aren’t attractive
Being thin is more important than being healthy
Thou shall not eat without feeling guilty
She found the words on a blog encouraging anorexia. Its pages were filled with photos of rail-thin girls and tips for how to stop yourself from eating. If she were to go looking for them, Katherine could find sites like this for bulimia, cutting, suicide – all the dangerous behaviors that are more prominent for teens who have been through trauma. She could scroll through them on her phone, looking no different than when she’s reading a BuzzFeed article.
In the past you have heard all of your teachers and parents talk about you. You are “so mature”, “intelligent”, “14 going on 45”, and you possess “so much potential”. Where has that gotten you, may I ask? Absolutely no where!
She copies and pastes some lines from the blog into her presentation. She has never dieted. But for some reason, she says, when she first found this blog, she just couldn’t seem to get it out of her head.
On the morning of her 14th birthday, Katherine wakes up to an alarm ringing on her phone. It’s 6:30 a.m. She rolls over and shuts it off in the dark.
Her grandparents, here to celebrate the end of her first year of teenagehood, are sleeping in the guest room down the hall. She can hear the dogs shuffling across the hardwood downstairs, waiting to be fed.
Propping herself up on her peace-sign-covered pillow, she opens Instagram. Later, Lila will give her a Starbucks gift card. Her dad will bring doughnuts to her class. Her grandparents will take her to the Melting Pot for dinner. But first, her friends will decide whether to post pictures of Katherine for her birthday. Whether they like her enough to put a picture of her on their page. Those pictures, if they come, will get likes and maybe tbhs.
They should be posted in the morning, any minute now. She scrolls past a friend posing in a bikini on the beach. Then a picture posted by Kendall Jenner. A selfie with coffee. A basketball Vine. A selfie with a girl’s tongue out. She scrolls, she waits. For that little notification box to appear.
How the Internet works: Submarine fiber, brains in jars, and coaxial cables

A deep dive into Internet infrastructure, plus a rare visit to a subsea cable landing site.

Ah, there you are. That didn't take too long, surely? Just a click or a tap and, if you’ve some 21st century connectivity, you landed on this page in a trice.
But how does it work? Have you ever thought about how that cat picture actually gets from a server in Oregon to your PC in London? We’re not simply talking about the wonders of TCP/IP or pervasive Wi-Fi hotspots, though those are vitally important as well. No, we’re talking about the big infrastructure: the huge submarine cables, the vast landing sites and data centres with their massively redundant power systems, and the elephantine, labyrinthine last-mile networks that actually hook billions of us to the Internet.
And perhaps even more importantly, as our reliance on omnipresent connectivity continues to blossom, our connected device numbers swell, and our thirst for bandwidth knows no bounds, how do we keep the Internet running? How do Verizon or Virgin reliably get 100 million bytes of data to your house every second, all day every day?
Well, we’re going to tell you over the next 7,000 words.

A map of the world's submarine cables. Not pictured: Lots and lots of terrestrial cables.
TeleGeography

The secret world of cable landing sites

BT might be teasing its customers with the promise of fibre to the home (FTTH) to boost bandwidth, and Virgin Media has a pretty decent service, offering speeds of up to 200Mbps for domestic users on its hybrid fibre-coaxial (HFC) network. But as it says on the tin, the World Wide Web is a global network. Providing an Internet service goes beyond the mere capabilities of a single ISP on this sceptred isle or, indeed, the capabilities of any single ISP anywhere in the world.
First we’re going to take a rare look at one of the most unusual and interesting strands of the Internet and how it arrives onshore in Britain. We’re not talking dark fibre between terrestrial data centres 50 miles apart, but the landing station where Tata’s Atlantic submarine cable terminates at a mysterious location on the west coast of England after its 6,500km journey from New Jersey in the USA.
Connecting to the US is critical for any serious international communications company, and Tata’s Global Network (TGN) is the only wholly owned fibre ring encircling the planet. It amounts to a 700,000km subsea and terrestrial network with more than 400 points of presence worldwide.
Tata is willing to share, though; it’s not just there so the CEO’s kids get the best latency when playing Call of Duty, and the better half can stream Game of Thrones without a hitch. At any one time Tata’s Tier 1 network is handling 24 percent of the world’s Internet traffic, so the chance to get up close and personal with TGN-A (Atlantic), TGN-WER (Western Europe), and their cable consortium friends is not to be missed.
The site itself is a pretty much vanilla data centre from the outside, appearing grey and anonymous—they could be crating cabbages in there for all you’d know. Inside, it’s RFID cards to move around the building and fingerprint readers to access the data centre areas, but first a cuppa and a chat in the boardroom. This isn’t your typical data centre, and some aspects need explaining. In particular, submarine cable systems have extraordinary power requirements, all supported by extensive backup facilities.

Armoured submarine cables

Carl Osborne, Tata’s VP of international network development, joined us to add his insights during the tour. When it comes to Tata’s submarine cable network, he has actually been on board the cable ship to watch it all happen. He brought with him some subsea cable samples to show how the design changes depending on the depth. The nearer to the surface you get, the more protection—armour—you need to withstand potential disturbances from shipping. Trenches are dug and cables buried in shallow waters coming up onto shore. At greater depths, though, in areas such as the West European Basin, where the seabed is almost three miles from the surface, there’s no need for armour, as merchant shipping poses no threat at all to cables on the seabed.
The core of a submarine cable: the fibre-optic pairs protected by steel, the copper sheath for power delivery, and a thick polyethylene insulating layer.
Bob Dormon / Ars Technica
At these depths, cable diameter is just 17mm, akin to a marker pen encased by a thick polyethylene insulating sheath. A copper conductor surrounds multiple strands of steel wire that protect the optical fibres at the core, which are inside a steel tube less than 3mm in diameter and cushioned in thixotropic jelly. Armoured cables have the same arrangement internally but are clad with one or more layers of galvanised steel wire, which is wrapped around the entire cable.
Without the copper conductor, you wouldn’t have a subsea cable. Fibre-optic technology is fast and seemingly capable of unlimited bandwidth, but it can’t cover long distances without a little help. Repeaters—effectively signal amplifiers—are required to boost the light transmission over the length of the fibre optic cable. This is easily achieved on land with local power, but on the ocean bed the amplifiers receive a DC voltage from the cable’s copper conductor. And where does that power come from? The cable landing sites at either end of the cable.
Although the customers wouldn’t know it, TGN-A is actually two cables that take diverse paths to straddle the Atlantic. If one cable goes down, the other is there to ensure continuity. The alternative TGN-A lands at a different site some 70 miles (and three terrestrial amplifiers) away and receives its power from there, too. One of these transatlantic subsea cables has 148 amplifiers, while the other slightly longer route requires 149.
Site managers tend not to seek out the limelight, so we’ll call our cable landing site tour guide John, who explains more about this configuration.
“To power the cable from this end, we’ve a positive voltage and in New Jersey there’s a negative voltage on the cable. We try and maintain the current—the voltage is free to find the resistance of the cable. It’s about 9,000V, and we share the voltage between the two ends. It’s called a dual-end feed, so we’re on about 4,500V each end. In normal conditions we could power the cable from here to New Jersey without any support from the US.”
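Those numbers are easy to sanity-check. In the sketch below, the 9,000V total and the roughly 600mA constant current (the reading quoted later at the PFE) come from the piece; the copper conductor's resistance per kilometre is our own assumption, so treat the output as illustrative rather than gospel:

```python
# Back-of-the-envelope sums for a dual-end feed. The 9,000V total and
# ~600mA constant current are from the article; the conductor
# resistance per km is an assumed, illustrative value.
CABLE_LENGTH_KM = 6_500   # New Jersey to the west coast of England
FEED_CURRENT_A = 0.6      # constant-current feed (~600mA at the PFE)
OHMS_PER_KM = 1.0         # assumed copper conductor resistance

# The feed holds the current constant; the voltage "finds" the
# resistance of the cable, plus the drop across each amplifier.
copper_drop_v = FEED_CURRENT_A * OHMS_PER_KM * CABLE_LENGTH_KM
print(f"Drop across the copper alone: {copper_drop_v:,.0f}V")    # 3,900V

total_feed_v = 9_000      # the rest covers the 148-149 amplifiers
print(f"Share per end, dual-end feed: {total_feed_v / 2:,.0f}V")  # 4,500V
```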
Needless to say, the amplifiers are designed to be maintenance-free for 25 years, as you’re not going to be sending divers down to change a fuse. Yet looking at the cable sample itself, with a mere eight strands of optical fibre inside, you can’t help but think that, for all the effort involved, there should be more.
“The limitations are on the size of the amplifier. For eight fibre pairs you’d need twice the size of amplifier,” says John, and as the amplifier scales up, so does the need for power.
At the landing site, the eight fibres that make up TGN-A exist as four pairs, each pair comprising a distinct send and receive fibre. The individual fibre strands are coloured so that if the cable is broken and a repair needs to be done at sea, the technicians know how to splice it back together again. Similarly, those on land can identify what goes where when plugging into the Submarine Line Terminal Equipment (SLTE).

Fixing cables at sea

After the landing site trip, I spoke to Peter Jamieson, a fibre network support specialist at Virgin Media, for a few more details on submarine cable maintenance. “Once the cable has been found and returned to the cable-repair ship, a new piece of undamaged cable is attached. The ROV [remotely operated vehicle] then returns to the seabed, finds the other end of the cable and makes the second join. It then uses a high-pressure water jet to bury the cable up to 1.5 metres under the seabed,” he says.
“Repairs normally take around 10 days from the moment the cable repair ship is launched, with four to five days spent at the location of the break. Fortunately, such incidents are rare: Virgin Media has only had to deal with two in the past seven years.”

QAM, DWDM, QPSK...

With cables and amplifiers in place, most likely for decades, there’s no more tinkering to be done in the ocean. Bandwidth, latency, and quality-of-service achievements are dealt with at the landing sites.
“Forward error correction is used to understand the signal that’s being sent, and modulation techniques have changed as the amount of traffic going down the signal has increased,” says Osborne. “QPSK [Quadrature Phase Shift Keying] and BPSK [Binary Phase Shift Keying], sometimes called PRK [Phase Reversal Keying] or 2PSK, are the long distance modulation techniques. 16QAM [Quadrature Amplitude Modulation] would be used on a shorter length subsea cable system, and they’re bringing in 8QAM technology to fit in between 16QAM and BPSK.”
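What Osborne is weighing up comes down to bits per symbol: a denser constellation carries more data per symbol but tolerates less noise, which is why the long transatlantic hauls stick to the simpler schemes. A quick illustration of the arithmetic:

```python
import math

# Bits per symbol is log2 of the number of constellation points.
# Denser constellations carry more bits but need a cleaner signal.
for name, points in [("BPSK", 2), ("QPSK", 4), ("8QAM", 8), ("16QAM", 16)]:
    print(f"{name:>6}: {int(math.log2(points))} bits/symbol")
# BPSK: 1, QPSK: 2, 8QAM: 3, 16QAM: 4
```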
DWDM (Dense Wavelength Division Multiplexing) technology is used to combine the various data channels, and by transmitting these signals at different wavelengths—different coloured light within a specific spectrum—down the fibre optic cable, it effectively creates multiple virtual-fibre channels. In doing so the carrying capacity of the fibre is dramatically increased.
Currently, each of the four pairs has a capacity of 10 terabits per second (Tbps), amounting to a total of 40Tbps on the TGN-A cable. At the time of our visit, 8Tbps of that was lit on this Tata network cable. As new customers come on stream they’ll nibble away at the spare capacity, but we're not about to run out: there’s still 80 percent to go, and another encoding or multiplexing enhancement will most likely be able to increase the throughput capabilities in years to come.
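For the record, those capacity figures restated as arithmetic:

```python
# TGN-A capacity, as quoted: four fibre pairs at 10Tbps each.
pairs, per_pair_tbps = 4, 10
total_tbps = pairs * per_pair_tbps      # 40Tbps design capacity
lit_tbps = 8                            # lit at the time of the visit

print(f"Spare capacity: {1 - lit_tbps / total_tbps:.0%}")  # 80%
```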
One of the main issues affecting this application of photonics communications is the optical dispersion of the fibre. It’s something designers factor in to the cable construction, with some sections of fibre having positive dispersion qualities and others negative. And if you need to do a repair, you’ll have to be sure you have the correct dispersion cable type on board. Back on dry land, electronic dispersion compensation is one area that’s being increasingly refined to tolerate more degraded signals.
“Historically, we used to use spools of fibre for dispersion compensation,” says John, “but today it’s all done electronically. It’s much more accurate, enabling higher bandwidths.”
So now rather than initially offering customers 1G (gigabit), 10G, or 40G fibre connectivity, technological enhancements in recent years mean the landing site can prepare “drops” of 100G.

The cable guise

Although hard to miss with its bright yellow trunking, at a glance both the Atlantic and west European submarine cables inside the building could easily be mistaken for some power distribution system. Wall-mounted in the corner, this installation doesn’t need to be fiddled with, although if a new run of optical cable is required, it will be spliced together directly from the subsea fibre inside the box. Coming up from the floor of the landing site, the red and black sticker shouts “TGN Atlantic Fiber," while to the right is the TGN-WER cable, which sports a different arrangement with its fibre pairs separated at the junction box.
To the left of both boxes are power cables inside metal pipes. The thicker two are for TGN-A, the slimmer ones are for TGN-WER. The latter also has two submarine cable paths with one landing at Bilbao in Spain and the other near Lisbon in Portugal. As the distance from these countries to the UK is shorter, there’s significantly less power required, hence rather thinner power cables.
The power lines that feed into TGN-A and TGN-WER.
Bob Dormon / Ars Technica UK
Referring to the setup at the landing station, Osborne says, “Cables coming up from the beach have three core parts: the fibres that carry the traffic, the power portion, and the earth portion. The fibres that carry the traffic are what are extended over that box. The power portion gets split out to another area within the site.”
The yellow fibre trunking snakes overhead to the racks that will perform various tasks, including demultiplexing the incoming signals to separate out different frequency bands. These are potential "drops," where an individual channel can terminate at the landing station to join a terrestrial network.
As John puts it, “100G channels come in and you have 10G clients: 10 by 10s. We also offer a pure 100G.”
“It depends what the client wants,” adds Osborne. “If they want a single 100G circuit that’s coming out of one of those boxes it can be handed over directly to the customer. If the customer wants a lower speed, then yes, it will have to be handed over to further equipment to split it up into lower speeds. There are clients who will buy a 100G direct link but not that many. A lower-tier ISP, for example, wanting to buy transmission capability from us, will opt for a 10G circuit.
“The submarine cable is providing multiple gigabits of transport capability that can be used for private circuits in between two corporate offices. It can be running voice calls. All that transport can be augmented into the Internet backbone service layer. And each of those product platforms has different equipment which is separately monitored.
“The bulk of the transport on the cable is either used for our own Internet or is being sold as transport circuits to other Internet wholesale operators—the likes of BT, Verizon, and other international operators who don’t have their own subsea cables buy transport from us.”
A distribution frame at the Tata landing site/data centre.
Tall distribution frames support a patchwork of optical cables that divvy up 10G connectivity for clients. If you fancy a capacity upgrade then it’s pretty much as simple as ordering the cards and stuffing them into the shelves—the term used to describe the arrangements in the large equipment chassis.
John points out a customer’s existing 560Gbps system (based on 40G technology), which recently received an additional 1.6Tbps upgrade. The extra capacity was achieved by using two 800Gbps racks, both functioning on 100G technology for a total bandwidth of more than 2.1Tbps. As he talks about the task, one gets the impression that the lengthiest part of the process is waiting for the new cards to show up.
All of Tata’s network infrastructure onsite is duplicated, so there are two SLT rooms, SLT1 and SLT2. The Atlantic system internally referred to as S1 is on the left of SLT1, and the Western Europe Portugal cable, referred to as C1, is on the right. On the other side of the building there’s SLT2, with the Atlantic S2 system together with C2 connecting to Spain.
In a separate area nearby is the terrestrial room, which, among other tasks, handles traffic connections to Tata’s data centre in London. One of the transatlantic fibre pairs doesn’t actually drop at the landing site at all. It’s an “express pair” that continues straight to Tata's London premises from New Jersey to minimise latency. Talking of which, John looked up the latency of the two Atlantic cables; the shorter journey clocks up a round trip delay (RTD) of 66.5ms, while the longer route takes 66.9ms. So your data is travelling at around 437,295,816 mph. Fast enough for you?
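John's sums are easy to reproduce. The sketch below takes the 6,500km one-way length quoted earlier (an approximation on our part; the true cable length will differ slightly) and the 66.5ms round trip:

```python
# Effective speed of data on TGN-A from the round-trip delay (RTD).
# One-way length is the ~6,500km quoted earlier in the piece.
one_way_km = 6_500
rtd_s = 0.0665                          # 66.5ms on the shorter route

km_per_s = (2 * one_way_km) / rtd_s     # ~195,000 km/s
mph = km_per_s * 3_600 / 1.609344
print(f"{km_per_s:,.0f} km/s, or roughly {mph:,.0f} mph")
# About two-thirds of the speed of light in a vacuum -- which is what
# you'd expect for light propagating in glass fibre.
```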
On this topic he describes the main issues: “Each time we convert from optical to electrical and then back to optical, this adds latency. With higher-quality optics and more powerful amplifiers, the need to regenerate the signal is minimised these days. Other factors involve the limitations on how much power can be sent down the subsea cables. Across the Atlantic, the signal remains optical over the complete path.”

Testing submarine cables

To one side is a bench of test equipment and, as seeing is believing, one of the technicians plumbs a fibre-optic cable into an EXFO FTB-500. This is equipped with an FTB-5240S spectrum analyser module. The EXFO device itself runs on Windows XP Pro Embedded and features a touchscreen interface. After a fashion it boots up to reveal the installed modules. Select one and, from the list on the main menu, you choose a diagnostic routine to perform.
A big ol' Juniper backbone IP router.
Bob Dormon / Ars Technica UK
“What you’re doing is taking a 10 percent tap of light from the cable system,” the technician explains. “You make a spectrum analyser access point, so you can then tap that back to analyse the signal.”
We’re taking a look at the channels going up to London, and, as this particular feed is in the process of being decommissioned, you can see that there is unused spectrum showing on the display. The spectrum analyser can’t detail what the data rate of a particular frequency band is; instead you have to look up the frequency in a database to find out.
“If you’re looking at a submarine system,” he adds, “there are a lot of sidebands and stuff as well, so you can see how it’s performing. One of the things you get is drift. And you can see if it’s actually drifting into another frequency band, which will decrease its performance.”
An ADVA FSP 3000, connecting the landing site to other terrestrial customers and data centres.
Bob Dormon / Ars Technica UK
Never far from the heavy lifting in data communications, a Juniper MX960 universal edge router acts as the IP backbone here. In fact, there are two onsite, John confirms: “We have the transatlantic stuff coming in and then we can drop STM-1 [Synchronous Transport Module, level 1], GigE, or 10GigE clients—so this will do some sort of multiplexing and drop the IP network to various customers.”
The equipment used on the terrestrial DWDM platforms takes up far less space than the subsea cable system. Apparently, the ADVA FSP 3000 equipment is pretty much exactly the same thing as the Ciena 6500 kit, but because it’s terrestrial the quality of the electronics doesn’t have to be as robust. In effect, the shelves of ADVA gear used are simply cheaper versions, as the distances involved are much shorter. With the subsea cable systems, the longer you go, the more noise is introduced, and so there’s a greater dependence on the Ciena photonics systems deployed at the landing site to compensate for that noise.
One of the racks houses three separate DWDM systems. Two of them connect to London on separate cables (each via three amplifiers), and the other goes to a data centre in Buckinghamshire.
The landing site also plays host to the West Africa Cable System (WACS). Built by a consortium of around a dozen telcos, it extends from here all the way to Cape Town. Subsea branching units enable the cable to split off to land at various territories along Africa’s South Atlantic coastline.

The power of nightmares

You can’t visit a landing site or a data centre without noticing the need for power, not only for the racks but for the chillers: the cooling systems that ensure that servers and switches don’t overheat. And as the submarine cable landing site has unusual power requirements for its undersea repeaters, it has rather unusual backup systems, too.
Enter one of the two battery rooms and instead of racks of Yuasa UPS support batteries—with a form factor not too far removed from what you’ll find in your car—the sight is more like a medical experiment. Huge lead-acid batteries in transparent tanks, looking like alien brains in jars, line the room. Maintenance-free with a life of 50 years, this array of 2V batteries amounts to 1600Ah, delivering a guaranteed four hours of autonomy.
You can see the PFEs on the left, the blue cabinets.
Bob Dormon / Ars Technica UK
Battery chargers, which are basically the rectifiers, supply the float voltage so the batteries are maintained. They also supply the DC voltage to the building for the racks. Inside the room are two PFEs (Power Feed Equipment), housed together within sizeable blue cabinets. One is powering the Atlantic S1 cable and the other is for Portugal C1. A digital display gives a reading of 4,100V at around 600mA for the Atlantic PFE, and another shows just over 1,500V at around 650mA for the C1 PFE.
John describes the configuration: “The PFE has two separate converters. Each converter has three power stages. Each one can supply 3,000V DC. So this one cabinet can actually supply the whole cable, so we have an n+1 redundancy, because there’re two onsite. However, it’s more like n+3, because if both convertors failed in New Jersey and a convertor here failed also, we could still feed the cable.”
One of the two 2MVA diesel generators.
Bob Dormon / Ars Technica UK
Revealing some rather convoluted switching arrangements, John explains the control system: “This is basically how we turn it on and off. If there is a cable fault, we have to work with the ship managing the repair. There are a whole load of procedures we have to go through to ensure it’s safe before the ship’s crew can work on it. Obviously, voltage that high is lethal, so we have to send power safety messages. We’ll send a notification that the cable is grounded and they’ll respond. It’s all interlocked so you can make sure it’s safe.”
The site also has two 2MVA (megavolt-ampere) diesel generators. Of course, as everything’s duplicated, the second one is a backup. There are three huge chillers, too, but apparently they only need one. Once a month the generator backup is tested off load, and twice a year the whole building is run on load. As the site also doubles up as a data centre, it’s a requirement for SLAs and ISO accreditation.
In a normal month, the electricity bill for the site comfortably reaches five figures.

Next stop: Data centre

At the Buckinghamshire data centre there are similar redundancy requirements, albeit on a different scale, with two giant collocation and managed hosting halls (S110 and S120), each occupying 10,000 square feet. Dark fibre connects S110 to London, while S120 connects to the west coast landing site. There are two network setups here—autonomous systems 6453 and 4755: MPLS (Multi-Protocol Label Switching) and IP (Internet Protocol) network ports.
As its name implies, MPLS uses labels and assigns them to data packets. The contents of the packets don’t need to be inspected. Instead, the packet forwarding decisions are performed based on what’s contained in the labels. If you’re keen to understand the detail of MPLS, MPLSTutorial.com is a good place to start.
Likewise, Charles M. Kozierok’s TCP/IP Guide is an excellent online resource for anyone wanting to learn about TCP/IP, its various layers, and its OSI (Open System Interconnection) model counterpart, plus a whole lot more.
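To make the label-switching idea from the MPLS description above concrete, here's a toy sketch of a single forwarding step; the labels and interface names are invented purely for illustration:

```python
# A toy MPLS forwarding step: the router looks only at the label, swaps
# it according to its table, and forwards. The payload is never inspected.
FORWARDING_TABLE = {
    # in_label: (out_label, out_interface) -- invented example values
    100: (200, "eth1"),
    101: (999, "eth2"),
}

def forward(packet: dict) -> dict:
    out_label, interface = FORWARDING_TABLE[packet["label"]]
    return {**packet, "label": out_label, "via": interface}

print(forward({"label": 100, "payload": b"cat-picture-bytes"}))
# {'label': 200, 'payload': b'cat-picture-bytes', 'via': 'eth1'}
```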
In some respects, the MPLS network is the jewel in the Tata Communications crown. This form of switching technology, because packets can be assigned a priority label, allows companies using this scalable transport system to offer guarantees in terms of customer service. Labelling also enables data to be directed to follow a specific route, rather than a dynamically assigned path, which can allow for quality-of-service requirements or even avoiding traffic tariffs from certain territories.
Again, as its name implies, being multi-protocol, an MPLS network can support different methods of communication. So if an enterprise customer wants a VPN (Virtual Private Network), private Internet, cloud applications, or a specific type of encryption, these services are fairly straightforward to deliver.
For this visit, we’ll call our Buckinghamshire guide Paul and his on-site NOC colleague George.
“With MPLS we can provide any BIA [burned in address] or Internet—any services you like depending on what the customers want,” says Paul. “MPLS feeds our managed hosting network, which is the biggest footprint in the UK for managed hosting. So we’ve got 400 locations with multiple devices which connect into one big network, which is one autonomous system. It provides IP, Internet, and point-to-point services to our customers. Because it has a mesh topology [400 interconnected devices]—any one connection will take a different route to the MPLS cloud. We also provide network services—on-net and off-net services. Service providers like Virgin Media and NetApp terminate their services into the building.”
The ADVA equipment, where customer connections are linked into Tata's network.
In the spacious Data Hall 110, Tata’s managed hosting and cloud services are on one side, with collocation customers on the other. Data Hall 120 is much the same. Some clients keep their racks in cages and restrict access to just their own personnel. By being here, they get space, power, and environment. All the racks have two supplies from A UPS and B UPS, by default. They each come via a different grid, taking alternative routes through the building.
“So our fibre, which comes from the SLTE and London, terminates in here,” says Paul. Pointing out a rack of Ciena 6500 kit, he adds, “You might have seen equipment like this at the landing site. This is what takes the main dark fibre coming into the building and then it distributes it to the DWDM equipment. The dark fibre signals are divided into the different spectrums, and then it goes to the ADVA from where it’s distributed to the actual customers. We don’t allow customers to directly connect into our network, so all the network devices are terminated here. And from here we extend our connectivity to our customers.”

A change in the data tide

A lot of the equipment in the data centre is Dell or HP.
Bob Dormon / Ars Technica UK
A typical day for Paul and his colleagues is more about the rack-and-stack process of bringing new customers on board, plus remote-hands tasks such as swapping out hard drives and SSDs; it doesn’t involve particularly in-depth troubleshooting. For instance, if a customer loses connectivity to any of their devices, his team is there for support and will check that the physical layer is functioning in terms of connectivity and, if required, will change network adapters and suchlike to make sure a device or platform is reachable.
He has noticed a few changes in recent years, though. Rack-and-stack servers that were 1U or 2U in size are being replaced by 8U or 9U chassis that can support a variety of different cards including blade servers. Consequently, the task of installing individual network servers is becoming a much less common request. In the last four or five years, there have been other changes, too.
“At Tata, a lot of what it provides is HP and Dell—products we’re currently using for managed hosting and cloud setups. Earlier it used to be Sun as well but now we see very little of Sun. For storage and backup, we used to use NetApp as a standard product but now I see that EMC is also being used, and lately we’ve seen a lot of Hitachi storage. Also, a lot of customers are going for a dedicated storage backup solution rather than managed or shared storage.”

The NOC's NOC

The layout in the NOC (network operations centre) area of the site is much the same as you’d find in any office, although the big TV screen and camera linking the UK office to the NOC staff in Chennai in India is a bit of a surprise. It’s a network test of sorts, though: if that screen goes down, they both know there’s a problem. Here, it’s effectively level one support. The network is being monitored in New York, and the managed hosting is monitored in Chennai. So if anything serious does happen, these remote locations would know about it first.
George describes the setup: “Being an operations centre we have people calling in regarding problems. We support the top 50 customers—all top financial clients—and it’s a really high priority every time they have a problem. The network that we have is a shared infrastructure, so if there’s a major problem then a lot of customers may be impacted. We need to be able to update them in a timely fashion, if there’s an ongoing problem. We have commitment to some customers to update every hour, and for some it’s 30 minutes. In the critical incident scenario, we constantly update them during the lifetime of the incident. This support is 24/7.”

The ISP's ISP's SLA

Being an international cable system, the more typical problems are the same for communications providers everywhere: namely damage to terrestrial cables, most commonly at construction sites in less well-regulated territories. That and, of course, wayward anchors on the seabed. And then there are the DDoS (distributed denial-of-service) attacks, where systems are targeted and all available bandwidth is swamped by traffic. The team is, of course, well equipped to manage such threats.
Might not look like much, but that's the Formula One rack.
Bob Dormon / Ars Technica UK
“The tools are set up in a way to monitor the usual traffic patterns of what is expected during that period during a day. It can examine 4pm last Thursday and then the same time today. If the monitoring detects anything unusual, it can proactively deal with an intrusion and reroute the traffic via a different firewall, which can filter out any intrusion. That’s proactive DDoS mitigation. The other is reactive, where the customer can tell us: ‘OK, I have a threat on this day. I want you to be on guard.’ Even then, we can proactively do some filtering. There’s also legitimate activity that we will receive notification of, for example Glastonbury, so when the tickets go on sale, that high level of activity isn’t blocked.”
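Stripped of the tooling, the baseline comparison being described ("4pm last Thursday versus 4pm today") amounts to something like the sketch below; the threshold and traffic figures are ours, purely for illustration:

```python
# Minimal sketch of baseline-based DDoS detection: compare current
# traffic against the usual level for that hour. Threshold is invented.
def is_anomalous(current_gbps: float, baseline_gbps: float,
                 tolerance: float = 3.0) -> bool:
    """Flag traffic more than `tolerance` times the historical norm."""
    return current_gbps > baseline_gbps * tolerance

baseline, now = 12.0, 55.0    # Gbps: 4pm last Thursday vs 4pm today
if is_anomalous(now, baseline):
    print("Possible DDoS: reroute via the filtering firewall")
```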
Latency commitments have to be monitored proactively, too, for customers like Citrix, whose portfolio of virtualisation services and cloud applications will be sensitive to excessive networking delays. Another client that appreciates the need for speed is Formula One. Tata Communications handles the event networking infrastructure for all the teams and the various broadcasters.
“We are responsible for the whole F1 ecosystem, including the race engineers who are on site and are also part of the team. We build a POP [point of presence] on every race site—installing it, extending all the cables and provisioning all the customers. We install different Wi-Fi Internet breakouts for the paddocks and everywhere else. The engineer on site does all the jobs, and he can show all the connectivity is working for the race day. We monitor it from here using PRTG software so we can check the status of the KPIs [key performance indicators]. We support it from here, 24/7.”
Such an active client, which has regular fixtures throughout the year, means that the facilities management team must negotiate dates to test the backup systems. If it’s an F1 race week, then from Tuesday to the following Monday, these guys have to keep their hands in their pockets and not start testing circuits at the data centre. Even during the tour, when Paul pointed out the F1 equipment rack, he played safe and chose not to open up the cabinet to allow a closer look.
Oh, and if you’re curious about the backup facilities here, there are 360 batteries per UPS and there are eight UPSes. That’s 2,880 batteries in all and, at 32kg each, around 92 tonnes in the building. The batteries have a 10-year lifespan, and they’re individually monitored for temperature, humidity, resistance, and current around the clock. At full load they’ll keep the data centre ticking over for around eight minutes, allowing plenty of time for the generators to kick in. On the day, the load was such that the batteries could keep everything running for a couple of hours.
There are six generators—three per data centre hall. Each generator is rated to take the full load of the data centre, which is 1.6MVA. They produce 1,280kW each. The total coming into the site is 6MVA, which is probably enough power to run half the town. There is also a seventh generator that handles landlord services. The site stores about 8,000 litres of fuel, enough to last well over 24 hours at full load. At full fuel burn, 220 litres of diesel an hour is consumed, which, if it were a car travelling at 60mph, would notch up a meagre 1.24mpg—figures that make a Humvee seem like a Prius.
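All of those figures come from the paragraphs above, and the sums check out:

```python
# Backup-power arithmetic, using the figures quoted above.
batteries = 360 * 8                  # batteries per UPS x eight UPSes
tonnes = batteries * 32 / 1000       # 32kg each
runtime_h = 8_000 / 220              # litres stored / litres per hour

print(f"{batteries} batteries, ~{tonnes:.0f} tonnes")    # 2,880, ~92t
print(f"~{runtime_h:.0f} hours of diesel at full burn")  # ~36 hours
# The Humvee gag: 60 miles on 220 litres (about 48.4 imperial gallons)
print(f"{60 / (220 / 4.54609):.2f} mpg")                 # 1.24 mpg
```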

The last mile

The final step—the last few miles from the headend or NOC to your home—appears rather less overwhelming, as we take a glimpse at the thin end of the communications infrastructure wedge.
There have been changes, though, with new streetside cabinets appearing alongside the older green incumbents, as Virgin Media and Openreach bring DOCSIS and VDSL2 respectively to an increasing number of homes and businesses.

VDSL2

Inside Openreach's new VDSL2 cabinets is a DSLAM (digital subscriber line access multiplexer, in BT parlance). With older ADSL and ADSL2, DSLAM kit tends to be found farther away at the exchange; placing it in the street instead terminates the fibre-optic link to the exchange far closer to the subscriber, enabling a broadband speed increase for the end user.
Using tie pair cables, the mains-powered DSLAM cabinet is linked to the existing street cabinet, and this combination is described as a primary cross-connection point (PCP). The copper cabling to the end user’s premises remains unchanged, while VDSL2 is used to deliver the broadband connectivity to the premises from the conventional street cabinet.
Inside an Openreach VDSL2 cabinet.
Bob Dormon / Ars Technica UK
This isn’t an upgrade that can be done without a visit from an engineer, however, as the NTE5 (Network Terminating Equipment) socket inside the home will need to be upgraded, too. Still, it’s a step forward that has allowed the company to offer an entry-level download speed of 38Mbps and a top speed of 78Mbps to millions of homes without having to go through all the effort of delivering on FTTH.

DOCSIS

It’s a far cry from Virgin Media’s HFC network, which currently has homes connected at 200Mbps and businesses at 300Mbps. And while the methods used to get these speeds rely on DOCSIS 3 (Data Over Cable Service Interface Specification) rather than VDSL2, there are parallels. Virgin Media uses fibre-optic cables to deliver its services to streetside cabinets, which distribute broadband and TV over a single copper coaxial cable (a twisted pair is still used for voice).
It's also worth mentioning that DOCSIS 3.0 is the leading last-mile network tech over in the US, with about 55 million out of 90 million fixed-line broadband connections using coaxial cable. ADSL is in second place with about 20 million and then FTTP with about 10 million. Hard numbers for VDSL2 deployment in the US are hard to come by, but it appears to be used sporadically in some urban areas.
There's still plenty of headroom with DOCSIS 3 that will allow cable ISPs to offer downstream connection speeds of 400, 500, or 600Mbps as needed—and then after that there'll be DOCSIS 3.1 waiting in the wings.
The DOCSIS 3.1 spec suggests more than 10Gbps is possible downstream and eventually 1Gbps upstream. These capacities are made possible by the use of quadrature amplitude modulation techniques—the same as used on short-distance submarine cables. However, the terrestrial rates here are considerably higher, at 4,096QAM, and are combined with orthogonal frequency-division multiplexing (OFDM) subcarriers that, like DWDM, spread transmission channels over different frequencies within a limited spectrum. OFDM is also used for ADSL/VDSL variants and G.fast.
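The jump in modulation density is easy to quantify: each constellation point encodes log2(M) bits, so going from the 256QAM that DOCSIS 3.0 tops out at downstream to 3.1's 4,096QAM buys 50 percent more bits per symbol on the same channel. A quick check:

```python
import math

# Bits per symbol is log2 of the constellation size.
for m in (256, 4096):
    print(f"{m}QAM: {int(math.log2(m))} bits/symbol")  # 8 -> 12
```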

The last 100 metres

While FTTC and DOCSIS look set to dominate the wired UK consumer Internet access market for at least the next few years, we’d be remiss if we completely ignored the other side of the last-mile (or last-100m) equation: mobile devices and wireless connectivity.
Ars will have another in-depth feature on the complexities of managing and rolling out cellular networks soon, so for now we’ll just look at Wi-Fi, which is mostly an extension of existing FTTC and DOCSIS Internet access. Case in point: the recent emergence of almost blanket Wi-Fi hotspot coverage in urban areas.
First it was a few plucky cafes and pubs, and then BT turned its customers’ routers into open Wi-Fi hotspots with its "BT with Fon" service. Now we’re moving into major infrastructure plays, such as Wi-Fi across the London Underground and Virgin’s curious “smart pavement” in Chesham, Buckinghamshire.
For this project, Virgin Media basically put a bunch of Wi-Fi access points beneath manhole covers made of a special radio-transparent resin. Virgin maintains a large network of ducts and cabinets across the UK that are connected to the Internet—so why not add a few Wi-Fi access points to share that connectivity with the public?
One of the Virgin Media "smart pavement" manhole covers in Chesham.
Talking to Simon Clement, a senior technologist at Virgin Media, it sounds like they were expecting the smart pavement installation to be harder than it actually was.
“The expected issues that had been encountered in the past with local authorities had not occurred,” Clement says. “Chesham Town Council has been very proactive in working with us on this pilot, and there is a general feeling that local authorities across the board have begun to embrace communications services for their residents and understand the work that needs to go into providing them.”
Most of the difficulties seem to be self-imposed, or regulatory.
“The biggest issue tends to be challenging conventional thinking. For example, traditional wireless projects involve mounting a radio as high as permission allows and radiating with as much power as regulations permit. What we tried to do was put a radio under the ground and work within the allowed power levels of traditional in home Wi-Fi," he says.
“We have to assess all risks as we move through the project. As with all innovation projects, a formal risk assessment is only as valid as long as the scope remains static. This is very rarely the case, and we have to perform dynamic risk assessments on a very regular basis. There are key cornerstones we try to adhere to, especially in wireless projects. We always stay within regulation EIRP [equivalent isotropically radiated power] limits and always maintain safe working practices with radios. We would rather be conservative on radio emissions.”

Back to the future of wired Internet

The white-grey box is an under-pavement DSLAM from a UK G.fast trial.
The next thing on the horizon for Openreach’s POTS network is G.fast, which is best described as an FTTdp (fibre to distribution point) configuration. Again, this is a fibre-to-copper arrangement, but the DSLAM will be placed even closer to the premises, up telegraph poles and under pavements, with a conventional copper twisted pair for the last few tens of metres.
The idea is to get the fibre as close to the customer as possible, while at the same time minimising the length of copper, theoretically enabling connection speeds of anywhere from 500Mbps to 800Mbps. G.fast operates over a much broader frequency spectrum than VDSL2, so longer cable lengths have more impact on its efficiency. However, there has been some doubt whether BT Openreach will be optimising speeds in this way as, for reasons of cost, it could well retreat to the green cabinet to deliver these services and take a hit on speed, which would slide down to 300Mbps.
Then there’s FTTH. Openreach had originally put FTTH on hold as it worked out the best (read: cheapest) way to deliver it but recently said that it had “ambition” to begin extensively rolling out FTTH. FTTC or FTTdp is more likely to be the short- and mid-term reality for most consumers whose ISP is an Openreach wholesale customer.
Virgin Media, on the other hand, doesn’t seem to be resting on its coaxial laurels: as its telecom behemoth rival ponders its obligations, Virgin has been steadily delivering FTTH, with 250,000 customers covered already and a target of 500,000 this year. Project Lightning, which will connect another four million homes and offices to Virgin’s network over the next few years, will include one million new FTTH connections.
Virgin’s current deployment of FTTH uses RFOG (radio frequency over glass) so that standard coaxial routers and TiVo can be used, but having an extensive FTTH footprint in the UK would give the company a few more options in the future as customer bandwidth demands increase.
One last photo of some submarine cable segments...
Bob Dormon / Ars Technica UK
The last few years have also been exciting for smaller, independent players such as Hyperoptic and Gigaclear, which are rolling out their own fibre infrastructure. Their footprints are still hyper-focused on a few thousand inner-city apartment blocks (Hyperoptic) and rural villages (Gigaclear), but increased competition and investment in infrastructure is never a bad thing.

Quite a trip

So, there we have it: the next time you click on a YouTube video, you’ll know exactly how it gets from a server in the cloud to your computer. It might seem absolutely effortless—and it usually is on your part—but now you know the truth: there are deadly 4,000V DC submarine cables, 96 tonnes of batteries, thousands of litres of diesel fuel, millions of miles of last-mile cabling, and redundancy up the wazoo.
The whole setup is only going to get bigger and crazier, too. Smart homes, wearable devices, and on-demand TV and movies are all going to necessitate more bandwidth, more reliability, and more brains in jars. What a time to be alive.
Bob Dormon’s technological odyssey began as a teenager working at GCHQ, yet his passion for music making took him to London to study sound recording. During his studio days he regularly contributed to music technology and Mac magazines for over 12 years. Fascinated by our relationship with technology he eventually turned to journalism full-time, and for over six years was part of The Register’s senior editorial team. Bob lives in London with far too many gadgets, guitars, and vintage MIDI synths.
This post originated on Ars Technica UK