How four rotten packets broke CenturyLink's network for 37 hours, knackering 911 calls, VoIP, broadband
A handful of bad network packets triggered a massive chain reaction that crippled the entire network of US telco CenturyLink for roughly a day and a half.
This is according to the FCC's official probe into the December 2018 super-outage, during which CenturyLink's broadband internet and VoIP services fell over and stayed down for a total of 37 hours. This meant subscribers couldn't, among other things, call 911 over VoIP at the time – which is a violation of FCC rules, and triggered a formal investigation.
"This outage was caused by an equipment failure catastrophically exacerbated by a network configuration error," America's communications regulator said in its summary of its inquiry, published yesterday.
"It affected communications service providers, business customers, and consumers who directly or indirectly relied upon CenturyLink’s transport services, which route communications traffic from various providers to locations across the country, resulting in extensive disruptions to phone service, including 911 calling."
CenturyLink has six long-haul networks that make up the backbone of its digital empire, interconnecting regions of America. These networks use Infinera-built nodes to switch packets over high-speed optic fiber: data flowing into each node is directed to other nodes, ultimately pumping VoIP, regular internet traffic, and more, across the nation as needed.
Each dodgy packet would arrive at a node, get rejected, and be passed along a chain of filters until it was injected into a management channel and handed to all connecting nodes. A flow diagram in the FCC's report shows how the corrupted packets ended up being forwarded on to all neighbouring nodes, and so on and so on, producing a growing chain reaction of corrupted packets.
"Due to the packets’ broadcast destination address, the malformed network management packets were delivered to all connected nodes. Consequently, each subsequent node receiving the packet retransmitted the packet to all its connected nodes, including the node where the malformed packets originated," the FCC said in its report.
"Each connected node continued to retransmit the malformed packets across the proprietary management channel to each node with which it connected because the packets appeared valid and did not have an expiration time. This process repeated indefinitely."
As you might imagine, the exponentially growing storm of packets soon overwhelmed CenturyLink's optic-fibre backbone, causing regular traffic to stop flowing: VoIP phones stopped working, internet access ground to a halt, and so on. Folks in New Orleans were first to spot their connections stalling, at roughly 0356 EST on December 27.
Here is where things went from really, really bad to terrible: the nodes along the fiber network were so flooded, they could not be reached by their administrators to troubleshoot the issue. It wasn't until some 15 hours later that the techies finally tracked down the single errant node in Colorado responsible for sparking the deluge, not that removing it immediately stopped the storm. The packet tsunami was still washing back and forth, knocking nodes over.
"At 2102 on December 27, CenturyLink network engineers identified and removed the module that had generated the malformed packets," the report noted. "The outage, however, did not immediately end; the malformed packets continued to replicate and transit the network, generating more packets as they echoed from node to node."
It would be another three hours before CenturyLink's network admins could begin to get through to the other nodes and get them to kill off the spread of bad packets. It took until 1130 on December 28 to get visibility of the network back, and it wasn't until 2336 that all nodes had been restored. On December 29, just after midday, CenturyLink finally declared the crisis over.
"The event caused a nationwide voice, IP, and transport outage on CenturyLink’s fiber network. CenturyLink estimates that 12,100,108 calls were blocked or degraded due to the incident," the FCC said.
"Where long-distance voice callers experienced call quality issues, some customers received a fast-busy signal, some received an error message, and some just had a terrible connection with garbled words."
The outage also knackered local governments and telcos that relied on the CenturyLink network for portions of their services. State governments in Illinois, Kansas, Minnesota, and Missouri all had portions of their networks down for roughly 36 hours thanks to CenturyLink, and phone services sold by Comcast, Verizon, TeleCommunication Systems, General Dynamics IT, and West Safety Services – including 911 call centers – saw connectivity interrupted for some or all of the outage period.
As to what can be done to prevent similar failures, the FCC is recommending CenturyLink and other backbone providers take some basic steps, such as disabling unused features on network equipment, installing and maintaining alarms that warn admins when memory or processor usage is approaching capacity, and having backup procedures in the event networking gear becomes unreachable.
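To give a flavour of the second of those steps, here is a minimal, hypothetical Python sketch of a utilisation alarm. The thresholds, node names, and readings are invented for illustration; a real deployment would pull the figures over SNMP or a vendor management API rather than from a hard-coded dict.

```python
# Hypothetical utilisation alarm: flag any node whose CPU or memory
# usage is approaching capacity, before it becomes unreachable.
ALARM_THRESHOLDS = {"cpu_percent": 85.0, "memory_percent": 90.0}

def check_node(node_id, metrics):
    """Return a list of alarm strings for any metric over its threshold."""
    alarms = []
    for metric, limit in ALARM_THRESHOLDS.items():
        value = metrics.get(metric)
        if value is not None and value >= limit:
            alarms.append(f"{node_id}: {metric} at {value:.0f}% (limit {limit:.0f}%)")
    return alarms

# Example readings; in practice these would be polled from the devices.
readings = {
    "denver-node-1": {"cpu_percent": 97.0, "memory_percent": 99.0},
    "nola-node-7":   {"cpu_percent": 42.0, "memory_percent": 61.0},
}
for node, metrics in readings.items():
    for alarm in check_node(node, metrics):
        print("ALERT:", alarm)
```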
"Currently, CenturyLink is in the process of updating its nodes’ Ethernet policer to reduce the chance of the transmission of a malformed packet in the future," the report notes. "The improved ethernet policer quickly identifies and terminates invalid packets, preventing propagation into the network. This work is expected to be completed in Fall, 2019."
Source: The Register