How four rotten packets broke CenturyLink's network for 37 hours, knackering 911 calls, VoIP, broadband
.jpg)
A handful of bad network packets triggered a massive chain reaction that crippled the entire network of US telco CenturyLink for roughly a day and a half.
This is according to the FCC's official probe into the December 2018 super-outage, during which CenturyLink's broadband internet and VoIP services fell over and stayed down for a total of 37 hours. This meant subscribers couldn't, among other things, call 911 over VoIP at the time – which is a violation of FCC rules, and triggered a formal investigation.
"This outage was caused by an equipment failure catastrophically exacerbated by a network configuration error," America's communications regulator said in its summary of its inquiry, published yesterday.
"It affected communications service providers, business customers, and consumers who directly or indirectly relied upon CenturyLink’s transport services, which route communications traffic from various providers to locations across the country, resulting in extensive disruptions to phone service, including 911 calling."
CenturyLink has six long-haul networks that make up the backbone of its digital empire, interconnecting regions of America. These networks use Infinera-built nodes to switch packets over high-speed optic fiber: data flowing into each node is directed to other nodes, ultimately pumping VoIP, regular internet traffic, and more, across the nation as needed.
Each dodgy packet would arrive at a node, slip past a chain of filters because it appeared valid, get injected into a proprietary management channel, and be handed to all connecting nodes. A flow diagram in the FCC's report shows how the corrupted packets ended up being forwarded on to all neighbouring nodes, and so on and so on, producing a growing chain reaction of corrupted packets.
"Due to the packets’ broadcast destination address, the malformed network management packets were delivered to all connected nodes. Consequently, each subsequent node receiving the packet retransmitted the packet to all its connected nodes, including the node where the malformed packets originated," the FCC said in its report.
"Each connected node continued to retransmit the malformed packets across the proprietary management channel to each node with which it connected because the packets appeared valid and did not have an expiration time. This process repeated indefinitely."
As you might imagine, the exponentially growing storm of packets soon overwhelmed CenturyLink's optic-fibre backbone, causing regular traffic to stop flowing: VoIP phones stopped working, internet access ground to a halt, and so on. Folks in New Orleans were first to spot their connections stalling, at roughly 0356 EST on December 27.
Here is where things went from really, really bad to terrible: the nodes along the fiber network were so flooded, they could not be reached by their administrators to troubleshoot the issue. It wasn't until some 15 hours later the techies could finally track down the single errant node in Colorado responsible for sparking the deluge, not that replacing it helped. The packet tsunami was still washing back and forth, knocking nodes over.
"At 2102 on December 27, CenturyLink network engineers identified and removed the module that had generated the malformed packets," the report noted. "The outage, however, did not immediately end; the malformed packets continued to replicate and transit the network, generating more packets as they echoed from node to node."
It would be another three hours before CenturyLink's network admins could begin to get through to the other nodes and get them to kill off the spread of bad packets. It took until 1130 on December 28 to get visibility of the network back, and it wasn't until 2336 that all nodes had been restored. On December 29, just after midday, CenturyLink finally declared the crisis over.
"The event caused a nationwide voice, IP, and transport outage on CenturyLink’s fiber network. CenturyLink estimates that 12,100,108 calls were blocked or degraded due to the incident," the FCC said.
"Where long-distance voice callers experienced call quality issues, some customers received a fast-busy signal, some received an error message, and some just had a terrible connection with garbled words."
The outage also knackered local governments and telcos that relied on the CenturyLink network for portions of their services. State governments in Illinois, Kansas, Minnesota, and Missouri all had portions of their networks down for roughly 36 hours thanks to CenturyLink, and phone services sold by Comcast, Verizon, TeleCommunication Systems, General Dynamics IT, and West Safety Services – including 911 call centers – saw connectivity interrupted for some or all of the outage period.
As to what can be done to prevent similar failures, the FCC is recommending CenturyLink and other backbone providers take some basic steps, such as disabling unused features on network equipment, installing and maintaining alarms that warn admins when memory or processor use is nearing capacity, and having backup procedures in the event networking gear becomes unreachable.
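By way of illustration of that second recommendation, here is a minimal utilisation-alarm sketch in Python. The 90 per cent thresholds, the 30-second polling interval, and the use of the psutil library are assumptions made for the example; the FCC prescribes no particular tooling.

```python
# A minimal sketch of a memory/CPU utilisation alarm, assuming the psutil
# library is available. Thresholds and polling interval are arbitrary
# example values, not figures from the FCC report.
import time
import psutil

CPU_THRESHOLD = 90.0   # per cent
MEM_THRESHOLD = 90.0   # per cent
POLL_SECONDS = 30

def check_once():
    cpu = psutil.cpu_percent(interval=1)      # sample CPU over one second
    mem = psutil.virtual_memory().percent     # current memory utilisation
    if cpu >= CPU_THRESHOLD or mem >= MEM_THRESHOLD:
        # In a real deployment this would page an operator or raise an SNMP
        # trap; printing stands in for that here.
        print(f"ALARM: cpu={cpu:.1f}% mem={mem:.1f}%")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(POLL_SECONDS)
```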
"Currently, CenturyLink is in the process of updating its nodes’ Ethernet policer to reduce the chance of the transmission of a malformed packet in the future," the report notes. "The improved ethernet policer quickly identifies and terminates invalid packets, preventing propagation into the network. This work is expected to be completed in Fall, 2019."
Source: The Register