
Redazione RHC: 7 December 2025 09:25
Cloudflare suffered a significant outage on the morning of December 5, 2025: at 8:47 a.m. UTC, a portion of its infrastructure began returning internal errors. The incident lasted approximately 25 minutes and was resolved at 9:12 a.m. with full service restoration.
According to the company, approximately 28% of the HTTP traffic it handles globally was affected. Engineers explained that the impact was limited to customers using a specific combination of configurations.
Cloudflare clarified that the outage was not linked to any malicious activity: no cyberattack, intrusion attempt, or other malicious behavior contributed to the event. Instead, the issue was caused by an update introduced to mitigate a recently disclosed vulnerability in React Server Components, tracked as CVE-2025-55182.
The outage was caused by a change to the HTTP request body parsing system, part of measures taken to protect users of React-based applications. The change increased the Web Application Firewall (WAF)'s internal memory buffer from 128 KB to 1 MB, aligning it with the default limit in the Next.js framework.
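To make the change concrete, here is a minimal Rust sketch, not Cloudflare's actual code, of a proxy buffering at most a fixed number of request-body bytes for WAF inspection. The two limit constants mirror the 128 KB and 1 MB figures described above; all names are hypothetical.

```rust
use std::io::Read;

const OLD_BODY_BUFFER_LIMIT: usize = 128 * 1024;  // 128 KB, the previous cap
const NEW_BODY_BUFFER_LIMIT: usize = 1024 * 1024; // 1 MB, the Next.js default body-size limit

/// Buffer at most `limit` bytes of the request body for rule evaluation;
/// anything beyond the limit would pass through uninspected in this model.
fn buffer_body_for_waf<R: Read>(body: R, limit: usize) -> std::io::Result<Vec<u8>> {
    let mut buffered = Vec::new();
    body.take(limit as u64).read_to_end(&mut buffered)?;
    Ok(buffered)
}

fn main() -> std::io::Result<()> {
    // Simulate a 300 KB request body.
    let payload = vec![b'a'; 300 * 1024];

    let old = buffer_body_for_waf(payload.as_slice(), OLD_BODY_BUFFER_LIMIT)?;
    let new = buffer_body_for_waf(payload.as_slice(), NEW_BODY_BUFFER_LIMIT)?;

    println!(
        "old 128 KB limit buffers {} bytes; new 1 MB limit buffers {} bytes",
        old.len(),
        new.len()
    );
    Ok(())
}
```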
This first change was released in a phased rollout. During implementation, engineers discovered that an internal WAF testing tool was incompatible with the new limit. Deeming that component unnecessary for real-world traffic, Cloudflare implemented a second change to disable it.
It was this second change, deployed through the global configuration system, which does not support gradual rollouts, that triggered the chain of events leading to the HTTP 500 errors. The change reached every server on the network within seconds.
At that point, a particular version of the FL1 proxy began executing a piece of Lua code containing a latent bug. Request processing failed on the affected servers, which returned 500 errors.
Cloudflare engineers report that customers using the FL1 proxy in conjunction with the Cloudflare Managed Ruleset were affected. Requests to sites configured this way began responding with 500 errors, with very few exceptions, such as certain test endpoints like /cdn-cgi/trace.
Customers using different configurations or those served by the Cloudflare network operating in China were not affected.
The problem was traced to the rules system used by the WAF. Some rules use the "execute" action to trigger the evaluation of additional rule sets. The killswitch system, used to quickly deactivate problematic rules, had never before been applied to a rule with an "execute" action.
When the change disabled the test set, the system correctly skipped the rule's execution, but the next step, which processes the results, did not handle the missing "execute" object. This caused the Lua error that generated the HTTP 500s.
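The failure pattern can be illustrated with a small, hypothetical Rust model; the real code is Lua running inside FL1, and this sketch only mimics the shape of the failure. A killswitched rule is skipped during evaluation, but the result-processing step still assumes the rule left an "execute" result behind.

```rust
use std::collections::HashMap;

// A rule may carry an "execute" action that pulls in another ruleset.
struct Rule {
    id: &'static str,
    killswitched: bool,
    execute_ruleset: Option<&'static str>,
}

// Evaluation phase: killswitched rules are skipped and leave no execute result.
fn evaluate(rules: &[Rule]) -> HashMap<&'static str, &'static str> {
    let mut execute_results = HashMap::new();
    for rule in rules {
        if rule.killswitched {
            continue;
        }
        if let Some(ruleset) = rule.execute_ruleset {
            execute_results.insert(rule.id, ruleset);
        }
    }
    execute_results
}

// Result-processing phase: assumes every "execute" rule produced a result.
fn process_results(rules: &[Rule], results: &HashMap<&'static str, &'static str>) {
    for rule in rules.iter().filter(|r| r.execute_ruleset.is_some()) {
        // BUG: when the rule was killswitched, this lookup fails at runtime,
        // roughly what happened when the Lua code touched a missing "execute" object.
        let ruleset = results.get(rule.id).expect("missing execute result");
        println!("rule {} executed ruleset {}", rule.id, ruleset);
    }
}

fn main() {
    let rules = [Rule {
        id: "waf-internal-test",
        killswitched: true, // the second change disabled this rule globally
        execute_ruleset: Some("internal-test-set"),
    }];
    let results = evaluate(&rules);
    process_results(&rules, &results); // panics here, analogous to the HTTP 500 path
}
```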
Cloudflare clarified that this bug does not exist in the FL2 proxy, which is written in Rust, thanks to a type system that rules out this class of scenario.
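For contrast, a standalone sketch of what that claim alludes to: when the possibly-missing result is expressed as an Option, the compiler obliges the caller to handle the absent case explicitly rather than discovering it as a runtime error on live traffic. This is illustrative only and not FL2's actual code.

```rust
// Hypothetical example: the "execute" result may be absent, and the type says so.
fn process_execute_result(rule_id: &str, result: Option<&str>) {
    match result {
        Some(ruleset) => println!("rule {rule_id}: executed ruleset {ruleset}"),
        None => println!("rule {rule_id}: skipped by killswitch, nothing to process"),
    }
}

fn main() {
    // The killswitched rule produced no result; the None arm handles it cleanly.
    process_execute_result("waf-internal-test", None);
}
```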
The company recalled that a similar chain of events occurred on November 18, 2025, when another, unrelated change caused a widespread failure. Following that incident, several projects were announced to make configuration updates safer and reduce the impact of individual errors.
Several of those initiatives are still underway.
Cloudflare admitted that if these measures had already been fully implemented, the impact of the December 5th incident could have been less severe. For now, the company has suspended all network changes until the new mitigation systems are complete.
Cloudflare reiterated its apologies to customers and confirmed that it will publish a full analysis of ongoing projects to improve the resilience of its entire infrastructure within the next week.