Cloudflare Outage: 25 Minutes of Chaos Due to React Server Issue

Redazione RHC: 7 December 2025 09:25

Cloudflare experienced a significant outage on the morning of December 5, 2025: at 08:47 UTC a portion of its infrastructure began returning internal errors. The incident lasted approximately 25 minutes, with full service restored at 09:12 UTC.

According to the company, approximately 28% of the HTTP traffic it handles globally was affected. Engineers explained that the impact was limited to customers using a specific combination of configurations.

Cloudflare clarified that the outage was not linked to any malicious activity: no cyberattack, intrusion attempt, or malicious behavior contributed to the event. Instead, the issue was caused by an update introduced to mitigate a recently disclosed vulnerability in React Server Components, tracked as CVE-2025-55182.

How did the incident happen?

The outage was caused by a change to the HTTP request body parsing system, part of the measures taken to protect users of React-based applications. The change increased the Web Application Firewall (WAF)'s internal memory buffer from 128 KB to 1 MB, aligning it with the default body size limit in the Next.js framework.
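For context, the 1 MB figure matches the body size limit that Next.js applies by default; applications needing larger payloads raise it explicitly, and a WAF buffer of only 128 KB cannot hold the full body of such requests for inspection. A minimal sketch of where that default surfaces in a Pages Router API route (the file name and handler are hypothetical):

```typescript
// pages/api/submit.ts (hypothetical route, shown only to illustrate the default)
import type { NextApiRequest, NextApiResponse } from "next";

// Next.js buffers and parses the request body up to sizeLimit before invoking
// the handler; "1mb" is the framework default, the value the WAF buffer was
// raised to match.
export const config = {
  api: {
    bodyParser: {
      sizeLimit: "1mb",
    },
  },
};

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // Bodies above the limit are rejected by Next.js before reaching this point.
  res.status(200).json({ received: true });
}
```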

This first change was released in a phased rollout. During implementation, engineers discovered that an internal WAF testing tool was incompatible with the new limit. Deeming that component unnecessary for real-world traffic, Cloudflare implemented a second change to disable it.

It was this second change, deployed through the global configuration system (which does not allow gradual rollouts), that triggered the chain of events leading to the HTTP 500 errors: the new configuration reached every server on the network within seconds.

At that point, a particular version of the FL1 proxy ended up executing a piece of Lua code containing a latent bug. As a result, some requests could not be processed and the affected servers returned 500 errors.

Who was affected

Cloudflare engineers report that customers using the FL1 proxy in combination with the Cloudflare Managed Ruleset were affected. Requests to sites configured this way began responding with 500 errors, with very few exceptions (such as certain test endpoints like /cdn-cgi/trace).

Customers using different configurations or those served by the Cloudflare network operating in China were not affected.

The technical cause

The problem was traced to the rules system used by the WAF. Some rules, using the "execute" action, trigger the evaluation of additional rule sets. The killswitch system, used to quickly deactivate problematic rules, had never before been applied to a rule with an "execute" action.

When the change disabled the test set, the system correctly skipped executing the rule, but it did not handle the missing "execute" object in the next step of processing the results. This produced the Lua error that generated the HTTP 500s.

Cloudflare clarified that this bug does not exist in the FL2 proxy, which is written in Rust, because its type system rules out this class of error.
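To illustrate the class of bug (with hypothetical names, not Cloudflare's actual code): in a dynamically typed language such as the Lua running in FL1, reading a field of a result whose "execute" object was never produced only fails at runtime, whereas a type system that models the field as optional forces the missing case to be handled before the code ships. The sketch below uses TypeScript's strict null checks to stand in for the guarantee that Rust's Option type gives FL2:

```typescript
// Hypothetical sketch of the failure mode; the names do not come from Cloudflare's code.
interface RuleResult {
  action: "block" | "log" | "execute";
  // Absent when the rule's execution was skipped, for example because the
  // target rule set was disabled via a killswitch.
  execute?: { rulesetId: string };
}

function nextRuleset(result: RuleResult): string | null {
  // Writing `return result.execute.rulesetId;` here does not compile:
  // "Object is possibly 'undefined'". The dynamically typed equivalent only
  // fails at runtime ("attempt to index a nil value" in Lua), which is what
  // surfaced as the HTTP 500s.
  if (result.execute === undefined) {
    return null; // nothing to evaluate; keep processing the request normally
  }
  return result.execute.rulesetId;
}
```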

Connection with the incident of November 18

The company recalled that a similar dynamic occurred on November 18, 2025, when another unrelated change caused a widespread failure. Following that incident, several projects were announced to make configuration updates more secure and reduce the impact of individual errors.

Initiatives still underway include:

  • a stricter versioning and rollback system,
  • "break glass" procedures to keep critical functions operational even in exceptional conditions,
  • fail-open handling of configuration errors (sketched below).
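As a sketch of the fail-open idea (hypothetical names, not Cloudflare's implementation): when a configuration cannot be loaded or evaluated, the proxy logs the failure and lets the request through, so a bad change degrades protection rather than taking sites offline.

```typescript
// Hypothetical fail-open wrapper around WAF evaluation; illustrative only.
type Verdict = "allow" | "block";

interface Ruleset {
  evaluate(body: string): Verdict;
}

function applyWaf(body: string, loadRuleset: () => Ruleset): Verdict {
  try {
    return loadRuleset().evaluate(body);
  } catch (err) {
    // Fail open: a broken or missing configuration is logged and traffic keeps
    // flowing, instead of every request turning into an HTTP 500.
    console.error("WAF ruleset unavailable, failing open:", err);
    return "allow";
  }
}
```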

Cloudflare admitted that if these measures had already been fully implemented, the impact of the December 5th incident could have been less severe. For now, the company has suspended all network changes until the new mitigation systems are complete.

Essential chronology of the event (UTC)

  • 08:47 – Incident began after the configuration change propagated
  • 08:48 – Impact extended across the entire affected part of the network
  • 08:50 – The internal alerting system reported the problem
  • 09:11 – Configuration change reverted
  • 09:12 – Traffic fully restored

Cloudflare reiterated its apologies to customers and confirmed that, within the next week, it will publish a full analysis of the ongoing projects to improve the resilience of its entire infrastructure.

  • #cybersecurity
  • Cloudflare Managed Ruleset
  • Cloudflare outage
  • HTTP traffic
  • network error
  • FL1 proxy
  • React Server issue
  • server downtime
  • WAF configuration
  • Web Application Firewall
Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
