
Redazione RHC: 21 November 2025 17:20
A major outage in Cloudflare’s infrastructure has unexpectedly put the resilience of cloud services, and the security posture of many businesses, to the test. On November 18, repeated service disruptions took websites around the world offline, and some customers tried to move off the platform temporarily to keep their resources reachable.
That forced maneuver also left web applications without the filtering of malicious traffic that Cloudflare normally performs at the network edge, in some cases for several hours.
The problems began around 6:30 AM EST (11:30 UTC), when a notice of internal service degradation appeared on the status page. Over the following hours, resources came back online only to become unavailable again. The situation was complicated by the fact that Cloudflare’s own portal was frequently down and many domains also relied on the company’s DNS service, which made switching to alternative solutions technically difficult.
Nonetheless, some website owners changed their routing anyway, and it was this attempt to ensure availability without relying on Cloudflare’s security perimeter that made their infrastructure more vulnerable to attackers.
Third-party experts emphasize that the platform effectively mitigates the most common application-layer attacks, including credential brute-forcing, SQL injection, attempts to bypass API access controls, and a wide range of automated traffic. The sudden loss of that layer therefore exposed weaknesses that had been hidden, from missing local security controls to long-standing gaps in the applications’ own defenses.
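To make concrete what "local security controls" might mean here, below is a minimal, illustrative Python sketch (not anything Cloudflare provides) of an in-process rate limiter that throttles repeated requests per source IP when no edge filtering sits in front of the application; the class name and thresholds are assumptions made for the example.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds; real values depend on the application.
MAX_ATTEMPTS = 10        # requests allowed per window
WINDOW_SECONDS = 60      # sliding-window length in seconds


class LoginRateLimiter:
    """Minimal in-process sliding-window limiter keyed by client IP.

    A stand-in for the brute-force protection normally applied at the
    CDN edge; it is per-process only and not a replacement for a WAF.
    """

    def __init__(self):
        self._hits = defaultdict(deque)

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        window = self._hits[client_ip]
        # Drop timestamps that have fallen out of the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_ATTEMPTS:
            return False
        window.append(now)
        return True


if __name__ == "__main__":
    limiter = LoginRateLimiter()
    for i in range(12):
        print(i, limiter.allow("203.0.113.7"))  # the last two attempts are rejected
```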
In one case, the increase in log volume was so significant that the company is still trying to determine which events were actual intrusion attempts and which were just noise.
Analysts point out that during the period when some major websites were forced to operate without Cloudflare, any observer could have noticed the changes in their DNS records and realized that the defensive line was gone.
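As a rough illustration of how visible that change is, the following sketch uses only the Python standard library and assumes Cloudflare’s published IPv4 ranges are retrievable as one CIDR per line from https://www.cloudflare.com/ips-v4 (see https://www.cloudflare.com/ips/ for the authoritative list); it resolves a hostname and reports whether it still appears to sit behind Cloudflare. The example domain is a placeholder.

```python
import ipaddress
import socket
import urllib.request

# Assumed to return one IPv4 CIDR per line.
CF_RANGES_URL = "https://www.cloudflare.com/ips-v4"


def cloudflare_networks():
    """Fetch and parse Cloudflare's published IPv4 ranges."""
    with urllib.request.urlopen(CF_RANGES_URL, timeout=10) as resp:
        lines = resp.read().decode().splitlines()
    return [ipaddress.ip_network(line.strip()) for line in lines if line.strip()]


def behind_cloudflare(hostname, networks):
    """Return True if every resolved A record falls inside a Cloudflare range."""
    addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 443, socket.AF_INET)}
    return all(
        any(ipaddress.ip_address(addr) in net for net in networks)
        for addr in addresses
    )


if __name__ == "__main__":
    nets = cloudflare_networks()
    # "example.com" is a placeholder; point this at your own domains.
    for domain in ["example.com"]:
        status = "behind Cloudflare" if behind_cloudflare(domain, nets) else "exposed directly"
        print(f"{domain}: {status}")
```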
For criminal groups, such windows are an opportunity to launch attacks that would previously have been blocked at the perimeter, especially against targets already under surveillance. Organizations that redirected traffic to alternative routes should therefore carefully review their event logs to make sure no attacker gained a foothold before the default routing through Cloudflare was restored.
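For teams doing that review, one possible starting point, sketched below under the assumption of nginx/Apache combined-format access logs and a deliberately small, hard-coded subset of Cloudflare’s published ranges, is to flag requests whose source address lies outside those ranges, since such requests reached the origin without passing through the edge.

```python
import ipaddress
import re
import sys

# Small illustrative subset of Cloudflare IPv4 ranges; use the full published
# list (https://www.cloudflare.com/ips/) in practice.
CF_NETS = [ipaddress.ip_network(c) for c in ("173.245.48.0/20", "104.16.0.0/13", "172.64.0.0/13")]

# Combined log format starts with the client IP; the rest of the line is kept raw.
LINE_RE = re.compile(r"^(\S+) ")


def direct_hits(log_path):
    """Yield log lines whose client IP is outside the Cloudflare ranges."""
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE_RE.match(line)
            if not m:
                continue
            try:
                addr = ipaddress.ip_address(m.group(1))
            except ValueError:
                continue
            if not any(addr in net for net in CF_NETS):
                yield line.rstrip()


if __name__ == "__main__":
    # Usage: python direct_hits.py /var/log/nginx/access.log
    for entry in direct_hits(sys.argv[1]):
        print(entry)
```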
Cloudflare later published an analysis of the incident. The company stated that the outage was not related to any attack or malicious activity; rather, it was caused by an error in database permissions that generated a large number of duplicate entries in a configuration file used by the bot management system.
The file doubled in size and was then automatically propagated across the entire network, triggering a cascade of errors. Considering that Cloudflare services are used by approximately a fifth of the internet, such incidents demonstrate how vulnerable modern web services are to isolated errors originating from a single provider.
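That failure mode, a generated file growing beyond what downstream consumers expect, can be illustrated in miniature. The sketch below is not Cloudflare’s code; it simply shows a hypothetical consumer with a fixed preallocated limit rejecting a feature file that has doubled because of duplicated rows, which is how a single upstream error can cascade.

```python
# Illustrative only: a consumer that preallocates room for a fixed number of
# features fails as soon as a duplicated upstream query doubles the file.
FEATURE_LIMIT = 200  # hypothetical hard cap baked into the consumer


def load_features(lines):
    features = []
    for line in lines:
        features.append(line.strip())
        if len(features) > FEATURE_LIMIT:
            raise RuntimeError(f"feature file exceeds limit of {FEATURE_LIMIT} entries")
    return features


if __name__ == "__main__":
    base = [f"feature_{i}" for i in range(150)]   # fits comfortably
    load_features(base)
    duplicated = base + base                      # duplicated rows double the file
    try:
        load_features(duplicated)
    except RuntimeError as err:
        print("consumer failed:", err)            # the trigger for the cascade
```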
The issue of reliance on single points of failure is also drawing renewed attention. Consultants see the incident as yet another reminder of the need to distribute security functions across multiple zones and providers. To that end, they recommend running filtering, DDoS protection, and DNS tooling on more than one platform, segmenting applications so that a failure on one provider’s side does not trigger a chain reaction, and regularly auditing critical dependencies to spot where a single vendor’s network would have the greatest impact.
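As a hedged example of that last recommendation, the sketch below, which assumes the third-party dnspython package is installed and uses purely illustrative provider-matching patterns, checks whether a domain’s NS records all point to a single recognizable provider.

```python
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

# Illustrative substrings used to bucket name servers by provider.
PROVIDER_HINTS = {
    "cloudflare": ".ns.cloudflare.com",
    "route53": ".awsdns-",
    "google": ".googledomains.com",
}


def dns_providers(domain):
    """Return the set of recognized providers serving a domain's NS records."""
    providers = set()
    for rdata in dns.resolver.resolve(domain, "NS"):
        ns = str(rdata.target).lower()
        for name, hint in PROVIDER_HINTS.items():
            if hint in ns:
                providers.add(name)
                break
        else:
            providers.add(ns)  # unknown provider: keep the raw name server
    return providers


if __name__ == "__main__":
    # Placeholder domain list; replace with your own critical zones.
    for domain in ["example.com"]:
        providers = dns_providers(domain)
        flag = "SINGLE PROVIDER" if len(providers) == 1 else "redundant"
        print(f"{domain}: {sorted(providers)} -> {flag}")
```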