Cloudflare faced a widespread network disruption on November 18, 2025, that left many websites around the world unreachable. The outage began at 11:20 UTC, showing users a Cloudflare error page instead of the sites they were trying to access.
The company later confirmed that the incident was not caused by a cyberattack. Instead, it was triggered by an internal configuration issue. A change in permissions for one of Cloudflare’s database systems caused the system to generate duplicate entries inside a feature file used by the company’s Bot Management engine. The file became much larger than normal and was rolled out across Cloudflare’s global network.
Once deployed, the oversized file caused the software that routes traffic across Cloudflare's network to fail: the file exceeded a hard limit on the number of entries the software was built to handle, and the resulting error was not caught gracefully. The failure produced a wave of HTTP 5xx errors for millions of requests.
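The failure mode can be sketched in a few lines of Python. Assume, purely for illustration, that the proxy preallocates room for a fixed number of features and refuses any file that exceeds that capacity; the limit value, file format, and function names below are hypothetical, not Cloudflare's actual implementation.

```python
# Minimal sketch of a feature-file loader with a hard entry limit.
# MAX_FEATURES and the file format are illustrative assumptions.

MAX_FEATURES = 200  # hypothetical preallocated capacity


def load_feature_file(lines):
    """Parse a feature file; fail hard if it exceeds the capacity limit."""
    features = [line.strip() for line in lines if line.strip()]
    if len(features) > MAX_FEATURES:
        # An unhandled error at this point takes down every request
        # that depends on the module loading the file.
        raise RuntimeError(
            f"feature file has {len(features)} entries, limit is {MAX_FEATURES}"
        )
    return features


# A normal file loads fine...
ok = load_feature_file(f"feature_{i}" for i in range(150))

# ...but duplicated rows double the size and cross the limit.
duplicated = [f"feature_{i}" for i in range(150)] * 2
try:
    load_feature_file(duplicated)
except RuntimeError as err:
    print("load failed:", err)
```

The key point the sketch captures is that the file itself was well-formed; it was simply larger than the consuming software was prepared to accept, and the oversize condition was treated as fatal.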
Initial confusion and slow recovery
At first, Cloudflare engineers suspected a large-scale DDoS attack because of the unusual traffic patterns. Only after deeper investigation did the team realize the issue originated from the faulty feature file.
By 14:30 UTC, Cloudflare replaced the problematic file with a previously working version and stopped further propagation. Core traffic slowly returned to normal, and by 17:06 UTC, the company announced that all systems were fully restored.
Services affected
The outage caused failures across several Cloudflare products:
- Core CDN and security services: Users saw 5xx errors when visiting Cloudflare-protected sites.
- Turnstile: Failed to load, blocking logins on sites that rely on its human-verification challenge.
- Workers KV: Delivered high error rates because requests couldn’t pass through the proxy.
- Dashboard: Many users were unable to log in.
- Email Security: Temporary drop in spam-detection accuracy.
- Access: Widespread authentication failures.
The cascade happened because all these systems depend on Cloudflare’s core proxy service, which relied on the corrupted feature file.
Why the issue kept repeating
The faulty file was regenerated every five minutes from a database query. Cloudflare was rolling an update across its ClickHouse database cluster at the same time, and on updated nodes the query returned a much larger dataset than expected. Depending on which node served the query, each five-minute cycle produced either a good or a bad file, so the network flapped between working and failing states until the entire cluster had been updated and consistently produced the bad data.
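A toy simulation makes the flip-flop pattern concrete. Assuming, hypothetically, that each rebuild cycle is answered by a randomly chosen node and that only updated nodes return duplicated rows, the generated file alternates between normal and doubled size as the rolling update progresses:

```python
import random

FEATURES = [f"feature_{i}" for i in range(60)]


def run_query(node_updated):
    # Updated nodes return each row twice: an illustrative stand-in
    # for the permissions change that exposed duplicate metadata rows.
    return FEATURES * 2 if node_updated else list(FEATURES)


def rebuild_cycle(updated_fraction, rng):
    # Every five minutes, one node in the cluster serves the query;
    # the chance it is an updated node grows as the rollout proceeds.
    node_updated = rng.random() < updated_fraction
    return len(run_query(node_updated))


rng = random.Random(42)
# Rolling update: the fraction of updated nodes climbs from 0% to 100%.
sizes = [rebuild_cycle(f / 10, rng) for f in range(11)]
print(sizes)  # mixes 60 (good) and 120 (bad), then stays bad at 100%
```

The mix of good and bad outputs during the rollout is what made the incident initially look like an intermittent external attack rather than a deterministic internal fault.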
Cloudflare apologizes and outlines next steps
Cloudflare described the incident as its worst outage since 2019 and said such failures are unacceptable given its role in supporting global internet traffic. The company promised several improvements, including:
- Better validation for internal configuration files
- Stronger global kill-switch options
- More resilient error-handling
- Tighter review of system failure modes
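The first improvement on that list, validating internally generated configuration files before they propagate, might look something like the following sketch. All names and limits here are hypothetical; the idea is simply to treat a machine-generated file with the same suspicion as user input before a global rollout.

```python
def validate_feature_file(features, max_entries=200):
    """Pre-rollout checks for a generated config file.

    Both checks are illustrative: a size bound and duplicate detection
    would each have flagged an oversized, duplicate-laden file.
    """
    errors = []
    if len(features) > max_entries:
        errors.append(f"{len(features)} entries exceeds limit of {max_entries}")
    duplicates = {f for f in features if features.count(f) > 1}
    if duplicates:
        errors.append(f"{len(duplicates)} duplicated entries")
    return errors


# A file with every entry doubled fails both checks.
bad_file = [f"feature_{i}" for i in range(150)] * 2
problems = validate_feature_file(bad_file)
print(problems)
```

In a deployment pipeline, a non-empty result would halt propagation and keep the last known-good file in place rather than pushing the suspect one network-wide.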
Cloudflare said it is committed to preventing similar outages and expressed regret to customers affected by the disruption.