Cloudflare outage on February 20, 2026

(blog.cloudflare.com)

98 points | by nomaxx117 2 hours ago

22 comments

  • CommonGuy 1 hour ago
    Insufficient mock data in the staging environment? Like, no BYOIP prefixes at all? Even one prefix should have shown that it would be deleted by that subtask...

    From all the recent outages, it sounds like Cloudflare is barely tested at all. Maybe they have lots of unit tests etc, but they do not seem to test their whole system... I get that their whole setup is vast, but even testing that subtask manually would have surfaced the bug
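
    Even a rough sketch of a test would have surfaced it. Something like the Go test below, where the handler and field names are my own stand-ins rather than Cloudflare's actual code: seed one prefix that is NOT pending deletion, issue the same bare ?pending_delete request the sub-task made, and assert the live prefix doesn't come back.

        package prefixes

        import (
            "encoding/json"
            "net/http"
            "net/http/httptest"
            "testing"
        )

        type prefix struct {
            CIDR          string `json:"cidr"`
            PendingDelete bool   `json:"pending_delete"`
        }

        // listHandler mirrors the shape described in the post: a non-empty
        // pending_delete value selects only prefixes pending deletion, and
        // everything else falls through to "return all prefixes".
        func listHandler(store []prefix) http.HandlerFunc {
            return func(w http.ResponseWriter, r *http.Request) {
                if v := r.URL.Query().Get("pending_delete"); v != "" {
                    var out []prefix
                    for _, p := range store {
                        if p.PendingDelete {
                            out = append(out, p)
                        }
                    }
                    json.NewEncoder(w).Encode(out)
                    return
                }
                json.NewEncoder(w).Encode(store)
            }
        }

        func TestPendingDeleteIgnoresLivePrefixes(t *testing.T) {
            store := []prefix{{CIDR: "192.0.2.0/24", PendingDelete: false}}
            // Same request shape as the failing client: key present, no value.
            req := httptest.NewRequest(http.MethodGet, "/v1/prefixes?pending_delete", nil)
            rec := httptest.NewRecorder()
            listHandler(store)(rec, req)

            var got []prefix
            if err := json.NewDecoder(rec.Body).Decode(&got); err != nil {
                t.Fatal(err)
            }
            if len(got) != 0 {
                // This is exactly what happens with the behaviour described in
                // the post: the live prefix comes back and is queued for deletion.
                t.Fatalf("live prefix returned on pending_delete path: %+v", got)
            }
        }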

    • dabinat 1 hour ago
      I think Cloudflare does not sufficiently test lesser-used options. I lurk in the R2 Discord and a lot of users seem to have problems with custom domains.
    • martinald 36 minutes ago
      Just crazy. Why should this even hinge on a staging environment? Surely they should be running integration tests against, e.g., an in-memory database for these kinds of tasks?
    • asciii 1 hour ago
      It was also merged 15 days prior to the production release... however, you're spot on about the empty test data. That's such a basic scenario; if the query returned everything, that should have been an immediate "oh no".
  • user205738 3 minutes ago
    They should have rewritten this code in Rust using these brilliant language models. /jk
  • otar 46 minutes ago
    Reliability was/is CF's selling point.

    It's already alarming: too many outages in the past few months. CF needs to fix this, or it will become unacceptable and people will leave the platform.

    I really hope they will figure things out.

    • argestes 40 minutes ago
      I have many things dependent on Cloudflare. That makes me root for Cloudflare and I think I'm not the only one. Instead of finding better options we're getting stuck on an already failing HA solution. I wonder what caused this.
      • slothsarecool 1 minute ago
        There are no real alternatives, and the alternatives that did exist back in the day had to shut down, either because they went out of business or because they couldn't sustain a pay-as-you-go model.

        Not everybody needs Cloudflare, but those who need it and aren't major enterprises have no other option.

      • arcatech 14 minutes ago
        Do you not feel concern about you and everybody else deciding to put ALL of their eggs into one basket like this?
  • djfobbz 3 minutes ago
    I'm honestly amazed that a company of CF's size doesn't have a neat little cluster of Mac Minis running OpenClaw and quietly taking care of this for them.
  • alansaber 43 minutes ago
    Not sure why everyone is complaining, new MCP features are more important than uptime
  • atty 1 hour ago
    I do not work in the space at all, but it seems like Cloudflare has been having more network disruptions lately than they used to. To anyone who deals with this sort of thing, is that just recency bias?
    • Icathian 1 hour ago
      It is not. They went about 5 years without one of these, and had a handful over the last 6 months. They're really going to need to figure out what's going wrong and clean up shop.
      • NinjaTrance 1 hour ago
        Engineers have been vibe coding a lot recently...
        • jsheard 1 hour ago
          The featured blog post where one of their senior engineering PMs presented an allegedly "production grade" Matrix implementation, in which authentication was stubbed out as a TODO, says it all really. I'm glad a quarter of the internet is in such responsible hands.
          • gtowey 17 minutes ago
            It's spreading and only going to get worse.

            Management thinks AI tools should make everyone 10x as productive, so they're all trying to run lean teams and load up the remaining engineers with all the work. This will end about as well as the great offshoring of the early 2000s.

          • blibble 54 minutes ago
            there was also a post here where an engineer was parading around a vibe-coded oauth library he'd made as a demonstration of how great LLMs were

            at which point the CVEs started to fly in

          • dana321 1 hour ago
            That's a classic Claude move; even the new Sonnet 4.6 still does this.
            • bonesss 1 hour ago
              It’s almost as classic as just short circuiting tests in lightly obfuscated ways.

              I could be quite the kernel developer if making the test green was the only criteria.

            • brutalc 1 hour ago
              [dead]
        • dakiol 1 hour ago
          No joke. At my company we "sabotaged" the AI initiative led by the CTO. We used LLMs to deliver features as the CTO requested, but we intentionally introduced a couple of bugs here and there. As a result, the quarter ended with more time allocated to bug fixing and a ton of customer complaints. The CTO is now walking back his initiative, and we've all bought ourselves some more time to keep our jobs.
          • samrus 30 minutes ago
            That's actively malicious. I understand not going out of your way to catch the LLMs' bugs so as to show the folly of the initiative, but actively sabotaging it is legitimately dangerous behavior. It's acting in bad faith. And I say this as someone who would mostly oppose such an initiative myself.

            I would go so far as to say that you shouldn't be employed in the industry. Malicious actors like you will contribute to an erosion of trust that'll make everything worse.

            • sp00chy 19 minutes ago
              Maybe, but sometimes you have no other choice when employers enforce AI that has no "feel" for the context of all the business processes involved, processes built up over years by human workers, mostly people who poured a lot of love and energy into them and who are now forced to compete against an inferior but over-empowered workforce.

              Don't stop sabotaging AI efforts.

          • renegade-otter 15 minutes ago
            I see someone is not familiar with the joys of the current job market.
          • hypeatei 24 minutes ago
            That's extremely unethical. You're being paid to do something and you deliberately broke it, which not only cost your employer additional time and money, but also cost your customers time and money. If I were you, I'd probably just quit and find another profession.
          • logicchains 50 minutes ago
            That's not "sabotaged", that's sabotage, if you intentionally introduced the bugs. Be very careful admitting something like that publicly unless you're absolutely, completely sure nobody could map your HN username to your real identity.
      • Ylpertnodi 53 minutes ago
        Typo: "shop", should have been with an 'el'.

        (: phonetically, because 'l's are hard to read.

    • dazc 1 hour ago
      Launching a new service every 5 minutes is obviously stretching their resources.
    • lysace 1 hour ago
      It has been roughly speaking five and a half years since the IPO. The original CTO (John Graham-Cumming) left about a year ago.
      • jacquesm 1 hour ago
        They coasted on momentum for half a year. I don't even think it says anything negative about the current CTO; it says more about what an exception JGC is relative to the norm. A CTO leaving would never show up in the stats the next day, since the position is strategic after all. But you'd expect to see the effect after a while. Six months is longer than I would have expected, yet short enough that cause and effect are undeniable.

        Even so, it is a strong reminder not to rely on any one vendor for critical stuff, in case that wasn't clear enough yet.

      • dazc 1 hour ago
        I wondered what happened to him.
    • candiddevmike 1 hour ago
      Wait till you see the drama around their horrible terraform provider update/rewrite:

      https://github.com/cloudflare/terraform-provider-cloudflare/...

    • slophater 52 minutes ago
      been at cf for 7 yrs but thinking of gtfo soon. the ceo is a manchild, the new cto is an idiot, the rest of leadership was replaced by yes-men, and the push for AI-first has been a disaster. c levels pretend they care about reliability but pressure teams to constantly ship, the cto vibe codes terraform changes without warning anyone, and it's overall a bigger and bigger mess

      even the blog, which used to be a respected source of technical content, has morphed into a garbage fire of slop and vaporware announcements since jgc left.

      • goalieca 18 minutes ago
        I’ve had a lot of problems lately. Basic things are failing and it’s like product isn’t involved at all in the dash. What’s worse? The support... the chat is the buggiest thing I’ve ever seen.
      • __turbobrew__ 12 minutes ago
        You know what they say: shit rolls downhill. I don't personally know the CEO, but the impression I've gotten from their public fits on social media doesn't instill confidence.

        If I were a CF customer I would be migrating off now.

      • a24446ff87 25 minutes ago
        GSD! GSD!! ship! ship! ship!

        **everything breaks**

        ...

        **everything breaks again**

        oh fuck! Code Orange! I repeat, Code Orange! we need to rebuild trust(R)(TM)! we've let our customers down!

        ...

        **everything breaks again**

        Code Orangier! I repeat, Code Orangier!

      • slophater 44 minutes ago
        amazing how my comment was flagged in 30 seconds... keep bootlicking
    • Betelbuddy 1 hour ago
      Cloudflare outages are as predictable as the Sun coming up tomorrow. It's their engineering culture.

      https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

  • blibble 1 hour ago
    is this blog post LLM generated?

    the explanation makes no sense:

    > Because the client is passing pending_delete with no value, the result of Query().Get(“pending_delete”) here will be an empty string (“”), so the API server interprets this as a request for all BYOIP prefixes instead of just those prefixes that were supposed to be removed. The system interpreted this as all returned prefixes being queued for deletion.

    client:

         resp, err := d.doRequest(ctx, http.MethodGet, `/v1/prefixes?pending_delete`, nil)
    
    server:

        if v := req.URL.Query().Get("pending_delete"); v != "" {
            // ignore other behavior and fetch pending objects from the ip_prefixes_deleted table
            prefixes, err := c.RO().IPPrefixes().FetchPrefixesPendingDeletion(ctx)
            if err != nil {
                api.RenderError(ctx, w, ErrInternalError)
                return
            }
    
            api.Render(ctx, w, http.StatusOK, renderIPPrefixAPIResponse(prefixes, nil))
            return
        }
    
    even if the client had passed a value it would have still done exactly the same thing, as the value of "v" (or anything from the request) is not used in that block
    • bretthoerner 1 hour ago
      > even if the client had passed a value it would have still done exactly the same thing, as the value of "v" (or anything from the request) is not used in that block

      If they passed in any value, they would have entered the block and returned early with the results of FetchPrefixesPendingDeletion.

      From the post:

      > this was implemented as part of a regularly running sub-task that checks for BYOIP prefixes that should be removed, and then removes them.

      They expected to drop into the block of code above, but since they didn't, they returned all routes.
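
      Easy to see with a standalone sketch (my own, not their code): Go's url package returns an empty string when a query key is present but carries no value, so the `v != ""` guard is never entered and the handler falls through to the default "return everything" path.

          package main

          import (
              "fmt"
              "net/url"
          )

          func main() {
              // The request the sub-task actually sent: key present, no value.
              u, _ := url.Parse("/v1/prefixes?pending_delete")
              fmt.Printf("%q\n", u.Query().Get("pending_delete")) // "" -> guard skipped

              // A request that carries a value would have entered the guarded block.
              u, _ = url.Parse("/v1/prefixes?pending_delete=true")
              fmt.Printf("%q\n", u.Query().Get("pending_delete")) // "true" -> guard taken
          }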

      • blibble 1 hour ago
        okay so the code which returned everything isn't there

        actual explanation: the API server by default returns everything. the client attempted to make a request to return "pending_deletes", but as the request was malformed, the API instead went down the default path, which returned everything. then the client deleted everything.

        makes sense now

        but that explanation is even worse

        because that means the code path was never tested?

        • jbxntuehineoh 36 minutes ago
          or they tested it, but not with a dataset that contained prefixes not pending deletion
    • bstsb 1 hour ago
      doesn't look AI-generated. even if they have made a mistake, it's probably just from the rush of getting a postmortem out prior to root cause analysis
    • himata4113 1 hour ago
      yep, and no mention that re-advertised prefixes kept being withdrawn again throughout the entire impact window, even after they shut it down.
  • vimda 4 minutes ago
    One has to wonder when the board realises Dane was a bad replacement for JGC. These outages are getting ridiculous
  • boarush 1 hour ago
    While neither I nor the company I work for was directly impacted by this outage, I wonder how long Cloudflare can take these hits and keep apologizing for them. I truly appreciate them being transparent about it, but businesses care more about SLAs and uptime than about the incident report.
    • llama052 1 hour ago
      I’ll take clarity and actual RCAs over Microsoft’s approach of not notifying customers and keeping their status page green until enough people notice.

      One thing I do appreciate about Cloudflare is their actual use of their status page. That’s not to say these outages are okay. They aren’t. However, I’m pretty confident that a lot of providers would have a big paper trail of outages if they were as honest as Cloudflare, or more so. At least from what I’ve noticed, especially this year.

      • boarush 1 hour ago
        Azure straight up refuses to show me if there's even an incident even if I can literally not access shit.

        But the last few months have been quite rough for Cloudflare, including a few outages on their Workers platform that didn't quite make the headlines. Can't wait for Code Orange to get to production.

    • jacquesm 1 hour ago
      Bluntly: they expended that credit a while ago. Those that can will move on. Those that can't have a real problem.

      As for your last sentence:

      Businesses really do care about the incident reports because they give good insight into whether they can trust the company going forward. Full transparency and a clear path to non-repetition due to process or software changes are called for. You be the judge of whether or not you think that standard has been met.

      • boarush 1 hour ago
        I might be looking at it differently, but aren't decisions about a given service provider made by management? Incident reports never reach that level, in my experience.
        • samrus 22 minutes ago
          In my experience, the gist of it does reach management when it's an existing vendor, especially if management is tech literate.

          Because management wants to know why the graphs all went to zero, and the engineers have nothing else to do but relay the incident report.

          This builds a perception of the vendor for management, and if the perception is that the vendor doesn't tell them shit or doesn't even seem to know there's an outage, then management can decide to switch vendors.

  • NinjaTrance 1 hour ago
    The irony is that the outage was caused by a change from the "Code Orange: Fail Small" initiative.

    They definitely failed big this time.

  • jaboostin 1 hour ago
    Hindsight is 20/20 but why not dry run this change in production and monitor the logs/metrics before enabling it? Seems prudent for any new “delete something in prod” change.
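
    Even something as simple as the sketch below would do it (my own illustration with made-up names, not anything from the post): gate the destructive step behind a dry-run flag so the first production rollout only logs what it would have deleted.

        package main

        import (
            "context"
            "log"
        )

        // Prefix and deletePrefix are stand-ins for whatever the real sub-task uses.
        type Prefix struct{ CIDR string }

        func deletePrefix(ctx context.Context, p Prefix) error {
            // real withdrawal + deletion would happen here
            return nil
        }

        // removePending gates the destructive step behind a dry-run flag so a new
        // "delete something in prod" change can be observed before it is enabled.
        func removePending(ctx context.Context, prefixes []Prefix, dryRun bool) error {
            for _, p := range prefixes {
                if dryRun {
                    log.Printf("dry run: would delete prefix %s", p.CIDR)
                    continue
                }
                if err := deletePrefix(ctx, p); err != nil {
                    return err
                }
            }
            return nil
        }

        func main() {
            // First rollout: dry run only, watch the logs before flipping the flag.
            _ = removePending(context.Background(), []Prefix{{CIDR: "192.0.2.0/24"}}, true)
        }
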
  • himata4113 1 hour ago
    This blog post is inaccurate: the prefixes were being revoked over and over. To keep your prefixes advertised you had to have a script that would re-add them, or else they would be withdrawn again. The way they worded it seems really dishonest.
  • anurag 59 minutes ago
    The one redeeming feature of this failure is staged rollouts. As someone advertising routes through CF, we were quite happy to be spared from the initial 25%.
  • dilyevsky 35 minutes ago
    > Because the client is passing pending_delete with no value, the result of Query().Get(“pending_delete”) here will be an empty string (“”), so the API server interprets this as a request for all BYOIP prefixes instead of just those prefixes that were supposed to be removed.

    Lmao, iirc a long time ago Google's internal system had the exact same bug (treating empty as "all" in the delete call) and it took down all their edges. Surprisingly there was little impact, as traffic just routed through the next set of proxies.
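
    The generic defence, sketched below (my own toy code, not from either postmortem), is to make destructive paths treat an empty filter as an error instead of a wildcard:

        package main

        import (
            "errors"
            "fmt"
        )

        // Prefix is a stand-in type for this sketch.
        type Prefix struct {
            CIDR  string
            State string
        }

        // selectForDeletion refuses to interpret an empty filter as "everything",
        // which is the failure mode both incidents shared.
        func selectForDeletion(filter string, all []Prefix) ([]Prefix, error) {
            if filter == "" {
                return nil, errors.New("empty filter would match every prefix; refusing to delete")
            }
            var out []Prefix
            for _, p := range all {
                if p.State == filter {
                    out = append(out, p)
                }
            }
            return out, nil
        }

        func main() {
            all := []Prefix{{CIDR: "192.0.2.0/24", State: "advertised"}}
            if _, err := selectForDeletion("", all); err != nil {
                fmt.Println(err) // fails loudly instead of deleting everything
            }
        }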

  • ssiddharth 1 hour ago
    The eternal tech outage aphorism: It's always DNS, except for when it's BGP.
  • wa008 30 minutes ago
    A transparent report like this earns my trust.
  • tokyobreakfast 57 minutes ago
    Is this trend of oversharing code snippets and TMI postmortems done purposely to distract their customers from raging over the outage and the next impending fuckup?
    • samrus 19 minutes ago
      Just seems like transparency. I agree that we should also judge them on the frequency of these incidents and whether they provide a path to non-repetition, but I wouldn't criticize them for the transparency per se.
    • alansaber 53 minutes ago
      Well I still appreciate a good postmortem even if I have no doubt it'll happen again imminently
    • bdangubic 50 minutes ago
      and if they didn’t, we’d be posting about the lack of transparency. damned if you do, damned if you don’t
  • VirusNewbie 1 hour ago
    If you track large SaaS and cloud uptime, it seems to correlate pretty highly with compensation at big companies. Is Cloudflare getting top talent?
    • bombcar 1 hour ago
      Based on IPO date and lockups, I suspect top talent is moving on.
  • NooneAtAll3 47 minutes ago
    again?
  • henning 1 hour ago
    Sure, vibe-coded slop that hasn't been properly peer reviewed or tested prior to deployment is leading to major outages, but the point is they are producing lots of code. More code is good; that means you are a good programmer. Reading code would just slow things down.
    • sp00chy 24 minutes ago
      that’s my feeling too. We will see more and more of this in the future.
  • dryarzeg 1 hour ago
    DaaS - Downtime as a Service©

    Just joking, no offence :)