On June 12, 2025, a significant portion of the internet experienced a sudden outage. What started as intermittent failures on Gmail and Spotify soon escalated into a global infrastructure meltdown. For millions of users and hundreds of companies, critical apps simply stopped working.
Will I get in trouble if I question the validity of this post?
Well, it's not that the post is deceptive. It’s rich with valid artifacts and conclusions. However, I was working on a new project that day. I remember the outages, but they seemed to be rolling out serially, one after another. That observation doesn't match the narrative of the story above.
My antennae went up; something was off about that almost "relay effect." It behaved like a cascading series of power-grid outages, but much more organized and seemingly targeted. So I fired up Claude. We had a very long conversation about it as it was happening, and even into the next day. We watched, one by one, as most of the largest AI services went down or were significantly degraded (except Claude, which didn't go down until the next day). Reddit, Facebook, and a few other communications platforms also went down, but not all at once. Then a very chilling thought came to mind, and both Claude and I wondered whether something nefarious might be going on. If nothing else, it underscores the incredible fragility of data and cloud infrastructure. In a conflict, it's pretty certain that these giant data centers would be among the first strategic assets to go.
Therefore, if an entity, or a consortium of them, were conducting some kind of test of the fragility, hardness, or robustness of the system, is this what it would have looked like? If we were simulating an attack of some kind, whether cyber or kinetic, wouldn't it look something like this? Rolling data blackouts, timed as if they were being measured while they were observed, seemed like a real possibility at the time.
If anybody wants to see that conversation, I'd be happy to share it. Financial markets weren't moving abnormally, so it didn't seem like we were truly under attack. However, staging something like this might very well have been an operation preplanned for the right day at the right time. Executing a test like this would likely have involved one company at a very high level, such as Google. And afterward, how would the incidents be communicated to the public? Might it be in a story like the one above?
I sincerely hope that any coordinated, real-world testing of the system takes the form of a giant, quasi red-team exercise like the scenario described above.
It may sound like a conspiracy theory, but it's not intended to be one. It just shows that in some kind of emergency, the first things to collapse could be data transmission, social media, and nearly all cloud-based communication. At a minimum, there would be economic effects. Beyond that… I don’t even want to think about it, let alone discuss it.
If they had used canary deployment, would it have prevented such a disaster?
They must already be using canary deployment. Since the new code path wasn't exercised until the config change was made in the database, the binary rollouts weren't affected; the binaries were rolled out gradually, first within a region and then across regions.
Feature flags shouldn't be lesson #1. If they failed to test a policy scenario, they could just as easily have failed to test all feature flag scenarios. How's that for unprotected code paths?
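To make the canary and feature-flag points concrete, here is a minimal, hypothetical C++ sketch (all names are invented; this is not Google's actual code). The risky branch ships dark inside a canaried binary and is only reached once a policy record containing the new field arrives through replicated config data, so canarying the binary alone never exercises it, and a flag guarding that branch only helps if the flag-on path was itself tested.

```cpp
#include <iostream>
#include <optional>
#include <string>

// Hypothetical policy record: the new field is absent until the triggering
// config change is written to the database and replicated out.
struct Policy {
  std::string name;
  std::optional<int> quota_limit;
};

// Hypothetical feature flag: a kill switch only helps if the flag-on path
// was actually exercised in testing.
constexpr bool kEnableNewQuotaChecks = true;

bool CheckQuota(const Policy& policy) {
  if (kEnableNewQuotaChecks && policy.quota_limit.has_value()) {
    // Dormant path: first executed when the new field shows up in config
    // data, which can reach every region far faster than a binary rollout.
    return *policy.quota_limit > 0;
  }
  return true;  // Old behavior: request allowed, no new quota check.
}

int main() {
  Policy before{"prod-policy", std::nullopt};  // what canary traffic saw
  Policy after{"prod-policy", 0};              // what the config change introduced
  std::cout << CheckQuota(before) << " " << CheckQuota(after) << "\n";
}
```

Canarying the binary only ever evaluates the `before` case; the `after` case first appears in production when the data changes, which is why the data rollout, and not just the binary rollout, needs its own gradual ramp.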
I don't know about you, but the whole thing gives me CrowdStrike vibes all over again.
Pushing a code update without testing it in a sandboxed environment makes me go "hmm" 🙄
What happened to using smart pointers to handle null-pointer issues?
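For what it's worth, a smart pointer on its own doesn't prevent this class of crash: a null std::unique_ptr dereferences just as badly as a null raw pointer. What prevents it is an explicit presence check (or a non-nullable type). A minimal, generic C++ sketch, not the code that was actually involved in the outage:

```cpp
#include <iostream>
#include <memory>

// Hypothetical stand-in for a policy object that may be missing.
struct QuotaPolicy {
  int limit = 0;
};

int SafeLimit(const std::unique_ptr<QuotaPolicy>& policy) {
  if (!policy) {         // This explicit check is what prevents the crash,
    return -1;           // not the smart pointer by itself.
  }
  return policy->limit;  // Dereferencing a null unique_ptr is still undefined behavior.
}

int main() {
  std::unique_ptr<QuotaPolicy> missing;  // null, e.g. an unexpected blank field
  auto present = std::make_unique<QuotaPolicy>(QuotaPolicy{100});
  std::cout << SafeLimit(missing) << " " << SafeLimit(present) << "\n";  // prints: -1 100
}
```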