At dawn on March 1, 2026, Iranian drones struck three AWS facilities: two data centers in the United Arab Emirates were hit directly, and a third in Bahrain was affected by a nearby strike. The company advised clients to migrate workloads to other regions. Banking, payment, delivery services, and enterprise software ground to a halt across the Gulf. On March 23 and April 2, 2026, the AWS Bahrain region was again officially declared "disrupted" after new drone attacks.

This marks the first time in history that a military power has deliberately destroyed hyperscale cloud infrastructure. It is not a security incident; it is a major infrastructure risk for businesses and organizations.

You're a target too

Tehran justified its strikes by alleging that the Bahrain data center supported military operations, a claim AWS denied. Regardless of its truth, the attacker's logic is relentless: the same servers hosting your HR data, business apps, and customer files could also process military data. You don't know it. You can't know it. And that ignorance won't protect you.

No executive committee had listed "physical data center destruction" in its risk mapping. It's time to add it.

After 36 years of operations, one rule still holds

I won't discuss classified matters. But the first rule every military leader learns, even before the War College, is this: never concentrate critical functions on a single node. A competent adversary will find it. And strike it.

AWS, Microsoft Azure, and Google Cloud together hold over 70% of the global cloud market. Organizations have collectively built exactly what every resilience doctrine, military or civilian, forbids. And we've done it chasing savings.

Outages ignore borders

You don't even need to invoke war for this reasoning to apply at your next board meeting. On October 20, 2025, an AWS failure forced hospitals back to paper procedures.
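Outages like these are easier to discuss at board level when framed as a downtime budget: an availability percentage translates directly into minutes of tolerable downtime per year. A minimal, illustrative Python sketch (the SLA figures are assumptions for illustration, not any provider's actual terms):

```python
# Convert an availability SLA into an annual downtime budget.
# The percentages below are illustrative, not real contract terms.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability_pct: float) -> float:
    """Maximum downtime per year implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability -> {downtime_budget_minutes(sla):.0f} min/year")
```

A single eight-hour incident (480 minutes) consumes roughly nine times the annual budget implied by a 99.99% target, which is why per-incident duration matters more than headline availability figures.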
Nine days later, an Azure configuration error paralyzed Alaska Airlines, Starbucks, Costco, and telecom operators for eight hours. The economic impact ran into the billions. Between June and December 2025, each of the big three hyperscalers suffered at least one major outage.

What is the maximum tolerable downtime for your critical systems? Does your current architecture ensure you can meet that threshold when your hyperscaler fails? If you lack these answers, you have a governance problem, and likely a technological one.

Regulators are already signaling the risk

Regulators aren't waiting. In Europe, NIS2 and DORA mandate continuity plans and explicit management of cloud-provider concentration risk for critical entities and financial institutions. In the U.S., the SEC and CISA have issued similar guidance. In Asia-Pacific, Singapore's MAS and Australia's APRA are moving the same way. The regulatory signal is global and converging.

I weigh my words: any organization that migrated critical systems to a single hyperscaler without an alternative recovery plan is out of compliance with its regulatory framework. And the IT leader often knows it perfectly well; they just lacked the budget to do otherwise.

Cloud's hidden cost is dependency

The promise was simple: less tied-up capital, less operational complexity, more agility. The total-cost-of-ownership reality is darker: egress fees, cumulative subscriptions, unilateral price hikes on clients whose exit costs are prohibitive. Multiple industry studies agree that over half of IT leaders surveyed cut other budget lines to absorb rising cloud costs. That growth is driven not by new usage, but by vendors who know you can't leave.

Technical lock-in is now compounded by pricing lock-in. For a CFO, that creates a structural, uncontrolled budget risk, one that belongs on the supplier risk register alongside dependence on a single-source raw material.

The CLOUD Act isn't a contract clause

Data hosted by providers subject to U.S. jurisdiction may be accessible under U.S. legal orders, even when stored abroad. The CLOUD Act enables authorities, through legal process, to request such data regardless of storage location. For global organizations, this can create tension with local data protection frameworks, including Europe's GDPR, Asia's PDPA, and U.S. sector regulations such as HIPAA or GLBA. Contractual measures alone may not fully eliminate these conflicts.

In a geopolitical context where national priorities are increasingly asserted, this is not just a theoretical concern. It's a counterparty risk that should be evaluated accordingly. A technical fix exists: encryption with keys under exclusive client control. But it requires retaining control of your infrastructure, and many organizations have already given that up.

The answer is not another hyperscaler

I must be honest: I long underestimated the U.S. hyperscalers' operational strengths. Their scalability, advanced services, and global availability are real. The issue isn't their competence; it's our exclusive dependence.

The resilient answer won't come from duplicating hyperscalers. No new entrant will become a global hyperscaler in ten years. The credible path is different: independent specialists, leaders in areas like object storage, security, and critical data management, can be assembled into coherent alternatives to a dominant provider.

These players exist. On infrastructure and object storage (the layer carrying critical data, backups, and regulatory archives), independent vendors today serve top global media, leading telcos, and highly regulated financial institutions across continents. They have proof of usage. The strictest security certifications, whether U.S., European, or Asian, cover the value chain. The technology is there. What's missing is the purchasing doctrine that makes it legible for a busy decision-maker facing a well-prepared hyperscaler salesperson.
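The client-held-key mitigation mentioned above can be sketched in a few lines. This is a toy illustration, not a production design: the `encrypt`/`decrypt` helpers are hypothetical names, the stream cipher is hand-rolled from HMAC-SHA256 purely to keep the example self-contained, and real deployments should use a vetted AEAD library (for example, the `cryptography` package) together with a key-management system under the client's exclusive control.

```python
# Toy sketch of client-held-key encryption: data is encrypted locally, only
# ciphertext ever reaches the cloud provider, and the key never leaves the
# client. The HMAC-SHA256 counter-mode keystream below is illustrative only;
# production code should use a vetted AEAD library instead.
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key+nonce via HMAC-SHA256 blocks."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt locally; the returned blob (nonce || ciphertext) is what gets uploaded."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Decrypt a downloaded blob using the locally held key."""
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key = os.urandom(32)                    # stays on client-controlled infrastructure
blob = encrypt(key, b"payroll Q2")      # only this opaque blob is stored in the cloud
assert decrypt(key, blob) == b"payroll Q2"
```

The point of the pattern is architectural, not cryptographic: a legal order served on the storage provider yields only ciphertext, because the decryption key was never in the provider's possession.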
Public and private organizations fund their own digital dependence through their buying decisions. It's not inevitable; it's a choice. Most national legal frameworks allow preferences for resilience and security in tech tenders. They're underused for lack of a clear doctrine. We must produce that doctrine and legally protect those who apply it.

It's decision time: Waiting is now part of the risk

Three warnings in twenty months: the Paris Olympics rail sabotage in July 2024, the hyperscaler outage cascade in fall 2025, the Gulf data centers in spring 2026. Each case showed the same pattern: a single point of failure, and organizations caught without real backup plans.

Every euro, dollar, or dirham invested in a cloud whose real location and physical resilience you don't control is an unvalued risk on your balance sheet. Your regulators see it. Your insurers are starting to price it. Your boards will soon ask.

Leaders waiting for a fourth signal to embed digital resilience in strategy expose themselves to regulatory, fiduciary, and human liability that shareholders, regulators, or teams will not forgive. The skills exist. The solutions exist. What's missing? I hesitate to say "the decision" because it's too simple. But that's exactly it: the decision.