Something shifted this week. Not gradually. Not subtly. In the space of five days, the entire AI industry showed its hand — and the cards were not about who has the best model. They were about who controls the infrastructure, the regulation, and the trust.
If you have been following AI through LinkedIn hot takes and newspaper headlines, you might think the story is still about GPT versus Claude versus Gemini. Which model writes better code. Which one is “smarter.” But this week made one thing undeniably clear: the model race is over. The power race has begun.
Here is what happened, what it means, and why businesses that are not paying attention will be left behind.
The Week That Changed Everything
Let me walk you through what happened between 6 and 10 April 2026. Each event on its own would have been headline news. Together, they paint a picture that is impossible to ignore.
OpenAI raises $122 billion
The largest private funding round in history. Amazon put in $50 billion, Nvidia $30 billion, SoftBank $30 billion. This valued OpenAI at $852 billion, more than the GDP of most countries. This is not a technology company raising money. This is an infrastructure play on the scale of the transcontinental railroad.
OpenAI publishes “Industrial Policy for the Intelligence Age”
Sam Altman co-authored a paper proposing robot taxes, a four-day workweek, and a public wealth fund. His exact words: “AI superintelligence is so close … America needs a new social contract.” When the CEO of a company valued at $852 billion starts talking about social contracts, pay attention. This is not philanthropy. This is positioning.
Anthropic reveals Project Glasswing (Mythos)
Anthropic’s most capable model — codenamed Mythos — discovered thousands of zero-day vulnerabilities across major software systems, including a 17-year-old remote code execution flaw in FreeBSD. Then they locked it away. Access restricted to roughly 40 vetted partners. Cybersecurity stocks crashed 5 to 11 percent in a single day.
FTC mandates AI model portability
The US Federal Trade Commission ruled that AI models must be portable across cloud platforms. You fine-tune a model on AWS? You should be able to move it to Azure or Google Cloud. This is the first major regulatory move treating AI models like data — owned by the user, not the platform.
40+ researchers publish AI opacity warning
Over 40 researchers from OpenAI, Google DeepMind, Anthropic, and Meta signed a joint statement: “There is no guarantee that the current degree of visibility will persist.” Signatories included Ilya Sutskever and Geoffrey Hinton. The people who built these systems are warning that we may lose the ability to understand how they reason.
Why This Is Not Just News
Each of these events, taken individually, is interesting. Taken together, they reveal a structural shift in who holds power in the AI ecosystem.
Let me break down the pattern.
1. Money is not chasing models. It is chasing infrastructure.
OpenAI did not raise $122 billion to build a better chatbot. That money is going into data centres, custom silicon, energy contracts, and distribution deals. The same is true across Big Tech: $700 billion in AI infrastructure spend in 2026 alone. Microsoft, Google, Amazon, and Meta are all racing to build the physical layer that AI runs on.
The implication? Models will become commoditised. They already are. The real moat is the infrastructure underneath them. If you control the compute, you control who gets to build, train, and deploy AI.
2. The most powerful AI is being locked away, not released.
Two years ago, the narrative was about open-source AI and democratised access. This week, Anthropic showed us the opposite: a model so capable it found thousands of exploitable vulnerabilities, and they chose to restrict it to a handful of partners.
This is a new paradigm. The most capable AI systems will not be publicly available. They will be gated, licensed, and controlled. The question of who gets access — and on what terms — is now a governance question, not a technical one.
3. Regulation is catching up — but in unexpected ways.
The FTC did not ban AI or impose heavy restrictions. Instead, it targeted vendor lock-in — mandating model portability. This is smart regulation: it does not slow down AI development, but it prevents any single company from owning the ecosystem.
Meanwhile, the UK's FCA has taken a principles-based approach, using existing frameworks like Consumer Duty and SM&CR to hold firms accountable for AI decisions. No new rules, but significantly higher expectations.
“There is no guarantee that the current degree of visibility will persist.” — Joint statement by 40+ AI researchers, including Ilya Sutskever and Geoffrey Hinton
4. We may lose the ability to understand these systems.
This is the one that should keep you up at night. More than forty of the world’s leading AI researchers, the people who built these models, are publicly warning that interpretability is slipping away. As models become more capable, they also become more opaque. We can see what they do, but we are losing sight of how and why they do it.
For businesses deploying AI in regulated environments — financial services, healthcare, legal — this is an existential governance problem. If you cannot explain why your AI made a decision, you cannot comply with regulatory requirements for accountability and oversight.
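What does an explainable, auditable AI decision look like in practice? Here is a minimal sketch, assuming a simple append-only log; the field names and the log_decision helper are illustrative assumptions, not a schema any regulator has prescribed. The point is that the model version, the inputs, the stated rationale, and the accountable human are captured at the moment the decision is made, not reconstructed after a regulator asks.

```python
# Minimal sketch of an AI decision audit trail (illustrative field names,
# not a prescribed regulatory schema). Each AI-assisted decision is appended
# to a JSONL log so it can be reviewed and explained later.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    model_id: str          # which model produced the output
    model_version: str     # exact version, so the decision can be traced
    input_summary: str     # what the model was asked, in plain language
    input_hash: str        # fingerprint of the full prompt or payload
    output: str            # what the model returned
    rationale: str         # the explanation recorded at decision time
    human_reviewer: str    # who signed off (accountability)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: AIDecisionRecord, path: str = "ai_audit.jsonl") -> None:
    """Append one decision record to an append-only audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example: record a credit-limit recommendation made with AI assistance.
payload = "customer_id=123; requested_limit=5000"
log_decision(AIDecisionRecord(
    model_id="internal-risk-assistant",
    model_version="2026-04-01",
    input_summary="Credit limit increase request",
    input_hash=hashlib.sha256(payload.encode()).hexdigest(),
    output="Recommend approval at 4,000",
    rationale="Income verified; utilisation below 30%; no missed payments.",
    human_reviewer="j.smith",
))
```

None of this makes a model interpretable, but it is the difference between answering an auditor from a log and answering from memory.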
What This Means for Your Business
If you are running a business that uses AI — or plans to — the question is no longer “which model should we use?” That question is already becoming irrelevant as models commoditise and converge in capability.
The real questions are:
- Governance: Do you have clear accountability for AI decisions in your organisation? Can you explain, audit, and defend those decisions if a regulator asks?
- Vendor strategy: Are you locked into a single AI vendor or cloud platform? The FTC portability ruling is a signal: build for flexibility now, or be forced into it later. One practical pattern is sketched after this list.
- Access risk: If the most capable models become gated, does your business have the relationships, budget, and compliance posture to maintain access?
- Infrastructure: Are you renting compute on someone else’s terms? Or building the capability to run AI workloads on your own terms?
- Interpretability: Can your AI systems explain their reasoning? If not, how long before a regulator, a client, or a court asks a question you cannot answer?
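On the vendor-strategy point, the simplest hedge against lock-in is a thin abstraction between your application code and any one provider's SDK. The sketch below is illustrative only: the class names, the get_model helper, and the "vendor_a" and "vendor_b" labels are assumptions for the example, and the adapters are placeholders rather than real SDK calls.

```python
# Minimal sketch of a provider-agnostic model interface (illustrative names).
# Application code depends on TextModel, not on any single vendor's SDK,
# so the provider can be swapped by configuration rather than by rewriting
# every call site.
from typing import Protocol


class TextModel(Protocol):
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""
        ...


class VendorAModel:
    """Placeholder adapter; a real one would wrap vendor A's client library."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wire up vendor A's client here")


class VendorBModel:
    """Placeholder adapter; a real one would wrap vendor B's client library."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wire up vendor B's client here")


def get_model(provider: str) -> TextModel:
    """Select the provider from configuration, not from hard-coded call sites."""
    registry: dict[str, TextModel] = {
        "vendor_a": VendorAModel(),
        "vendor_b": VendorBModel(),
    }
    return registry[provider]


# Application code never names a specific vendor:
#   model = get_model(provider=settings["llm_provider"])
#   answer = model.complete("Summarise this contract clause ...")
```

Switching providers then becomes a configuration change plus one new adapter, rather than a rewrite of every call site.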
The Bottom Line
The winners of the AI era will not be the companies with the best models. Models are converging. Performance gaps are shrinking. The winners will be the companies that control four things: compute, distribution, regulation, and trust.
Most businesses will not build their own data centres or lobby governments. But you can control how you deploy AI within your organisation. You can build governance frameworks that give you credibility with regulators. You can diversify your vendor relationships. You can invest in understanding your own AI systems before someone demands an explanation you do not have.
The power shift has happened. The question is whether you are positioned on the right side of it.