Google launched Gemini 3 this month, triggering what is reportedly being described internally at OpenAI as a "code red". OpenAI had released GPT-5 in August; Google's response came within months. The two companies are now in an escalation cycle that shows no signs of slowing.
This article was originally published on 29 November 2025 on The Brief at sebsauerborn.com.
This is not primarily a technology story. It is a story about what happens to the economy, to work, and to personal freedom when the most powerful tool in human history is controlled by two or three companies, all headquartered in the same country, and subject to the same regulatory and political pressures.
What AI Is Actually Doing to Knowledge Work
Let me start with something concrete. In my own business, we have integrated AI tools into the research and drafting process for client advisory work. The result is not that we need fewer people. It is that the people we have can do more, at higher quality, in less time.
That is not a universal experience. In certain categories of knowledge work — legal document review, basic financial analysis, junior-level research — AI is genuinely replacing human labour. These roles are not disappearing overnight, but the trajectory is clear.
For senior advisory work, the kind that requires genuine judgment, context, relationships, and the ability to navigate ambiguity, AI is an amplifier, not a replacement. At least for now.
The Concentration Problem
Here is what I find most interesting, and most concerning, about the Gemini 3 and GPT-5 race.
These systems require extraordinary computational resources to train and run. The capital expenditure for a frontier AI model now runs into the hundreds of millions of dollars. This cannot be replicated in a garage, and it is not something that a European start-up, or even a European government, can match at scale in the near term.
The result is that the most consequential technology of the next decade is being developed and controlled by a tiny number of American companies, plus a Chinese competitor in DeepSeek, with everyone else as users rather than developers.
For Europe, this is a strategic problem of the first order. European companies and governments will use these tools, pay for these tools, and be dependent on these tools, without having any meaningful influence over how they are developed, what they are trained on, what values they embed, or what restrictions are placed on their use.
The EU's AI Act, which came into force this year, attempts to regulate the use of AI systems in Europe. It is a serious piece of legislation with some genuinely useful provisions around high-risk applications. But it does not address the fundamental dependency.
Europe is regulating the output while having no influence over the input. That is a profound strategic vulnerability.
The Freedom Dimension
The aspect of this that I think about most, from the perspective of individual freedom rather than corporate competition, is the question of whose values are baked into these systems.
AI language models are trained on data. The choice of what data to include, how to weight it, and what to filter out reflects choices made by the people building the systems. Those choices are not neutral. They reflect the cultural, political, and institutional context of Silicon Valley in 2024 and 2025.
There have been documented cases of AI systems refusing to engage with certain political positions, certain historical questions, and certain topics their developers have judged to be sensitive. The criteria for these judgments are not transparent, and they are not democratically determined. They are made by engineers and policy teams at private companies in California.
The embedding of values into technology at this scale, and the near-universal adoption of that technology, is a form of cultural influence that makes the reach of any previous media institution look modest.
For those of us who care about genuine intellectual freedom, about the ability to think through uncomfortable questions without an algorithm deciding what is and is not an acceptable conclusion, this is worth taking seriously.
What Smart People Are Doing
The entrepreneurs and investors I respect most are engaging with AI strategically.
They are identifying which parts of their business can be genuinely enhanced by AI tools, and building those capabilities in-house rather than depending entirely on external platforms. They are developing their own proprietary data assets, because the competitive advantage in an AI-enabled world lies not in access to the general models — which everyone will have — but in the quality of the specific data and context those models are applied to.
And some of them are making a different bet entirely: that in a world flooded with AI-generated content, human judgment, human relationships, and human accountability will become more valuable, not less. That bet has not yet been resolved. But I find it more credible than I did two years ago.
The race has no finish line. The question is not whether to engage, but how.
Work with Sebastian
If you are thinking about how to structure your business, your tax position, and your geographic base in the context of a rapidly changing technology and regulatory environment, let's talk. Book a consultation.
