Why we shouldn't chase Silicon Valley

There's a certain madness to the current AI arms race. The United States has spent the last decade building data centres the size of small towns, hoovering up talent from every corner of the globe, and systematically dismantling any regulation that might slow down the sprint toward artificial general intelligence. The results are, in a way, impressive. LLMs can write poetry, debug code, and engage in philosophical debates that would make a reasonable person slightly concerned about the future of their profession. And now, predictably, European policymakers are asking: how do we catch up? Truth is: we don't. We do something different.
Sam Storm

Agile Coach

The wrong race to run

Let's look at the numbers. The largest American AI companies have raised hundreds of billions of dollars. They operate on a scale that makes European investment look like pocket change. Microsoft alone has committed over $13 billion to OpenAI. Amazon, Google, and Meta are trying to match this, each building out infrastructure the size of cities.

Europe could try to match this. We could relax our data protection laws, offer tax incentives that make Ireland look like a socialist paradise, and hope that throwing enough money at the problem will produce a European champion. This strategy has a name: it's called losing.

The game of building the biggest, most generalised frontier model is already decided. The infrastructure exists elsewhere. The talent pipelines flow toward Palo Alto and Seattle. The regulatory environment in the US – particularly after recent policy shifts – is designed to let these companies move fast and let others worry about the consequences. Europe entering this race now is like deciding to compete in Formula 1 by building your first go-kart.

What if regulations aren't a handicap, but a feature?

The GDPR is often portrayed as an obstacle to European AI development. The argument goes that strict data protection rules make it harder to train large models, that European companies are hamstrung by compliance requirements their American competitors can ignore.

Yes, but…

The value proposition of AI is shifting. The novelty of a chatbot that can write a haiku is wearing off. What businesses actually need are AI systems they can trust – systems that handle sensitive data responsibly, that comply with regulations, that don't expose companies to liability.

Europe has spent years developing a framework for responsible data handling. The GDPR, for all its imperfections, establishes clear rules about consent and individual rights. These aren't obstacles to AI development – they're the foundation for AI systems that enterprises can actually deploy with confidence.

Demand that AI development in Europe respect these frameworks. Make GDPR compliance a feature, not a burden. Position European AI as the trustworthy choice for organisations that can't afford to treat data protection as an afterthought. There's a significant market of companies – financial services, healthcare, government – that will pay a premium for AI they can actually use without their legal department having a collective breakdown.

Small is beautiful (and practical)

Here’s what often gets lost in the breathless coverage of frontier models: most businesses don’t need an AI that can discuss Kierkegaard while simultaneously writing SQL queries and composing a crossword puzzle. They need AI that can be relied on to do a few things exceptionally well.

The real value of large language models isn’t their ability to mimic human conversation – that’s a party trick, albeit an impressive one. The value lies in their capacity to rapidly synthesise information based on their training. And here’s the crucial insight: that synthesis becomes more valuable, not less, when the model is trained on a focused corpus of domain-specific knowledge.

Consider a language model trained specifically on European Union law. Every directive, every regulation, every court ruling from the ECJ, every national implementation across 27 member states. Such a model wouldn’t need to know how to write a sonnet or explain quantum mechanics. It would need to understand the relationship between primary and secondary legislation, the hierarchy of EU legal sources, the principles of direct effect and supremacy.

A model like this could be transformative for businesses operating in the EU. Currently, navigating European regulatory compliance requires expensive legal counsel, extensive research, and considerable uncertainty. An AI advisory system trained on the complete corpus of EU law could provide instant, reliable guidance on regulatory questions – not as a replacement for lawyers, but as a first-line resource that makes legal expertise more accessible and efficient.

The same principle applies across domains. A model trained exclusively on programming could become an extraordinarily competent coding assistant – not because it can chat about the weather, but because it has internalised the patterns and practices of software development without the overhead of general knowledge. A model focused on medical literature could support clinical decision-making. A model built around financial regulations could help banks navigate compliance.

The infrastructure advantage of thinking small

Specialised models aren’t just more useful for their intended purpose – they’re dramatically more practical to deploy.

Running a frontier model requires industrial-scale computing. The inference costs alone can make deployment prohibitive for all but the largest organisations. The data security implications of sending sensitive queries to a third-party API are a compliance nightmare waiting to happen.

Smaller, focused models can run locally. They can be deployed on-premises, within an organisation's existing security perimeter. They don't require massive GPU clusters, and they don't demand a fibre connection to a data centre in another jurisdiction. They can operate in environments where connectivity is limited or where data sovereignty is non-negotiable.

This isn’t speculation. France’s Mistral has already demonstrated that European companies can build commercially viable language models without matching American scale. Their models compete effectively in specific use cases while remaining small enough for practical deployment. It’s a proof of concept for a European AI strategy.


Beyond Europe: a model for the rest of the world

There's a broader point here that extends well beyond European competitiveness.

The current model of AI development is extraordinarily resource-intensive. Building frontier models requires access to vast computing infrastructure, reliable high-bandwidth connectivity, and the capital to sustain enormous operational costs. This effectively limits meaningful AI development to a handful of wealthy nations and well-funded corporations.

Developing countries face a stark choice under this paradigm: become dependent on American AI infrastructure, or do without. Neither option is particularly appealing.

But smaller, specialised models change the equation entirely. A model trained on agricultural practices for specific climates could run on modest hardware in rural communities. Medical advisory systems could operate in clinics without reliable internet access. Educational tools could function on mobile networks in regions where fixed broadband remains a distant prospect.

If Europe develops the techniques and frameworks for building effective specialised models, it creates possibilities that extend far beyond our own borders. It's a vision of AI that doesn't require every country to build a hyperscale data centre to participate.

The path forward

None of this requires abandoning ambition. It requires redirecting it.

Europe should invest in AI research – but research focused on efficiency, specialisation, and deployment rather than raw scale. We should support companies building domain-specific models that solve real problems. We should make our regulatory frameworks a selling point rather than an apology.

The AI future doesn't have to look like a handful of American giants controlling access to general-purpose models that can do everything adequately. It could look like a diverse ecosystem of specialised systems, each excellent at its intended purpose, deployed close to where they're needed, built on foundations of trust and regulatory compliance. That's a race Europe can win. More importantly, it's a race worth running.

Curious about how AI can create real value? 

We help organisations build and use AI that is specialised, trustworthy and aligned with European regulatory frameworks – with a focus on tangible business value rather than hype.
