Small is beautiful (and practical)
Here’s what often gets lost in the breathless coverage of frontier models: most businesses don’t need an AI that can discuss Kierkegaard while simultaneously writing SQL queries and composing a crossword puzzle. They need AI that can be relied on to do a few things exceptionally well.
The real value of large language models isn’t their ability to mimic human conversation – that’s a party trick, albeit an impressive one. The value lies in their capacity to rapidly synthesise information based on their training. And here’s the crucial insight: that synthesis becomes more valuable, not less, when the model is trained on a focused corpus of domain-specific knowledge.
Consider a language model trained specifically on European Union law. Every directive, every regulation, every court ruling from the ECJ, every national implementation across 27 member states. Such a model wouldn’t need to know how to write a sonnet or explain quantum mechanics. It would need to understand the relationship between primary and secondary legislation, the hierarchy of EU legal sources, the principles of direct effect and supremacy.
A model like this could be transformative for businesses operating in the EU. Currently, navigating European regulatory compliance requires expensive legal counsel, extensive research, and considerable uncertainty. An AI advisory system trained on the complete corpus of EU law could provide instant, reliable guidance on regulatory questions – not as a replacement for lawyers, but as a first-line resource that makes legal expertise more accessible and efficient.
The same principle applies across domains. A model trained exclusively on programming could become an extraordinarily competent coding assistant – not because it can chat about the weather, but because it has internalised the patterns and practices of software development without the overhead of general knowledge. A model focused on medical literature could support clinical decision-making. A model built around financial regulations could help banks navigate compliance.
The infrastructure advantage of thinking small
Specialised models aren’t just more useful for their intended purpose – they’re dramatically more practical to deploy.
Running a frontier model requires industrial-scale computing; the inference costs alone can make deployment prohibitive for all but the largest organisations. And if an organisation sidesteps that cost by sending sensitive queries to a third-party API instead, the data security implications are a compliance nightmare waiting to happen.
Smaller, focused models can run locally. They can be deployed on-premises, within an organisation’s existing security perimeter. They don’t require massive GPU clusters. They don’t demand a fibre connection to a data centre in another jurisdiction. They can operate in environments where connectivity is limited or where data sovereignty is non-negotiable.
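To make that concrete, here is a minimal sketch of what local inference can look like, assuming the Hugging Face transformers library and a compact open-weight model running on an organisation’s own hardware. The model name is purely illustrative – in practice you would substitute a checkpoint fine-tuned on your domain corpus:

```python
# Minimal local-inference sketch: no third-party API, no data leaving the machine.
# Assumes the Hugging Face transformers library; the model name is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder – swap in a domain-tuned model

# Load the tokenizer and model weights onto local hardware (GPU if available).
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# A domain query that never crosses the organisation's security perimeter.
prompt = "Summarise the doctrine of direct effect in EU law."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point is not the specific library or model, but the deployment shape: a single server, weights stored on-premises, and queries that stay inside the perimeter.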
This isn’t speculation. France’s Mistral has already demonstrated that European companies can build commercially viable language models without matching American scale. Their models compete effectively in specific use cases while remaining small enough for practical deployment. It’s a proof of concept for a European AI strategy.