
To truly innovate AI, Britain must first innovate AI’s foundations
Hiren Shah, Delivery Director at Scrumconnect, makes the case for a bold AI infrastructure built on smarter compute, not just bigger models and tech.
If the UK wants to achieve its ambition of becoming a global AI leader, the foundation of that vision must rest on robust, accessible, and scalable infrastructure. Compute power, quality data, and skilled talent are the core components of this foundation.
But the rising demand for AI compute, paired with its market concentration, poses a strategic challenge to innovation, equity, and sovereignty. Addressing this requires end-to-end thinking: everything from hardware improvements and algorithmic efficiency, through model design, right down to exploring new paradigms.
It means that it’s time to innovate the innovation.
The surge in AI adoption, particularly in large-scale foundation models, has pushed the demand for high-performance compute to unprecedented levels. Currently, much of this demand is met by a small group of organisations that develop the largest models, while also owning or controlling the majority of the underlying compute infrastructure.
This concentration risks stifling innovation and reinforcing global inequalities.
With the UK keen to lead the global path towards responsible AI development, it is time to answer some searching questions honestly. Do we accept the status quo and settle for tactical improvements at the margins? Or do we pursue strategies that democratise compute access and reduce dependency on a few global giants?
One way forward is to reduce the cost of compute per operation. Hardware advances have traditionally played a key role here, driven by the fundamentals of Moore’s Law. But that accepted pace of progress is now slowing.
We’re reaching the physical limits of silicon: chips are being built at near-atomic scale, and further gains in speed and efficiency are becoming harder and harder to achieve. At the same time, nationalistic politics and geopolitical barriers are disrupting the global supply and distribution of chips.
Still, innovation is far from over.
The UK's investments in semiconductor research and innovation through new materials and chip architectures may offer breakthroughs, particularly if they align with global partnerships and national AI priorities.
Finding this next generation of hardware, though, addresses just half the challenge.
Algorithmic efficiency, or the ability to do more with less, has quietly outpaced hardware in recent years. OpenAI showed that reaching the performance of a landmark 2012 model in 2020 required 44 times less compute, simply because the algorithms had become smarter. Over the same period, hardware improvements alone would have delivered a gain of just 11 times. Algorithmic innovation is therefore not just a technical advantage, it’s a strategic necessity. Again, UK tech organisations can play a leading role in this space, pushing the boundaries of efficient model design, pruning techniques, quantisation, and new ways to train.
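To make that concrete, here is a minimal sketch of two of those efficiency techniques, magnitude pruning and post-training quantisation, using PyTorch on a toy model. The layer sizes, pruning ratio, and model itself are illustrative assumptions rather than a recipe from any particular project.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy two-layer network standing in for a much larger model.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Pruning: zero out the 30% of weights with the smallest magnitude in each
# Linear layer, shrinking the effective parameter count.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Quantisation: convert Linear layers to int8 for cheaper CPU inference,
# trading a small amount of accuracy for memory and latency savings.
quantised = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantised(x).shape)  # torch.Size([1, 10])
```

Techniques like these are exactly where smaller research teams can compete, because the gains come from ingenuity rather than from owning more hardware.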
The reality is that the cost of compute can’t be driven down by hardware advances or software innovation alone. Progress on both together is what expands future prospects and possibilities. Put simply: hardware, meet software.
A common misconception in the AI space race is that “bigger is better”. Think more parameters, more data, more compute. But in reality, the journey towards ever-larger models probably isn’t sustainable or even necessary. The development of small language models (SLMs), trained on curated data or via imitation learning, offers a promising alternative.
Stanford’s Alpaca project showed that small models fine-tuned on outputs from larger ones can achieve surprisingly strong performance. However, later studies caution against overestimating this approach. While imitation models can replicate the style of a larger model, they often fail to match its substantive capabilities. This suggests that while model size, training data quality, and architecture all matter, simply copying outputs isn’t a silver bullet.
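By way of illustration, the sketch below shows the basic shape of that imitation recipe: fine-tuning a small open model on instruction-and-response pairs generated by a larger “teacher”. It assumes the Hugging Face transformers library, uses distilgpt2 purely as a stand-in for any small model, and shows a single toy example where a real pipeline like Alpaca’s would use tens of thousands of pairs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "distilgpt2" is a stand-in here for any small open model.
model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# In a real imitation pipeline this would be tens of thousands of
# instruction/response pairs generated by a larger "teacher" model.
teacher_data = [
    {
        "instruction": "Summarise the benefits of small language models.",
        "response": "They are cheaper to train and can run on modest hardware.",
    },
]

optimiser = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()

for example in teacher_data:
    prompt = (
        "### Instruction:\n" + example["instruction"]
        + "\n### Response:\n" + example["response"]
    )
    batch = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    # Standard causal language-modelling loss on the teacher-generated text.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimiser.step()
    optimiser.zero_grad()
```

The point is not the specific model but the economics: the expensive part of the pipeline is generating the training pairs, not the fine-tuning itself, which can run on modest hardware.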
Still, SLMs can play a critical role in public sector applications where interpretability, energy efficiency, and deployment constraints are paramount. They’re definitely an important player in the overall mix.
The UK government’s push for mission-driven AI could benefit from targeted investment in these models, selectively designing them for specific domains like healthcare, education, or urban planning.
Beyond today’s compute landscape lies the possibility of paradigm shifts. Technologies like neuromorphic and quantum computing could unlock entirely new compute capabilities.
While these approaches offer long-term promise, they are still very much in their early stages. And if the breakthroughs come only from the usual global giants, their market dominance will become even more entrenched.
Likewise, memory technology is both a potential bottleneck and an opportunity. Logic units have improved rapidly, but memory bandwidth and capacity have lagged behind. A breakthrough in this area could significantly expand what’s possible on existing chips.
The UK’s research base, supported by its world-class universities, talent, and AI-focused industrial strategy, is well-placed to contribute to these emerging fields, particularly if supported by agile regulation and sustained public R&D funding.
One last, but largely underexplored, route to scaling compute without centralising it is federated learning. The approach trains models locally on user devices, sharing only encrypted updates. Not only does it reduce compute centralisation, but it addresses privacy concerns too.
While technically promising, federated learning still faces major hurdles: heterogeneity in device data, high memory requirements, and the need for secure aggregation.
But its potential to lower the compute barrier, especially in high-privacy or strongly regulated industries, makes it a worthwhile focus for targeted innovation challenges and proofs of concept.
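For readers who want to see the shape of the idea, below is a minimal sketch of federated averaging (FedAvg), the canonical federated learning algorithm, written in PyTorch. The toy linear model and randomly generated “client” data are placeholders, and the secure aggregation and encryption of updates that a production system would need are deliberately omitted.

```python
import copy
import torch
import torch.nn as nn

def local_train(global_model, data, targets, epochs=1, lr=0.01):
    """Train a copy of the global model on one client's private data."""
    local_model = copy.deepcopy(global_model)
    optimiser = torch.optim.SGD(local_model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(local_model(data), targets)
        loss.backward()
        optimiser.step()
    return local_model.state_dict()  # only the update leaves the device

def federated_average(state_dicts):
    """Average the clients' parameters to form the new global model."""
    averaged = copy.deepcopy(state_dicts[0])
    for key in averaged:
        averaged[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return averaged

# Toy setup: three "devices", each holding its own private data.
global_model = nn.Linear(4, 1)
clients = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]

for round_number in range(5):
    updates = [local_train(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(federated_average(updates))
```

The raw data never leaves each client; only model parameters are exchanged and averaged, which is what makes the approach attractive for privacy-sensitive sectors.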
It presents an exciting and potentially lucrative opportunity for UK developers to help Britain achieve its ambition of becoming a global AI leader.
If the UK is serious about leading in AI, it needs to treat compute as infrastructure, not a luxury. That means funding local capability, reducing dependency on a handful of global players, and backing research that makes models smaller, faster, and more efficient. It means thinking beyond hardware to invest in smarter algorithms, federated learning, and alternative approaches that match the UK's strengths.
Leadership won’t come from replicating what others have already built. It will come from making AI innovation cheaper, more distributed, and more useful to more people. And that’s the edge the UK should be aiming for.