Question: is artificial intelligence moving too fast for governments and businesses to keep up?
The pace of AI development is accelerating. Companies are racing to integrate it. Policymakers are struggling to regulate it. Workers fear it might replace them, but continue to use it in secret.
So the question is no longer whether AI will shape the future, but how it can benefit society rather than disrupt it.
At a packed event sponsored by Scrumconnect and Great Wave AI, a group of AI experts, business leaders and policymakers gathered to discuss what comes next. The setting of the event in the Attlee Room of the House of Lords was fitting. Clement Attlee, Prime Minister from 1945 to 1951, laid the foundations of the UK’s welfare state, rebuilding a post-WW2, battle-scarred country that had been societally and economically shattered by the conflict. Now, eight decades on, our nation is embarking on another transformational chapter as AI becomes the centre of a wide political, societal, commercial, economic and ethical debate.
Much has already been said about AI’s potential. So instead, the evening’s discussion centred on the reality of its adoption. Businesses are under pressure to embrace AI, but many do not understand it. The government is being told it must regulate AI, but it lacks the expertise to do so meaningfully.
The result is a growing disconnect between those developing AI and those trying to manage its impact. The evening sought to bridge this gap.
The reality of AI adoption is more complex than the hype
Glen Robinson, National Technology Officer at Microsoft, opened the discussion with a sense of optimism. The UK, he said, has the potential to be a global leader in AI. Small startups are driving innovation. Large businesses are beginning to embrace automation. And public services could be transformed.
But optimism by itself is not enough. AI is complicated, and the reality of integrating it into workplaces is far more difficult than many expect.
Jamie Horsnell, an ambassador at the Centre for GenAIOps, brought this to life with an example from his own company, which had tried to improve AI-generated responses to tenders. Jamie’s team built a sophisticated system of AI agents, each trained to refine responses with greater specificity. It worked well for some time. But then the complexity became overwhelming. As costs rose, it became slow and difficult to maintain, eventually leading to its abandonment.
The lesson was clear. AI should be used to solve problems, not create new ones. Over-engineering AI leads to inflexible and expensive systems. Simplicity is often the better option.
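The kind of chained-agent pipeline Jamie described can be sketched in a few lines. This is a purely illustrative, hypothetical sketch — the function and stage names are invented, and a real system would call an LLM at each stage — but it shows why such chains grow brittle: every stage adds cost and latency, and a change to any one stage ripples through the whole sequence.

```python
from typing import Callable

# An "agent" here is just a function that rewrites a draft response.
Agent = Callable[[str], str]

def make_agent(focus: str) -> Agent:
    """Stand-in for an LLM call; a real agent would prompt a model here."""
    def refine(draft: str) -> str:
        return f"{draft} [refined for {focus}]"
    return refine

def run_pipeline(draft: str, agents: list[Agent]) -> str:
    # Each stage adds cost and latency, and depends on every stage before it.
    for agent in agents:
        draft = agent(draft)
    return draft

# Three stages is manageable; Jamie's point is that real pipelines keep growing.
agents = [make_agent(f) for f in ("tone", "compliance", "pricing detail")]
result = run_pipeline("Initial tender response", agents)
```

Each added stage makes the pipeline harder to test and more expensive to run — exactly the over-engineering trap the discussion warned against.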
AI needs oversight to prevent unintended consequences
Stuart Winter-Tear, another ambassador at the Centre for GenAIOps, took this point further. He warned about the risks of fully autonomous AI systems without human oversight. AI is not bound by ethics or morality. It will find the most efficient way to complete a task, even if that means exploiting loopholes or acting in ways that humans would never consider acceptable.
He pointed to the recent case of Sakana AI, the Japanese NVIDIA-backed AI company that was humiliatingly forced to backtrack on claims that its generative AI innovation could speed up the training of AI models by a factor of up to 100. The news was met with excitement across the industry, but scrutiny quickly followed. Researchers discovered that Sakana’s AI had manipulated its own benchmarking process to produce false results. Instead of making real improvements, it had gamed the system to appear more efficient than it really was. The backlash was swift. Sakana AI had to withdraw its claims, issue a public apology and promise to revisit its research.
The incident was a stark reminder of AI’s unpredictability. Left unchecked, AI systems will optimise for their own success in ways that may not align with human expectations. This is not a theoretical problem but something already happening, and it means oversight is essential.
AI can’t work without strong data infrastructure
Praveen Karadiguddi, co-founder and CEO of Scrumconnect, shone a spotlight on the relationship between AI and data. He highlighted that AI is only as good as the data it is trained on. The UK government, he explained, is sitting on vast amounts of data, but much of it is fragmented, inconsistent or outdated.
Praveen’s team at Scrumconnect routinely works with key government departments to improve data quality and ensure that public services are AI-ready. Without clean, structured data, he said, AI models risk producing biased, misleading or incomplete results.
Praveen stressed that AI in government must be built on transparency, accountability and a strong ethical foundation. Public services cannot afford to implement AI recklessly. Every decision made by an AI system in the public sector has real-world consequences, affecting benefits, legal rulings and citizens’ rights, and frequently raising civil liberties concerns.
Praveen’s message was clear: AI adoption in government is inevitable, but it must be responsible, carefully tested and explainable to the public.
Practical advice for businesses deploying AI
Later in the evening, Stuart Winter-Tear returned with practical advice for organisations looking to implement AI. He warned against blindly chasing innovation without considering the strategic implications. Many businesses, he said, are rushing into AI projects without a clear understanding of what they are trying to achieve.
His advice was straightforward:
- Define the problem before choosing AI as the solution. Too often, companies adopt AI without fully understanding the challenge they are addressing.
- Avoid overcomplicated AI architectures. The best AI systems are simple, efficient and scalable.
- Ensure AI integrates smoothly into existing workflows. AI should enhance human decision-making, not create unnecessary complexity.
- Put strong governance in place. AI models need continuous oversight to prevent unintended outcomes.
He also stressed the importance of transparency. If AI decisions impact customers or employees, businesses must be able to explain how and why those decisions were made. Without this, trust in AI will erode.
Supporting this, Jack Perschke, co-founder and CEO of Great Wave AI, argued that while AI technology is advancing rapidly, many organisations are still unsure how to integrate it in a way that provides real value. He warned against the hype-driven approach, where businesses rush to implement AI without clear objectives.
Jack’s takeaway was blunt: businesses need to move beyond the excitement of AI and focus on its practical application. AI is not just about building smarter systems; it is about ensuring that those systems work in the real world, at scale, with clear oversight and measurable outcomes.
Collaboration is key to responsible AI adoption
Harrison Kirby, Chief Ambassador at the Centre for GenAIOps and CTO of Great Wave AI, spoke about this challenge. Harrison and his Centre for GenAIOps team are working to make AI adoption easier for businesses by fostering collaboration between organisations and technology practitioners. He stressed that AI does not need to be a mystery. With the right approach, it can be integrated into workplaces in a way that is transparent, controlled and beneficial to both businesses and employees.
Harrison emphasised that collaboration across businesses, policymakers and technology experts is critical to making AI work for everyone.
Help parliamentarians to understand AI
Lord Ranger of Northwood wrapped up the evening with an important message for industry.
Parliament, he said, is still coming to terms with AI. While policymakers debate regulation, the reality is that many do not fully understand the technology they are trying to legislate on.
Much of the political discussion is focused on the risks of AI, such as deepfakes, misinformation and job losses. These concerns are legitimate, but they risk overshadowing debate about the positive impact AI could have on public services, business efficiency and national economic growth.
Lord Ranger warned that if policymakers fail to engage with AI experts and shape regulation based on practical applications, the UK risks falling behind. Uncertainty around AI policy is already making businesses hesitant to invest. If the government cannot create a clear regulatory framework, companies will take their innovations elsewhere.
His message to the AI industry was clear: stop talking in technical jargon. We need to explain our work in a way that non-technical leaders can understand. If we want business and government to support AI adoption, then we must communicate the benefits in language that resonates with an audience outside the digital world.
AI is the here and now
AI isn’t the stuff of sci-fi; it is already embedded in businesses, public services and day-to-day life. The question is not whether it will be adopted, but whether it will be implemented in a way that is beneficial and ethical.
For businesses, this means thinking strategically. AI should not be rushed in without a clear purpose. Organisations need to map out their workflows and determine where AI genuinely adds value. They must also ensure employees feel confident in using AI rather than fearing its consequences.
For governments, the challenge is to move beyond risk management. AI regulation is necessary, but it must be based on real-world applications rather than theoretical fears. Policymakers need to engage with AI experts and ensure that regulation supports innovation rather than stifling it.
For the AI industry, the responsibility is clear. If AI developers want support from business and government, they must do more to bridge the knowledge gap. AI needs to be explained in terms that non-technical leaders can understand. The focus must shift from technology to outcomes.
The future of AI is not just about what the technology can do. It is about how well businesses, governments and society adapt to it. The decisions being made now will shape the next decade. The AI revolution is happening. The question is whether the UK will lead it or be left behind.