Most finance teams are already using AI, but they might not even realise it yet. This was the message at Public Finance Live, where a packed session drew a mix of senior finance leaders and practitioners from across the public sector. The panel was chaired by Ruth Brockbank, Finance Director at CIPFA, and joined by digital specialists Bob Rehill, founder of Cintriq, and Prahlad Koti from Scrumconnect.
The panel’s core point was that regardless of whether finance leaders feel optimistic or sceptical about AI, it is already built into the systems they depend on every day. ERP platforms, procurement tools, dashboards, even spreadsheets all have AI capabilities at their core. AI is now so widespread that even if individual teams are not actively using the functionality embedded in their own tools, the systems and organisations they interact with across the wider ecosystem almost certainly are.
As Prahlad put it, the decision about whether to engage with AI is a ship that has long since sailed. Now the question is: "how do I prepare for it?" The real challenge is whether finance teams are prepared and equipped for what the technology expects of them. If the data flowing through those systems is incomplete, inconsistent or simply wrong, the outcomes will be too. And in finance, where decisions carry both regulatory and reputational consequences, those risks can be make or break.
Data quality problems go far beyond a rogue typo in a spreadsheet or the odd rounding error. Outdated systems, patchy standards, duplicated processes and disconnected platforms all introduce risk. Prahlad was clear that AI will not fix these weaknesses and, worse, it can make them harder to spot. "You get false confidence," he said. "Once something is generated by a machine, people often stop questioning it." That is when biases and inefficiencies start slipping through undetected, hidden by the illusion of precision, in much the same way we rarely question official-looking people wearing high-vis vests.
Bob stressed that this is not just a technical problem. Trust needs to be established too. He argued that AI is only useful when teams understand how it reached its conclusion. "It is not enough to accept the answer. You have to know how the system got there." This means building tools that do not just generate recommendations, but explain them as well. And it means keeping space for human judgement.
Ruth steered the conversation towards people. If AI is changing the shape of finance work, how should teams adapt? Bob reflected on his early career as a newly qualified accountant, where he was trained in a world of spreadsheets and manual checks and balances. "That won’t cut it now," he said. Today’s finance professionals need to understand data structures, automation and the basics of how AI works, skills that were not on the syllabus a decade ago.
Prahlad made the point that while many younger professionals may not yet understand predictive analytics or model design, they are already using AI in practical ways. "Tools like ChatGPT have become a regular part of how they learn and work," he said. That familiarity is a strength, but no one is exempt from upskilling. The technology is evolving much too quickly for that.
The panel urged teams to begin with small, contained pilots rather than sweeping transformations. Trying out AI in areas like contract reviews or internal reporting allows for experimentation in a low-risk space. Running those new processes alongside existing ones builds confidence and gives people space to learn. "It gives people room to learn without the stakes being too high," Prahlad explained.
The future of audit also came under the spotlight. As AI becomes standard in finance platforms, traditional audit techniques will need to shift. "The old way of ticking and tying cells in a spreadsheet does not work with AI," Bob said. Audit teams will need new skills to track how outputs were generated and to spot issues before they are embedded. Those who build that capability early will be in a far stronger position when scrutiny increases.
When asked about resistance, Prahlad said that most teams are comfortable using AI for internal improvements but cautious when it comes to frontline decisions. There is good reason for this: CFOs have legal and fiduciary duties, so if something goes wrong and they’ve signed it off, they’re accountable. Bob described the general feeling as optimistic but cautious, a fair reflection of the mood in the room.
One question from the audience cut to the heart of the issue. If finance teams are already at capacity, how can they make time to experiment with AI? Bob acknowledged that time and resources are real constraints. But he urged leaders to carve out space, even in small amounts. "You do not need a transformation programme, you just need to start," he said. Prahlad agreed, adding that given the pace of AI, "the cost of doing nothing will very quickly dwarf the cost of doing something."
The panel also addressed a question about whether CFOs should be concerned about tools like ChatGPT. While experimentation is important, boundaries matter more, Bob and Prahlad agreed. "Set guardrails," Bob advised. "Do not allow sensitive data to be shared with public tools. But do not block your teams from learning either. You need to give them safe ways of trying things."
The session ended with two clear calls to action. First, get your data in shape. Without it, even the best AI tools will not deliver. Second, do not wait. Start with low-risk pilots, demonstrate value and build from there. "The biggest danger is doing nothing," said Prahlad. "Stand still now and you will be stuck reconciling yesterday’s spreadsheets, delivering underwhelming results, all while other finance functions surge way ahead." Bob added that this is not just about transforming through new tools or clever features. "Job descriptions, recruitment, objectives, they all need to change," he explained. "This is a shift in capability, not just in technology."
AI in finance is not a future decision. It is already reshaping the tools and processes that finance teams rely on. The only choice left is whether leaders are ready to do the quieter, less visible work on data, skills and systems that allows AI to succeed. If they are not, it might not be just the technology that falls short. It could be the organisation they support too.