Why tax & accounting managed service providers can’t afford to ignore AI governance (2026)

Last year, Deloitte Australia refunded part of a $440,000 contract with the Australian government after delivering a report full of AI-generated errors, including fabricated quotes from a federal court judgment and references to academic papers that don't exist. A few months later, KPMG Australia fined a partner who had used AI tools outside firm policy during internal training. Two Big Four firms. The same market. Within months of each other.
If it can happen at that level, it can happen anywhere. And in a regulated professional services environment, the consequences aren't just reputational. Operating licences, client contracts, and regulatory standing are all on the line.
The governance gap most tax & accounting managed service providers have
The honest picture across most managed service providers right now is that AI use is fragmented and largely ungoverned. Individual staff are using whatever tools are available to them, for whatever they think will save time, with no central visibility into what's being used, how, or on which client work. There's no audit trail. There's no human checkpoint built into the process. There's no way for the firm to evidence, if a regulator or client asks, exactly what happened and who was responsible.
That's shadow IT, and it's rife across the profession. The UK Financial Reporting Council warned in 2025 that Big Four firms were already failing to monitor how AI affected the quality of their audits. For mid-tier and smaller managed service providers operating with fewer resources and less oversight, the exposure is at least as significant.
Meanwhile, clients are getting sharper. AI usage clauses are appearing in service contracts. Questions about methodology and process are coming up in pitches. Firms that can't clearly explain how they use AI, and how they control it, will find themselves at a competitive disadvantage.
Why AI investment in the tax & accounting sector keeps underdelivering
There's a broader problem sitting underneath the governance question, and it's one the whole sector is wrestling with. Most AI investment in accounting and tax is going into point solutions: one tool for document processing, another for drafting, another for data extraction. Each one does something useful in isolation. None of them connect. Nobody has figured out how to incorporate AI into the way work actually gets done end-to-end, in a controlled, repeatable, human-in-the-loop way.
The result is underutilisation. Firms are spending on AI and not seeing returns, because they're automating fragments of a process rather than the process itself. The tools work. The workflow doesn't.
This is compounded by partnership economics. Partners don't like spending money without a clear return, and point solutions that don't deliver at scale are a fast way to kill appetite for further investment. The governance problem and the ROI problem are actually the same problem: a missing orchestration layer.
What responsible AI adoption looks like in practice
Governed AI adoption doesn't mean avoiding the technology or slowing things down. It means building the infrastructure that makes AI safe and scalable, so that when a client or regulator asks how the work was done, the answer is ready.
In practice, for managed service providers in tax & accounting, that means:
- A structured workflow layer that controls where and how AI is applied, rather than leaving it to individual discretion
- Audit trails that evidence what happened, when, and who reviewed it
- Human checkpoints built into the process at the right stages, not bolted on as an afterthought
- Centralised visibility into what tools are being used across the operation, and on which work
The trust and corporate services world is already operating this way in places. Fund accounting has moved toward data integration layers that govern how data comes in and control what happens to it at each stage. General accounting and tax compliance haven't caught up, but the model exists and the need is identical.
Firms that build this infrastructure aren't just protecting themselves from regulatory exposure. They're building something commercially valuable: a demonstrable, evidenced approach to AI use that clients can trust and that gives the firm a genuine advantage in pitches.
The direction of travel is clear
The regulatory framework around AI in professional services is still being written. But based on what's already happened in Australia, and what the FRC is already flagging in the UK, the direction of travel is obvious. Managed service providers that build those foundations now won't be scrambling when the rules fully land. And they'll be having a very different conversation with clients than the firms still running ungoverned AI off the side of someone's desk.






