06 APR 2026 / TECHNOLOGY
AI is now embedded in audit files, tax workflows, and compliance systems at CPA firms, with platforms such as EY’s Canvas and Diligent’s AuditAI improving efficiency and risk identification. But accountability and regulation remain open questions: the rapid pace of AI adoption raises the risk of mistakes at scale, and regulators are struggling to update standards written for a human-centric profession.
A senior manager at a mid-sized CPA firm told me last week, half joking, “My junior just reviewed 100% of transactions before lunch.” Then he paused and added, “Well… technically, the software did.” That pretty much sums up where we are. AI is no longer circling the profession; it is already sitting inside audit files, tax workflows, and compliance systems. The real question is not whether firms will adopt it. That ship has sailed. The question is whether regulators, standards, and even firm leadership are ready for what comes next. Let’s unpack what is actually changing, and what still sits squarely on your desk.
The pace of adoption is moving at a clip that would make even seasoned partners raise an eyebrow. EY is rolling out AI-powered enhancements to its Canvas platform that can pre-fill workpapers, surface relevant accounting guidance in real time, and accelerate risk assessments. KPMG is experimenting with orchestration agents that coordinate multiple AI tools across the audit process. This is not an incremental improvement. This is a structural shift. Instead of sampling transactions, firms are now analyzing entire data populations. Instead of periodic reviews, systems are moving toward continuous monitoring. Tools like Diligent’s AuditAI claim to reduce administrative workload by about 70%, while also flagging risks earlier in the cycle.
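To make the sampling-to-full-population shift concrete, here is a minimal sketch of what reviewing 100% of transactions can mean in practice. This is an illustration, not any vendor's actual method: a simple statistical screen (flagging entries more than three standard deviations from the ledger mean) applied to every row rather than a sample. The `flag_anomalies` function and the synthetic ledger are hypothetical examples.

```python
import random
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Flag every transaction whose amount deviates more than
    `threshold` standard deviations from the population mean.
    Unlike sampling, this reviews 100% of the population."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# A synthetic ledger of 10,000 routine amounts with one outlier buried inside.
random.seed(7)
ledger = [round(random.gauss(500, 50), 2) for _ in range(10_000)]
ledger[4321] = 250_000.00  # the kind of entry a random sample can miss

print(flag_anomalies(ledger))  # the full-population pass surfaces index 4321
```

The point of the toy example is the coverage guarantee, not the statistics: a random sample of, say, 60 items would miss this entry most of the time, while a full-population pass cannot. Real audit analytics layer far more sophisticated models on top, but the structural change is the same.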
And it is not just audit. TaxGPT is pitching a system that can prepare individual returns end-to-end, with claims of up to 90% time savings. Thomson Reuters is applying AI across 19,000-plus U.S. sales tax jurisdictions, automating compliance workflows that used to eat up hours of staff time. On paper, it sounds like a win across the board. Faster work, deeper coverage, fewer manual errors. But here is the catch. When you move this fast, the margin for silent failure increases. AI does not get tired, but it can still get things wrong at scale. And when it does, it does not miss one transaction. It can misinterpret thousands. So yes, audits are getting faster. The open question is whether firms are building enough guardrails to keep that speed from turning into exposure.
The UK’s Financial Reporting Council recently issued what it calls the first formal guidance on AI in auditing. The framework is simple but telling. AI can fail in three ways: bad inputs, misinterpreted outputs, or insufficient work compared to human standards. In practice, that opens a whole can of worms. What qualifies as “sufficient” audit work when AI is doing the heavy lifting? How much validation is enough? And who decides when an AI-generated conclusion crosses the line into audit evidence? Across the Atlantic, the PCAOB has acknowledged the need to upgrade its own tech capabilities. That alone tells you something. Regulators are not just writing rules anymore; they are trying to catch up with the same tools firms are already deploying.
And let’s be honest, most audit standards were written for a world where sampling was the norm. The language still assumes a human is manually selecting transactions. Now we have systems scanning entire ledgers in seconds. That mismatch creates gray areas. Firms are already flagging concerns about wording in standards that require a “person” to perform certain procedures, even when AI can handle them more efficiently with human oversight. No one wants a regulation that freezes innovation. But no one wants a free-for-all either. Right now, it feels like regulators are playing catch-up. The question is how long that gap stays manageable.
Regulators have been crystal clear on one point. You cannot blame the algorithm. The FRC put it bluntly. Audit firms remain responsible for the final outcome, not the software developers, not the model designers, not the vendor. The human signs the opinion. The human owns the risk. That has real consequences. If an AI tool misclassifies revenue, misses a fraud pattern, or generates flawed documentation, the liability does not shift. It lands exactly where it always has, on the auditor. A recent example came from Deloitte’s 2024 Australian welfare review, where AI-linked false citations and invented quotes led to a partial fee refund. This creates a strange dynamic. Firms are racing to adopt tools that promise efficiency gains, yet the accountability structure has not changed one bit.
Think about a real-world scenario. A CPA firm uses an AI system to review a client’s sales transactions across multiple jurisdictions. The system flags no issues. The return gets filed. Months later, a state audit uncovers misapplied tax rates. What went wrong? Bad data? Model error? Misinterpretation? At the end of the day, the partner is still on the hook. This is where professional skepticism becomes more important, not less. AI can flag anomalies, but it cannot assess intent, context, or business nuance the way an experienced auditor can. Or as one partner put it over coffee, “The tool can tell you something looks weird. It can’t tell you why it smells off.”
AI is already shifting the auditor's role. Less time on manual checks, more time on interpretation, risk assessment, and advisory conversations. That is a meaningful upgrade in how firms deliver value. Continuous auditing is starting to replace the old once-a-year scramble. Clients are getting insights earlier. Issues can be addressed before they snowball into material misstatements. That is the upside. But the nature of audit risk is changing. Instead of worrying about missed samples, firms now need to think about model risk, data integrity, and overreliance on automated outputs.
It is a bit like moving from driving a car to flying a plane. You have better tools, more data, and more automation. But when something goes wrong, the complexity is higher, and the consequences can escalate quickly. There is also a talent angle here. Firms are already dealing with staffing shortages. AI helps fill the gap, no doubt. Platforms like Accrual are positioning themselves as solutions to workload pressure during tax season. But that raises another question. If junior staff are no longer grinding through the basics, how do they build the judgment needed to challenge what the system produces later in their careers? No one has a clean answer yet.
AI is not coming to auditing. It is already embedded in how work gets done, from risk assessment to tax compliance to internal audit workflows. The benefits are real. Faster execution, broader data coverage, and earlier detection of issues. Firms that ignore this shift will fall behind, plain and simple. But the fundamentals have not moved an inch. Accountability still sits with the auditor. Professional skepticism still drives audit quality. And regulators are still figuring out how to keep pace with a moving target. So, here is the question worth sitting with. If your audit file includes AI-generated work, how confident are you in what you are signing off on? Because at the end of the day, the opinion still carries your name, not the software’s.
Until next time…