Articles

How to Catch AI Errors Before They Become Business Problems

  • By AFP Staff
  • Published: 5/8/2026

Generative AI (GenAI) models are known for periodically producing factually inaccurate but confident-sounding content. In finance, these mistakes can have major consequences.

As organizations expand their use of AI in finance, leaders need to implement controls that help their teams move quickly while preventing bad outputs from entering decision-making.

Finance professionals within the AFP community shared with us the AI errors they’ve encountered and the actions they’ve taken to reduce them. Their experiences point to several best practices organizations can implement as AI adoption expands.


Keep humans in the loop

Large language models (LLMs) are designed to predict likely responses — not verify whether they’re factually correct, complete or aligned with the business. AI can produce polished answers that appear credible while completely missing important context or fabricating information.

Even asking AI to check its own work can still leave mistakes in place. One finance professional shared that AI incorrectly applied data validation across multiple spreadsheet cells, then missed the mistake when asked to review its own output.

Human involvement is critical for validating outputs and applying judgment before decisions are made. It becomes even more important when the outputs influence financial reporting, compliance or strategic decisions.

“One time, AI hallucinated a vendor contract clause that didn't exist — it summarized a document and added a detail that sounded plausible but wasn't in the original text,” said Anna Tiomina, Founder, Blend2Balance. “I caught it by cross-checking the summary against the source before sharing it with legal. Now I treat any AI output on contracts or compliance as a draft that requires source verification, not a final answer. That's become a standing rule on my team.”
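Source verification like Tiomina's can be partially automated as a first pass before the human review step. The sketch below is purely illustrative (the function name and matching logic are assumptions, not anyone's actual workflow): it flags summary sentences containing concrete figures that never appear in the source document, so a reviewer knows exactly where to look.

```python
import re

def flag_unsupported_claims(summary: str, source_text: str) -> list[str]:
    """Return summary sentences whose concrete details (numbers,
    percentages, dollar amounts) never appear in the source document.
    A crude first pass -- a human still reviews every flag."""
    flagged = []
    source_lower = source_text.lower()
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        # Pull out figures worth verifying: amounts, counts, percentages.
        details = re.findall(r"\$?\d[\d,]*%?", sentence)
        if any(d.lower() not in source_lower for d in details):
            flagged.append(sentence)
    return flagged

source = "Payment is due within 45 days of invoice receipt."
summary = "Payment is due within 45 days. A 2% late fee applies after 60 days."
flags = flag_unsupported_claims(summary, source)
for s in flags:
    print("VERIFY AGAINST SOURCE:", s)
```

A check this simple cannot confirm a summary is correct; it only narrows where human attention goes, which is the point of keeping a person in the loop.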

Rosemary Linden, President of Momentum CFO, encountered a similar issue when testing an AI-powered FP&A platform. While it accurately identified budget variances, it failed to explain the underlying business drivers. “AI successfully automated the routine task, but it required professional judgment to turn the analysis into actionable insight,” she said.

Use trusted data sources

One of the most common errors occurs when users assume the system is working from accurate information. GenAI tools can pull outdated data, misinterpret documents or fabricate information when they lack sufficient source material.

This happened to Barry Huisman, CFO, who explained, “AI brought wrong information into a report. I provided feedback to correct this and avoid the use of that information in future cases.”

For Raymond Cheung, Vice President of Corporate Finance and Strategy, a lack of consistency within the data sources led to forecasting issues: “Inconsistent results from GPT analysis and forecast results led us to standardize the GPT queries and data sources for outcome generation.”

When AI is drawing on your own dataset, how you manage the data becomes central to its effectiveness. Best-in-class data management strives for a single source of truth that is accurate, complete, consistent, coherent, structured and timely.
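Those data-quality attributes can be enforced in code before a dataset ever reaches an AI tool. The sketch below is a hypothetical gate, not a standard: the required columns and the seven-day freshness threshold are assumptions a team would replace with its own rules.

```python
from datetime import datetime, timedelta

REQUIRED_COLUMNS = {"account", "period", "amount"}  # assumed schema

def validate_for_ai(rows: list[dict], last_updated: datetime) -> list[str]:
    """Check a dataset against basic quality gates (completeness,
    consistency, timeliness) before handing it to a GenAI tool.
    Returns a list of problems; an empty list means safe to use."""
    problems = []
    if datetime.now() - last_updated > timedelta(days=7):
        problems.append("stale: data older than 7 days")
    for i, row in enumerate(rows):
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            problems.append(f"row {i}: missing {sorted(missing)}")
        elif row["amount"] is None:
            problems.append(f"row {i}: null amount")
    return problems

rows = [{"account": "4000", "period": "2026-04", "amount": 1250.0},
        {"account": "4010", "period": "2026-04", "amount": None}]
issues = validate_for_ai(rows, last_updated=datetime.now())
```

Running the gate in the pipeline, rather than trusting each user to eyeball the data, is what moves a team toward the single source of truth described above.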

Build governance guardrails around AI usage

AI governance refers to the set of policies, processes and controls (also known as guardrails) organizations establish to ensure responsible, ethical and effective use of AI tools. Without such frameworks, finance leaders risk relying on "black box" systems that lack the transparency and auditability necessary for compliance and accurate financial reporting.

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) recommends organizations establish governance structures that help identify, assess and manage AI risks.

In one finance professional's case, his organization's guardrails flagged a confident but incorrect AI-generated variance explanation and kept it from influencing decisions. The system's built-in controls caught what could otherwise have become a serious error.

Finance leaders need to understand how AI arrived at its conclusions. Without visibility into assumptions, recommendations can appear credible while being built on incomplete or flawed inputs.

“The biggest error I’ve caught is not a math mistake, but a logic mistake,” said a senior financial professional. “AI gave a recommendation based on incomplete context (one-time pricing impact). I mitigated it by forcing transparency on assumptions, checking outputs against trusted data and trying to avoid the ‘black box.’”
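A guardrail of this kind can be as simple as recomputing any figure the AI asserts directly from the system of record before the explanation is accepted. This is a minimal, hypothetical sketch (the function and tolerance are invented for illustration), not a description of any vendor's controls:

```python
def check_variance_claim(ai_claimed_variance: float,
                         budget: float, actual: float,
                         tolerance: float = 0.01) -> bool:
    """Recompute the variance from trusted data and reject the AI's
    explanation if its headline number doesn't match."""
    true_variance = actual - budget
    return abs(ai_claimed_variance - true_variance) <= tolerance

# The AI explains a $12,000 unfavorable variance; the ledger shows
# budget $100,000 against actuals of $91,500 -- a $8,500 variance.
accepted = check_variance_claim(-12_000, budget=100_000, actual=91_500)
# accepted is False: the explanation is blocked pending human review.
```

The check does not evaluate the AI's reasoning, only its arithmetic against trusted data; the logic errors described above still require a human to force transparency on the underlying assumptions.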

Keep your AI models updated and test regularly

AI tools are not static. They can become less effective over time if they’re not updated to recognize new error patterns.

Industry experts recommend ongoing monitoring to identify emerging risks and improve system performance. That includes reviewing recurring errors, refining prompts, updating workflows and providing feedback when systems generate inaccurate outputs.
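In practice, reviewing recurring errors can start with a simple log that counts error categories and surfaces the ones that keep repeating. The sketch below uses invented category names and an arbitrary threshold; any real implementation would define its own taxonomy.

```python
from collections import Counter

class AIErrorLog:
    """Track categorized AI errors so recurring patterns surface
    during periodic review (categories here are illustrative)."""

    def __init__(self):
        self.counts = Counter()

    def record(self, category: str):
        self.counts[category] += 1

    def recurring(self, threshold: int = 3) -> list[str]:
        """Categories seen at least `threshold` times -- candidates
        for prompt refinement or workflow changes."""
        return [c for c, n in self.counts.items() if n >= threshold]

log = AIErrorLog()
for cat in ["hallucinated_source", "stale_data",
            "hallucinated_source", "hallucinated_source"]:
    log.record(cat)
# log.recurring() -> ["hallucinated_source"]
```

The value is less in the code than in the habit: every inaccurate output gets recorded, and the review meeting works from counts rather than anecdotes.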

Speed matters in finance. So does getting it right. The best practices outlined here aren't obstacles to moving fast — they're what make moving fast sustainable.


Want to learn more about AI-powered finance?

Fill out the form below to download the AFP FP&A Guide to AI-Powered Finance and get guidance for your AI journey.


Copyright © 2026 Association for Financial Professionals, Inc.
All rights reserved.