How Finance Professionals Can Use AI — Without Losing Their Minds
- By Bryan Lapidus, FPAC
- Published: 5/6/2026

Recently, I asked my LinkedIn network to react to a deliberately provocative statement: “In the near future, financial analysis will be a mix of good prompt writing on the front end, and good output checking on the back end.”
But there's a problem embedded in that workflow that rarely gets discussed: leaning on AI to produce the output also means leaning on AI to do the thinking — and that has consequences.
Training courses for effective GenAI prompt writing are everywhere. But all the training in the world won’t help you if you don’t know what to ask, and in any high-thinking profession, that means knowing the intricacies of the subject matter. Good prompts create good output, but GenAI is far from flawless. The black box, the hallucinations — this is not a technology that has evolved to the point where we can trust it implicitly.
And yet, people do. In fact, far too many users are allowing it to do the thinking for them, and what that leads to is a gradual dulling of the mind that researchers call “cognitive debt.”
Lance Rubin, Founder of Model Citizn, explained in his LinkedIn post, “The biggest risk with AI in finance isn’t job loss. It’s something far quieter. People trusting answers they no longer understand. … If we outsource too much thinking to AI, our brains gradually stop doing the work. Not because we can’t think, but because the tool makes it so easy not to.”
Cognitive debt — use it or lose it
The term cognitive debt originates with researchers at the MIT Media Lab, who introduced it in a 2025 study on the effects of ChatGPT use on the brain. What they found is that shortcuts taken in the present create problems later. In other words, using AI to shortcut a thinking task today costs you the mental capacity you would have built by doing it yourself: the types of thinking central to learning, judgment and problem-solving.
“That got me thinking about finance teams. Because this isn’t really a technology issue. It’s a modeling issue,” said Rubin. “I’ve seen organizations where spreadsheets run the business, but very few people truly understand how the numbers flow through the model. Assumptions buried three tabs deep. Logic layered over years of quick fixes. Outputs trusted because ‘the model says so.’ Now imagine layering AI on top of that. You don’t just get faster answers. You get faster answers built on foundations nobody fully understands anymore.”
Future finance teams will be orchestrators, not doers
“The future FP&A team may look more like an orchestrator than a doer,” said Anshuman Yadav, Founder, StratiqAI, “and many teams are already moving in that direction. But … deterministic logic still needs human oversight, especially where decisions carry real consequences. LLMs are powerful, but probabilistic outputs (the black box problem) and financial accountability do not always mix well without guardrails.”
In the MIT study, participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Participants were hooked up to EEG headsets, and their outputs were thoroughly tested and evaluated by both AI-assisted tools and human experts. What the research revealed is that LLM users displayed the weakest brain connectivity. Their cognitive activity lessened as external tool use increased.
“It goes beyond a good prompt and involves clear instructions, reference files and the ability to know when to use deterministic automation to generate an answer vs. using probabilistic answers from the LLM itself,” said Paul Barnhurst, Founder, The FP&A Guy. “We are moving slowly toward an era that will require a lot of good output checking and less traditional analysis.”
Early-career professionals will lose what they never had
If those premises hold, what does that mean for early-career professionals coming up in the age of GenAI? How can you evaluate the output if you don’t understand what is being analyzed?
“Something that I see as a big concern with less experienced people using AI is judgment,” said John Sanchez, Communication Consultant, Trainer & Podcast Host, The FPA Group. “You can only develop good judgment through experience. It won't be long before we have finance professionals who have always used AI as their first step. How will they develop their judgment without getting reps at doing the thinking that AI will be doing for them right from the jump? I don't know how someone with no or limited relevant context around an issue can effectively judge output. It's the very reason experts are experts.”
FP&A is critical in any organization, particularly as a business partner to other departments outside of finance. What does that partnership look like when financial analysts aren’t able to achieve the level of understanding needed to be effective?
“When I was a finance analyst many years ago, I moved into the CPG industry and didn’t have a clue how price/volume/mix was calculated or used until I invested time in learning the formulas and calculations,” said Ron Monteiro, Founder, KICT Inc. “After I invested that time, I was so much more effective as a business partner. The danger of using AI before learning is a lack of effectiveness and poor business partnership.”
The skill gap isn't limited to those just entering the profession, though. At every level, the ability to use AI well depends on the same foundation: knowing enough to ask the right questions and knowing enough to catch a wrong answer.
Skills are needed to manage AI — at both ends
“We don’t have a culture of checking AI output because we never built a culture of checking our own output,” said Jason Brisbane, CEO & Co-Founder, FinHelm. “The teams that figure out the back end first are going to be the teams that get to use the front end well.”
The truth is that to use GenAI well, you need to thoroughly understand what to ask it to produce — and you need the skills and knowledge to review the output for accuracy. Neither is developed by leaning on GenAI and trusting it completely.
The technical demands of prompting are already more complex than most realize. “On the prompt skill point: Claude Code ships with a 30,000-token default prompt before you type a word, so prompting skills are definitely going to be part of the armory,” said James Kelly, Co-founder, Your Treasury, “especially because there’ll be a risk of hallucinated rationales for variances if not carefully managed.”
How to use AI without incurring cognitive debt
There is no getting around the next technological advancement for humanity. So, how do we use AI in finance without incurring cognitive debt? And how do we ensure early-career professionals gain the skills and knowledge they need to uphold and advance the finance profession?
Here are some best practices:
- Write your own first draft. It’s critically important to engage your own brain before turning to GenAI. It exercises those important cognitive skills and makes you better at evaluating AI output.
- Set aside time to work without AI. This is related to the first practice; however, what’s different is the intentionality of exercising needed skills. Think of it like physical fitness; if you never exercise a muscle, it’s going to atrophy. For financial analysts, this could be completing analyses manually several times a week before checking them with AI.
- Regularly review how you’re using AI. This is about assessing cognitive debt risk. Set a time period, say one week, and make note of when you used AI and for what task; a minimal logging sketch follows this list. Here’s how the risk stacks up:
- Low risk: Mechanical tasks such as formatting or transcription.
- Medium risk: Research and information retrieval. It’s better to read the primary source yourself.
- High risk: Analysis, reasoning, decision support and learning tasks. As this article has argued, these are the tasks where letting AI do all the thinking costs you the most.
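
To make that weekly review concrete, here is a minimal Python sketch of such a usage log. The task categories and the mapping to risk tiers are hypothetical illustrations of the three levels above, not a prescribed taxonomy.

```python
# A sketch of a weekly AI-usage log built on the risk tiers listed above.
# Task categories and tier mappings are illustrative; adapt them to your workflow.
from collections import Counter

RISK_BY_CATEGORY = {
    "formatting": "low", "transcription": "low",     # mechanical tasks
    "research": "medium", "retrieval": "medium",     # better to read primary sources
    "analysis": "high", "reasoning": "high",         # thinking you should keep doing
    "decision support": "high", "learning": "high",
}

log: list[tuple[str, str]] = []  # (task description, category)

def record(task: str, category: str) -> None:
    """Note each AI-assisted task as it happens during the review period."""
    log.append((task, category))

def weekly_summary() -> Counter:
    """Tally logged tasks by risk tier to show where cognitive debt may be accruing."""
    return Counter(RISK_BY_CATEGORY.get(category, "unclassified") for _, category in log)

record("Reformat board deck tables", "formatting")
record("Draft variance commentary", "analysis")
record("Summarize a competitor's 10-K", "research")
print(weekly_summary())  # Counter({'low': 1, 'high': 1, 'medium': 1})
```

A tally like this won’t stop you from over-delegating, but it makes the pattern visible: if most of your entries land in the high-risk tier, that’s the signal to carve out more AI-free work.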
“AI will be an extraordinary tool for finance teams. But only if the people using it still understand the mechanics underneath the numbers. Because in the end, boards and investors don’t approve AI outputs. They approve decisions. And decisions still require judgment,” said Rubin.
How to write better prompts and interrogate the outputs
Going back to the original thought experiment that prompted this article: If financial analysis increasingly moves toward prompt writing and output checking, we need to learn to do both well.
Whereas the old FP&A operating model was about building models, defending assumptions and arguing forecasts, AI-enabled financial analysis is about how you frame the question, generate options and apply judgment.
A strong prompt for financial analysis sets the intent, context and boundaries, such as in the prompt formula below:
You are [ROLE]. This analysis supports [DECISION]. The objective is [OBJECTIVE]. Assume [ASSUMPTIONS], constrained by [LIMITS]. Deliver [FORMAT] for [AUDIENCE].
Below are more detailed examples of what the parameters look like in a finance context.
| Parameter | What to Specify | Finance Example |
|---|---|---|
| 1. Decision Context | What decision this informs | “Capex approval for FY27” |
| 2. Business Objective | What success looks like | “Maximize cash resilience, not just ROI” |
| 3. Audience | Who will use this | “CFO-level, board-ready” |
| 4. Time Horizon | Relevant planning window | “3–5 year outlook” |
| 5. Scope | What to include or exclude | “Exclude M&A synergies” |
| 6. Level of Granularity | How detailed | “By region, not SKU” |
| 7. Assumptions | Starting conditions | “Demand volatility remains elevated” |
| 8. Constraints | Real-world limits | “No headcount growth” |
| 9. Analytical Lens | How to think | “Downside-first, risk-adjusted” |
| 10. Output Format | What to deliver | “3 insights, 1 table, exec takeaway” |
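
To show how the formula and the ten parameters fit together, here is a minimal Python sketch of a reusable prompt template. The field names and the rendered wording are one illustrative arrangement of the parameters above, not a standard or a particular tool’s API.

```python
# A sketch of the prompt formula above as a reusable, reviewable template.
# Field names and wording are illustrative; only the ten parameters come from the table.
from dataclasses import dataclass

@dataclass
class FinancePrompt:
    role: str
    decision: str        # 1. Decision context
    objective: str       # 2. Business objective
    audience: str        # 3. Audience
    horizon: str         # 4. Time horizon
    scope: str           # 5. Scope
    granularity: str     # 6. Level of granularity
    assumptions: str     # 7. Assumptions
    constraints: str     # 8. Constraints
    lens: str            # 9. Analytical lens
    output_format: str   # 10. Output format

    def render(self) -> str:
        """Assemble the parameters into the intent/context/boundaries formula."""
        return (
            f"You are {self.role}. This analysis supports {self.decision}. "
            f"The objective is {self.objective}. Time horizon: {self.horizon}. "
            f"Scope: {self.scope}. Granularity: {self.granularity}. "
            f"Assume {self.assumptions}, constrained by {self.constraints}. "
            f"Analytical lens: {self.lens}. "
            f"Deliver {self.output_format} for {self.audience}."
        )

# Example using the finance values from the table above.
prompt = FinancePrompt(
    role="a senior FP&A analyst",
    decision="capex approval for FY27",
    objective="maximize cash resilience, not just ROI",
    audience="a CFO-level, board-ready audience",
    horizon="3-5 year outlook",
    scope="exclude M&A synergies",
    granularity="by region, not SKU",
    assumptions="demand volatility remains elevated",
    constraints="no headcount growth",
    lens="downside-first, risk-adjusted",
    output_format="3 insights, 1 table and an executive takeaway",
)
print(prompt.render())
```

Writing the template down this way has a side benefit: gaps become visible. An empty constraint or a missing decision context is obvious in the template before the prompt is ever sent.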
Once you receive output from AI, you need to stress-test the conclusions, not admire them. Below are 10 questions to ask yourself:
| Question | What You’re Testing |
|---|---|
| 1. What assumptions are unstated? | Hidden drivers |
| 2. Is this directionally right? | Judgment vs logic |
| 3. What would break this? | Fragility |
| 4. Cause or correlation? | True drivers |
| 5. Does this pass a CEO sniff test? | Clarity |
| 6. What’s missing? | Analytical gaps |
| 7. Is precision overstated? | False certainty |
| 8. Whose incentives are assumed? | Behavioral realism |
| 9. What frame change alters the answer? | Robustness |
| 10. What’s the next question? | Decision momentum |
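
One way to make that stress-testing habitual is to run every AI output through the same checklist. Below is a hypothetical Python sketch that encodes the ten questions and flags any that were skipped; the questions come straight from the table, everything else is illustrative.

```python
# A sketch of the ten-question interrogation above as a repeatable review step.
# The questions mirror the table; the surrounding structure is illustrative.
CHECKS = [
    ("What assumptions are unstated?", "hidden drivers"),
    ("Is this directionally right?", "judgment vs. logic"),
    ("What would break this?", "fragility"),
    ("Cause or correlation?", "true drivers"),
    ("Does this pass a CEO sniff test?", "clarity"),
    ("What's missing?", "analytical gaps"),
    ("Is precision overstated?", "false certainty"),
    ("Whose incentives are assumed?", "behavioral realism"),
    ("What frame change alters the answer?", "robustness"),
    ("What's the next question?", "decision momentum"),
]

def review(output_name: str, notes: dict[str, str]) -> list[str]:
    """Walk every check; unanswered questions are flagged so none get skipped."""
    lines = []
    for question, tests in CHECKS:
        note = notes.get(question, "").strip()
        lines.append(f"[{output_name}] {question} ({tests}): {note or 'NOT REVIEWED'}")
    return lines

# Example: a partially reviewed, AI-generated variance commentary.
for line in review("FY27 variance commentary", {
    "What assumptions are unstated?": "Assumes flat FX; confirm with treasury.",
    "Is precision overstated?": "Decimals imply accuracy the inputs don't support.",
}):
    print(line)
```

Forcing all ten lines into the record serves the same purpose the article describes: the habit of checking is what keeps the judgment muscle working.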
If you’re interested in this topic, be sure to mark your calendar for the next AFP FP&A Series, Building Financial Intelligence: Deliberate Practice for the AI Era, on August 26.
Copyright © 2026 Association for Financial Professionals, Inc.
All rights reserved.
