The Role of Prompt Engineering in Finance and Where It Falls Short
- By AFP Staff
- Published: 1/9/2026

Many early adopters of AI raced to master “prompt engineering,” assuming that the key to unlocking AI’s value lay in crafting better, more creative prompts. However, as systems evolve, that mindset is proving insufficient.
Today’s AI platforms are less about point-in-time prompts and more about embedding business logic and structural context, thus allowing AI to act consistently, reliably and in harmony with finance workflows. What matters today isn’t how well you ask a question; it’s how well your systems, data and process knowledge are encoded. The following interview with Ashok Manthena explores what the shift means for finance professionals and which skills will determine success in the AI era.
Ashok Manthena, founder of ChatFin.ai, is a recognized thought leader in AI for finance. He has more than a decade of experience supporting FP&A, accounting and finance teams at global companies such as Google, Gap and Ingersoll Rand.
AFP: For a while, everyone was talking about prompt engineering as the next big skill in finance, but you think that idea is overbought. First, can you define the role of prompt engineering in finance to date?
Manthena: Prompt engineering played a meaningful role in the early stages of AI adoption, when the systems were highly literal and output quality depended almost entirely on how carefully a question was phrased. Omit context and the answer would miss the mark; spell out every step and the result would improve. At the time, we concluded that prompt writing itself was a critical skill.
That mindset still shows up today in articles promising the “100 best ChatGPT prompts for finance.” The problem is practical: Where do those prompts live, how do you find the right one when you need it, and how do you know whether it still works as data, systems or policies change? These lists look impressive, but they’re difficult to operationalize and even harder to maintain.
In that early phase, prompt engineering helped finance teams interact with AI systems that had no memory, no structural awareness, and no understanding of the business. Every interaction felt like onboarding a new hire from scratch — explaining how accruals are calculated, how revenue cut-off works and how postings are validated.
That also helps explain why many CFOs assumed prompting would become a core skill. Some organizations even ran workshops to teach accountants and analysts how to write better prompts.
AFP: Why do you think we need to update our perception of prompt engineering?
Manthena: AI has reshaped how we think about business logic. What once lived inside rigid ERPs or hard-coded scripts is now moving into more flexible AI systems.
All the “desktop procedures” finance teams have traditionally maintained — the step-by-step guides used to train new hires on booking accruals, validating invoices or reconciling balances — are now becoming input material for AI.
What’s become clearer over time is that the real value was never in the wording itself. Text was simply the mechanism for teaching business logic to the system. In effect, text became a form of code. You didn’t need to be a developer, but you did need a precise understanding of your processes to explain them clearly.
Today, AI platforms rely less on one-off prompts and more on absorbed structure, rules and patterns. When AI is connected to an ERP, a chart of accounts, documented policies, and historical transactions, it can operate with embedded context rather than reconstructed context.
That’s where the durable advantage comes from: teaching AI how the finance system actually works, not maintaining a prompt library that few people can reliably reuse at scale.
AFP: What does this shift mean for finance teams using AI today?
Manthena: Prompt engineering should be understood as a transitional skill. Relying on it today means business logic still lives in user behavior rather than in the system itself. That’s not a sustainable position for finance functions that depend on consistency and control.
The more durable focus is domain and process knowledge. Teams need to be able to articulate how their finance processes actually work — recognition rules, cut-off logic, approval thresholds and exception criteria. That knowledge doesn’t belong in prompts; it belongs in configurations, policies and data structures that the AI can reuse without reinterpretation.
This is where system thinking becomes critical. Instead of asking, “How do I phrase this request?” the better question is, “Where should this logic live?” Finance teams that think in systems look for ways to encode rules once and apply them repeatedly. They design workflows so AI operates within defined boundaries, rather than being guided interactively every time.
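To make “encode rules once” concrete, here is a minimal sketch in Python. The threshold values, field names and the requires_approval helper are invented for illustration; the point is that the rule lives in a reusable structure the system can apply repeatedly, not in anyone’s prompt.

```python
# Hypothetical sketch: an approval rule encoded once as data,
# then applied repeatedly without being re-explained in a prompt.

APPROVAL_THRESHOLDS = {
    # amounts at or above these limits (by cost center) need manager sign-off
    "default": 5_000,
    "capex": 25_000,
}

def requires_approval(invoice: dict) -> bool:
    """Return True if this invoice meets or exceeds its cost center's threshold."""
    limit = APPROVAL_THRESHOLDS.get(invoice.get("cost_center", "default"),
                                    APPROVAL_THRESHOLDS["default"])
    return invoice["amount"] >= limit

# The same rule is applied the same way every time, by any caller (human or AI).
print(requires_approval({"amount": 7_200, "cost_center": "default"}))  # True
print(requires_approval({"amount": 7_200, "cost_center": "capex"}))    # False
```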
Abstraction ties these shifts together.
AFP: You talk about “abstraction” as a key concept. Can you explain what that looks like in practice, and why it matters?
Manthena: In simple terms, abstraction means hiding technical complexity behind a clean, reusable layer of logic.
In the early days of AI, you had to explain everything every single time. It was like onboarding a new hire for every task:
- “Here’s how you calculate deferred revenue.”
- “Here’s how you check a GL posting.”
- “Here’s how you match an invoice to a PO.”
That was because AI didn’t “remember” or understand the structure behind the work — it simply responded to whatever you typed in.
Current AI platforms have changed that. They allow you to embed rules, policies and data once, and then reuse that intelligence again and again. When you open Excel, you don’t think about how the math engine works; you just type =SUM(A1:A5). When you use an ERP, you don’t write SQL queries every time; you just click “Generate Report.” That’s abstraction. The complexity still exists, but it’s packaged in a way that humans don’t have to manage directly.
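The same idea can be sketched in code. In this hypothetical example, the multi-step work of filtering and totaling ledger lines is hidden behind one call, much as =SUM hides Excel’s math engine; the function and field names are illustrative, not any particular product’s API.

```python
# Hypothetical sketch of abstraction: callers see one simple function,
# not the filtering and aggregation it performs underneath.

LEDGER = [
    {"account": "6100", "period": "2026-01", "amount": 1200.0},
    {"account": "6100", "period": "2026-02", "amount": 950.0},
    {"account": "7200", "period": "2026-01", "amount": 400.0},
]

def account_total(account: str, period: str) -> float:
    """Total postings for one account in one period (the hidden complexity)."""
    return sum(line["amount"]
               for line in LEDGER
               if line["account"] == account and line["period"] == period)

# The caller's experience is as simple as typing =SUM(A1:A5).
print(account_total("6100", "2026-01"))  # 1200.0
```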
AI is now beginning to work the same way — not just for calculations, but for reasoning. In AI, abstraction happens when the system learns patterns and relationships from examples, so it no longer needs explicit instructions. For example, once an AI model has seen enough invoices, it doesn’t need you to specify, “This is the invoice number” or “This is the due date.” It recognizes those automatically.
The same logic now extends to business rules. If you show an AI system enough examples of how your team applies accrual policies or revenue cut-off rules, it can internalize that logic over time. It learns the structure behind the decisions.
You can think of AI reasoning as a set of layers. The lowest layer handles raw data. The middle layer understands relationships, such as customers, transactions and accounts. And the top layer applies your finance logic, such as policy thresholds, timing rules and approval workflows.
For instance, instead of prompting, “Look at these invoices, find the ones that cross months and apply our accrual rule for revenue cut-off,” today’s AI systems can store that accrual logic directly within the platform, including thresholds, cost centers and timing rules. The next time, you simply say, “Run monthly accruals,” and the system knows what to do because the rules live inside it.
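As a rough illustration only (not any vendor’s actual design), the stored accrual logic might look like a small piece of configuration plus one entry point; every name, threshold and date rule below is invented for the example.

```python
from datetime import date

# Hypothetical accrual rule stored once in the platform: materiality threshold,
# eligible cost centers and a cut-off day, rather than re-stated in every prompt.
ACCRUAL_RULE = {
    "min_amount": 1_000,            # ignore immaterial items
    "cost_centers": {"OPS", "IT"},  # where the rule applies
    "cutoff_day": 25,               # invoices received after this day get accrued
}

def run_monthly_accruals(invoices: list[dict], period_end: date) -> list[dict]:
    """Apply the stored rule: flag invoices to accrue into the closing period."""
    accruals = []
    for inv in invoices:
        crosses_cutoff = (
            inv["received"].day > ACCRUAL_RULE["cutoff_day"]
            and inv["received"] <= period_end
        )
        if (crosses_cutoff
                and inv["amount"] >= ACCRUAL_RULE["min_amount"]
                and inv["cost_center"] in ACCRUAL_RULE["cost_centers"]):
            accruals.append({"invoice": inv["id"], "accrue": inv["amount"]})
    return accruals

# "Run monthly accruals" becomes one call; the rules live inside the system.
demo = [{"id": "INV-101", "amount": 4_800, "cost_center": "OPS",
         "received": date(2026, 1, 28)}]
print(run_monthly_accruals(demo, period_end=date(2026, 1, 31)))
```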
The same applies to revenue recognition. Rather than explaining policies in every prompt, you encode them once — defining contract types, recognition triggers and deferral conditions — and let the system apply them consistently.
At that point, the skill shifts. It’s no longer about crafting better prompts; it’s about designing systems so prompting isn’t needed at all. Finance teams that understand abstraction scale faster because they use AI that grows smarter with use, not more dependent on constant input. Abstraction turns AI from a tool you command into a system that understands your business. And that’s where the future of finance is headed.
AFP: If prompt engineering isn’t the future, what skills should finance professionals develop next?
Manthena: The next wave of AI in finance won’t be about writing better prompts — it will be about understanding how finance actually works inside a company.
Finance is no longer just about numbers; it’s a system. Data moves across multiple tools. Entries flow from one process to another. Approvals, exceptions, adjustments and accounting treatments vary by industry and by organization. That kind of domain knowledge isn’t captured in AI training data. Only finance professionals understand how these things truly work in practice.
That’s why deep domain understanding is becoming one of the most valuable skills in finance. If you ever felt boxed into a single role or process, this is the moment to broaden your perspective across the finance function.
AI works best when it’s grounded in that knowledge. When domain expertise is combined with AI tools, the impact goes beyond time savings. Accuracy improves. Friction across teams is reduced. The organization runs more smoothly. Using AI to make your own work easier — and your coworkers’ work cleaner — becomes a genuine asset.
Many CFOs struggle with AI adoption, not because the technology is immature, but because implementation thinking hasn’t caught up. The missing piece is systems thinking. Systems thinking means seeing finance as a connected flow rather than a collection of isolated tasks. Without it, teams automate individual steps but miss end-to-end impact. They gain efficiency, but not transformation.
Prompt engineering showed us how to talk to AI. The next step is teaching AI how finance operates as a system. That’s the skill that will separate finance teams that merely experiment with AI from those that truly integrate it into how finance works.
Copyright © 2026 Association for Financial Professionals, Inc.
All rights reserved.
