Tallie AI
Implementation

Skills, Not Prompts: How Forward-Deployed Engineers Codify Finance Processes

A skill is a versioned, reviewable definition of how your team runs a recurring finance process — authored with you, not handed to you in a docs link.

Archie Norman

The most common AI failure mode we see in finance teams is not technical. It is operational. A platform shows up, the team is told to "prompt it," and within a quarter the tooling has been quietly abandoned because nobody can get it to do the same thing twice.

That is the failure mode we designed Tallie to avoid. Two ideas do most of the work: skills and forward-deployed engineers. Together, they replace prompting with something a finance function can actually live with.

The trouble with prompting as a delivery model

Prompting is a fine way for an individual to coax an answer out of a model. It is a terrible way to deliver a finance process. Three reasons:

  1. Variance. The same question, asked twice, can produce two different answers. That is fatal for any output a CFO has to sign.
  2. Tribal knowledge. Whoever wrote the best prompt holds the institutional memory of how that workflow runs. When they leave, the workflow leaves with them.
  3. No review surface. A prompt is a paragraph in a chat box. There is no diff, no version, no approval, no rollback. Compare that to how every other artifact your finance team produces is governed.

If you are running a serious finance function on prompts, you are running it on tribal knowledge in a chat window. That is not a controlled environment.

What a skill is

A skill, in Tallie, is a versioned definition of how a specific finance process runs. Concretely, a skill includes:

  • The trigger. When does this run? On a schedule, on demand, in response to an upstream event?
  • The data sources. Which connectors does it touch? With what scope and what permissions?
  • The procedure. What are the steps, in order, with their dependencies and acceptance criteria?
  • The model policy. Which model providers are eligible to run which steps, and what is the routing logic? See LLM-Agnostic by Design for why we treat that as a per-task decision rather than a platform choice.
  • The outputs. What artifacts does it produce, in what format, with what validation?
  • The audit hooks. What gets logged, who can review it, and how is it stamped to the record?

It is a definition, not a prompt. It is reviewable, diff-able, version-controlled, and ownable by the customer. And it runs the same way every time — because that is the whole point of a finance process.

A skill is not magic. It is what "how we do month-end close here" looks like when you take it out of a runbook and a few people's heads and put it into a system the agent can execute.
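As a rough illustration of how those components fit together (every field and value below is hypothetical, chosen for the sketch, not Tallie's actual schema), a skill definition might look something like:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    depends_on: list[str]
    acceptance: str  # criterion a reviewer can check before sign-off

@dataclass
class Skill:
    name: str
    version: str
    trigger: str                  # schedule, on-demand, or upstream event
    data_sources: dict[str, str]  # connector -> permission scope
    steps: list[Step]
    model_policy: dict[str, str]  # step -> eligible model provider
    outputs: list[str]
    audit: dict[str, str]         # what is logged, who reviews it

month_end = Skill(
    name="month-end-close",
    version="1.4.0",
    trigger="cron: 0 6 1 * *",  # 06:00 on the first of the month
    data_sources={"warehouse": "read_only", "erp": "read_only"},
    steps=[
        Step("pull_trial_balance", depends_on=[],
             acceptance="totals tie to sub-ledgers"),
        Step("reconcile_bank", depends_on=["pull_trial_balance"],
             acceptance="variance within tolerance"),
    ],
    model_policy={"reconcile_bank": "provider_a"},
    outputs=["close_pack.pdf"],
    audit={"log": "every step and model call", "reviewer": "controller"},
)
```

The point of the shape, not the specifics: every part of the process is an explicit, named field, so a change to the trigger, a connector scope, or the routing policy shows up as a diff a controller can review and approve like any other change.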

Why we send forward-deployed engineers

Skills are the artifact. The hard part is authoring them.

Most finance teams know how their close works in the way that any expert knows their domain — well enough to do it, not necessarily well enough to write it down precisely. Asking the team to author skills from scratch, against an empty platform, is a setup for failure. We have watched it happen at other vendors. The platform is good. The team is good. The translation between them is the bottleneck.

So we send a forward-deployed engineer. They embed with the finance team, often for the first month and intermittently after. Their job is to:

  • Sit through the actual close, the actual board pack prep, the actual reconciliations. Watch what really happens, not what the runbook says.
  • Wire up the data sources and connectors with the right scopes. Read-only first — see Letting an LLM Write SQL Against Your Warehouse — Safely for the safety model we apply at the connector layer.
  • Author the initial set of skills from our templates — adapted to your chart of accounts, your entities, your terminology, your sign-off chain.
  • Hand the skills over to your team in a state where they can be reviewed, modified, and owned internally.

This is not a managed service. It is a one-time investment in translation. Once it is done, your team owns the skills, can edit them, can version them, can decide which model providers to route to. The agent is yours to run.

The economics

People sometimes expect this model to be expensive. In practice it is the cheapest way to actually land an AI deployment in a finance function — because the alternative is not "cheaper deployment," it is "no deployment that survives a quarter."

The right comparison is not "FDE engagement vs. SaaS-only platform." It is "FDE engagement that produces a working, owned set of skills" vs. "SaaS-only platform that gets quietly shelved after the first month-end." We have seen the second outcome enough times to optimise hard against it.

A worked example

A close skill we author with most customers in week three or four typically:

  • Pulls trial balance and sub-ledger data from the warehouse.
  • Reconciles bank, AP, AR, and intercompany against defined tolerances.
  • Flags variances against prior period and forecast.
  • Produces a draft commentary, in your house style, that goes to the controller for review.
  • Logs every step, every model call, every data point used.
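To make the reconciliation step concrete, here is a minimal sketch (account names, balances, and tolerances are invented for illustration): compare each general-ledger balance to its sub-ledger counterpart and flag anything outside the defined tolerance.

```python
# Hypothetical reconciliation check: compare GL balances to sub-ledger
# balances against a per-account tolerance, flagging anything outside it.

def reconcile(gl: dict[str, float], subledger: dict[str, float],
              tolerance: dict[str, float]) -> list[dict]:
    flags = []
    for account, gl_balance in gl.items():
        sl_balance = subledger.get(account, 0.0)
        variance = gl_balance - sl_balance
        if abs(variance) > tolerance.get(account, 0.0):
            flags.append({"account": account, "variance": round(variance, 2)})
    return flags

flags = reconcile(
    gl={"bank": 120_450.00, "ap": -58_200.00, "ar": 74_310.50},
    subledger={"bank": 120_450.00, "ap": -58_150.00, "ar": 74_310.50},
    tolerance={"bank": 0.01, "ap": 25.00, "ar": 25.00},
)
# AP is out by 50.00 against a 25.00 tolerance, so it is the only flag
```

Because the tolerances live in the skill definition rather than in someone's head, tightening them is an edit the controller can make and review, and every flagged variance is logged with the inputs that produced it.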

The first version is authored by the FDE in collaboration with the controller. By month two, the controller is editing the skill themselves. By month three, the team has authored two more skills on their own — typically board pack prep and a recurring lender update.

That is the path we want every customer on. Skills, owned by you, authored with help, governed by default. Prompting is fine for individuals. Skills are how a finance function actually adopts AI without giving up control.


Coming soon — engineering deep-dive: Skills as runtime artifacts, not prompts. How a skill is actually packaged, shipped with the deploy bundle, and loaded by the worker at runtime — kept structurally separate from application code so it can be reviewed, versioned, and rolled back the same way any other production artifact is.

Frequently asked

What's a 'skill' in this context?
A versioned, reviewable definition of how a recurring process is executed end to end — month-end close, pipeline hygiene, deal-desk approvals, incident triage. It encodes the data sources, the steps, the validation rules, and the expected outputs, and it runs the same way on every cycle. A skill is to a prompt what a SQL view is to an ad-hoc query.
What does a forward-deployed engineer actually do?
They sit alongside the team for the first cycles of an engagement, wire up the data sources, encode the team's actual process into skills, and hand back something that runs deterministically. The deliverable is the skill, not a slide deck — and because it's versioned, the team can review and modify it without going back to the vendor.
Why not just give finance teams a prompting interface and let them build it?
Three reasons: variance (the same prompt produces different answers), invisibility (no one can review or audit the prompt that ran last month), and skill drift (the institutional knowledge of how a process actually works lives in chat history nobody reads). Skills replace all three with a versioned artifact a CFO can sign off.
#skills #forward-deployed-engineering #finance-ops #implementation
Talk to us

Your data. Your model. Your infrastructure.

Bring AI productivity to your finance, operations, and sales teams without handing over your data estate, your deployment posture, or your model strategy — and without inflating the cost of your stack. Your processes encoded as skills, authored with you by our engineers — on-prem or VPC, LLM-agnostic, governed by default.