What 'Customer-Controlled AI' Actually Means for Finance
Most AI tools ask finance leaders to give up the data, the model, and the deployment posture in one go. There is another way.
When a CFO asks an AI vendor "where does our data go?" they usually get one of three answers. None of them are very good.
The first is "trust us, it's encrypted in transit and at rest." The second is "we have SOC 2." The third — most honestly — is "it goes to whichever model provider we route to that day, and we can't really tell you what they do with it."
For a finance estate, none of those answers clears the bar. The general ledger, payroll, customer contracts, and forecasting models are not just sensitive data — they are the data that determines whether the rest of the business is viable. They do not belong in someone else's training corpus, and they do not belong in a vendor's data lake.
So we built Tallie around a different default: customer-controlled execution.
What "control" means in three planes
It is easy to use the word "control" loosely. We try to be specific. There are three things a finance team needs control over, and most AI products give you none of them.
- The data plane. The agent reads from your warehouse, your lake, and your accounting and ops systems in place. There is no central AI lake. No nightly sync into a vendor cluster. No prompt-by-prompt copy of your trial balance into a model provider's logging pipeline.
- The model plane. You decide which model handles which task. OpenAI for one workload, Anthropic for another, an open-weights model for the regulated ones. Switch when pricing or capability changes — without re-platforming. We make the architectural case for that in LLM-Agnostic by Design.
- The execution plane. The agent worker runs where you say it runs. Managed cloud, your VPC, on-prem. The compute boundary is yours. The phased path for getting there is in On-Prem AI for Finance.
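One way to see why the three planes matter is that each should be an independent, reversible configuration choice rather than a single bundled procurement. The sketch below is purely illustrative — the class, field names, and provider strings are hypothetical, not Tallie's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical deployment config: each plane is a separate,
# independently reversible choice. All names here are illustrative.

@dataclass
class DeploymentConfig:
    # Data plane: read systems in place; nothing is copied out.
    data_sources: dict
    allow_egress: bool = False  # no prompt-by-prompt copy to a vendor pipeline

    # Model plane: route each task to a chosen provider, swappable
    # without re-platforming.
    model_routes: dict = field(default_factory=dict)

    # Execution plane: where the agent worker runs.
    runtime: str = "customer_vpc"  # or "managed_cloud", "on_prem"

config = DeploymentConfig(
    data_sources={"warehouse": "snowflake://finance-prod"},
    model_routes={
        "reconciliation": "anthropic/claude",
        "narrative_drafts": "openai/gpt-4",
        "regulated_workloads": "self-hosted/open-weights",
    },
    runtime="customer_vpc",
)
```

Changing the model for one workload, or moving the worker on-prem, is then a one-line edit to the config rather than a re-architecture.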
A "copilot" gives you exactly zero of these. You get a chat box and a prayer. That is fine for drafting an email. It is not fine for closing the books.
The CFO's instinct is right
When finance leaders push back on AI, they are often dismissed as cautious or behind the curve. We think the instinct is correct. The job of a finance function is to maintain a defensible record of what is true. AI tooling that obscures where the data went, which model produced an answer, and what the model was actually allowed to do is not compatible with that job.
Customer-controlled AI is not a productivity sales pitch. It is a way to make the answer to "what did the agent do, on which data, with which model" inspectable — by you, by your auditors, by your regulators.
What this looks like in practice
Concretely, a customer-controlled deployment of Tallie has a few defining properties:
- The agent worker runs inside your VPC or on-prem environment. Data and compute stay inside your perimeter.
- Connectors are scoped. Read-only by default. The agent can read the warehouse, but it cannot create journal entries unless you have explicitly approved that capability.
- Every run is logged: the prompt, the tool calls, the data the agent saw, and which model produced the output. Finance and IT can replay any decision.
- Skills — the encoded, versioned definitions of how your team runs a recurring process — make the agent's behaviour predictable. The same close runs the same way each month. More on that delivery model in Skills, Not Prompts.
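Two of the guardrails above — scoped, read-only-by-default connectors and a replayable run log — can be sketched as a capability check plus an append-only log. This is an illustrative pattern under assumed names, not Tallie's implementation:

```python
import datetime

# Illustrative guardrail pattern: connectors default to read-only,
# and every agent action is appended to a replayable run log.

APPROVED_CAPABILITIES = {"read_warehouse"}  # "create_journal_entry" absent by default
run_log = []                                # in practice: durable, append-only storage

def agent_action(tool, model, data_ref, capability):
    # Refuse anything the customer has not explicitly approved.
    if capability not in APPROVED_CAPABILITIES:
        raise PermissionError(f"capability '{capability}' not approved")
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "model": model,
        "data_seen": data_ref,
        "capability": capability,
    }
    run_log.append(record)  # finance and IT can replay any decision
    return record

# Reading the warehouse is allowed...
agent_action("sql_query", "anthropic/claude", "trial_balance_q3", "read_warehouse")

# ...but posting a journal entry fails until that capability is approved.
try:
    agent_action("erp_post", "anthropic/claude", "je_draft", "create_journal_entry")
except PermissionError as e:
    print(e)
```

The point of the pattern is that write access is an explicit allow-list entry, and the log records prompt, tool, data, and model together so any run can be reconstructed.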
None of this slows the team down. If anything, it is the lack of these guarantees that has been keeping finance teams from using AI seriously.
Where to start
If you are evaluating AI for finance, ask the vendor three questions:
- Can the worker run inside our environment?
- Can we choose the model provider per workload?
- Where do prompts and outputs land, and for how long?
If the answers are "no, no, and on our servers," you are not buying a control layer. You are buying a copilot, and copilots are not the right shape for a finance function.
The bar is rising. Customer-controlled AI is what finance leaders should expect — and what their auditors and CISOs will start to require. We would rather build to that bar from day one than retrofit it later.
Coming soon — engineering deep-dive: Separating the control plane from the execution plane. How we split orchestration, state, and audit (the control plane) from the long-running workers that actually call models and touch your data (the execution plane) — and why that split is what makes a "your VPC" or "on-prem" deployment a configuration choice rather than a re-architecture.
Frequently asked
- What does 'customer-controlled AI' actually mean?
- An AI deployment where the customer retains control of three things at once: where the agent runs (their cloud, VPC, or on-prem), which model the agent routes to per task, and how the agent touches data (read-only by default, no egress to a vendor data lake). Most enterprise AI tools collapse all three decisions into a single procurement; customer-controlled AI keeps them independent and reversible.
- Does my finance data leave our environment?
- No. The agent worker — the execution plane that actually touches data — runs inside the customer's perimeter — managed cloud, VPC, or on-prem — and reads existing systems in place rather than copying them into a vendor lake. Model calls can be routed to a self-hosted open-weights model so prompts and outputs never leave the perimeter at all.
- How is this different from giving Copilot or ChatGPT access to our ledger?
- Copilot and ChatGPT are end-user assistants: the customer sends prompts, the vendor decides which model runs them, and the vendor controls the data path. Customer-controlled AI inverts that — the customer chooses the model, owns the data path, and the vendor provides the runtime, the skills, and the deployed engineer to make it work.