# What 'Customer-Controlled AI' Actually Means for Finance

> Most AI tools ask finance leaders to give up the data, the model, and the deployment posture in one go. There is another way.

Published: 2025-11-23
Updated: 2026-04-22
Author: Archie Norman (Founder, Tallie AI)
Category: Strategy
Tags: customer-controlled-ai, finance-ai, data-sovereignty, cfo
Canonical URL: https://tallie.ai/blog/customer-controlled-ai

## TL;DR

- 'Customer-controlled AI' means the customer keeps three things: where the agent runs, which model it routes to, and how it touches their data — not just one of the three.
- Most enterprise AI tools collapse all three decisions into a single procurement, which is why CFOs end up with vendor lock-in, data egress they can't audit, and a per-seat bill that escalates with usage.
- The right default is read-only access against existing systems, deployment inside the customer's perimeter, and per-task model routing the customer can change without re-platforming.

---

When a CFO asks an AI vendor "where does our data go?", they usually get one of three answers. None of them is very good.

The first is "trust us, it's encrypted in transit and at rest." The second is "we have SOC 2." The third — most honestly — is "it goes to whichever model provider we route to that day, and we can't really tell you what they do with it."

For a finance estate, none of those answers clears the bar. The general ledger, payroll, customer contracts, and forecasting models are not just sensitive data — they are the data that determines whether the rest of the business is viable. They do not belong in someone else's training corpus, and they do not belong in a vendor's data lake.

So we built Tallie around a different default: **customer-controlled execution**.

## What "control" means in three planes

It is easy to use the word "control" loosely. We try to be specific. There are three things a finance team needs control over, and most AI products give them none of the three.

1. **The data plane.** The agent reads from your warehouse, your lake, and your accounting and ops systems *in place*. There is no central AI lake. No nightly sync into a vendor cluster. No prompt-by-prompt copy of your trial balance into a model provider's logging pipeline.
2. **The model plane.** You decide which model handles which task. OpenAI for one workload, Anthropic for another, an open-weights model for the regulated ones. Switch when pricing or capability changes — without re-platforming. We make the architectural case for that in [LLM-Agnostic by Design](/blog/llm-agnostic-by-design).
3. **The execution plane.** The agent worker runs where you say it runs. Managed cloud, your VPC, on-prem. The compute boundary is yours. The phased path for getting there is in [On-Prem AI for Finance](/blog/on-prem-ai-for-finance).
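To make the model plane concrete, here is a minimal sketch of what customer-owned, per-task model routing could look like. Everything in it is illustrative — the task names, providers, and the `ROUTING_TABLE` / `route` names are hypothetical, not Tallie's actual configuration schema:

```python
# Hypothetical illustration of per-task model routing. The customer,
# not the vendor, owns this mapping; provider and model names below
# are placeholders, not a real Tallie config.

ROUTING_TABLE = {
    # task                 -> (provider, model) chosen by the customer
    "draft_commentary":    ("openai", "gpt-4o"),
    "reconciliation":      ("anthropic", "claude-sonnet"),
    "regulated_review":    ("self_hosted", "open-weights-70b"),
}

def route(task: str) -> tuple[str, str]:
    """Resolve which provider/model handles a given task.

    Switching providers is a config edit here, not a re-platforming:
    change the table entry and the next run routes differently.
    """
    if task not in ROUTING_TABLE:
        raise KeyError(f"No routing rule for task: {task}")
    return ROUTING_TABLE[task]

print(route("reconciliation"))  # ('anthropic', 'claude-sonnet')
```

The point of the sketch is the ownership boundary: because the mapping lives in the customer's configuration rather than inside the vendor's product, a pricing or capability change at one provider is a one-line edit, not a migration.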

A "copilot" gives you exactly zero of these. You get a chat box and a prayer. That is fine for drafting an email. It is not fine for closing the books.

## The CFO's instinct is right

When finance leaders push back on AI, they are often dismissed as cautious or behind. We think the instinct is correct. The job of a finance function is to maintain a defensible record of what is true. AI tooling that obscures where the data went, which model produced an answer, and what the model was actually allowed to do is not compatible with that job.

Customer-controlled AI is not a productivity sales pitch. It is a way to make the answer to "what did the agent do, on which data, with which model" inspectable — by you, by your auditors, by your regulators.

## What this looks like in practice

Concretely, a customer-controlled deployment of Tallie takes a few recognisable forms:

- The agent worker runs inside your VPC or on-prem environment. Data and compute stay inside your perimeter.
- Connectors are scoped. Read-only by default. The agent can read the warehouse, but it cannot create journal entries unless you have explicitly approved that capability.
- Every run is logged: the prompt, the tool calls, the data the agent saw, and which model produced the output. Finance and IT can replay any decision.
- Skills — the encoded, versioned definitions of how your team runs a recurring process — make the agent's behaviour predictable. The same close runs the same way each month. More on that delivery model in [Skills, Not Prompts](/blog/skills-not-prompts).
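The logging and read-only-by-default points above can be sketched as a single replayable run record. This is an assumption-laden illustration — the field names, the `RunRecord` class, and the example values are hypothetical, not Tallie's actual audit schema:

```python
# A hypothetical, minimal shape for a replayable run record.
# Field names and values are illustrative, not a real audit schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class RunRecord:
    run_id: str
    skill: str                     # which versioned Skill governed the run
    model: str                     # which model produced the output
    prompt: str
    tool_calls: list[str] = field(default_factory=list)
    tables_read: list[str] = field(default_factory=list)
    wrote_anything: bool = False   # read-only by default; writes need approval

record = RunRecord(
    run_id="2026-04-close-017",
    skill="month_end_close@v3",
    model="anthropic/claude-sonnet",
    prompt="Reconcile intercompany balances for April.",
    tool_calls=["warehouse.query", "report.render"],
    tables_read=["gl.trial_balance", "gl.intercompany"],
)

# Serialise so finance and IT can inspect or replay the run later.
print(json.dumps(asdict(record), indent=2))
```

The useful property is that one record answers all three audit questions at once — what the agent did (`tool_calls`), on which data (`tables_read`), with which model (`model`) — and the `wrote_anything` flag defaults to false, matching the read-only posture described above.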

None of this slows the team down. If anything, it is the lack of these guarantees that has been keeping finance teams from using AI seriously.

## Where to start

If you are evaluating AI for finance, ask the vendor three questions:

1. Can the worker run inside our environment?
2. Can we choose the model provider per workload?
3. Where do prompts and outputs land, and for how long?

If the answers are "no, no, and on our servers," you are not buying a control layer. You are buying a copilot, and copilots are not the right shape for a finance function.

The bar is rising. Customer-controlled AI is what finance leaders should expect — and what their auditors and CISOs will start to require. We would rather build to that bar from day one than retrofit it later.

---

**Coming soon — engineering deep-dive: *Separating the control plane from the execution plane.*** How we split orchestration, state, and audit (the control plane) from the long-running workers that actually call models and touch your data (the execution plane) — and why that split is what makes a "your VPC" or "on-prem" deployment a configuration choice rather than a re-architecture.
