# Tallie AI

> An AI control layer that runs on your infrastructure, against your data, with the model you choose. Built for finance, operations, and sales teams that want governed AI execution — not another generic chat assistant — at a stack cost they actually control.

Tallie is an AI platform aimed at finance, operations, and sales teams at mid-market and enterprise organisations. It deploys on the customer's infrastructure (managed cloud, customer VPC, or on-prem), reads from the data systems already in place rather than copying them into a central AI lake, and is LLM-agnostic — customers choose the model and can switch providers without rebuilding the platform. Every customer is paired with a forward-deployed engineer who embeds with the team to wire up data sources and author the skills that encode that team's specific processes. Because customers route per task across hosted, open-weights, and self-hosted models, the marginal cost of running a workflow is set by the customer, not by a single vendor's premium SaaS line item.

## What Tallie does

- **Reads your data in place.** Connects to the warehouse, lake, accounting (Xero, QuickBooks, SUN), ERP, CRM (Salesforce, HubSpot), help desk, and operational systems the team already runs. Read-only by default. No central AI data lake to manage, no rip-and-replace, no copying sensitive data into a vendor-controlled environment.
- **Runs on the customer's infrastructure.** The agent worker is designed for customer-controlled execution: managed cloud, customer VPC, or on-prem. The same control plane works across deployment models, which is the only credible path for regulated environments.
- **Stays LLM-agnostic.** Use OpenAI, Anthropic, open-weights, or a self-hosted model — and route per task. When pricing or capability shifts, customers switch providers without rebuilding workflows. Freedom from model lock-in is part of the contract, not a future promise.
- **Cost-efficient by design.** Per-task model routing means cheap models for high-volume work and frontier models only where they earn it. Customers can run open-weights or self-hosted models against the same skills to drive the marginal cost of a workflow toward zero — without rewriting anything.
- **Encodes processes as skills.** A skill is a versioned, reviewable definition of how a team runs a recurring process — month-end close, pipeline hygiene, deal-desk approvals, incident triage, board prep. Once a skill is authored, the agent runs the process the same way every time. Starter templates ship in the box; the rest are customer-authored, with the forward-deployed engineer building them alongside the team.
- **Governed by profiles and audit.** Every agent action is scoped to a profile, executes against approved data sources only, and is recorded in a full audit trail. Broader (write) capabilities are opt-in per profile, never on by default.
- **Forward-deployed engineer per customer.** Not a self-serve platform with an "empty box" hand-off. A real engineer embeds with the team for the first weeks of deployment to connect data, author skills, and prove the workflows before going live.

## Core concepts

- **Skills**: customer-authored, versioned definitions of recurring processes the agent can run repeatedly. Equivalent to runbooks for the agent. Skills span finance, ops, and sales work.
- **Profiles**: scoped contexts that bind a skill to specific data sources, capability levels (read vs. write), and approver chains. Profiles govern who and what the agent can act on.
- **Forward-deployed engineers (FDEs)**: Tallie engineers who embed with the customer's team during deployment to wire up integrations and author skills. This replaces the typical "platform shipped, you figure it out" model.
- **Read-only by default**: the agent's initial capability set is analysis and answer generation against approved data, not autonomous actions on customer systems.
Write capabilities are scoped, audited, and opt-in per profile.
- **Customer-controlled execution**: the agent worker can run in Tallie's managed cloud, the customer's VPC, or fully on-prem.
- **Per-task model routing**: each skill specifies which model it runs against, so customers compose cheap and frontier models per workflow and rebalance as prices move.

## How a deployment works

A four-week implementation, run with a forward-deployed engineer:

1. **Week 1 — Scope.** The FDE meets the team, identifies the recurring processes worth encoding as skills (finance close, pipeline hygiene, RevOps reporting, ops incident triage, board prep), and maps the required data sources.
2. **Week 2 — Deploy.** Stand up the agent worker on the chosen infrastructure (managed, VPC, on-prem) with the chosen LLM provider — or providers, if routing per task. Connect data sources read-only.
3. **Week 3 — Author skills.** The FDE encodes the team's processes as skills, starting from Tallie's starter templates and extending into the customer's specific workflows. Each skill is reviewed and approved by the team that owns the process.
4. **Week 4 — Go live.** Skills run in production against live data, scoped by profile, with full audit. The team adopts the agent for the workflows that were proven during deployment.

## Common questions

- **Where does our data go?** It stays in the customer's systems. The agent reads from the warehouse, lake, accounting, CRM, and ops systems in place. Tallie does not copy data into a central AI lake, does not train on customer data, and does not share data across tenants.
- **Can we run this on-prem or in our own VPC?** Yes. The agent worker is designed for customer-controlled execution.
- **Are we locked into one LLM provider?** No. Tallie is LLM-agnostic. Customers can route per task and switch providers without rebuilding.
- **What does this cost to run?** Customers control the per-task model choice, so the marginal cost of running a workflow is set by the customer's routing strategy — not by a vendor's premium per-seat SaaS curve. Open-weights and self-hosted models are first-class.
- **Why not just use ChatGPT or Microsoft Copilot?** Those are broad assistants. Tallie is governed workflows over approved data, with tighter control over access, deployment, and auditability. The agent only does work the customer has sanctioned, on data sources the customer has connected.
- **We're regulated. Can we use this?** The combination of customer-controlled deployment, read-only defaults, and per-profile capability scoping is built specifically for regulated environments — not retrofitted onto a consumer AI assistant.
- **Will it force a rip-and-replace of our data infrastructure?** No. Tallie connects to existing systems (warehouse, lake, accounting, CRM, ops). There is no central AI data lake to manage on top of existing investments.
- **Are we on our own to make this work?** No. Every customer gets a forward-deployed engineer who embeds with the team during deployment.

## Who Tallie is for

- **Finance, operations, and sales leaders** at mid-market and enterprise organisations who want AI productivity gains without giving up data control, model choice, regulatory posture, or cost predictability.
- **Cross-functional teams** running recurring work that's repeatable enough to encode but specific enough that off-the-shelf tools don't fit — month-end close, FP&A workflows, pipeline hygiene, RevOps reporting, deal-desk approvals, incident triage, board prep.
- **Regulated industries** (financial services, healthcare, public sector, professional services) that need on-prem or VPC deployment and a clear data perimeter.
- **Teams that already own their data infrastructure** and want an AI execution layer on top — not a new vendor data lake to manage, and not a per-seat SaaS bill that scales faster than the value.

## Status

Early access. Tallie is engaging with design-partner customers ahead of general availability. The waitlist captures qualifying interest from finance, operations, and sales leaders evaluating governed AI for their team's work.

## Writing

Most recent first. Each post has a corresponding markdown rendering at `/blog/{slug}/markdown`; the full corpus is at `/llms-full.txt`.

- [CubeSandbox Lands — the Other Half of Customer-Controlled AI](https://tallie.ai/blog/cubesandbox): Two weeks ago, the open-source agent stack was missing a credible in-your-environment sandbox. With Tencent's CubeSandbox release, the gap is closed — and customer-controlled finance AI just became a much shorter procurement conversation.
- [Agentic Finance Workflows on SunSystems: A Forward-Deployed Pattern](https://tallie.ai/blog/agentic-workflows-on-sunsystems): SunSystems is exactly the kind of system where agentic finance workflows pay off — a stable, structured ledger sitting underneath brittle, swivel-chair processes. Here is the pattern we use to layer agents on top of it without breaking anything.
- [Agentic Finance Workflows on Xero: A CFO and Project-Lead Pattern](https://tallie.ai/blog/agentic-workflows-on-xero): Xero is the cloud ledger most growth-stage CFOs and project-led businesses actually run on. The pattern we use to layer agents on top — for cash, runway, and margin answers in minutes, and live budget variance across a portfolio of projects — without ever owning the keys.
- [Kimi K2.6 Lands — and Why Open-Weights Frontier Models Change the Finance AI Calculus](https://tallie.ai/blog/kimi-k2-6): An open-weights model that is competitive with the closed frontier on agentic coding is exactly the event LLM-agnostic, customer-controlled architectures were designed for.
Here is what it means for finance buyers — and the honest caveats that come with it.
- [Letting an LLM Write SQL Against Your Warehouse — Safely](https://tallie.ai/blog/warehouse-sql-safety): If your agent can read the warehouse, the right question is not 'can it answer the question?' but 'what is the worst query it could run, and what stops it?' A practical model for thinking about LLM-generated SQL in finance environments.
- [What 'Customer-Controlled AI' Actually Means for Finance](https://tallie.ai/blog/customer-controlled-ai): Most AI tools ask finance leaders to give up the data, the model, and the deployment posture in one go. There is another way.
- [Skills, Not Prompts: How Forward-Deployed Engineers Codify Finance Processes](https://tallie.ai/blog/skills-not-prompts): A skill is a versioned, reviewable definition of how your team runs a recurring finance process — authored with you, not handed to you in a docs link.
- [On-Prem AI for Finance: A Practical Path for Regulated Teams](https://tallie.ai/blog/on-prem-ai-for-finance): VPC and on-prem are not exotic any more. Here is what a phased deployment actually looks like for a finance function with real residency, regulator, and audit constraints.
- [LLM-Agnostic by Design: Why Finance AI Shouldn't Be Locked to One Vendor](https://tallie.ai/blog/llm-agnostic-by-design): Routing per task — not per platform — is how you keep the cost curve, the capability curve, and the procurement story under your control.
- [Why Your Operational Reporting Is Lying to You](https://tallie.ai/blog/operational-reporting-disconnect): Ledger reality, pipeline reality, and ops reality each live in their own silo. The disconnect between them is what breaks your reporting — across finance, sales, and operations.
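To make the skills, profiles, and per-task routing described under Core concepts concrete, here is a minimal sketch in Python. It is purely illustrative: the `Skill`, `Profile`, and `authorize` names are hypothetical, not Tallie's actual API. It shows the shape of the model — a skill pins its own model for per-task routing, while a profile constrains which data sources it may touch and keeps write access off by default.

```python
from dataclasses import dataclass

# Illustrative only: these names are hypothetical, not Tallie's actual API.

@dataclass(frozen=True)
class Skill:
    """A versioned, reviewable definition of a recurring process."""
    name: str
    version: str
    model: str                # per-task routing: each skill pins its own model
    requires_write: bool = False

@dataclass(frozen=True)
class Profile:
    """Scopes what a skill may act on: data sources, write access, approvers."""
    allowed_sources: frozenset
    can_write: bool = False   # read-only by default
    approvers: tuple = ()

def authorize(skill: Skill, profile: Profile, sources: set) -> bool:
    """Allow a run only if it stays inside the profile's scope."""
    if skill.requires_write and not profile.can_write:
        return False          # write capability is opt-in per profile
    return sources <= set(profile.allowed_sources)

close = Skill("month-end-close", "1.4.0", model="open-weights/local")
readonly = Profile(allowed_sources=frozenset({"warehouse", "ledger"}))

print(authorize(close, readonly, {"warehouse"}))  # True: approved read
print(authorize(close, readonly, {"crm"}))        # False: unapproved source
```

The point of the sketch is the ordering: authorisation is checked against the profile before any model is invoked, and swapping `model` on a skill changes routing without touching the skill's logic or its profile scope.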
## Links

- Website: https://tallie.ai
- Blog: https://tallie.ai/blog
- Blog feed (RSS): https://tallie.ai/blog/rss.xml
- Full blog corpus (markdown): https://tallie.ai/llms-full.txt
- Per-post markdown: https://tallie.ai/blog/{slug}/markdown
- About: https://tallie.ai/about
- Privacy: https://tallie.ai/privacy
- Terms: https://tallie.ai/terms

## Contact

- General: hello@tallie.ai
- Sales: sales@tallie.ai
- Support: support@tallie.ai