# Agentic Finance Workflows on SunSystems: A Forward-Deployed Pattern

> SunSystems is exactly the kind of system where agentic finance workflows pay off — a stable, structured ledger sitting underneath brittle, swivel-chair processes. Here is the pattern we use to layer agents on top of it without breaking anything.

Published: 2026-04-21
Updated: 2026-04-21
Author: Archie Norman (Founder, Tallie AI)
Category: Implementation
Tags: sunsystems, infor, ssc, agentic-finance, erp, month-end, implementation
Canonical URL: https://tallie.ai/blog/agentic-workflows-on-sunsystems

## TL;DR

- SunSystems is a near-perfect substrate for agentic finance: a stable XML payload API (SunSystems Connect), a fully typed component surface, and a ledger model that has not changed shape in twenty years.
- The right unit of work is a `skill` mapped to one or more SSC components — not a free-form prompt that "calls SunSystems". Skills make every agent action reviewable, scoped, and reproducible against a given Business Unit.
- Mutation safety on a ledger API is the same problem as warehouse SQL safety, with one extra rule: every write goes through `ValidateOnly` first, every `Journal.Import` carries a deterministic `MethodContext`, and every payload is logged against the originating user's `<User><Name>`.
- The deployment posture that actually clears procurement is the agent runtime sitting next to SunSystems — same VPC or on-prem segment — talking to SSC over the loopback, with the LLM call routed wherever the customer wants it routed.

---

Most of the agentic-finance conversation in 2026 is happening on top of NetSuite or some Snowflake-shaped data lake. That is not where most finance teams actually live. A surprising number of them live on SunSystems — Infor's mid-market ledger that quietly runs the books for a long tail of hospitality groups, NGOs, real-estate operators, multi-currency professional services firms, and shipping companies. Most of these teams have a stable ledger, an unhappy operations team, and a Transfer Desk window left open on someone's second monitor.

We have done a few of these now. The pattern is consistent enough to write down. (For the cloud-native sibling of this post — same shape, different plumbing — see [Agentic Finance Workflows on Xero](/blog/agentic-workflows-on-xero).)

## Why SunSystems is a good substrate for agents

The instinct, when you hear "AI for SunSystems," is sympathy. SunSystems is old. The UI is older. There is a `.NET` thick client involved somewhere. Surely the *modern* stack is the place to start.

That instinct is wrong. SunSystems is, in practice, an unusually good substrate for agentic workflows, for three structural reasons:

1. **The integration layer is fully typed and payload-driven.** SunSystems Connect (SSC) is an XML-payload API that exposes the entire ledger surface — every component you can drive in the UI is reachable as a `<Payload>` against the same component. Each component has a small set of methods (`Add`, `Update`, `Delete`, `Query`, plus component-specific ones like `Import` and `ValidateOnly`), and each method has an explicit, documented payload schema. For an agent, this is closer to a well-designed SDK than to "an ERP integration."
2. **The ledger model has been stable for twenty years.** Chart of accounts, journal lines with up to ten analysis dimensions, business units, budgets, allocation rules, value labels — the shape has not meaningfully changed since the 5.x line. Skills built against the SSC component surface today will still work after the next upgrade, because Infor maintains backward compatibility on the payload shape with religious discipline.
3. **Security is record-level and pre-existing.** Data Access Groups, Miscellaneous Permissions, Business Unit Administration — the role model SunSystems shipped before "principle of least privilege" was a phrase a vendor would use is exactly the role model an agent should run inside. We do not need to invent an authorisation system. We need to honour the one already in production.
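To make the payload-driven shape concrete, here is a minimal Python sketch of assembling one request envelope. The element names (`User/Name`, `MethodContext`, `Payload`) follow the ones used in this post; the exact schema varies by SSC version and component, so treat this as illustrative rather than a wire-accurate envelope:

```python
# Illustrative SSC-style request envelope. Element names follow this
# post; a real SSC deployment's schema may nest these differently.
import xml.etree.ElementTree as ET

def build_ssc_request(user_name: str, business_unit: str,
                      component: str, method: str,
                      payload_fields: dict) -> str:
    """Assemble one request envelope for a single component method."""
    root = ET.Element("SSC")

    # Identity: always the human the agent acts for, never a shared
    # service account.
    user = ET.SubElement(root, "User")
    ET.SubElement(user, "Name").text = user_name

    # Context fixed by the caller, not chosen by the model.
    ctx = ET.SubElement(root, "MethodContext")
    ET.SubElement(ctx, "BusinessUnit").text = business_unit
    ET.SubElement(ctx, "Component").text = component
    ET.SubElement(ctx, "Method").text = method  # e.g. Query, ValidateOnly, Import

    # The payload carries only the fields the skill allows.
    payload = ET.SubElement(root, "Payload")
    for name, value in payload_fields.items():
        ET.SubElement(payload, name).text = str(value)

    return ET.tostring(root, encoding="unicode")

xml_request = build_ssc_request(
    "jbloggs", "PK1", "Journal", "Query",
    {"AccountRange": "6700-6799", "AccountingPeriod": "202503"},
)
```

The point is not the XML dialect; it is that every request is a fully explicit, loggable document in which the caller, the Business Unit, and the method are pinned before the model contributes a single field.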

A modern agent runtime talking to a 25-year-old ledger sounds like a mismatch. It is not. It is a stable, structured back end finally getting an interface that can talk to its actual users — through their words, on the screen they are already looking at.

## The wrong way to do this

The tempting first move is "let the LLM call SunSystems." A model with tool access, a tool that wraps the SSC HTTP endpoint, and a system prompt that says "be careful." Demos beautifully. Survives no procurement cycle. Falls over the first time a user pastes a journal description that contains the word "delete."

There are three failure modes that show up in week one:

- **Unbounded mutation surface.** The model can, in principle, emit a payload against any component. The blast radius of a single confused turn includes the chart of accounts, supplier records, allocation rules, and the actual ledger.
- **No reproducibility.** A free-form tool call is a one-off. There is no artifact a controller can review, edit, version, or hand to an auditor. "What did the AI do last quarter?" has no answer.
- **No respect for SunSystems' own controls.** If the agent connects as a single service account, every Data Access Group and Miscellaneous Permission you have spent a decade configuring is bypassed. This is the part that ends the procurement.

The fix is not "a more careful prompt." It is the same shift we apply elsewhere: stop letting the model decide *what kind of action* is happening, and only let it decide *the parameters within an action*. The thinking is the same as the four-layer model we use for [LLM-generated SQL against a warehouse](/blog/warehouse-sql-safety) — extended one extra step, because here the agent can mutate the system of record, not just read from it.

## Skills, not free-form tool calls

The unit of work in our SunSystems deployments is a **skill**: a versioned, code-reviewed artifact that describes one finance operation, the SSC component(s) it uses, the parameters the agent is allowed to set, and the validation that runs before any payload reaches `Import`.

![Skill execution flow: the agent proposes a payload, the skill runs SSC ValidateOnly, the response is shown to the user, and only on user approval does the skill issue the real Import.](/blog/diagrams/sunsystems-skill-flow.png)

A skill is not a prompt. It is a small bundle, deployed alongside the agent runtime, that contains:

- **The component method it wraps.** For example: `AccountAllocations.Update` for re-coding analysis on existing transactions, or `Journal.Import` for posting a correcting journal.
- **A scoped parameter schema.** The skill declares which fields the agent is allowed to populate (e.g. `AnalysisCode3`, `Description`, `AccountRange`) and which are fixed by the skill itself (e.g. the `MethodContext` block, the `BusinessUnit`, the `JournalType`, the suspense account).
- **A `ValidateOnly` step.** Every mutation skill emits its payload in validate-only mode (for `Journal`, via the explicit `ValidateOnly` method) before the real `Import`. The validation response is fed back to the agent, which has to acknowledge it before the real call is allowed.
- **A user identity contract.** The `<User><Name>` element on every payload is the SunSystems identity of the *person* the agent is acting on behalf of — never a service account. If that user does not have permission for the operation, SunSystems rejects it. We do not have to build a parallel permissions layer; we just have to honour the existing one.
- **A deterministic audit log line.** Every skill execution writes a structured record: who asked, which skill ran, what payload was sent, what `ValidateOnly` said, what `Import` returned, and which model produced any free-text fields.

This is the part where SunSystems actually helps. The component model is small enough — a few dozen genuinely useful components for a typical deployment — that authoring skills is a tractable, week-of-work exercise rather than an indefinite engineering programme. Most of the ones a finance team needs already exist by the time we are talking to them.
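A minimal sketch of that bundle in Python. The skill name, field lists, and `build_payload` helper are illustrative, not the real runtime; the point is the shape of the contract, in which the model fills allowed fields and the author pins the rest:

```python
# Sketch of a skill's parameter contract. The field names and the
# "recode" skill below are illustrative; real skills are authored
# against your own component surface and Business Units.
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    component_method: str      # e.g. "LedgerAnalysisUpdate"
    allowed_fields: frozenset  # the model may populate only these
    fixed_fields: dict         # pinned by the skill author, not the model

    def build_payload(self, user: str, model_fields: dict) -> dict:
        # Reject any field outside the declared schema: the model picks
        # parameters within an action, never the shape of the action.
        illegal = set(model_fields) - set(self.allowed_fields)
        if illegal:
            raise ValueError(f"fields outside skill schema: {sorted(illegal)}")
        payload = dict(self.fixed_fields)  # author-pinned values first
        payload.update(model_fields)
        payload["UserName"] = user         # the human's identity, always
        return payload

recode = Skill(
    component_method="LedgerAnalysisUpdate",
    allowed_fields=frozenset({"AnalysisCode3", "Description", "AccountRange"}),
    fixed_fields={"BusinessUnit": "PK1", "JournalType": "CORR"},
)
```

Because `BusinessUnit` is not in `allowed_fields`, a model turn that tries to set it is rejected before any payload is assembled, let alone sent.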

## The four workflows that pay off first

Across the engagements, four shapes show up over and over. They are the ones we recommend starting with:

### 1. Re-coding analysis on already-posted transactions

The single most common operational ask: "we mis-coded a chunk of transactions to the wrong project / cost centre / dimension; can we fix them without reposting." This is what `LedgerAnalysisUpdate` and `AccountAllocations.Update` exist for, and SunSystems users with the rights have always been able to do it via Transfer Desk. The friction is *finding* the transactions to update — usually via a Q&A pass against the ledger, an export to Excel, a manual review, and a re-import.

The skill version, end-to-end:

- A project lead asks: "move all March travel expenses for the legal team off project `LEG-22` and onto `LEG-23` — that work was rebilled."
- The agent runs a `Journal.Query` filtered by `AccountRange = 6700-6799`, `AnalysisCode1 = LEG`, `AnalysisCode2 = LEG-22`, period `202503`, and presents the 47 matched lines back to the user with the proposed `AnalysisCode2` change inline.
- The user spots two lines that should not move (a re-billable expense already invoiced under `LEG-22`) and excludes them with a click.
- The skill issues a single bounded `LedgerAnalysisUpdate` against the remaining 45 lines, with a `ControlTotal` that guarantees the count and value match what was approved. The skill physically cannot operate outside the user's Business Unit, the Budget Code is fixed by the skill author, and every line touched carries the user's `<User><Name>` in the audit trail.

What used to be a 90-minute Excel-and-Transfer-Desk ritual becomes a two-minute conversation. The agent did the boring part — the search, the materialisation, the bounded update. The control regime did not change.
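The control-total guarantee in the last step can be sketched as a small gate. The line records and the `ControlTotal` convention here are illustrative; a real skill would carry the values forward from the approved preview:

```python
# Sketch of the approval gate on the re-coding skill: the bounded
# update is only issued if the line count and summed value still match
# what the user approved. Line records here are illustrative.

def control_total(lines):
    """Count and summed value of the lines about to be updated."""
    return len(lines), round(sum(l["amount"] for l in lines), 2)

def issue_bounded_update(approved_lines, current_lines, send):
    approved = control_total(approved_lines)
    current = control_total(current_lines)
    if approved != current:
        # Something changed between review and execution: refuse to post.
        raise RuntimeError(f"control total mismatch: {approved} != {current}")
    return send({"count": current[0], "ControlTotal": current[1],
                 "lines": current_lines})

lines = [{"amount": 120.0}, {"amount": 80.5}]
result = issue_bounded_update(lines, list(lines), send=lambda p: p)
```

If anything drifts between the preview the user approved and the moment of execution, the skill fails closed rather than posting a different set of lines.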

### 2. Driving period-end allocation runs

Allocation rules in SunSystems are powerful and unloved. The rules exist; nobody remembers which ones to run, in which order, against which period. The `AllocationRun` component is built for exactly this orchestration, and a skill can wrap it.

A typical engagement: a hospitality group with eleven properties and a shared services overhead pool. Every month-end a controller has to run six allocation rules in a specific order — first the head-office overhead split, then the regional manager apportionment, then the FX hedge cost reallocation — against each of three Business Units, in dry-run mode first, then for real. The current process is a handwritten checklist taped to a monitor.

The skill version: the controller types "run the standard Q1 month-end allocations for Properties UK, Properties EU, Group Services." The agent walks the documented sequence, fires `AllocationRun` in dry-run mode for each, surfaces the proposed entries with totals per allocation, and waits for approval before issuing the real run. If a rule errors (a missing source-account balance, a denominator that resolved to zero), the agent surfaces the SSC error message verbatim, not a paraphrase.

The skill is mutation-light because the allocation rules themselves are unchanged — the agent is firing pre-existing engines, not writing journal lines from scratch. The win is removing the "did I run rule 4 before rule 5 against EU?" anxiety from the close.
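The sequencing logic is small enough to sketch. The `run_allocation` callable stands in for the real `AllocationRun` call and its signature is an assumption; the rule and Business Unit names are placeholders:

```python
# Sketch of dry-run-then-commit sequencing for allocation runs. The
# run_allocation and approve callables are assumptions standing in for
# the SSC call and the controller's sign-off.

def run_sequence(rules, business_units, run_allocation, approve):
    results = []
    for bu in business_units:
        for rule in rules:  # order matters: rule 4 before rule 5
            dry = run_allocation(rule, bu, dry_run=True)
            if dry.get("error"):
                # Surface the SSC error verbatim; never paraphrase it.
                raise RuntimeError(f"{rule}@{bu}: {dry['error']}")
            if not approve(rule, bu, dry):  # sign-off per rule, per unit
                return results
            results.append(run_allocation(rule, bu, dry_run=False))
    return results
```

The loop encodes exactly the checklist taped to the monitor: the documented order, a dry run before every real run, and a hard stop the moment SunSystems reports a problem.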

### 3. Validated correcting journals from natural language

The "post a journal" use case is the one that makes auditors nervous. The trick is to never let the model post anything; only let it *propose* something. The flow is:

1. The agent gathers the proposed journal as an SSC `Journal` payload.
2. The skill runs `ValidateOnly` against SunSystems and returns the response — every error, every warning, every substituted value — verbatim to the user.
3. The user, looking at exactly what SunSystems said, approves or rejects the post.
4. Only on approval does the skill issue the real `Import`, with a `MethodContext` that the *skill author* defined: posting type, suspense account, layout code, balancing options. Not the model.

A concrete shape: a finance manager at an NGO needs to reclassify £18,400 of grant income that was posted against the wrong donor analysis code in March. She types "draft a correcting journal moving the £18.4k from donor `D-2024-FCO` to `D-2024-FCDO` for March, narrative 'donor recoded — see ticket FIN-1184'." The agent builds a balancing two-line journal payload, runs `ValidateOnly`, and surfaces SunSystems' response: the proposed lines, the GBP total, the period, the analysis substitutions, and one warning that `D-2024-FCDO` is flagged as restricted-fund — would she like the skill to also tag the recipient analysis code as restricted? She approves; the skill `Import`s the journal with the controller's pre-fixed `MethodContext` and writes the SSC response (including the assigned `JournalNumber`) into the audit log.

The model never decides whether a journal is valid. SunSystems does. The model is in charge of phrasing the question.
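The four-step flow reduces to a small two-phase gate. The `ssc_call` transport and the response fields here are assumptions for illustration:

```python
# Sketch of the two-phase journal gate: ValidateOnly first, Import only
# on explicit user approval of the verbatim validation response. The
# ssc_call and ask_user callables stand in for the real transport and UI.

def post_journal(payload, ssc_call, ask_user):
    validation = ssc_call("Journal", "ValidateOnly", payload)
    # The user sees exactly what SunSystems said (errors, warnings,
    # substitutions), not a model paraphrase of it.
    if validation.get("errors"):
        return {"posted": False, "validation": validation}
    if not ask_user(validation):
        return {"posted": False, "validation": validation}
    imported = ssc_call("Journal", "Import", payload)
    return {"posted": True, "journal_number": imported.get("JournalNumber")}
```

Note what is absent: there is no code path from the model's output to `Import` that does not pass through both the ledger's own validation and a human approval.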

### 4. Drillable narratives for management accounts

The lightest-mutation workflow and often the most popular: take a `Journal.Query` or `AccountBalance` result, fold in the Business Unit's analysis dimensions, and produce a narrative — by department, by project, by analysis code — that a finance lead can scan in two minutes instead of building a pivot for thirty.

A typical case: a multi-site operations director wants a Monday morning summary of "what moved in week 16 across the regions." The skill pulls `AccountBalance` for the operating cost accounts across each region's Business Unit, joins to the regional analysis dimension, computes period-on-period and budget-vs-actual deltas, and produces a one-page narrative: "EU region trade spend up £42k week-on-week, driven by three Tier-1 promotional campaigns in DE; UK region utilities down £8k vs last week as the new contract took effect from week 14; Asia region travel costs above budget by £11k, attributable to the Singapore conference (project `MKT-SGAPAC`)."

Every figure in that narrative is a click away from the underlying SSC payload — the journal lines, the analysis codes, the user who posted them. The agent did not invent the numbers. It composed them.
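The arithmetic behind a narrative like that is deliberately boring. A sketch, with illustrative figures standing in for real `AccountBalance` results:

```python
# Sketch of the delta computation behind the narrative: week-on-week
# and budget-vs-actual movements per region. Figures are illustrative.

def movements(this_week, last_week, budget):
    out = {}
    for region, actual in this_week.items():
        out[region] = {
            "wow_delta": round(actual - last_week.get(region, 0.0), 2),
            "vs_budget": round(actual - budget.get(region, 0.0), 2),
        }
    return out

deltas = movements(
    this_week={"EU": 142_000.0, "UK": 64_000.0},
    last_week={"EU": 100_000.0, "UK": 72_000.0},
    budget={"EU": 150_000.0, "UK": 60_000.0},
)
# deltas["EU"]["wow_delta"] is 42000.0: EU spend up £42k week-on-week.
```

The model's only creative job is phrasing; every number in the output is a deterministic function of ledger balances, which is what keeps each figure drillable.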

## The deployment posture that actually clears procurement

SunSystems is, almost by definition, not internet-facing. It runs in a private hosted environment or on-prem; SSC sits behind the same firewall as the database. Any integration architecture that involves "let our cloud reach into your network" stops at the security review.

We support two deployment shapes, and both honour the same control regime:

- **Customer-perimeter deployment.** The agent runtime runs inside the customer's own perimeter — same VPC, same datacentre segment, sometimes literally the same Windows Server hosting the SunSystems application service — talking to SSC over the loopback or a private VLAN. The model call is the only thing that crosses the boundary, and it does not have to: for sensitive workloads we route to a self-hosted open-weights model on the same infrastructure as the agent, with no prompt or response ever leaving the perimeter.
- **Tallie-managed cloud warehouse.** For teams that want the operating model without operating the infrastructure, the agent runtime and a SunSystems-aware warehouse run inside Tallie's managed cloud — segregated per customer, with SSC traffic over a dedicated private link to the customer's SunSystems environment. The customer still picks the LLM: their own OpenAI / Anthropic / Bedrock contract, a self-hosted open-weights model in their cloud, or a model hosted by us on their behalf. Routing is per-task and reversible.

In both shapes, three things remain the same:

- **The LLM is a customer choice.** Per task. Reversible. Including "use ours, not yours."
- **Skills, prompts, and the run log are owned by the customer** and stored where they want them stored — their object storage, ours, or both.
- **Every SSC payload still carries the user's `<User><Name>`** and is subject to their existing Data Access Groups and permissions. The deployment shape changes; the control regime does not.

This is the consistency that matters. "Customer-controlled AI" is not a deployment claim — it is a claim about who decides where the agent runs, which model it routes to, and how it touches data. Both deployment shapes preserve all three decisions for the customer.

## What this looks like on day 90

A finance team using SunSystems with this pattern in place is not doing AI. They are doing finance, slightly faster. The visible changes are small:

- A pane next to Transfer Desk, in their existing SunSystems hosted environment, where they can ask for transactions, propose corrections, draft journals, and run allocations in plain English.
- Every action they take with that pane shows up in their existing SunSystems audit, against their existing user identity, scoped to their existing Business Unit.
- Their CISO has a one-page architecture diagram that contains the words "no data egress," "read-only by default," and "every mutation goes through `ValidateOnly`." Their auditor has the run log.

The general ledger is still in SunSystems. The chart of accounts has not moved. The integrations to Cognos, the bank statement loaders, and the consolidations engine still work exactly as they did. Nothing was rebuilt.

That is what "AI for SunSystems" looks like when it is taken seriously: not a replacement for the ERP, not a wrapper that ignores its controls, but a thin agent layer that finally lets a structured, twenty-year-old back end be operated through a 2026 interface — without breaking the procurement, the audit, or the upgrade path.

> **Building on SunSystems and want a closer look at this pattern?** This is the kind of engagement we are explicitly built for: forward-deployed engineers, customer-controlled AI runtime, skills authored against your own component surface and your own Business Units. [Get early access](/#join) and we will set up an architecture call.
