Fora scaled 3x in a year, but their finance team didn't

Fora's small team was managing millions in spend and hundreds of thousands of transaction lines a month. The close process they inherited had no shot at keeping up.

So they rebuilt it. 

Join Campfire, Ramp, and Fora's Controller for a behind-the-scenes look at where they automated, what they cut, and how they trimmed close time by 20% without adding headcount.

Save your spot to see exactly how a lean team runs at hypergrowth speed

Are you considering a new ERP implementation? Or maybe you want to better understand what an AI-native ERP actually means?

I want to create more content to help you with that. Take this two-minute survey to tell me your biggest questions about implementing a new ERP solution.

Welcome to part 3 of this four-week series: The No BS Guide to AI for CFOs

In the first week, we made the case that it's time to move beyond small-scale experiments and into widespread AI adoption across the finance function. 

Last week, we got under the hood: what AI is actually good at, where it breaks, and why you must not confuse AI with automation.

This week, we get practical and into the key decisions: what to buy, what to build, and what to leave alone.

The last thing you need is to spend 3 years locked into the wrong tools, the wrong data architecture, and the wrong control environment.

How to use this post …

This Playbook is a little different from most.

The success of these Saturday newsletters has been built on experience-led insights. I share what has actually worked for me, based on too many years in boardrooms, spreadsheets, and shop floors.

There is no agreed playbook for implementing AI at scale in finance… yet.

The tools are developing too quickly. The implementation models are still forming. And even if they weren't, nobody has done this enough times to have the answers locked down.

I'm not going to preach false certainty; this topic is too big and too important for that.

But I am committed to building that playbook alongside you, as we figure out the right path together.

So today, we’ll get into:

  • A framework for cutting through the tool noise and making the right architecture decisions

  • The no-regrets moves you can make right now, regardless of where you sit on the adoption ladder

  • The data security landmines to avoid

  • What you definitely should, and should not, be doing right now

“What’s AI about that…?”

You are likely snowblind with AI promises from software providers. Me too…

So, it’s important to be able to understand the precise role AI is playing in the software promise. What I see today are five different types of ‘AI’ products you need to think about:

  1. Foundation Models

  2. Foundation Model Extensions

  3. AI-wrappers

  4. AI-native platforms

  5. Custom-build

Let’s dive into each one in more depth.

  1. Foundation Models (Direct); e.g. ChatGPT, Claude, Gemini

Pure AI reasoning power, direct from the underlying model. The raw honey straight from the hive…

This is when you throw a problem directly into a chat interface. As we covered last week, that pure reasoning power is right for some problems and wrong for others.

The only guardrails here are the model's own judgment and the quality of your prompt.

That's not to say it's unsuitable. It can work really well for finance problems like:

  • Drafting the narrative for a board pack variance analysis

  • Synthesizing investor questions ahead of an earnings call

  • Reviewing a supplier contract clause against your standard terms

Anything that requires analysis and true thinking.

But there is high exposure to user error here: from using AI where it shouldn’t be used, to poor prompting and missing context.

Not to mention inconsistency risk (same prompt ≄ same answer), data privacy concerns, and cost creep.

These issues are manageable with a trusted single operator, but very difficult to control at scale.

  2. Foundation Model Extensions; e.g. NotebookLM, Claude Code, Cowork, Claude for MS Office, Deep Research, Copilot, Perplexity, etc.

This is still leaning on the raw models, but this time calling on tools that put some familiar guardrails around their power.

The key difference vs using the model directly is that the AI is now operating inside a defined tool, with access to specific data and actions. Each is designed for a narrower set of tasks.

This is like AI with a job description rather than a blank canvas.

The capabilities here are growing all the time. So this might read like ancient history 3 months from now. But some interesting finance use cases today:

  • Uploading a competitor’s earnings call transcript and deck into NotebookLM and asking it to summarize the takeaways and mood. Can even be delivered as a conversational AI-generated podcast. (These are great. My kids are using it for exam study too.)

  • Using Claude for Word (launched this week) to work through complex technical Word documents in a track changes format using natural language. Will be amazing for managing back-and-forth on legal documents.

  • Using tools like Perplexity or Deep Research to quickly build a sourced view on a market, competitor set, or transaction.

  • Using Skills, Projects, Cowork, etc. to build simple workflow automations and put controls around the model inputs and outputs (a sketch of the output-control idea follows this list)

  • Using Claude Code to build custom applications
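To make “controls around the model inputs and outputs” concrete, here is a minimal Python sketch of an output gate. It is illustrative only: the field names and the shape of the model’s response are assumptions, not any vendor’s API.

import json

# A minimal output control: whatever the model returns is parsed and
# checked against a fixed schema before anything downstream may use it.
# Field names are hypothetical.
EXPECTED_FIELDS = {
    "supplier": str,
    "invoice_total": (int, float),  # accept either JSON number type
    "currency": str,
}

def check_model_output(raw: str) -> dict:
    """Reject any model response that isn't exactly the structure we asked for."""
    data = json.loads(raw)  # raises an error if the model returned non-JSON
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    extras = set(data) - set(EXPECTED_FIELDS)
    if extras:
        raise ValueError(f"unexpected extra fields: {extras}")
    return data

# Downstream steps only ever see output that passed the gate.
safe = check_model_output('{"supplier": "Acme Ltd", "invoice_total": 1250.0, "currency": "USD"}')

The same idea applies on the input side: a fixed prompt template with validated parameters, rather than free text pasted into a chat box.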

While these tools introduce guardrails around how the AI can be used (and what it’s allowed to do), that can lead to false user confidence. It is still heavily dependent on the quality of your prompts and workflow design.

They can help keep your data more contained, but that depends entirely on how the tool you use is configured.

  3. AI-wrappers; e.g. a ChatGPT bot baked into your old ERP

A swarm of startups has emerged over the last couple of years built as a thin layer on top of the LLMs.

So, it’s been quite funny (I’m probably a bit of a sadist) watching Anthropic release new features every day that vaporize a whole new cohort of startups.

AI plug-ins for Excel, legal tech products, and AI design tools are all examples of whole categories that attracted a lot of funding, only to be crushed in the last couple of months by a new Anthropic release. So, beware committing to shiny new ‘AI’ tools that are just thin layers on top of an LLM.

The more interesting category is AI wrappers on legacy tools. This is how deeply embedded legacy software shows it’s “doing AI.” Most commonly, that means adding a chat interface on top of the product, powered by a foundation model. If your ERP doesn’t already have one, I’m sure it won’t be long.

That’s not to dismiss these. This could be a genuinely useful short-term productivity layer, and a gateway drug for less tech-savvy businesses to build familiarity with AI inside tools they already trust.

Some obvious use cases:

  • Making more of the business self-serve on factual information. For example, read-only access to parts of the ERP via a chatbot (see the sketch after this list). If a CFO wants detail on an invoice or a debt, but doesn’t want to go near the ERP, this cuts out layers of internal back-and-forth.

  • Providing a more flexible interface to query and interact with systems that are otherwise clunky or hard to navigate

  • Acting as a light integration layer across systems that don’t talk to each other particularly well
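To illustrate that read-only pattern, here is a minimal Python sketch. The data, the get_invoice function, and the tool registry are hypothetical stand-ins; a real deployment would also enforce read-only access at the database-role level, not just in application code.

from dataclasses import dataclass

@dataclass
class Invoice:
    number: str
    supplier: str
    amount: float
    status: str

# Hypothetical stand-in for ERP data reached over a read-only connection.
_ERP = {"INV-1042": Invoice("INV-1042", "Acme Ltd", 12500.00, "approved")}

def get_invoice(number: str):
    """Read-only lookup by invoice number; returns None if not found."""
    return _ERP.get(number)

# The only tools the chat layer may call. Anything not registered here is
# simply unavailable to the model, by construction: no create, update, or
# delete is ever exposed.
READ_ONLY_TOOLS = {"get_invoice": get_invoice}

def handle_tool_call(name: str, **kwargs):
    if name not in READ_ONLY_TOOLS:
        raise PermissionError(f"'{name}' is not an approved read-only tool")
    return READ_ONLY_TOOLS[name](**kwargs)

# A CFO asking the chatbot about an invoice ends up here:
print(handle_tool_call("get_invoice", number="INV-1042"))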

The risk here is that you get trapped in a kind of ‘AI cosplay’. It makes you feel like you are ‘doing AI’ when you aren’t… not really. AI is a generational opportunity to change how finance work gets done, and thin layers won’t do that.

  4. AI-native platforms; e.g. Campfire, Aleph, Ramp, Datarails, Stuut, Ledge, etc.

This is the antidote to the above.

Software built to be AI-native: either because it was founded in the AI era, with workflows, data models, and user experience designed around AI from day one (like Campfire).

Or because it moved early and hard enough to rebuild its product around AI (Datarails’ recent release of FinanceOS is a good example.)

In both cases, AI is baked into how the system actually works, rather than an AI layer on top.

These platforms are typically built on domain-specific data and workflows, which materially reduces hallucination risk compared to general-purpose models.

They also keep your data native inside a controlled environment, avoiding the need to move it in and out of different tools, and reducing reliance on fragile connectors.

And once you’re in this world, it’s not a big leap to see how this evolves:

  • Full workflow ownership. The system doesn’t assist the close or reporting cycle; it runs it. Data flows in, built-in quality checks run, outputs get generated, and exceptions are surfaced with minimal human intervention.

  • Agents operating inside the system. Discrete agents handle defined finance tasks (e.g., close, variance analysis, credit control), working off the same underlying data and rules, not reliant on manual prompts.

  • Learning your business over time (compounding effect). The system absorbs your policies, judgments, and historical decisions, improving accuracy, consistency, and speed with each cycle, rather than resetting to zero.

For most, naturally this will mean new implementations (groan), but AI itself helps make those implementations easier than ever, crunching through the heavy lifting of master data issues and process mapping.

So the real watch-out here is to make sure you design the architecture at the right level.

The key is to anchor around a small number of core platforms that are not overly business-specific, systems that can support multiple workflows and act as a stable foundation.

  5. Custom-build; e.g… anything you want!

Custom-built software inside finance isn’t new. Finance teams have always built their own tools. After all, if a spreadsheet is complex enough, it qualifies as software in my book…

A young Secret CFO’s early career was supercharged by VBA-powered spreadsheets. I was a relentless automator before I knew that’s what it was called. Unfortunately, my tech skills are not what they once were…  

Custom software often has a cockroach-like survival rate in businesses because it does something critical and delivers real value. But, you don’t need me to tell you, it’s also fragile: security risks, key-person dependency, zero documentation, etc.

And almost always accompanied by the familiar line: “We really should replace this one day.”

But this world has turned upside down in the last 6 months.

Custom software has become dramatically easier to build, and arguably easier to maintain, though I’d argue it’s no less risky.

With the rise of vibecoding, almost anyone can now build software. Note: I didn’t say anyone can build good software (or should).

There is now an army of have-a-go heroes building tools. Everything from lightweight workflow automations to people attempting to build their own ERP systems (yes, really).

AI has made everyone a builder, but it will not make everyone an engineer. And good software does need an engineering mindset.

Where custom building can work:

  • High capability environments. Businesses with strong engineering teams, product culture, and budget can build serious infrastructure (see Ramp last week). Done well, this is a genuine competitive advantage.

  • Gaps the market doesn’t serve. Highly specific workflows that no off-the-shelf tool handles well. Example: managing a niche operational process like 3PL consignment inventory with unique commercial logic.

  • Low-risk layers of the stack. Presentation, reporting, or workflow orchestration at the edge, where failure is visible and contained, and all the hard work on the data has been handled by something suitable deeper in the tech stack. E.g., dashboarding, scenario tools, etc.

Custom build is seductive, because it feels like control and speed. But it amplifies risk too, often becoming unscalable, poorly governed, dependent on individuals, and difficult to audit.

So, while the risk and cost of any single custom-built tool may be coming down, the volume is exploding. And that means the total risk in your system from custom builds will likely go up, not down.

Unchecked, this gives you a sprawling, ungoverned estate of “stuff that sort of works,” until it really doesn’t.

You the… architect?!

You’re probably looking at those five categories and thinking: which one am I?

That’s the wrong question. Every business, from the most tech-phobic to the most tech-forward, will end up with all five in the mix.

So it’s not about choosing one.

It’s about the combination and the approach you take for each problem.

And to match the right type of tool to the right problem, you should ask three questions.

Question 1: How sensitive is the data being processed?

This determines which of the five categories is even available to you. As a rule of thumb, the more sensitive the data, the further you move toward governed or embedded solutions, where you are placing reliance on domain experts within that workflow, and the further you bias away from the raw processing of the AI black box.

Question 2: Does the output feed back into the books and records?

If the output of a workflow feeds back into systems of record or your integrated data layer, either directly or indirectly, you should have a much lower risk tolerance. You intuitively understand that a vibe-coded ERP (at an extreme end) is a terrible idea, and this is the reason why.

In practice, for workflows that feed into the systems of record, that means:

  • preferring deterministic tools over probabilistic ones

  • tighter guardrails

  • clearer audit trails

  • more rigorous human approval loops.

All of which is better done in the confines of properly engineered software.
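To make those requirements concrete, here is a minimal Python sketch of an approval gate in front of a system of record. The journal-entry shape and the function names are hypothetical; the point is the pattern: deterministic checks, an audit record of every attempt, and no write without a named human approver.

import json
from datetime import datetime, timezone

def validate_entry(entry: dict) -> list[str]:
    """Deterministic checks: the same entry always gets the same verdict."""
    errors = []
    if round(sum(line["amount"] for line in entry["lines"]), 2) != 0.0:
        errors.append("entry does not balance to zero")
    if any(not line.get("account") for line in entry["lines"]):
        errors.append("missing account code")
    return errors

def post_entry(entry: dict, approved_by) -> bool:
    errors = validate_entry(entry)
    # Stand-in for an append-only audit log: every attempt is recorded,
    # whether or not it reaches the books.
    print(json.dumps({
        "at": datetime.now(timezone.utc).isoformat(),
        "entry": entry,
        "errors": errors,
        "approved_by": approved_by,
    }))
    if errors or approved_by is None:
        return False  # nothing reaches the system of record unchecked
    # ...only here would the actual write to the ledger happen.
    return True

entry = {"lines": [{"account": "6100", "amount": 250.0},
                   {"account": "2100", "amount": -250.0}]}
post_entry(entry, approved_by=None)             # blocked: no human approver
post_entry(entry, approved_by="controller@co")  # passes every gate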

Raw, uncontrolled AI reasoning has no business near master data or payment instructions.

The risk profile eases towards the end of workflows that are only ‘reading’ from the system of record. Errors stay closer to the surface: they are more visible, easier to catch, easier to fix. That’s where AI’s strengths in synthesis, analysis, and reasoning make more sense.

Question 3: What is your internal capability to build and maintain?

This is the hold-a-mirror-up moment.

At one end, you have businesses like Ramp, with a world-class engineering culture, minimal legacy tech debt, crashing headlong into org-wide custom build. This week they revealed Ramp Glass, a custom canvas for developing in-house AI apps. (And yes, I keep referencing Ramp… because they’re best-in-class and unusually public about how they’re doing it.)

At the other end of the scale (where most businesses live), you may as well be asking your IT team to orchestrate a moon landing.

That doesn’t mean you rule out custom building software with AI. It just means your risk tolerance and your approach need to reflect reality.

So… buy, build, or borrow?

You can break each individual AI-tech commitment down into three options: buy, build, or borrow.

Buy means installing specialist AI-native software. This might not even feel like “doing AI.” To most people, it will just look like much better software: faster, more intuitive, doing more of the work for you. The AI itself is just the clever stuff in the background. Kind of like a SaaS 2.0.

For finance, this should form the spine of your architecture; your accounting system of record, billing, procure-to-pay, inventory, your data layer.

I’ll say this until I’m blue in the face: the biggest wins from AI in finance will come from aggressively implementing AI-native core infrastructure.

Build means the things you custom-build yourself. Claude-powered financial models, vibe-coded dashboards, custom agents built to your specific workflows. Great power, great responsibility. And remember…  

Unsolicited Jeff Pic

Borrow sits in the middle. Custom-built, but by someone else. You get the fit of something bespoke without needing the internal capability to deliver it. And crucially, you get to outsource a lot of the pain - access controls, data security, patching, maintenance, etc. - all the things that would normally stop you getting started. This market is exploding with agencies that can spin up working prototypes of custom apps in days and iterate fast. Meaning custom software can be built to your spec by specialists, cheaper and faster than ever before.

Even the Big 4 are moving into this space. EY recently announced a new venture here with Chamath Palihapitiya of all people (I’m sure they did their DD…) 

My current view (and I reserve the right to change it) is that the core of most finance stacks will land on a mix of buy and borrow:

  • A spine of AI-native specialist tools forming the core architecture.

  • Supplemented by custom-built apps that are perfectly fit for purpose. Either delivered by a third party for more complex workflows, or built in-house for simple, single-purpose use cases.

One of the most exciting things about AI is how it democratizes software access. Small businesses will now be able to access the kind of capability they would have been priced out of before.

Is Your Data Safe?

Well… this is the big question. Depending on who you ask, it's anything from a disaster waiting to happen to just laggards looking for an excuse to do nothing.

I've spoken to a lot of people on this who are smarter than me. Honestly, nobody has given me even a half-satisfying answer yet, which tells me there isn’t one.

But here are a few things I have discovered:

  1. The enterprise tier claim is real, but it’s not a guarantee. All the major model providers offer enterprise contracts that explicitly exclude your data from model training. Your inputs don't improve their models. But given Big Tech's history with data, whether that claim is trustworthy enough for your risk appetite is a question only you can answer. And, when you read the contracts carefully, you'll find them lacking from a liability and remedy perspective. You're taking them at their word.

  2. You've been here before. If you're old like me, you'll remember exactly the same conversation about cloud computing. Taking your data off-premises fifteen years ago felt just as unthinkable as AI data risk feels now. We all survived that. That's not a reason to be complacent, but it is a reason not to be paralyzed.

  3. The model isn't the only risk. Just because your foundation model access has data protections doesn't mean every tool in your stack does. The extension your team downloaded last Tuesday. The plug-in someone connected to your CRM. An F100 Chief Data Officer told me they worked through their entire supplier contract base after realizing almost none of them had any limitations on using client data to build their own models.

  4. Think beyond databases. Data protection conversations tend to default to structured databases and ERP systems. But your sensitive data also lives in board packs, spreadsheets, call recordings, meeting minutes, and images. Expand your definition of what needs protecting before you expand your AI deployment.

  5. Your own team is a risk vector. Enterprise tier protections won’t stop an employee from pasting a sensitive board paper into a prompt. The digital equivalent of leaving the redundancy list on the printer. (While I have done things nearly as bad, I have never done this.) A governance policy and clear team training matters as much as the contract you sign with the model provider.

  6. On-premises may come back. Some serious commentators have speculated that data sovereignty concerns will ultimately push AI back on-premises for regulated industries. And who knows, maybe wider. When you look at the current geopolitical environment, you can see how that conversation could accelerate quickly…

  7. Protect your ideas, not just your data. As execution alpha gets obliterated by AI, ideas become more valuable. No… I’m not talking about your cousin's idea to harvest fish farts to use in spirit levels. But genuinely differentiated thinking. In a post-AI world, proprietary ideas will be worth more than ever.

Net-net

The tool landscape is confusing for CFOs.

But when the dust settles, the decision is simpler than it looks: put most of your energy into getting your core infrastructure right. Use modern AI-native CFO Tech platforms for the spine, then build or borrow only where the market doesn't serve you.

Next week, we close the series with the question every CFO will eventually face across the board table: what is all this AI deployment actually worth, and how do I prove it…?

:::::::::::::::::::::::::::::::::::::::::::::::::::
:: Thank you to our sponsor ::
:: Campfire ::
:::::::::::::::::::::::::::::::::::::::::::::::::::

If you enjoyed today’s content, don’t forget to subscribe.

Disclaimer: I am not your accountant, tax advisor, lawyer, CFO, director, or friend. Well, maybe I’m your friend, but I am not any of those other things. Everything I publish represents my opinions only, not advice. Running the finances for a company is serious business, and you should take the proper advice you need.
