
How to Audit Your Data Stack: A Five-Step Guide for the C-Suite

May 5, 2026
Tom Fordyce
Principal Analytics Consultant

Ask every department head for the company's revenue number. If you get different answers, your data foundation has a problem. If you get the same answer but nobody is acting on what the data team produces, you have a buy-in problem. The two are different and need different fixes. Tom Fordyce, Principal Analytics Consultant at Kleene.ai, walks through the most common failure modes he sees when auditing data stacks: poor data mastering between systems, source data that was never clean to begin with, and stakeholder adoption that collapses because the insights were never connected to what individuals personally care about. The underlying point across all of it is that the technology is rarely where things go wrong.

The question that matters more than any technical audit

Before looking at any infrastructure, Tom asks a simpler question: is the data function actually changing what people do in the business?

"The most common red flag is when you speak to the head of marketing or commercial finance and they say, 'oh yeah, there's this thing that happens over there with that team, but I don't really interact with it, I can't really get anything useful from it.' That tells you something has gone wrong regardless of how technically sound the stack is."

A data function that produces outputs nobody acts on is not a data problem. The technical layer can be perfectly built and still deliver no value if the people who are supposed to use it have not bought into it.

Step 0 - If you have a data stack, make sure you’re actually using it

The revenue test

The single fastest diagnostic Tom runs with a new client is what he calls the revenue test.

"Go around different departments and ask each one what their revenue number is. In an organization that is siloed and has not been through this process, everyone is going to come back with a different number."

Each department is usually doing something defensible within their own system. Finance calculates from closed invoices. Marketing counts from campaign attribution. Sales works from pipeline close dates. Nobody is necessarily wrong within their own logic, but nobody has ever been forced to agree on the same definition. So when cross-functional analysis or board-level reporting gets produced, it is already sitting on inconsistent data before anyone has started interpreting it, and most people in the room do not know that.

If every department comes back with the same number, your data foundation is probably in reasonable shape. If they come back with different numbers and can explain why, you likely have a definition problem that is solvable. If they come back with different numbers and cannot explain why, or if one or more departments genuinely does not know, you have a more significant problem worth investigating before investing further in the analytics layer.
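In spirit, the revenue test is a simple consistency check, and if you collect the figures it can even be automated. The sketch below is illustrative only (the departments, figures, and 1% tolerance are assumptions, not anything Tom prescribes): it flags any department whose number diverges from the median by more than the tolerance.

```python
# Hypothetical revenue-test sketch: compare each department's reported
# revenue for the same period and flag outliers. All names and figures
# below are illustrative.
from dataclasses import dataclass

@dataclass
class RevenueReport:
    department: str
    revenue: float

def revenue_test(reports, tolerance=0.01):
    """Return departments whose figure diverges from the median
    by more than `tolerance` (as a fraction of the median)."""
    values = sorted(r.revenue for r in reports)
    median = values[len(values) // 2]
    return [r.department for r in reports
            if abs(r.revenue - median) / median > tolerance]

reports = [
    RevenueReport("finance", 10_200_000),
    RevenueReport("sales", 11_500_000),
    RevenueReport("marketing", 10_150_000),
]
print(revenue_test(reports))  # ['sales']
```

The interesting output is not the list itself but the conversation it forces: the flagged department has to explain which definition of revenue produced its number.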

Step 1 - Run the revenue test: ask every department for the same number and see what comes back

Data mastering: the problem nobody talks about until it breaks everything

Beyond the revenue test, the most common technical failure Tom encounters is what data teams call mastering: the process of correctly linking the same entity across different systems.

"You ideally want to know that customer A in Salesforce is the same customer in NetSuite. Often you find there are a million customers in one system, half a million in another, some that are just in one side, some that are just in the other, and there is no proper mapping between them. Then any kind of in-depth customer segmentation or revenue modeling just becomes a bit pointless."

For a CFO, this surfaces as unexplained discrepancies between what the CRM says and what the finance system says. For a CMO, it surfaces as an inability to track a customer's journey from acquisition through to revenue contribution. For a CEO, it surfaces as reports that tell different stories depending on which team produced them.

If your data team has not tackled mastering explicitly, the outputs you consider reliable probably rest on a fragility that has not surfaced yet.
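A first-pass mastering audit does not require fuzzy matching; it can start by measuring how much of each system is covered by whatever cross-reference table exists. The sketch below is a minimal version of that check, with assumed IDs and field names (not a real Salesforce or NetSuite schema):

```python
# Hypothetical mastering audit: given customer IDs exported from two
# systems and a cross-reference table, measure how much of each system
# is actually mapped. IDs are illustrative.

def mastering_coverage(crm_ids, erp_ids, mapping):
    """`mapping` is a dict of crm_id -> erp_id. Returns unmapped counts
    plus links that point at IDs missing from either export."""
    crm_ids, erp_ids = set(crm_ids), set(erp_ids)
    mapped_crm = {c for c in mapping if c in crm_ids}
    mapped_erp = {e for e in mapping.values() if e in erp_ids}
    return {
        "crm_unmapped": len(crm_ids - mapped_crm),
        "erp_unmapped": len(erp_ids - mapped_erp),
        "dangling_links": sum(1 for c, e in mapping.items()
                              if c not in crm_ids or e not in erp_ids),
    }

result = mastering_coverage(
    crm_ids=["C1", "C2", "C3"],
    erp_ids=["E1", "E2", "E9"],
    mapping={"C1": "E1", "C2": "E2", "C7": "E7"},
)
print(result)  # {'crm_unmapped': 1, 'erp_unmapped': 1, 'dangling_links': 1}
```

On real data, high unmapped counts in either direction are exactly the "million customers in one system, half a million in another" situation Tom describes.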

Step 2 - Check whether your systems have been properly mastered so customers look the same everywhere

Why stakeholder buy-in is almost always the hardest part

The instinct when a data initiative is not being adopted is to improve the output: better dashboards, cleaner reports, faster queries. That rarely fixes the problem. The people who are not engaging usually have a reason, and it is almost never that the charts are not pretty enough.

"Getting engagement and buy-in from stakeholders to actually change their process and go and do something differently than they were doing before — that is almost always the hardest but most valuable point. A lot of projects struggle because they just do not have buy-in from people who need to buy into it. Here is this amazing thing. And they say: we do not want it."

What Tom has found works is going one level more specific than most organizations are comfortable with. Rather than presenting a team or company-level metric, find out what the individual responsible for a decision actually cares about personally.

"If you can find that thing they really care about and say, if we do this, we can improve that metric by 10% or 50% or whatever, then you can get them to buy into it. It is when you try and present a whole company North Star metric to an individual that they say, well, that is not my day to day. Revenue is not my day to day. My day to day is improving our cost per unit with these suppliers."

For senior leaders trying to drive data adoption across departments, this is a more useful framing than the typical top-down mandate. The company-level ROI case gets the project approved; the individual-level value case gets it used.

Step 3 - Set KPIs at the business, team, and individual level so people have a personal reason to engage with the data

The source data problem

One of the less visible failure modes is poor data quality at the source, before it ever reaches the analytics layer. The analytics stack can be impeccably built and still produce unreliable outputs if the systems feeding it have not been maintained properly.

"If you cannot even report well in a single platform like HubSpot because your pipelines are not well managed, your deals are not well managed, people are not using it properly, then even if you take that into a complex data stack, it is still not going to work. You need robust platforms and good processes in place for your source data to be of value downstream."

For a CFO or CEO, the signal to look for is whether the teams using the source systems (sales using the CRM, finance using the ERP, marketing using their ad platforms) are disciplined about data entry and pipeline hygiene. If they are not, no amount of investment in analytics infrastructure changes the quality of what comes out.
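Pipeline hygiene is measurable. One lightweight approach, sketched below with assumed field names (not a real HubSpot schema), is to report the share of records missing each field that downstream models depend on:

```python
# Hypothetical source-hygiene check on a CRM deal export: for each
# field the downstream models need, report the share of deals where
# it is missing. Field names are illustrative.
from datetime import date

REQUIRED_FIELDS = ("amount", "close_date", "pipeline_stage", "owner")

def hygiene_report(deals):
    """Return the fraction of deals missing each required field."""
    total = len(deals)
    return {field: sum(1 for d in deals if not d.get(field)) / total
            for field in REQUIRED_FIELDS}

deals = [
    {"amount": 5000, "close_date": date(2026, 3, 1),
     "pipeline_stage": "won", "owner": "a"},
    {"amount": None, "close_date": None,
     "pipeline_stage": "open", "owner": "b"},
]
print(hygiene_report(deals))
# {'amount': 0.5, 'close_date': 0.5, 'pipeline_stage': 0.0, 'owner': 0.0}
```

Tracking a report like this per team over time turns "be disciplined about data entry" from an exhortation into a number a manager can own.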

This is a management and process problem as much as a technical one, and it is worth addressing before, not after, a data infrastructure investment.

Step 4 - Make sure the data going into your models is clean at source, before it hits the analytics layer

On trusting AI outputs

The question of whether to trust what an AI-powered analytics tool tells you is one Tom gets asked regularly:

"People are always going to ask what my revenue by day is for a given time period. You can tag in your data sets: this column in this table is the verified metric to use for this. But beyond that, a lot of it is knowing what the number should be roughly. If you did not know what the number is, being able to trust it is difficult."

The practical implication for senior leaders is that AI-assisted analytics does not remove the need to understand your business well enough to sense-check what you are being shown. Verified data sets, where specific metrics are tagged as the authoritative source, reduce the risk of a tool pulling from the wrong place. But the question of whether an output looks right still requires someone who knows what right looks like.
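The tagging Tom describes amounts to keeping one authoritative registry of where each verified metric lives. A minimal sketch of the idea, with assumed table and column names (not a description of any particular tool's implementation):

```python
# Hypothetical "verified metric" registry: one place that maps a
# business metric to its authoritative column, so a querying tool
# resolves a tagged source instead of guessing. Names are illustrative.

VERIFIED_METRICS = {
    "revenue": {
        "table": "finance.fct_revenue",
        "column": "net_revenue_usd",
        "owner": "commercial_finance",
    },
}

def resolve_metric(name):
    """Look up the tagged source for a metric; fail loudly if untagged."""
    if name not in VERIFIED_METRICS:
        raise LookupError(f"No verified source tagged for metric '{name}'")
    return VERIFIED_METRICS[name]

print(resolve_metric("revenue")["table"])  # finance.fct_revenue
```

The "fail loudly" branch is the point: an untagged metric should be an explicit gap to close, not a silent fallback to whichever table the tool happens to find.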

Tom draws a direct parallel to earlier generations of tooling: "Five years ago with Tableau: nice report, shows me what I need. How do I know it is right? Unless you know the underlying source has been validated, or you know what the number should be roughly, then you are guessing that it is right."

The standard for trusting AI outputs is the same standard that always applied to dashboards and reports. Has the underlying data been validated? Does someone who knows the business believe the number?

Step 5 - Before trusting AI outputs, make sure someone in the business knows what the right answer should look like

The goal, as Tom puts it, is not a technically impressive data stack. It is a culture where people engage with data and change what they do because of it.

If you want to begin auditing your own data stack, our data maturity quiz is as good a place to start as any! Find out where you sit on the data maturity curve, and what it takes to move forward.

Stay in the loop

Subscribe to the Kleene.ai newsletter and be the first to hear about new guides, data trends, and product updates.
