Kleene.ai was built around a straightforward idea: you shouldn't need a sprawling stack of tools to turn raw data into business-ready outcomes. But as our customers' data estates grew, a pattern kept emerging. Teams were spending hours writing the same SQL from scratch, hunting through transform groups for the right version, manually digging through logs to debug issues, and bouncing between documentation tabs and Stack Overflow.
That's the problem KAI Assistant was built to solve. It's a native AI experience inside Kleene.ai that lets teams generate and optimise SQL transforms, navigate pipelines, search logs, understand table schemas, and get documentation answers, all without leaving the platform.
To do that safely and at enterprise scale, we chose Google Vertex AI, running Gemini models, as our foundation. Here's the thinking behind that.
KAI Assistant sits directly inside data workflows, where accuracy and auditability aren't optional. Phase 1 covers:
SQL generation and transform optimisation
Transform and group search
Table schema retrieval and previews
Log search
Documentation Q&A using RAG-style retrieval
Synthetic data previews for testing
Running all of that in production requires more than raw model access. Vertex AI gives us a managed path to Google's models within a platform built for production workloads, not just experimentation.
When AI touches data operations, customers ask the right questions: Where is data processed? How is it protected? Can we meet our internal and regulatory requirements?
Vertex AI is built on Google Cloud's enterprise controls, including data residency, encryption at rest and in transit, and Access Transparency. Data stored at rest stays in the customer-selected region, and Google does not use customer data to train its models. We selected a European region for our Vertex AI deployment and can extend to additional regions as needed.
On our side, KAI's architecture is deliberately scoped. It works only with user prompts, warehouse metadata (schemas, tables, columns, relationships), and synthetic sample data. No raw customer data is sent to Vertex AI. We don't store or use customer data to train any models.
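That scoping can be sketched as a simple payload filter. This is an illustrative example, not Kleene's implementation: the function name, catalog shape, and field names are all hypothetical. The point it shows is structural, schema metadata travels to the model, row values never do.

```python
# Illustrative sketch (not Kleene's implementation): build a metadata-only
# payload in the spirit of KAI's scoping. All names here are hypothetical.

def build_model_payload(prompt: str, warehouse_catalog: dict) -> dict:
    """Combine the user's prompt with warehouse *metadata* only.

    `warehouse_catalog` maps table names to column definitions and may
    also carry raw rows; those rows are deliberately dropped.
    """
    metadata = {
        table: {
            "columns": [
                {"name": col["name"], "type": col["type"]}
                for col in info.get("columns", [])
            ],
            "relationships": info.get("relationships", []),
        }
        for table, info in warehouse_catalog.items()
    }
    return {"prompt": prompt, "metadata": metadata}  # no "rows" key survives


catalog = {
    "orders": {
        "columns": [
            {"name": "id", "type": "INT"},
            {"name": "total", "type": "NUMERIC"},
        ],
        "relationships": ["orders.customer_id -> customers.id"],
        "rows": [(1, 19.99)],  # raw values: must never reach the model
    }
}
payload = build_model_payload("Total revenue per customer", catalog)
assert "rows" not in payload["metadata"]["orders"]
```

The design choice worth noting is that the filter is allow-list shaped: only named metadata fields are copied forward, so any new field added to the catalog stays out of the model payload by default.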
In data work, an AI that sounds right but isn't is worse than no AI at all. That's why KAI includes documentation intelligence using RAG-style search across Kleene's documentation, so answers are grounded in what the platform actually supports rather than what a model might plausibly infer.
Vertex AI's approach to grounding connects model outputs to verifiable sources, reducing hallucinations and improving auditability. That combination of Kleene-specific context and grounded retrieval was a core reason we chose it.
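The retrieval-then-ground pattern can be illustrated with a minimal sketch. Production RAG systems use embeddings and a vector store; here plain word overlap stands in for the retrieval step, and the document snippets, sources, and function names are all made up for the example.

```python
# Minimal sketch of RAG-style grounding over documentation snippets.
# Illustrative only: real retrieval uses embeddings, not word overlap.
import re


def tokenize(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def retrieve(question: str, docs: list[dict], k: int = 1) -> list[dict]:
    """Rank snippets by word overlap with the question; return top k."""
    q_words = tokenize(question)
    scored = sorted(
        docs,
        key=lambda d: len(q_words & tokenize(d["text"])),
        reverse=True,
    )
    return scored[:k]


def grounded_answer(question: str, docs: list[dict]) -> dict:
    """Answer only from a retrieved passage, and carry its source along."""
    context = retrieve(question, docs)[0]
    # In production, the model is constrained to this context, and the
    # source reference travels with the response for auditability.
    return {"context": context["text"], "source": context["source"]}


docs = [
    {"text": "Transforms run SQL against the warehouse on a schedule.",
     "source": "docs/transforms"},
    {"text": "Connectors extract data from third-party sources.",
     "source": "docs/connectors"},
]
out = grounded_answer("How do transforms run SQL?", docs)
# out["source"] == "docs/transforms"
```

What matters for auditability is the last step: the answer is never returned without the source it was grounded in, so a user can always check where a claim came from.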
KAI Assistant isn't a chatbot bolted onto the product. It's the start of an AI-native workflow inside Kleene.ai, and the roadmap reflects that.
Phase 2 covers building full ELT pipelines (Extract, Transform, Data Product) with deeper vertical integration and more analytics context. Phase 3 moves into analytics model context and natural language interaction with model outputs, including forecasting, segmentation, and optimisation.
Vertex AI is a platform designed to build and scale generative AI applications, with tools like Agent Builder and broad model access through Model Garden. The decision wasn't just about what KAI needs today. It was about picking a foundation that scales as KAI moves from transform assistance into decision intelligence.
KAI operates within Kleene.ai and accesses Gemini through Google Vertex AI. There are two pathways: standard interactions, and an optional route used when synthetic sample data preview is enabled. When sample data is used, it is converted to synthetic data before any LLM processing takes place. We don't store or use customer data to train models.
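The "synthetic before any LLM processing" step can be sketched as a type-preserving replacement: the model sees rows with the same columns and value types as the real sample, but none of its contents. This is a simplified illustration, not Kleene's implementation, and every name in it is hypothetical.

```python
# Sketch of converting a real sample to synthetic data before any model
# call: same shape and types, none of the original values. Illustrative
# only; real synthesis can also preserve distributions and formats.
import random


def synthesise_rows(rows: list[dict], seed: int = 0) -> list[dict]:
    """Replace every value with a type-matched fake value."""
    rng = random.Random(seed)
    fake = []
    for row in rows:
        fake_row = {}
        for key, value in row.items():
            if isinstance(value, bool):  # bool before int: bool is an int subtype
                fake_row[key] = rng.choice([True, False])
            elif isinstance(value, int):
                fake_row[key] = rng.randint(0, 9999)
            elif isinstance(value, float):
                fake_row[key] = round(rng.uniform(0, 1000), 2)
            else:
                fake_row[key] = f"sample_{key}_{rng.randint(100, 999)}"
        fake.append(fake_row)
    return fake


real = [{"email": "jane@corp.com", "spend": 412.5, "active": True}]
preview = synthesise_rows(real)
assert preview[0].keys() == real[0].keys()   # shape preserved
assert "jane@corp.com" not in str(preview)   # contents replaced
```

Only the output of a step like this would ever be eligible to reach the model layer; the real rows stay inside the platform boundary.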
Vertex AI provides the secure model layer. Kleene.ai provides the platform context, controls, and user experience.
The choice of Vertex AI is a practical one that supports what customers should expect from Kleene.ai:
Speed: less time writing repetitive SQL, faster debugging, faster navigation across the platform.
Trust: answers grounded in real documentation and a tightly scoped architecture that limits what KAI can access.
Governance: an enterprise-grade AI foundation with documented security controls and data residency options.
Future value: a platform aligned with where KAI is heading, into pipeline building and model-aware decision workflows.
KAI Assistant is designed to remove friction from everyday data work. Vertex AI is how we do that in a way customers can actually trust.