Most AI assistants are bolted onto products as an afterthought. A chat widget that searches a knowledge base, maybe generates a snippet. Useful, sometimes. Transformative, rarely.
KAI Assistant was built differently. It sits natively inside Kleene.ai, with direct access to your warehouse metadata, transforms, pipeline logs, and documentation. It doesn't just answer questions — it understands context, routes requests intelligently, and keeps your data private while doing it.
Here's exactly how it works.

KAI Assistant acts as a natural language interface on top of Kleene.ai's existing data infrastructure. Instead of navigating menus, writing SQL from scratch, or combing through documentation manually, you describe what you want in plain English — and KAI Assistant handles the rest.
That sounds simple. The engineering behind it isn't.
For KAI Assistant to be genuinely useful, it needs to understand more than your words. It needs to understand your role, the current context you're working in, and what data it's allowed to reference. That's where the architecture comes in.
Not every prompt is the same. Asking "show me the last ten runs for this transform" is a different kind of request than "generate a SQL join for these two tables" — and it should be handled differently.
KAI Assistant routes requests intelligently depending on what you're asking and where you're asking it from. Context awareness is built in, so the assistant already knows whether you're working inside a transform, browsing groups, or troubleshooting a failed pipeline run.
This isn't a single query sent to a model. KAI determines the right pathway for your request before any LLM processing begins.
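To make the routing idea concrete, here is a minimal sketch of what pre-LLM dispatch could look like. Everything in it is illustrative: the `Pathway` names, `RequestContext` fields, and keyword rules are assumptions for the sake of the example, not Kleene.ai's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Pathway(Enum):
    DOCS = auto()        # documentation questions
    SQL = auto()         # SQL generation and optimization
    TRANSFORMS = auto()  # transform search and inspection
    LOGS = auto()        # pipeline run log queries

@dataclass
class RequestContext:
    page: str    # where the user is asking from, e.g. "transform_editor"
    prompt: str  # what the user typed

def route(ctx: RequestContext) -> Pathway:
    """Pick a pathway before any LLM call, combining prompt keywords
    with the part of the product the request came from."""
    text = ctx.prompt.lower()
    if ctx.page == "pipeline_runs" or "run" in text or "log" in text:
        return Pathway.LOGS
    if "sql" in text or "join" in text or "query" in text:
        return Pathway.SQL
    if "transform" in text or ctx.page == "transform_editor":
        return Pathway.TRANSFORMS
    return Pathway.DOCS
```

In this toy version, "show me the last ten runs" asked from the pipeline runs screen routes to the logs pathway, while "generate a SQL join for these two tables" routes to SQL generation, so each request reaches a handler built for that kind of work.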
For the majority of interactions — documentation questions, SQL generation, transform searches, log queries — KAI connects directly to Google Gemini, accessed via Google Vertex AI.
Vertex AI was chosen deliberately. It gives Kleene.ai a managed, enterprise-grade AI infrastructure with auditability, data residency controls, and model reliability built in. Gemini running on Vertex provides grounded, accurate responses rather than freeform generation, which matters when you're working with production data pipelines.
When sample data previews are enabled, the process works differently. Before any data reaches an LLM, Kleene.ai converts it to synthetic data using AWS Bedrock.
This is a deliberate privacy-first design decision. Real row-level data never gets passed to a model. Instead, AWS Bedrock generates a statistically representative synthetic equivalent, which KAI then uses to answer questions about schema structure or data shape — without exposing the underlying customer data.
The two-pathway model means KAI handles sensitive and non-sensitive interactions appropriately, without requiring users to manage any of that complexity themselves.
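The privacy pathway can be sketched as follows. In production this synthesis step runs on AWS Bedrock; here a toy local synthesizer stands in so the shape of the flow is visible. The function names (`synthesize`, `ask_llm`) and the statistical rules are assumptions for illustration only. The key property is structural: real rows stop before the model boundary.

```python
import random
import statistics

def synthesize(rows: list[dict], n: int = 5) -> list[dict]:
    """Toy stand-in for the Bedrock synthesis step: numeric columns keep
    their mean and spread, other columns get freshly generated values."""
    out = []
    cols = list(rows[0].keys())
    for _ in range(n):
        fake = {}
        for col in cols:
            values = [r[col] for r in rows]
            if all(isinstance(v, (int, float)) for v in values):
                mu, sigma = statistics.mean(values), statistics.pstdev(values)
                fake[col] = round(random.gauss(mu, sigma), 2)
            else:
                # never reuse a real value; generate a synthetic placeholder
                fake[col] = f"synthetic_{col}_{random.randint(1000, 9999)}"
        out.append(fake)
    return out

def ask_llm(prompt: str, sample: list[dict]) -> str:
    # Placeholder for the model call; it only ever receives synthetic rows,
    # so schema and data-shape questions can be answered without exposure.
    return f"LLM sees {len(sample)} synthetic rows, columns: {sorted(sample[0])}"

real_rows = [{"email": "a@x.com", "spend": 120.0},
             {"email": "b@y.com", "spend": 80.0}]
print(ask_llm("describe this table", synthesize(real_rows)))
```

Because `ask_llm` is only ever handed the output of `synthesize`, the model can reason about column names, types, and rough distributions while the customer's actual emails and amounts never leave the platform.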
This is probably the most important question to answer clearly.
KAI Assistant works only with warehouse metadata (schemas, column names, and types), transform and group definitions, pipeline run logs, Kleene.ai documentation, and, when previews are enabled, synthetic sample data.
Raw customer data is never sent to any LLM. Kleene.ai does not use any customer data to train models. What you put in your warehouse stays in your warehouse.
KAI Assistant in Phase 1 is focused on the tasks that take up the most time for data teams and analytics users:
Ask questions about Kleene.ai features. Get instant answers grounded in the documentation, without searching manually.
Search, fetch, and inspect transforms. Find transforms and groups by name or description without navigating the UI. Inspect what's inside them.
Preview table schemas in plain English. Ask what columns a table has, what types they are, and how they relate — without writing a query.
Generate and optimize SQL transforms. Describe the logic you need and get working SQL back. Refine it by describing what you want to change.
Debug pipeline issues using logs. Search run logs with a plain language question rather than scrolling through raw output.
KAI Assistant is role-aware. What a data engineer sees and can do is different from what a business analyst sees and can do. Access is gated by user role, so the assistant behaves appropriately for whoever is using it.
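Role gating reduces to a simple lookup before any capability is exposed. The role names and capability strings below are hypothetical, not Kleene.ai's actual permission model; the point is that the check happens in the assistant, so each user only sees what their role allows.

```python
# Illustrative role-to-capability map; names are assumptions, not the
# actual Kleene.ai access model.
CAPABILITIES: dict[str, set[str]] = {
    "data_engineer":    {"docs", "sql_generation", "transform_edit", "log_debug"},
    "business_analyst": {"docs", "schema_preview", "transform_search"},
}

def allowed(role: str, capability: str) -> bool:
    """Check the caller's role before the assistant offers a feature.
    Unknown roles get nothing by default."""
    return capability in CAPABILITIES.get(role, set())
```

Defaulting unknown roles to an empty set means the assistant fails closed: a misconfigured user loses features rather than gaining them.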
This matters for teams operating at enterprise scale, where data governance and access controls aren't optional.
KAI Assistant is available across Kleene.ai plans with token usage scaled by tier. Initially, however, access will be free and unlimited for all tiers.
It's fully opt-in. No team is required to enable it, and enabling it doesn't change anything about how your underlying data pipelines run.
Data platforms have always demanded technical expertise to deliver their full value. SQL, pipeline configuration, transform logic, log parsing: each is a skill that takes time to develop and slows down anyone who doesn't have it.
KAI Assistant doesn't replace that expertise. It makes it more accessible. Engineers move faster. Analysts work more independently. Onboarding takes less time. Support queries go down.
That's the practical value. And it's only Phase 1.
Learn more at docs.kleene.ai, or read about why we built KAI on Google Vertex AI.