From governance and routing to embeds and audit trails — every feature is designed to give you control over what your AI says, where it runs, and who can access it.
Define what your AI can do — and what it can't.
Capability Profiles are the core governance primitive. Each profile declares exactly which intents the AI can handle, which data sources it can query, what schema the response must follow, and which disclaimers to inject. Nothing runs outside the profile boundary.
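A profile like that can be pictured as a small declarative object plus a boundary check. This is an illustrative sketch only — the field names and intent strings are assumptions, not the product's actual schema:

```typescript
// Illustrative shape of a Capability Profile (field names are assumptions).
interface CapabilityProfile {
  id: string;
  allowedIntents: string[]; // intents the AI may handle
  dataSources: string[];    // data sources it may query
  responseSchema: string;   // schema the response must follow
  disclaimers: string[];    // disclaimers injected into replies
}

const pricingProfile: CapabilityProfile = {
  id: "consumer-pricing",
  allowedIntents: ["pricing.question", "plan.comparison"],
  dataSources: ["pricing-catalog"],
  responseSchema: "answer-with-citation",
  disclaimers: ["Prices may vary by region."],
};

// Nothing runs outside the profile boundary: undeclared intents are rejected.
function isWithinProfile(profile: CapabilityProfile, intent: string): boolean {
  return profile.allowedIntents.includes(intent);
}
```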
Personas sit on top of Capability Profiles and shape how the AI responds — not what it can access. The same product data can power a friendly FAQ bot for consumers and a technical support agent for partners, each with different tone, constraints, and response styles.
"Our Pro plan starts at $99/mo and includes..."
"I understand the confusion. Let me walk you through..."
"Per our pricing schedule, Section 3.1 states..."
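The persona layer can be sketched as a second object that references a profile and adjusts only tone and style — never access. The field names below are illustrative assumptions:

```typescript
// Hypothetical persona layered on a capability profile: it shapes how the
// AI responds, not what it can reach. Field names are assumptions.
interface Persona {
  profileId: string; // the capability profile it sits on top of
  tone: "friendly" | "technical" | "formal";
  styleRules: string[];
}

const consumerFaq: Persona = {
  profileId: "consumer-pricing",
  tone: "friendly",
  styleRules: ["use plain language", "offer a walkthrough"],
};

const partnerSupport: Persona = {
  profileId: "consumer-pricing", // same data, different voice
  tone: "technical",
  styleRules: ["cite section numbers", "be concise"],
};
```

Note that both personas point at the same profile: the data boundary is identical, only the voice changes.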
Every incoming message is classified against declared intents before any data is fetched. The router matches the question to the right data source, ensuring the AI only sees information relevant to the query. No broad context windows, no data leakage between intents.
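The classify-then-fetch flow can be sketched as a small router. The intent names, data sources, and the keyword classifier below are stand-in assumptions (the real system would classify with a model, not regexes):

```typescript
// Minimal router sketch: classify first, then resolve only the matching
// data source. Intent names and sources are illustrative assumptions.
const intentToSource: Record<string, string> = {
  "pricing.question": "pricing-catalog",
  "support.ticket": "support-kb",
};

// Stand-in classifier for illustration only.
function classifyIntent(message: string): string | null {
  if (/price|plan|\$/i.test(message)) return "pricing.question";
  if (/error|broken|help/i.test(message)) return "support.ticket";
  return null; // no declared intent matched: nothing is fetched
}

function routeToSource(message: string): string | null {
  const intent = classifyIntent(message);
  return intent ? intentToSource[intent] ?? null : null;
}
```

Because classification happens before any fetch, an unmatched message never touches a data source — there is no broad context to leak from.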
The runtime processes every chat request through a strict pipeline: validate token, check domain, classify intent, enforce capability profile, fetch bounded data, call LLM with constraints, apply persona rules, and log everything. No shortcuts, no bypasses.
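That ordered pipeline can be sketched as a chain where every stage is logged and any failure short-circuits the rest. Stage names mirror the text; the internals are placeholder assumptions:

```typescript
// Pipeline sketch: stages run in strict order, each is logged, and a
// failing stage stops everything after it. Internals are assumptions.
type Ctx = { ok: boolean; log: string[] };

const stages: Array<[string, (c: Ctx) => boolean]> = [
  ["validate-token", () => true],
  ["check-domain", () => true],
  ["classify-intent", () => true],
  ["enforce-profile", () => true],
  ["fetch-bounded-data", () => true],
  ["call-llm", () => true],
  ["apply-persona", () => true],
];

function runPipeline(ctx: Ctx): Ctx {
  for (const [name, step] of stages) {
    ctx.log.push(name); // log everything, at every stage
    if (!step(ctx)) {
      ctx.ok = false;   // no shortcuts past a failed stage
      return ctx;
    }
  }
  return ctx;
}
```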
Get your AI onto partner domains in minutes.
The embed widget is a lightweight, framework-agnostic script that loads asynchronously. Partners copy a single snippet into their HTML and the chat interface appears — styled, governed, and ready. No npm install, no build step, no framework dependency.
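The single snippet might look something like the string below — the script URL and data attribute are hypothetical, shown only to illustrate the copy-paste shape:

```typescript
// Sketch of the one-line embed snippet a partner would paste. The CDN URL
// and attribute name are hypothetical placeholders.
function buildEmbedSnippet(token: string): string {
  return (
    `<script async src="https://cdn.example.com/ilana-embed.js" ` +
    `data-ilana-token="${token}"></script>`
  );
}
```

The `async` attribute is what keeps the load non-blocking: the host page renders normally while the widget script arrives.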
Every embed token is scoped to a specific domain. At runtime, the Origin header is validated against the token claim and cross-checked with your allowlist. Unapproved domains are rejected before any data is processed — no exceptions, no fallbacks.
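The double check — Origin against the token claim, then against the allowlist — can be sketched as one fail-closed function. The token shape here is an assumption:

```typescript
// Sketch of domain-scoped validation: the Origin header must match the
// token's domain claim AND appear on the allowlist. Token shape is assumed.
interface EmbedToken {
  domainClaim: string;
}

function isOriginAllowed(
  origin: string,
  token: EmbedToken,
  allowlist: string[],
): boolean {
  let host: string;
  try {
    host = new URL(origin).hostname;
  } catch {
    return false; // malformed Origin: reject before any data is processed
  }
  return host === token.domainClaim && allowlist.includes(host);
}
```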
The runtime API is frontend-agnostic. Use the provided embed widget for quick deployment, or build your own UI with any framework. The same capability profiles, persona rules, and audit trails apply regardless of which frontend sends the request.
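Since any frontend can talk to the runtime, a custom UI only needs to produce the same request shape as the widget. The endpoint path and field names below are hypothetical, shown to illustrate the idea:

```typescript
// Hypothetical chat request shape a custom frontend would send; field
// names and the /api/chat path are assumptions, not a documented contract.
interface ChatRequest {
  token: string;          // domain-scoped embed token
  conversationId: string; // ties the message to its audit trail
  message: string;
}

function buildChatRequest(
  token: string,
  conversationId: string,
  message: string,
): ChatRequest {
  return { token, conversationId, message };
}

// A custom UI would then POST it as JSON, e.g.:
// fetch("/api/chat", { method: "POST", body: JSON.stringify(req) });
```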
Fail closed. Log everything. Prove it.
The security model validates requests in strict order: parse origin, verify token signature, compare domain claims, check the allowlist, then apply rate limits. Any failure at any step returns a generic error — no information leakage, no partial access.
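Fail-closed with a generic error can be sketched as an ordered list of checks where every failure produces the same opaque response. The checks below are placeholders standing in for the five steps named above:

```typescript
// Fail-closed validation sketch: checks run in strict order, and any
// failure returns one generic error so nothing leaks. Checks are stubs.
type Check = () => boolean;

function validateRequest(checks: Check[]): { status: number; body: string } {
  for (const check of checks) {
    if (!check()) {
      return { status: 403, body: "Request rejected" }; // generic, no detail
    }
  }
  return { status: 200, body: "ok" };
}

const ordered: Check[] = [
  () => true,  // parse origin
  () => true,  // verify token signature
  () => true,  // compare domain claims
  () => false, // check allowlist (fails at this step in the sketch)
  () => true,  // apply rate limits
];
```

Whether the allowlist check or the signature check fails, the caller sees the same response — an attacker cannot probe which step rejected them.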
Every chat message, intent classification, data source query, and LLM response is logged with timestamps and conversation context. When compliance asks "what did the AI say to that customer?" — you have the answer, with the full chain of decisions that led to it.
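An audit record carrying that chain of decisions might look like the sketch below — the field names are illustrative assumptions:

```typescript
// Illustrative audit record tying a response back to the decisions that
// produced it. Field names are assumptions.
interface AuditEntry {
  timestamp: string;      // ISO-8601
  conversationId: string;
  intent: string;         // what the router classified
  dataSource: string;     // what was queried
  response: string;       // what the AI actually said
}

function makeAuditEntry(
  conversationId: string,
  intent: string,
  dataSource: string,
  response: string,
): AuditEntry {
  return {
    timestamp: new Date().toISOString(),
    conversationId,
    intent,
    dataSource,
    response,
  };
}
```

With one entry per pipeline decision, answering "what did the AI say, and why?" becomes a query over the conversation id.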
Talk to our team and see how Ilana AI fits your use case.