Everything you need to distribute AI safely

From governance and routing to embeds and audit trails — every feature is designed to give you control over what your AI says, where it runs, and who can access it.

AI Governance

Define what your AI can do — and what it can't.

Capability Profiles

Capability Profiles are the core governance primitive. Each profile declares exactly which intents the AI can handle, which data sources it can query, what schema the response must follow, and which disclaimers to inject. Nothing runs outside the profile boundary.

Capability Profile: Product Info
  pricing_inquiry   → /api/data/pricing   → PricingResponse
  feature_question  → /api/data/features  → FeatureResponse
  support_request   → /api/data/support   → SupportResponse
  Disclaimer: "AI-generated. Verify with official docs."
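As a rough sketch, a profile like the one above could be modeled as a typed config: each intent is bound to exactly one data source and one response schema, and anything outside the profile resolves to nothing. The field names and `dataSourceFor` helper below are illustrative assumptions, not the actual Ilana AI schema.

```typescript
// Illustrative shape of a capability profile (assumed field names).
interface IntentBinding {
  intent: string;         // declared intent the AI may handle
  dataSource: string;     // the only endpoint this intent may query
  responseSchema: string; // schema the response must follow
}

interface CapabilityProfile {
  name: string;
  intents: IntentBinding[];
  disclaimer: string;     // injected into every response
}

const productInfo: CapabilityProfile = {
  name: "Product Info",
  intents: [
    { intent: "pricing_inquiry",  dataSource: "/api/data/pricing",  responseSchema: "PricingResponse" },
    { intent: "feature_question", dataSource: "/api/data/features", responseSchema: "FeatureResponse" },
    { intent: "support_request",  dataSource: "/api/data/support",  responseSchema: "SupportResponse" },
  ],
  disclaimer: "AI-generated. Verify with official docs.",
};

// Nothing runs outside the profile boundary: an undeclared intent
// resolves to no data source at all.
function dataSourceFor(profile: CapabilityProfile, intent: string): string | null {
  const binding = profile.intents.find((b) => b.intent === intent);
  return binding ? binding.dataSource : null;
}
```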

Persona Boundaries

Personas sit on top of Capability Profiles and shape how the AI responds — not what it can access. The same product data can power a friendly FAQ bot for consumers and a technical support agent for partners, each with different tone, constraints, and response styles.

Same question: "What does Pro cost?"

FAQ Explainer (Tone: Friendly, concise)
"Our Pro plan starts at $99/mo and includes..."

Support Clarifier (Tone: Empathetic, thorough)
"I understand the confusion. Let me walk you through..."

Policy Interpreter (Tone: Precise, formal)
"Per our pricing schedule, Section 3.1 states..."
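The layering can be sketched as data: several personas share one bounded data source, and only presentation differs. The `opener` field and `answer` helper below are illustrative assumptions about how tone might be applied, not the real persona engine.

```typescript
// Illustrative: personas layered on one capability profile change tone
// and phrasing, never data access. Field names are assumptions.
interface Persona {
  name: string;
  tone: string;
  opener: string; // how this persona begins an answer
}

const personas: Persona[] = [
  { name: "FAQ Explainer",      tone: "Friendly, concise",    opener: "Our Pro plan starts at" },
  { name: "Support Clarifier",  tone: "Empathetic, thorough", opener: "I understand the confusion. Let me walk you through" },
  { name: "Policy Interpreter", tone: "Precise, formal",      opener: "Per our pricing schedule, Section 3.1 states" },
];

// Every persona draws on the same bounded source; only the wrapping changes.
const sharedDataSource = "/api/data/pricing";

function answer(persona: Persona, fact: string): string {
  return `${persona.opener} ${fact}`;
}
```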

Intent-Based Routing

Every incoming message is classified against declared intents before any data is fetched. The router matches the question to the right data source, ensuring the AI only sees information relevant to the query. No broad context windows, no data leakage between intents.

"What security certifications do you have?"
  → Intent Classifier
      pricing_inquiry
      security_question ✓
      feature_question
  → /api/data/security
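The routing step above can be sketched in a few lines. A keyword matcher stands in here for the real model-based classifier, and the routing table mirrors the intents shown; everything else is an assumption for illustration.

```typescript
// Intent-first routing sketch: classify, then fetch only the matched source.
const routes: Record<string, string> = {
  pricing_inquiry:   "/api/data/pricing",
  security_question: "/api/data/security",
  feature_question:  "/api/data/features",
};

// Toy keyword classifier standing in for the real one.
function classifyIntent(message: string): string | null {
  const m = message.toLowerCase();
  if (/\b(price|cost|pricing)\b/.test(m)) return "pricing_inquiry";
  if (/\b(security|certification|compliance)\b/.test(m)) return "security_question";
  if (/\b(feature|support|integrat)/.test(m)) return "feature_question";
  return null; // undeclared intent: nothing is fetched
}

// Data is fetched only after classification, and only from the one
// source bound to the matched intent.
function routeMessage(message: string): string | null {
  const intent = classifyIntent(message);
  return intent ? routes[intent] : null;
}
```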

Governance Pipeline

The runtime processes every chat request through a strict pipeline: validate the token, check the domain, apply rate limits, classify the intent, enforce the capability profile, fetch bounded data, call the LLM with constraints, apply persona rules, and log everything. No shortcuts, no bypasses.

1. Validate Token
2. Check Domain
3. Rate Limit
4. Classify Intent
5. Enforce Profile
6. Fetch Data
7. Call LLM
8. Apply Persona

All steps pass before the response is sent.
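The eight steps above can be sketched as a fail-closed chain: each step returns an updated request context or throws, and a response exists only if every step passes. The step bodies below are placeholders; the real checks live inside the runtime.

```typescript
// Fail-closed pipeline sketch. Any throw aborts the chain; there is
// no partial-response path. Step implementations are stubs.
type Ctx = {
  token: string;
  origin: string;
  message: string;
  intent?: string;
  data?: unknown;
};
type Step = { name: string; run: (ctx: Ctx) => Ctx };

const steps: Step[] = [
  { name: "Validate Token",  run: (c) => { if (!c.token)  throw new Error("rejected"); return c; } },
  { name: "Check Domain",    run: (c) => { if (!c.origin) throw new Error("rejected"); return c; } },
  { name: "Rate Limit",      run: (c) => c }, // stub: counter check would live here
  { name: "Classify Intent", run: (c) => ({ ...c, intent: "pricing_inquiry" }) }, // stub classifier
  { name: "Enforce Profile", run: (c) => { if (!c.intent) throw new Error("rejected"); return c; } },
  { name: "Fetch Data",      run: (c) => ({ ...c, data: { plan: "Pro" } }) },     // stub bounded fetch
  { name: "Call LLM",        run: (c) => c }, // stub: constrained completion
  { name: "Apply Persona",   run: (c) => c }, // stub: tone rules
];

function processRequest(ctx: Ctx): Ctx {
  return steps.reduce((acc, step) => step.run(acc), ctx);
}
```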

Distribution

Get your AI onto partner domains in minutes.

Embeddable Chat Widget

The embed widget is a lightweight, framework-agnostic script that loads asynchronously. Partners copy a single snippet into their HTML and the chat interface appears — styled, governed, and ready. No npm install, no build step, no framework dependency.

<!-- Add to partner site -->
<script
  src="https://cdn.getilana.ai/embed.js"
  data-bot-id="bot_abc123"
  data-token="eyJhbG..."
  async
></script>
[Widget preview: AI Assistant: "How can I help you?" / User: "What plans do you offer?"]

Domain Allowlisting

Every embed token is scoped to a specific domain. At runtime, the Origin header is validated against the token claim and cross-checked with your allowlist. Unapproved domains are rejected before any data is processed — no exceptions, no fallbacks.

Allowed Domains
  partner-a.com     Active
  partner-b.io      Active
  partner-c.dev     Active
  unknown-site.xyz  Blocked

Bring Your Own Frontend

The runtime API is frontend-agnostic. Use the provided embed widget for quick deployment, or build your own UI with any framework. The same capability profiles, persona rules, and audit trails apply regardless of which frontend sends the request.

Chat Widget (Embed) · Search UI (Custom) · React App (SDK) → Governed

Same runtime API, same governance engine.
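A custom frontend just needs to send a well-formed request; governance happens server-side regardless of who sends it. The endpoint path and payload fields below are assumptions for illustration, not the documented Ilana AI API.

```typescript
// Sketch: building a runtime API request from any frontend.
// "https://api.getilana.ai/v1/chat" and the body fields are
// hypothetical; the real endpoint and schema may differ.
function buildChatRequest(botId: string, token: string, message: string): Request {
  return new Request("https://api.getilana.ai/v1/chat", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ botId, message }),
  });
}
```

Whether this request comes from the embed widget, a search UI, or a React app, the same capability profiles and persona rules are enforced before anything is returned.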

Security & Compliance

Fail closed. Log everything. Prove it.

Domain Security

The security model validates requests in strict order: parse origin, verify token signature, compare domain claims, check the allowlist, then apply rate limits. Any failure at any step returns a generic error — no information leakage, no partial access.

Allowed
  Origin: partner-a.com
  Token: valid
  Domain claim: match
  Allowlist: approved
  → 200 Response

Blocked
  Origin: rogue-site.xyz
  Token: valid
  Domain claim: mismatch
  Allowlist: skipped
  → 401 Unauthorized
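The ordered, fail-closed check can be sketched as a single function: every failure maps to the same generic 401, so a caller learns nothing about which step rejected it. Signature verification is stubbed to a boolean here, and all names are illustrative assumptions.

```typescript
// Strict-order validation sketch: parse origin, verify signature,
// compare domain claim, check allowlist. Any failure returns the
// same generic 401 (no information leakage, no partial access).
type Verdict = { status: 200 | 401 };

function validateRequest(
  origin: string,
  token: { signatureValid: boolean; domain: string },
  allowlist: Set<string>,
): Verdict {
  const host = origin.replace(/^https?:\/\//, "");
  if (!host) return { status: 401 };                 // 1. parse origin
  if (!token.signatureValid) return { status: 401 }; // 2. verify token signature
  if (token.domain !== host) return { status: 401 }; // 3. compare domain claim (allowlist skipped)
  if (!allowlist.has(host)) return { status: 401 };  // 4. check allowlist
  return { status: 200 };                            // 5. rate limits apply after this
}
```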

Complete Audit Trail

Every chat message, intent classification, data source query, and LLM response is logged with timestamps and conversation context. When compliance asks "what did the AI say to that customer?" — you have the answer, with the full chain of decisions that led to it.

Audit Log
  14:23:01  chat.message  intent: pricing_inquiry
  14:23:01  data.fetch    source: /api/data/pricing
  14:23:02  llm.response  tokens: 142, model: gpt-4o
  14:23:02  audit.logged  conversation: conv_8f2a...
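Structurally, each log line is an event keyed to a conversation, which is what makes the "what did the AI say to that customer?" question answerable with a simple filter. The field names and the `conv_example` id below are placeholders, not the real log schema.

```typescript
// Illustrative audit event shape (assumed field names).
interface AuditEvent {
  ts: string;
  event: "chat.message" | "data.fetch" | "llm.response" | "audit.logged";
  conversation: string;
  detail: Record<string, string | number>;
}

const trail: AuditEvent[] = [
  { ts: "14:23:01", event: "chat.message", conversation: "conv_example", detail: { intent: "pricing_inquiry" } },
  { ts: "14:23:01", event: "data.fetch",   conversation: "conv_example", detail: { source: "/api/data/pricing" } },
  { ts: "14:23:02", event: "llm.response", conversation: "conv_example", detail: { tokens: 142, model: "gpt-4o" } },
];

// Compliance question -> filter by conversation id to recover the
// full chain of decisions behind a single answer.
function eventsFor(events: AuditEvent[], conversation: string): AuditEvent[] {
  return events.filter((e) => e.conversation === conversation);
}
```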

Ready to distribute AI with confidence?

Talk to our team and see how Ilana AI fits your use case.