AI Integration

The EnterpriseAI platform provides AI capabilities through the AICore service, including RAG-powered chat with real-time streaming, document upload and classification, and document indexing for retrieval-augmented generation. All AI features are accessed through the same BFF proxy pattern as other platform services.

Chat with SSE Streaming

Chat is the primary AI interaction model. Users send messages and receive streaming responses via Server-Sent Events (SSE). Each chat interaction is scoped to a workflow and stage, which determine the AI's behavior, system prompt, and RAG context.

How Chat Streaming Works

Chat streaming uses a dedicated stream proxy (/api/eai/stream/[[...rest]]) that is separate from the standard BFF proxy. This ensures that SSE responses are forwarded with the correct headers (Content-Type: text/event-stream, Cache-Control: no-cache, Connection: keep-alive) and are not disrupted by content encoding or response buffering.
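To illustrate, a stream proxy of this shape can be sketched as a Next.js catch-all route handler. This is a minimal sketch under stated assumptions, not the platform's actual implementation: the `AICORE_BASE_URL` environment variable and the upstream path handling are hypothetical.

```typescript
// Hypothetical sketch of app/api/eai/stream/[[...rest]]/route.ts.
// Forwards the request upstream and pipes the SSE byte stream back
// with streaming-safe headers -- no buffering, no re-encoding.

export function sseHeaders(): Record<string, string> {
  return {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  };
}

export async function POST(req: Request, ctx: { params: { rest?: string[] } }) {
  const path = (ctx.params.rest ?? []).join('/');
  const upstream = await fetch(`${process.env.AICORE_BASE_URL}/${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: req.body,
    // `duplex` is required by Node's fetch when streaming a request body
    duplex: 'half',
  } as RequestInit & { duplex: 'half' });

  // Pass the upstream stream through untouched so SSE events arrive as sent.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: sseHeaders(),
  });
}
```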

Chat Request Format

```ts
{
  message: string;              // The user's message (NOT "chat_input")
  conversation_id: string;      // REQUIRED -- UUID for conversation continuity
  params: Record<string, any>;  // REQUIRED -- use {} if no params needed
}
```
:::caution

The field names are `message` (not `chat_input`), `conversation_id` (required, not optional), and `params` (required, use `{}` if empty). Using incorrect field names will result in a `422 Unprocessable Entity` error.

:::
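If you call the endpoint directly with `fetch` rather than through the SDK, a small helper can guarantee the correct field names. This is a sketch: `buildChatBody` is a hypothetical helper, and the example path follows the shape shown in the Chat API reference.

```typescript
// Hypothetical helper: serializes a chat request body with the exact
// snake_case field names AICore expects. `conversation_id` and `params`
// are both required; `params` defaults to {}.
function buildChatBody(
  message: string,
  conversationId: string,
  params: Record<string, unknown> = {}
): string {
  return JSON.stringify({ message, conversation_id: conversationId, params });
}

// Usage against the stream proxy (path shape per the Chat API reference):
// await fetch('/api/eai/stream/v3/chat/stream/my-tenant/my-workflow/chat', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildChatBody('Hello', crypto.randomUUID()),
// });
```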

Using the Platform SDK for Chat

```ts
import { EAIPlatformClient } from '@enterpriseaigroup/platform-sdk';

const client = new EAIPlatformClient({ tenantId: 'my-tenant' });

// Stream a chat response
const stream = await client.chat.stream({
  message: 'What permits are required for a commercial renovation?',
  conversationId: crypto.randomUUID(),
  params: { context: 'permits' },
  workflowId: 'my-workflow',
  stage: 'chat',
});

// Read SSE events from the stream
const reader = stream.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const text = decoder.decode(value);
  // Process SSE event text (e.g., append to UI)
  console.log(text);
}
```
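The raw text logged above interleaves SSE framing (`data:` prefixes, blank-line event delimiters) with payload, and a chunk boundary can fall mid-event. A small stateful parser handles both; this is a sketch, since the exact event payload format AICore emits is not specified here.

```typescript
// Minimal SSE parser: accumulates chunks and yields complete `data:` payloads.
// Events are delimited by a blank line; unfinished text is carried over in
// `buffer` so an event split across two chunks is reassembled correctly.
function createSseParser() {
  let buffer = '';
  return function feed(chunk: string): string[] {
    buffer += chunk;
    const events: string[] = [];
    let sep: number;
    while ((sep = buffer.indexOf('\n\n')) !== -1) {
      const raw = buffer.slice(0, sep);
      buffer = buffer.slice(sep + 2);
      const data = raw
        .split('\n')
        .filter((line) => line.startsWith('data:'))
        .map((line) => line.slice(5).trimStart())
        .join('\n');
      if (data) events.push(data);
    }
    return events;
  };
}
```

Feed each decoded chunk from the reader loop into `feed()` and append only the returned payloads to the UI.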

Non-Streaming Chat

For use cases where streaming is not needed, the SDK also provides a non-streaming chat method:

```ts
const response = await client.chat.send({
  message: 'Summarize the application requirements.',
  conversationId: crypto.randomUUID(),
  params: {},
  workflowId: 'my-workflow',
  stage: 'chat',
});

console.log(response.content);
```

Using the useChat Hook

The useChat React hook provides a complete chat interface with state management, streaming, and conversation history:

```tsx
'use client';
import { useState } from 'react';
import { useChat } from '@/hooks/useChat';

function ChatPanel() {
  // Generate the conversation ID once; calling crypto.randomUUID() inline in
  // the hook options would start a new conversation on every render.
  const [conversationId] = useState(() => crypto.randomUUID());

  const {
    messages,     // Array of chat messages (user + assistant)
    sendMessage,  // Function to send a new message
    isStreaming,  // Whether a response is currently streaming
    error,        // Error state
  } = useChat({
    workflowId: 'my-workflow',
    stage: 'chat',
    conversationId,
  });

  const [input, setInput] = useState('');

  async function handleSend() {
    if (!input.trim() || isStreaming) return;
    const userMessage = input;
    setInput('');
    await sendMessage(userMessage, {});
  }

  return (
    <div>
      <div className="messages">
        {messages.map((msg, i) => (
          <div key={i} className={msg.role}>
            {msg.content}
          </div>
        ))}
      </div>
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === 'Enter' && handleSend()}
        disabled={isStreaming}
      />
      <button onClick={handleSend} disabled={isStreaming}>
        Send
      </button>
    </div>
  );
}
```

Document Upload and Classification

AICore can process uploaded documents to classify their type and extract metadata. This is useful for applications that need to identify what kind of document a user has submitted (e.g., floor plan, site survey, environmental report).

Upload a Document

```ts
const client = new EAIPlatformClient({ tenantId: 'my-tenant' });

// Upload a single document
await client.documents.upload(file, {
  category: 'permit-document',
});
```

Classify Documents

Classification identifies the type of a document based on its content:

```ts
// Classify multiple files (batch)
const classifications = await client.documents.classify(files);

classifications.forEach((result) => {
  console.log(result.filename, result.classification);
});

// Classify a document by URL
const result = await client.documents.classifyByUrl(
  'https://example.com/document.pdf'
);
console.log(result.classification);
```

RAG Indexing

RAG (Retrieval-Augmented Generation) indexing makes documents searchable by the AI during chat sessions. When a document is indexed, its content is chunked, embedded, and stored in a vector database. During chat, relevant chunks are retrieved and included in the AI's context.
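As a rough illustration of the chunking step, a naive fixed-size chunker with overlap looks like the following. This is illustrative only; AICore's actual chunking and embedding strategy is internal to the platform, and the `chunkText` helper and its defaults are assumptions.

```typescript
// Illustrative only: split text into fixed-size chunks with overlap, so that
// a sentence falling on a chunk boundary still appears whole in one chunk.
function chunkText(text: string, size = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}
```

Each chunk would then be embedded and stored in the vector database; at query time the closest chunks are retrieved and prepended to the AI's context.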

```ts
// Index a document for RAG retrieval
await client.documents.ragIndex(documentId);
```

Once indexed, the document's content will be available to the chat AI when users ask questions in the relevant workflow and stage.

CLI Commands

The EAI CLI provides commands for chat and document operations:

```bash
# Start an interactive chat session
eai chat --tenant my-tenant --workflow my-workflow --stage chat

# Send a single chat message
eai chat send "What permits are needed?" \
  --tenant my-tenant --workflow my-workflow --stage chat

# Upload a document
eai docs upload ./plans.pdf --tenant my-tenant

# Classify a document
eai docs classify ./unknown-document.pdf --tenant my-tenant

# Index a document for RAG
eai docs index <document-id> --tenant my-tenant
```

Chat API Reference

| Operation | Method | Path | Body |
| --- | --- | --- | --- |
| Stream Chat | POST | `/v3/chat/stream/{tenant}/{workflow}/{stage}` | `{ message, conversation_id, params }` |
| Send Chat | POST | `/v3/chat/{tenant}/{workflow}/{stage}` | `{ message, conversation_id, params }` |

Document API Reference

| Operation | Method | Path | Body |
| --- | --- | --- | --- |
| Upload | POST | `/v3/documents/upload` | multipart/form-data |
| Classify (batch) | POST | `/v3/documents/classify` | multipart files |
| Classify by URL | POST | `/v3/documents/classify-by-url` | `{ url }` |
| RAG Index | POST | `/v3/documents/rag-index` | `{ document_id }` |
| Index | POST | `/v3/documents/index` | `{ document_id }` |
| Get Checklist | POST | `/v3/documents/checklist` | `{ tenant_id, development_type, ...opts }` |

SDK Proxy Routing

The Platform SDK automatically routes requests through the correct proxy path:

| SDK Method | Proxy Path | Reason |
| --- | --- | --- |
| `chat.stream()` | `/api/eai/stream/` | SSE requires explicit streaming headers and no content-encoding stripping |
| `chat.send()` | `/api/eai/` | Standard JSON response through the regular proxy |
| `documents.*` | `/api/eai/` | Standard JSON/multipart through the regular proxy |

This routing is handled automatically by the SDK. You do not need to configure proxy paths manually.
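The routing rule is simple enough to state in code. The sketch below mirrors the table above; the `Operation` type and `proxyBase` function are hypothetical names, not SDK exports.

```typescript
// Hypothetical mirror of the SDK's internal routing rule: streaming chat
// goes through the dedicated SSE proxy; everything else uses the BFF proxy.
type Operation = 'chat.stream' | 'chat.send' | 'documents';

function proxyBase(op: Operation): string {
  return op === 'chat.stream' ? '/api/eai/stream/' : '/api/eai/';
}
```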