Chat Module
The Chat module provides AI-powered conversational capabilities with support for streaming responses via Server-Sent Events (SSE), conversation history, and RAG (Retrieval-Augmented Generation) context.
Basic Usage
import { PlatformClient } from '@enterpriseaigroup/platform-sdk';
const client = new PlatformClient({ /* config */ });
// Simple message
const response = await client.chat.sendMessage('What are the active projects?');
console.log(response.content);
// Streaming chat
const stream = await client.chat.streamChat('Analyze the Q1 performance');
for await (const event of stream) {
  if (event.type === 'message') {
    process.stdout.write(event.data.delta);
  }
}
Methods
sendMessage()
Send a message and receive a complete response (non-streaming).
sendMessage(message: string, options?: ChatOptions): Promise<ChatResponse>
Parameters
message is the positional parameter; the remaining rows are fields of the optional ChatOptions object.
| Parameter | Type | Required | Description |
|---|---|---|---|
| message | string | Yes | User message content |
| conversationId | string | No | Conversation ID for multi-turn context |
| systemPrompt | string | No | System prompt to guide AI behavior |
| temperature | number | No | Response randomness (0-1, default: 0.7) |
| maxTokens | number | No | Maximum response length (default: 2048) |
| context | ChatContext | No | Additional context (resources, documents) |
| model | string | No | LLM model to use (default: tenant default) |
Returns
interface ChatResponse {
  id: string;
  content: string;
  role: 'assistant';
  conversationId: string;
  citations?: Citation[];
  metadata?: Record<string, any>;
  usage?: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
}
Example
const response = await client.chat.sendMessage(
  'What is the status of project X?',
  {
    conversationId: 'conv_123',
    temperature: 0.5,
    context: {
      resources: [{ type: 'project', id: 'proj_456' }],
      includeHistory: true
    }
  }
);
console.log(response.content);
// Display citations
response.citations?.forEach(citation => {
  console.log(`Source: ${citation.source} (${citation.confidence})`);
});
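The returned conversationId can be passed back on the next call to continue the same conversation. A minimal follow-up sketch (the follow-up question is illustrative):
// Continue the same conversation by reusing the ID from the previous response
const followUp = await client.chat.sendMessage(
  'Who owns the next milestone?',
  { conversationId: response.conversationId }
);
console.log(followUp.content);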
streamChat()
Send a message and receive a streaming response via Server-Sent Events.
streamChat(message: string, options?: ChatOptions): Promise<AsyncIterable<ChatEvent>>
Parameters
Same as sendMessage().
Returns
An async iterable of ChatEvent objects:
type ChatEvent =
  | MessageEvent
  | CitationEvent
  | StatusEvent
  | ErrorEvent
  | DoneEvent;
interface MessageEvent {
  type: 'message';
  data: {
    content: string;
    delta: string; // Incremental content since last event
  };
}
interface CitationEvent {
  type: 'citation';
  data: Citation;
}
interface StatusEvent {
  type: 'status';
  data: {
    status: 'thinking' | 'searching' | 'generating';
    message?: string;
  };
}
interface ErrorEvent {
  type: 'error';
  data: {
    code: string;
    message: string;
  };
}
interface DoneEvent {
  type: 'done';
  data: {
    conversationId: string;
    messageId: string;
    usage?: TokenUsage;
  };
}
Example
const stream = await client.chat.streamChat(
  'Summarize the latest sales data',
  {
    conversationId: 'conv_789',
    temperature: 0.3
  }
);
let fullContent = '';
for await (const event of stream) {
  switch (event.type) {
    case 'status':
      console.log(`Status: ${event.data.status}`);
      break;
    case 'message':
      process.stdout.write(event.data.delta);
      fullContent += event.data.delta;
      break;
    case 'citation':
      console.log(`\nCitation: ${event.data.source}`);
      break;
    case 'error':
      console.error(`Error: ${event.data.message}`);
      break;
    case 'done':
      console.log(`\nConversation ID: ${event.data.conversationId}`);
      console.log(`Tokens used: ${event.data.usage?.totalTokens}`);
      break;
  }
}
console.log('\nFinal response:', fullContent);
getHistory()
Retrieve conversation history.
getHistory(conversationId: string, options?: HistoryOptions): Promise<ChatMessage[]>
Parameters
conversationId is the positional parameter; limit and before are fields of the optional HistoryOptions object.
| Parameter | Type | Required | Description |
|---|---|---|---|
| conversationId | string | Yes | Conversation ID |
| limit | number | No | Maximum messages to return (default: 50) |
| before | string | No | Return messages before this message ID |
Returns
interface ChatMessage {
  id: string;
  role: 'user' | 'assistant' | 'system';
  content: string;
  timestamp: string;
  citations?: Citation[];
  metadata?: Record<string, any>;
}
Example
const history = await client.chat.getHistory('conv_123', { limit: 20 });
history.forEach(msg => {
  console.log(`[${msg.role}] ${msg.content}`);
});
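For conversations longer than one page, before can be used as a cursor to fetch older messages. A minimal sketch, assuming the first element of each returned page is the oldest message in that page (verify the actual ordering against the Chat API reference):
// Fetch the 20 most recent messages, then the 20 messages preceding them
const recent = await client.chat.getHistory('conv_123', { limit: 20 });
const older = await client.chat.getHistory('conv_123', {
  limit: 20,
  before: recent[0]?.id
});
console.log(`Loaded ${recent.length + older.length} messages`);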
getConversation()
Get conversation metadata and summary.
getConversation(conversationId: string): Promise<Conversation>
Returns
interface Conversation {
  id: string;
  title: string;
  createdAt: string;
  updatedAt: string;
  messageCount: number;
  summary?: string;
  metadata?: Record<string, any>;
}
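Example
A minimal usage sketch (the conversation ID is illustrative):
const conversation = await client.chat.getConversation('conv_123');
console.log(`${conversation.title} (${conversation.messageCount} messages)`);
if (conversation.summary) {
  console.log(`Summary: ${conversation.summary}`);
}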
listConversations()
List all conversations for the current user.
listConversations(options?: ListOptions): Promise<Conversation[]>
Example
const conversations = await client.chat.listConversations({
  limit: 10,
  sort: { field: 'updatedAt', order: 'desc' }
});
conversations.forEach(conv => {
  console.log(`${conv.title} (${conv.messageCount} messages)`);
});
deleteConversation()
Delete a conversation and all its messages.
deleteConversation(conversationId: string): Promise<void>
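Example
Stale conversations can be cleaned up by combining listConversations() with deleteConversation(). A minimal sketch (the 90-day cutoff and page size are illustrative choices, not SDK defaults):
// Delete conversations that have not been updated in the last 90 days (illustrative cutoff)
const cutoff = Date.now() - 90 * 24 * 60 * 60 * 1000;
const conversations = await client.chat.listConversations({ limit: 100 });
for (const conv of conversations) {
  if (new Date(conv.updatedAt).getTime() < cutoff) {
    await client.chat.deleteConversation(conv.id);
  }
}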
Chat Context
Provide additional context to enhance AI responses:
interface ChatContext {
  // Include specific resources
  resources?: Array<{
    type: string;
    id: string;
  }>;
  // Include documents for RAG
  documents?: string[]; // Document IDs
  // Include conversation history
  includeHistory?: boolean;
  // Custom context data
  customData?: Record<string, any>;
}
Example with Context
const response = await client.chat.sendMessage(
  'Compare these two projects',
  {
    context: {
      resources: [
        { type: 'project', id: 'proj_123' },
        { type: 'project', id: 'proj_456' }
      ],
      documents: ['doc_abc', 'doc_def'], // Related documents
      includeHistory: true,
      customData: {
        analysisType: 'comparative',
        metrics: ['budget', 'timeline', 'scope']
      }
    }
  }
);
Citations
AI responses can include citations to source material:
interface Citation {
  source: string; // Source identifier (document ID, URL, etc.)
  sourceType: 'document' | 'resource' | 'web' | 'knowledge';
  content: string; // Cited content snippet
  confidence: number; // Relevance score (0-1)
  metadata?: {
    title?: string;
    author?: string;
    date?: string;
    pageNumber?: number;
  };
}
Using Citations
const response = await client.chat.sendMessage(
  'What does the contract say about termination?'
);
if (response.citations) {
  console.log('Sources:');
  response.citations.forEach((citation, i) => {
    console.log(`\n[${i + 1}] ${citation.metadata?.title || citation.source}`);
    console.log(`  "${citation.content}"`);
    console.log(`  Confidence: ${(citation.confidence * 100).toFixed(0)}%`);
  });
}
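Because confidence is a 0-1 relevance score, low-relevance citations can be filtered out before display. A minimal sketch continuing the example above (the 0.7 threshold is an illustrative choice, not an SDK default):
// Keep only citations above an illustrative relevance threshold, strongest first
const MIN_CONFIDENCE = 0.7;
const strongCitations = (response.citations ?? [])
  .filter(citation => citation.confidence >= MIN_CONFIDENCE)
  .sort((a, b) => b.confidence - a.confidence);
strongCitations.forEach(citation => {
  console.log(`${citation.sourceType}: ${citation.metadata?.title || citation.source}`);
});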
System Prompts
Customize AI behavior with system prompts:
const response = await client.chat.sendMessage(
  'Analyze the sales pipeline',
  {
    systemPrompt: `You are a sales analyst assistant.
Always provide specific numbers and percentages.
Focus on actionable insights.
Format responses with clear sections: Overview, Key Findings, Recommendations.`
  }
);
Temperature Control
Adjust response creativity:
// Factual, deterministic responses (low temperature)
const factual = await client.chat.sendMessage(
  'What is the project budget?',
  { temperature: 0.1 }
);
// Creative, varied responses (high temperature)
const creative = await client.chat.sendMessage(
  'Write a marketing tagline',
  { temperature: 0.9 }
);
Error Handling
try {
  const response = await client.chat.sendMessage('Hello');
} catch (error) {
  if (error.code === 'RATE_LIMIT_EXCEEDED') {
    console.error('Too many requests. Please wait.');
  } else if (error.code === 'CONTEXT_LENGTH_EXCEEDED') {
    console.error('Message too long or too much context.');
  } else if (error.code === 'CONTENT_FILTER') {
    console.error('Message blocked by content policy.');
  } else {
    console.error('Chat error:', error.message);
  }
}
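For RATE_LIMIT_EXCEEDED, retrying after a short delay is usually enough. A minimal retry sketch, assuming the thrown error exposes the same code property used above (the attempt count and delay values are illustrative):
// Retry sendMessage on rate-limit errors, waiting a bit longer on each attempt
async function sendWithRetry(message: string, maxAttempts = 3) {
  let attempt = 0;
  while (true) {
    try {
      return await client.chat.sendMessage(message);
    } catch (error: any) {
      attempt++;
      if (error.code !== 'RATE_LIMIT_EXCEEDED' || attempt >= maxAttempts) throw error;
      // Wait before retrying; the delay grows with each attempt (illustrative values)
      await new Promise(resolve => setTimeout(resolve, 2000 * attempt));
    }
  }
}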
Streaming Error Handling
const stream = await client.chat.streamChat('Hello');
try {
  for await (const event of stream) {
    if (event.type === 'error') {
      throw new Error(event.data.message);
    }
    // Handle other events
  }
} catch (error) {
  console.error('Stream error:', error);
}
Advanced Example
Complete example with streaming, context, and error handling:
async function analyzeProjectWithAI(projectId: string) {
  const stream = await client.chat.streamChat(
    'Provide a detailed analysis of this project including risks, progress, and recommendations.',
    {
      conversationId: `project_analysis_${projectId}`,
      temperature: 0.4,
      maxTokens: 4096,
      context: {
        resources: [{ type: 'project', id: projectId }],
        includeHistory: false
      },
      systemPrompt: 'You are a project management expert. Provide structured analysis with clear sections.'
    }
  );
  let analysis = '';
  const citations: Citation[] = [];
  try {
    for await (const event of stream) {
      switch (event.type) {
        case 'status':
          console.log(`[${event.data.status}] ${event.data.message || ''}`);
          break;
        case 'message':
          process.stdout.write(event.data.delta);
          analysis += event.data.delta;
          break;
        case 'citation':
          citations.push(event.data);
          break;
        case 'error':
          throw new Error(event.data.message);
        case 'done':
          console.log(`\n\nAnalysis complete. Tokens used: ${event.data.usage?.totalTokens}`);
          break;
      }
    }
  } catch (error) {
    console.error('Analysis failed:', error);
    throw error;
  }
  return { analysis, citations };
}
// Usage
const result = await analyzeProjectWithAI('proj_123');
console.log('Analysis:', result.analysis);
console.log('Sources:', result.citations.length);
Next Steps
- Learn about Documents module for RAG context
- Explore useChat hook for React integration
- See Chat API endpoints
- Read AI integration guide