POST /api/chat/completions

Chat Completions
curl --request POST \
  --url https://app.codealive.ai/api/chat/completions \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '
{
  "messages": [
    {
      "role": "<string>",
      "content": "<string>",
      "messageId": "<string>",
      "searchMode": "<string>"
    }
  ],
  "dataSources": [
    {}
  ],
  "names": [
    {}
  ],
  "stream": true,
  "conversationId": "<string>",
  "debug": true
}
'
{
  "event": "<string>",
  "data.event": "<string>",
  "data.conversationId": "<string>",
  "data.messageId": "<string>",
  "data.chunk": "<string>",
  "data.type": "<string>",
  "data.done": true
}

Overview

The Chat Completions endpoint enables AI-powered conversations with deep understanding of your codebase. It streams responses in real time, providing contextually aware answers based on your indexed repositories and workspaces.

Request

messages
array
required
Array of message objects representing the conversation history.
dataSources
array
Array of data source objects to use as context. Each object must have an id field with the data source ID. Example: [{"id": "69087243381f39ef605c3841"}]. Required: you must specify either dataSources OR names (at least one is required).
names
array
Alternative to dataSources - array of data source names (strings) to use as context. Example: ["MyWorkspace", "my-repository"]. Required: you must specify either dataSources OR names (at least one is required).
stream
boolean
default:"true"
Whether to stream the response as Server-Sent Events (SSE).
conversationId
string
Optional conversation ID to continue an existing conversation.
debug
boolean
default:"false"
Enable debug mode for additional information in the response.

Response

Streaming Response

When stream: true, the endpoint returns Server-Sent Events (SSE). The response includes:
  1. Metadata event (sent first) - Contains conversation and message IDs:
event: message
data: {"event":"metadata","conversationId":"507f1f77bcf86cd799439011","messageId":"507f191e810c19729de860ea"}
  2. Content chunks - The actual response content:
event: message
data: {"chunk": "The authentication ", "type": "content"}

event: message
data: {"chunk": "flow starts in ", "type": "content"}

event: message
data: {"chunk": "the AuthService class", "type": "content"}
  3. Done event (sent last) - Indicates stream completion:
event: message
data: {"done": true, "type": "done"}
event
string
Always "message" for SSE events
data.event
string
Event type in metadata: "metadata" (first event only)
data.conversationId
string
Unique identifier for the conversation. Use this ID to continue the conversation in subsequent requests by passing it in the conversationId field.
data.messageId
string
Unique identifier for the assistant’s message in this conversation
data.chunk
string
Text content chunk (for content events)
data.type
string
Content type: "content" for text chunks, "done" for completion
data.done
boolean
Set to true in the final event to indicate stream completion
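Putting the event types together, a client reassembles the response by reading data: lines and dispatching on the payload. A minimal sketch, run here against the sample events shown above rather than a live connection:

```python
import json

def parse_sse_events(lines):
    """Extract the JSON payload from each SSE 'data:' line."""
    events = []
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

# Sample stream, copied from the events documented above
sample = [
    'event: message',
    'data: {"event":"metadata","conversationId":"507f1f77bcf86cd799439011","messageId":"507f191e810c19729de860ea"}',
    'event: message',
    'data: {"chunk": "The authentication ", "type": "content"}',
    'event: message',
    'data: {"chunk": "flow starts in ", "type": "content"}',
    'event: message',
    'data: {"chunk": "the AuthService class", "type": "content"}',
    'event: message',
    'data: {"done": true, "type": "done"}',
]

text = ""
conversation_id = None
for ev in parse_sse_events(sample):
    if ev.get("event") == "metadata":
        conversation_id = ev["conversationId"]  # save for follow-up requests
    elif ev.get("type") == "content":
        text += ev["chunk"]                     # accumulate the answer text
    elif ev.get("type") == "done":
        break                                   # stream is complete
```

With a live connection, the same loop runs over the lines of the HTTP response body instead of the sample list.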

Non-Streaming Response

When stream: false, returns a complete response:
{
  "content": "The authentication flow starts in the AuthService class...",
  "conversationId": "507f1f77bcf86cd799439011",
  "messageId": "507f191e810c19729de860ea"
}

Specifying Data Sources

You have two options for specifying which repositories/workspaces to search:
  1. By ID - Use dataSources with an array of objects containing data source IDs
  2. By Name - Use names with an array of data source name strings
Use either dataSources OR names, not both. If neither is specified, all available data sources will be used.

Code Examples

import requests
import json

url = "https://app.codealive.ai/api/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

# Option 1: Using data source IDs (recommended if you know the IDs)
data = {
    "messages": [
        {
            "role": "user",
            "content": "Explain how the authentication system works"
        }
    ],
    "dataSources": [
        {"id": "69087243381f39ef605c3841"}
    ],
    "stream": False
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
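The by-name alternative looks like this (a sketch; "MyWorkspace" is the placeholder name from the names field description, and the url/headers variables are the ones defined above):

```python
# Option 2 (sketch): selecting data sources by name, with streaming enabled.
# "MyWorkspace" is a placeholder; substitute one of your own data source names.
stream_data = {
    "messages": [
        {"role": "user", "content": "Explain how the authentication system works"}
    ],
    "names": ["MyWorkspace"],  # use either "names" or "dataSources", not both
    "stream": True,
}

# With stream=True the body arrives as SSE, so pass stream=True to requests
# and read the response line by line:
# response = requests.post(url, headers=headers, json=stream_data, stream=True)
# for line in response.iter_lines(decode_unicode=True):
#     if line and line.startswith("data:"):
#         ...  # parse the JSON payload after "data:"
```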

Data Source Selection Examples

{
  "messages": [{"role": "user", "content": "Explain the code"}],
  "dataSources": [
    {"id": "69087243381f39ef605c3841"},
    {"id": "69087243381f39ef605c3842"}
  ]
}
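Equivalently, data sources can be selected by name with the names field (the names below are the placeholder examples from the request parameters):

```json
{
  "messages": [{"role": "user", "content": "Explain the code"}],
  "names": ["MyWorkspace", "my-repository"]
}
```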

Use Cases

Multi-Turn Conversations

Use the conversationId from the metadata event to maintain conversation context across multiple requests:
import requests

url = "https://app.codealive.ai/api/chat/completions"
headers = {"Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json"}

# First message - creates a new conversation
response1 = requests.post(url, headers=headers, json={
    "messages": [{"role": "user", "content": "What is the authentication flow?"}],
    "stream": False
})
result1 = response1.json()
conversation_id = result1['conversationId']

# Follow-up message - continues the conversation
response2 = requests.post(url, headers=headers, json={
    "messages": [{"role": "user", "content": "How is the JWT token validated?"}],
    "conversationId": conversation_id,  # Use the same conversation ID
    "stream": False
})
# The AI will have context from the previous message

Code Explanation

{
  "messages": [
    {
      "role": "user",
      "content": "Explain the payment processing flow in detail"
    }
  ]
}

Bug Analysis

{
  "messages": [
    {
      "role": "user",
      "content": "Why might users be experiencing timeout errors in the checkout process?"
    }
  ]
}

Architecture Review

{
  "messages": [
    {
      "role": "user",
      "content": "Analyze the microservices architecture and suggest improvements"
    }
  ]
}

Code Generation

{
  "messages": [
    {
      "role": "user",
      "content": "Generate unit tests for the UserService class"
    }
  ]
}

Error Responses

400 Bad Request

{
  "error": {
    "code": "INVALID_REQUEST",
    "message": "Messages array is required"
  }
}

404 Not Found

{
  "error": {
    "code": "DATASOURCE_NOT_FOUND",
    "message": "Repository 'repo-xyz' not found or not indexed"
  }
}

429 Rate Limited

{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Chat completion rate limit exceeded"
  }
}

Best Practices

  • Specify relevant data sources to improve response quality
  • Include conversation history for better context
  • Use system messages to set behavior expectations
  • Implement proper error handling for SSE connections
  • Buffer responses for better user experience
  • Handle connection timeouts gracefully
  • Monitor token usage in responses
  • Set appropriate max_tokens limits
  • Consider response chunking for large outputs