Overview

Codebase Chat is an AI consultant that knows your entire codebase. Unlike semantic search, which returns file locations and snippets, Codebase Chat provides synthesized, ready-to-use answers: architecture explanations, debugging analysis, implementation guidance, and code reviews grounded in your actual code. This is powered by CodeAlive's GraphRAG approach: the AI has access to the full knowledge graph of your codebase, not just individual files.

How It Works

1. Ask a Question

   Ask any question about your codebase in natural language. The AI retrieves relevant context from the knowledge graph before generating a response.

2. Get a Synthesized Answer

   The response combines information from across your codebase, referencing multiple files, tracing data flows, and explaining architectural patterns.

3. Follow Up

   Continue the conversation with follow-up questions. Each conversation maintains context, so you can drill deeper without repeating yourself.

Example Conversations

You: "Explain how the authentication system works in this project"

AI: The authentication system uses a JWT-based flow with three main components:
    1. AuthController (src/controllers/auth.ts) handles login/signup endpoints
    2. JwtService (src/services/jwt.ts) manages token generation and validation
    3. AuthMiddleware (src/middleware/auth.ts) protects routes...
    [continues with specific code references]

Access Methods

The codebase_consultant tool is available through any CodeAlive-connected agent:
Parameter        Required  Description
query            Yes       Your question about the codebase
data_sources     Yes       Repository or workspace names
conversation_id  No        Continue a previous conversation

Your AI agent calls this tool automatically when you ask in-depth questions about your codebase.
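
As an illustration, the tool's arguments could be assembled as below. The helper function and dict shape are assumptions for this sketch; only the parameter names and their required/optional status come from the table above.

```python
def build_consultant_call(query, data_sources, conversation_id=None):
    """Assemble the argument dict for a codebase_consultant tool call.

    query            -- required: your question about the codebase
    data_sources     -- required: list of repository or workspace names
    conversation_id  -- optional: pass to continue a previous conversation
    """
    args = {
        "query": query,
        "data_sources": data_sources,
    }
    if conversation_id is not None:
        # Only included on follow-up turns.
        args["conversation_id"] = conversation_id
    return args


# First question: no conversation_id yet.
first = build_consultant_call(
    "How does authentication work?",
    ["my-backend"],
)

# Follow-up: reuse the id returned by the first response.
follow_up = build_consultant_call(
    "What about the refresh token flow?",
    ["my-backend"],
    conversation_id="abc123",
)
```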

Conversation Continuity

Every chat response includes a conversation_id. Pass it in follow-up questions to maintain context:

    # First question
    python scripts/chat.py "How does authentication work?" my-backend
    # Response includes conversation_id: abc123

    # Follow-up (cheaper, preserves context)
    python scripts/chat.py "What about the refresh token flow?" --continue abc123

This saves cost and provides better answers, since the AI doesn't need to re-retrieve context for each question.
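
A minimal sketch of how an agent-side wrapper might thread conversation_id through a session. ChatSession and the send callable are hypothetical; the response shape (an answer plus a conversation_id) follows the description above.

```python
class ChatSession:
    """Carries conversation_id across turns so follow-ups reuse context.

    `send` stands in for however your agent invokes the
    codebase_consultant tool; it is assumed to return a dict with
    "answer" and "conversation_id" keys.
    """

    def __init__(self, send, data_sources):
        self.send = send
        self.data_sources = data_sources
        self.conversation_id = None  # set after the first response

    def ask(self, query):
        args = {"query": query, "data_sources": self.data_sources}
        if self.conversation_id:
            # Reusing the id avoids re-retrieving context each turn.
            args["conversation_id"] = self.conversation_id
        response = self.send(args)
        self.conversation_id = response["conversation_id"]
        return response["answer"]
```

Used this way, only the first turn pays the full retrieval cost; each subsequent ask() automatically continues the same conversation.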

Best Practices

Search First

Use semantic search to locate code. Use chat when you need explanations or analysis; it's more expensive per call.

Use Follow-ups

Continue conversations with conversation_id instead of starting fresh each time

Be Specific

“Explain the payment retry logic” gets better results than “tell me about payments”

Scope by Repo

Target specific repositories for more focused, accurate answers