A collection of practical tips for getting better results from CodeAlive across all AI agents.

Prompting Strategies

Use the actual names from your codebase — class names, service names, module names. The more specific you are, the more relevant the results.
| Instead of… | Try… |
| --- | --- |
| "auth code" | "authentication middleware in the Express API gateway" |
| "database stuff" | "Prisma schema for the User model" |
| "error handling" | "how does OrderService handle payment failures?" |
| "the API" | "the /api/v2/users REST endpoint" |
CodeAlive understands architectural patterns, not just individual files. Ask higher-level questions:
"How does error handling work across the API layer?"
"What pattern do we use for database transactions?"
"How are background jobs structured in this project?"
When using codebase_consultant, pass the conversation_id from a previous response to maintain context. This lets you drill deeper without re-explaining:
First: "How does the payment flow work?"
Follow-up: "What happens if the payment provider returns a timeout?"
Follow-up: "Show me the retry logic for that case"
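A follow-up call might be sketched like this. Only the conversation_id parameter is documented above; the surrounding shape (tool, arguments, question) and the id value are illustrative assumptions:

```json
{
  "tool": "codebase_consultant",
  "arguments": {
    "question": "What happens if the payment provider returns a timeout?",
    "conversation_id": "conv_abc123"
  }
}
```

Omitting conversation_id starts a fresh conversation; reusing it keeps the earlier payment-flow discussion in context.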
Start with codebase_search to find relevant files, then use codebase_consultant to ask deeper questions about what you found:
1. Search: "payment webhook handler"
2. Chat: "Explain how the payment webhook handler processes refund events"
This gives the consultant more focused context and produces better answers.
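The two-step flow above could look like the following sequence of tool calls; the argument names (query, question) are assumptions, not confirmed parameter names:

```json
[
  {
    "tool": "codebase_search",
    "arguments": { "query": "payment webhook handler" }
  },
  {
    "tool": "codebase_consultant",
    "arguments": {
      "question": "Explain how the payment webhook handler processes refund events"
    }
  }
]
```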

Search Optimization

codebase_search supports different modes:
| Mode | When to use | Speed |
| --- | --- | --- |
| auto (default) | Most queries — it picks the best mode automatically | Varies |
| fast | Quick lookups: function names, file locations, imports | Fast |
| deep | Architectural questions spanning multiple files | Slower |
For most cases, auto makes the right choice. Use explicit modes when you know what you need.
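Forcing a mode might look like this sketch; the mode values come from the table above, while the argument names are assumptions:

```json
{
  "tool": "codebase_search",
  "arguments": {
    "query": "how are background jobs scheduled and retried?",
    "mode": "deep"
  }
}
```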
If you have multiple repositories indexed, scope your search to avoid noise:
"Search for authentication logic in the backend repository"
"Find React components in the frontend repo"
Use get_data_sources to see available repositories and workspaces.
codebase_search is fast and returns file locations with relevant snippets. codebase_consultant is slower and uses more tokens, but synthesizes a complete answer.

Recommended flow:
  1. Search to find where relevant code lives
  2. Chat only when you need synthesis, explanation, or analysis

Workspace Organization

Indexing too many unrelated repositories in one workspace adds noise to search results. If you’re getting irrelevant hits, split your workspace or scope your queries to specific repos.

Cost & Token Optimization

codebase_search is significantly cheaper and faster than codebase_consultant. Use search for lookups and locating code. Reserve chat for synthesis and analysis.
| Tool | Best for | Relative cost |
| --- | --- | --- |
| get_data_sources | Listing repos, checking status | Minimal |
| codebase_search | Finding code, locating files | Low |
| codebase_consultant | Explanations, analysis, reviews | Higher |
Shorter, more focused queries return better results and use fewer tokens. Instead of explaining your entire situation, ask a direct question:
| Instead of… | Try… |
| --- | --- |
| "I'm working on a feature where users can reset their password and I need to understand how the current password reset flow works so I can modify it" | "How does the password reset flow work?" |

Agent-Specific Tips

Claude Code

  • Use claude mcp add for the simplest setup — one command, done
  • Add CodeAlive to your custom instructions so Claude uses it automatically
  • Works with both remote MCP and local Docker
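A minimal setup sketch for Claude Code; the server name and URL below are placeholders, and the exact flags may vary by CLI version — check CodeAlive's setup docs for the real endpoint:

```shell
# Register CodeAlive as a remote MCP server (name and URL are placeholders)
claude mcp add --transport http codealive https://example.com/mcp
```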
Cursor

  • Composer Agent mode automatically uses MCP tools when relevant
  • Use .cursor/mcp.json for project-specific config (shareable with your team)
  • Add CodeAlive patterns to .cursorrules for consistent AI behavior
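A shareable .cursor/mcp.json might look like this sketch; the server name and URL are placeholders, not CodeAlive's actual endpoint:

```json
{
  "mcpServers": {
    "codealive": {
      "url": "https://example.com/mcp"
    }
  }
}
```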
Windsurf

  • Config uses serverUrl (not url), unlike other agents
  • Supports the Streamable HTTP transport
  • Check Windsurf's MCP settings page for connection status
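A Windsurf config sketch using the serverUrl key noted above; the mcpServers wrapper, server name, and URL are assumptions:

```json
{
  "mcpServers": {
    "codealive": {
      "serverUrl": "https://example.com/mcp"
    }
  }
}
```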
Cline

  • Supports auto-approve for MCP tools to reduce confirmation prompts
  • Add CodeAlive rules to .cline/rules.md for automatic context usage:
    Always search CodeAlive before writing new code.
    Follow existing patterns found in the codebase.
    
GitHub Copilot (VS Code)

  • Native MCP support is generally available in VS Code
  • Configure in .vscode/mcp.json for project-level setup
  • Agent mode in Copilot Chat automatically discovers MCP tools
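A .vscode/mcp.json sketch for project-level setup; the server name and URL are placeholders, and the exact schema may differ across VS Code versions:

```json
{
  "servers": {
    "codealive": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```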

What’s Next