Troubleshooting FAQ
Common issues and solutions for BrainOS setup, AI Worker configuration, connectors, and production deployment.
Quick Debugging Checklist
- Check environment variables are set correctly (SUPABASE_URL, SUPABASE_ANON_KEY, etc.)
- Verify database migrations have been applied (check Supabase dashboard → Table Editor)
- Ensure organization ID exists in the database and matches your queries
- Run consolidation engine manually to build brain knowledge (need min 30 task executions per domain)
- Check Claude Desktop logs for MCP connection errors (~/Library/Logs/Claude/mcp*.log)
- Verify API keys have correct permissions (Stripe read access, HubSpot CRM scopes, etc.)
- Use service role key (not anon key) for debugging RLS issues
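The environment-variable check at the top of this list can be partially automated. Below is a minimal sketch; the exact variable list beyond SUPABASE_URL and SUPABASE_ANON_KEY is an assumption, so adjust it to match your deployment:

```typescript
// Report which required environment variables are missing or empty.
// The variable list here is an assumption; edit it for your setup.
const REQUIRED_VARS = [
  'SUPABASE_URL',
  'SUPABASE_ANON_KEY',
  'BRAINOS_ORG_ID',
];

function missingEnvVars(env: Record<string, string | undefined>): string[] {
  // A variable counts as missing if it is unset or whitespace-only.
  return REQUIRED_VARS.filter((name) => !env[name]?.trim());
}

const missing = missingEnvVars(process.env);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(', ')}`);
} else {
  console.log('All required environment variables are set');
}
```

Running this before starting the MCP server or the consolidation engine catches the most common misconfiguration early.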
Setup & Installation
Installation fails with 'Cannot find module @brainos/workers'
Ensure you've installed the package and your package.json includes it in dependencies. If using pnpm/yarn workspaces, run the install command from the workspace root.
# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install
# Or with pnpm
pnpm install --force
Supabase migrations fail with 'permission denied for schema public'
Your Supabase user needs schema creation privileges. Run migrations with the service role key (not anon key). Check your Supabase dashboard → Settings → API for the service_role key.
# Use service role key for migrations
SUPABASE_SERVICE_ROLE_KEY=<your-service-key> \
npx supabase db push
TypeScript errors about missing types after installation
Brain OS packages export types. Ensure your tsconfig.json has 'moduleResolution': 'bundler' or 'node16' and includes the packages in your type roots.
// tsconfig.json
{
"compilerOptions": {
"moduleResolution": "bundler",
"types": ["@brainos/workers"]
}
}
MCP Server
Claude Desktop doesn't show Brain OS MCP server
Check your claude_desktop_config.json location and formatting. On macOS it's at ~/Library/Application Support/Claude/claude_desktop_config.json. Restart Claude Desktop after editing.
{
"mcpServers": {
"brainos": {
"command": "npx",
"args": ["-y", "@brainos/mcp-server"],
"env": {
"SUPABASE_URL": "https://your-project.supabase.co",
"SUPABASE_ANON_KEY": "your-anon-key",
"BRAINOS_ORG_ID": "org_123"
}
}
}
}
MCP server starts but tools aren't working
Verify environment variables are set correctly. Check Claude Desktop logs for connection errors. Ensure your organization ID exists in the database.
# Check Claude Desktop logs (macOS)
tail -f ~/Library/Logs/Claude/mcp*.log
# Verify organization exists in Supabase
SELECT * FROM organizations WHERE id = 'org_123';
brainos_query returns empty results despite having data
Brain learning cycles may not have run yet. Trigger the consolidation cycle manually to build knowledge. Check that you have sufficient task executions (min 30 per domain).
import { createBrainSleepCycle } from '@brainos/workers/orchestrator';
const engine = createBrainSleepCycle({
supabase,
organizationId: 'org_123',
lookbackHours: 720, // 30 days
verbose: true
});
await engine.run(); // This runs the brain learning cycle
Brain Learning & RL
Consolidation runs but brain learning produces 0 RL signals
Common causes: (1) insufficient data (need min 30 observations per domain), (2) no temporal correlation in the signals, (3) confidence threshold set too high. Lower the discovery thresholds or generate more task executions.
// Lower thresholds for initial testing
const engine = createBrainSleepCycle({
supabase,
organizationId: 'org_123',
minObservations: 10, // Default: 30
minEdgeWeight: 0.10, // Default: 0.20
autoPromoteConfidence: 0.55, // Default: 0.75
});
RL signals look incorrect or quality scores are always 0.5
Check that tasks are actually completing with real results (not empty data arrays). Empty data arrays are penalised -0.25 in quality scoring. Review the federated_knowledge table to confirm edges are being written.
// Higher precision settings
const engine = createBrainSleepCycle({
supabase,
organizationId: 'org_123',
minEdgeWeight: 0.30, // Stricter threshold
autoPromoteConfidence: 0.85, // Higher confidence required
causalMethod: 'conditional', // Confounder rejection
});
Error: 'Insufficient observations: X < 30'
Not enough data points for statistical significance. Either ingest more signals or lower the minObservations threshold. For demo/testing, use minObservations: 5-10.
// Generate more test data
import { generateRealisticData } from './realistic-data-generator';
for (let day = 0; day < 90; day++) {
const signals = generateRealisticData(day);
await repository.insertSignals(signals);
}
Causal discovery is too slow (>30 seconds)
Large datasets require longer processing. Reduce discoveryLookbackDays or filter to specific domains. Consider running consolidation as a background job.
// Run consolidation in background
import { scheduleSleepCycle } from '@brainos/workers/orchestrator';
// Run every 6 hours
await scheduleSleepCycle({
supabase,
organizationId: 'org_123',
interval: '6 hours', // pg_cron syntax
});
Persistence & Database
Error: 'relation cross_domain_signals does not exist'
Database migrations haven't run. Apply migrations using Supabase CLI or run the migration SQL files manually from supabase/migrations/.
# Apply migrations
cd packages/memory-stack
npx supabase db push
# Or manually in Supabase SQL Editor
-- Run each migration file in order from supabase/migrations/
Signals inserted but not appearing in queries
Check Row Level Security (RLS) policies. Ensure your organization_id matches the query filter. Use service role key (not anon key) for admin queries.
// Use service role for debugging
const supabase = createClient(
process.env.SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY! // Not anon key
);
// Check if signals exist (bypasses RLS)
const { data, count } = await supabase
.from('cross_domain_signals')
.select('*', { count: 'exact' })
.eq('organization_id', 'org_123');
console.log('Total signals:', count);
Database connection timeout errors
Supabase free tier has connection limits. Use connection pooling or reduce concurrent queries. Upgrade to Pro for higher limits.
// Use Supabase pooler connection
const supabase = createClient(
'https://your-project.supabase.co', // supabase-js uses the REST API; pooling applies to direct Postgres connections
process.env.SUPABASE_ANON_KEY!,
{
db: { schema: 'public' },
global: { fetch: fetch.bind(globalThis) }
}
);
Connectors
Stripe connector fails with 'Invalid API key'
Check that STRIPE_API_KEY is set correctly. Use test mode keys (sk_test_) for development. Ensure the key has read permissions for all required resources.
# Verify Stripe key
echo $STRIPE_API_KEY
# Test connection
curl https://api.stripe.com/v1/charges?limit=1 \
-u $STRIPE_API_KEY:
HubSpot connector returns 'Authentication failed'
HubSpot requires OAuth or Private App tokens. Ensure your token has scopes: crm.objects.contacts.read, crm.objects.companies.read, crm.objects.deals.read.
# Test HubSpot token
curl -X GET \
'https://api.hubapi.com/crm/v3/objects/contacts?limit=1' \
-H 'Authorization: Bearer YOUR_HUBSPOT_TOKEN'
Webhook receiver gets duplicate events
Ensure you're storing event IDs for deduplication. The brainos-webhook endpoint handles this automatically, but custom receivers need to implement it.
// Deduplicate webhooks
const { data: existing } = await supabase
.from('webhook_events')
.select('id')
.eq('external_event_id', event.id)
.maybeSingle(); // returns null instead of an error when no row matches
if (existing) {
return new Response('Duplicate', { status: 200 });
}
LLM Copilot
Copilot returns generic answers without domain intelligence
Ensure brain learning cycles have run (run consolidation). Check that the Brain Context Mesh is injecting context at the right decision point. Verify ANTHROPIC_API_KEY is set and Brain IQ routing is working.
// Test federated knowledge exists
const { data: edges } = await supabase
.from('federated_knowledge')
.select('*')
.eq('organization_id', 'org_123');
console.log('Knowledge edges in brain:', edges?.length || 0);
// If 0, run consolidation cycle first
await engine.run();
Error: 'Anthropic API key not found'
Set ANTHROPIC_API_KEY in your environment. You can create an API key at console.anthropic.com. Use Claude 3.5 Sonnet or later for best results.
# Set API key
export ANTHROPIC_API_KEY=sk-ant-api03-...
# Verify it works
curl https://api.anthropic.com/v1/messages \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "content-type: application/json" \
-d '{"model":"claude-3-5-sonnet-20241022","max_tokens":10,"messages":[{"role":"user","content":"Hi"}]}'
Performance & Scaling
Signal ingestion is slow for large batches
Use batch inserts (max 500 signals/batch). Enable connection pooling. Consider async processing with job queues for large volumes.
// Batch insert signals
const BATCH_SIZE = 500;
for (let i = 0; i < signals.length; i += BATCH_SIZE) {
const batch = signals.slice(i, i + BATCH_SIZE);
await repository.insertSignals(batch);
}
Embeddings generation taking too long
Brain OS uses CPU-based N-gram embeddings (no GPU required). For large text volumes, increase batch size or use streaming ingestion.
// Faster embedding config
import { createEmbeddings } from '@brainos/workers/embeddings';
const embeddings = createEmbeddings({
ngramSize: 2, // Lower = faster (default: 3)
maxFeatures: 1000, // Lower = faster (default: 5000)
});
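To see why a smaller ngramSize and a lower maxFeatures cap reduce work, here is a simplified character n-gram extractor. This is an illustrative sketch only, not the @brainos/workers implementation:

```typescript
// Simplified character n-gram extraction, illustrating how ngramSize and
// a maxFeatures cap bound the feature space. Illustration only; this is
// not how @brainos/workers computes embeddings internally.
function charNgrams(text: string, n: number, maxFeatures: number): string[] {
  const counts = new Map<string, number>();
  const normalized = text.toLowerCase();
  // Slide an n-character window over the text and count each gram.
  for (let i = 0; i + n <= normalized.length; i++) {
    const gram = normalized.slice(i, i + n);
    counts.set(gram, (counts.get(gram) ?? 0) + 1);
  }
  // Keep only the most frequent grams, mirroring a maxFeatures cap.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([gram]) => gram)
    .slice(0, maxFeatures);
}

const sample = 'signal ingestion is slow for large batches';
console.log('unique bigrams:', charNgrams(sample, 2, 1000).length);
console.log('unique trigrams:', charNgrams(sample, 3, 1000).length);
```

A smaller n and a tighter cap shrink the vocabulary each document is projected onto, which is where the speedup comes from; the trade-off is a coarser representation.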