AI Features Troubleshooting Guide
Last Updated: 2026-01-30 Status: ✅ Production Ready
Overview
This guide provides solutions to common issues with Bayit+ AI features (AI Search, AI Recommendations, Auto Catch-Up). For each issue, we provide symptoms, causes, and step-by-step solutions.
Quick Reference
| Issue | Likely Cause | Quick Fix |
|---|---|---|
| Insufficient Credits Error | Balance < required credits | Check balance, wait for refill |
| Search Returns No Results | Query too vague | Make query more specific |
| Recommendations Not Personalized | No viewing history | Watch more content |
| Catch-Up Unavailable | Channel not live/transcribed | Try different channel |
| API Timeout | High load or slow LLM | Retry after 30 seconds |
| Rate Limit Exceeded | Too many requests | Wait for rate limit reset |
| Authentication Failed | Invalid/expired token | Re-login to refresh token |
AI Search Issues
Issue: Search Returns No Results
Symptoms:
- Empty results list
- "No results found" message
Causes:
- Query too vague or generic
- No matching content in database
- Search filters too restrictive
- Language mismatch
Solutions:
1. Make Query More Specific
❌ Bad: "movie"
✅ Good: "Israeli comedy movies from the 2020s"
❌ Bad: "show"
✅ Good: "Family-friendly series similar to Shtisel"
2. Remove or Relax Filters
```javascript
// Web (Chrome DevTools Console)
// Check active filters
console.log(searchStore.getState().filters);
// Clear filters
searchStore.setState({ filters: {} });
```
3. Try Different Languages
```javascript
// If searching in English, also try Hebrew (e.g. "Israeli comedy")
searchWithAI("קומדיה ישראלית");
```
4. Check Content Availability
```bash
# Backend - Check if matching content exists
cd backend
poetry run python -c "
from app.models.content import Content
import asyncio

async def check():
    count = await Content.find().count()
    print(f'Total content items: {count}')

asyncio.run(check())
"
```
Issue: Results Not Relevant
Symptoms:
- Results don't match query intent
- Low relevance scores (<0.5)
- Unrelated content appears
Causes:
- Query ambiguous or unclear
- Metadata incomplete
- Vector embeddings outdated
- Model misinterpreting context
Solutions:
1. Add More Context to Query
❌ Vague: "something good"
✅ Specific: "Heartwarming Israeli family drama from recent years"
❌ Unclear: "action"
✅ Clear: "Fast-paced action movies with car chases, similar to Fast & Furious"
2. Include Preferences and Constraints
✅ "Israeli comedy movies from 2020-2025, suitable for family viewing, around 90 minutes long"
3. Rebuild Vector Embeddings (Admin)
```bash
# Backend
cd backend
poetry run python scripts/rebuild_vector_index.py
```
4. Enrich Metadata (Admin)
```bash
# Use AI Agent to enrich metadata
curl -X POST http://localhost:8000/api/v1/admin/ai-agent/enrich \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"content_ids": ["all"]}'
```
Issue: "Insufficient Credits" Error
Symptoms:
- Error message: "Insufficient AI credits"
- Balance shown as < 10 credits
- Search button disabled
Causes:
- Beta 500 credits depleted
- Credit deduction failed but balance not updated
- User not enrolled in Beta 500
Solutions:
1. Check Actual Balance
```javascript
// Web (Chrome DevTools Console)
fetch('/api/v1/beta/credits/balance', {
  headers: { 'Authorization': `Bearer ${localStorage.getItem('auth_token')}` },
}).then(r => r.json()).then(console.log);
```
2. Request Credit Refund (if operation failed)
Contact support with the transaction details.
Include: user_id, timestamp, feature used, and the exact error message.
3. Verify Beta Enrollment
```bash
# Backend - Check enrollment status
cd backend
poetry run python -c "
from app.models.beta_user import BetaUser
import asyncio

async def check():
    user = await BetaUser.find_one(BetaUser.email == 'user@example.com')
    print(f'Beta User: {user.is_beta_user if user else False}')
    print(f'Balance: {user.balance if user else 0}')

asyncio.run(check())
"
```
4. Admin Credit Grant
```bash
# Admin can grant additional credits
curl -X POST http://localhost:8000/api/v1/admin/beta/credits/grant \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user_abc123",
    "amount": 100,
    "reason": "Beta testing participation reward"
  }'
```
AI Recommendations Issues
Issue: Recommendations Not Personalized
Symptoms:
- Generic recommendations
- Same content shown repeatedly
- No consideration of viewing history
Causes:
- No viewing history recorded
- Insufficient history (<10 views)
- Cache outdated
- Context not detected
Solutions:
1. Build Viewing History
Minimum required: 10+ completed views
Optimal: 30+ views across different categories
2. Set Explicit Preferences
```javascript
// Web
fetch('/api/v1/preferences', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${token}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    preferred_genres: ['comedy', 'drama'],
    preferred_languages: ['he', 'en'],
    content_types: ['movies', 'series'],
  }),
});
```
3. Clear Recommendation Cache
```javascript
// Web (Chrome DevTools Console)
localStorage.removeItem('ai-recommendations');
recommendationsStore.setState({ recommendations: [], lastFetched: null });
```
4. Provide Context
```bash
# Request recommendations with specific context
curl -X GET "http://localhost:8000/api/v1/beta/recommendations?context=evening&mood=relaxing" \
  -H "Authorization: Bearer $TOKEN"
```
Issue: Same Recommendations Repeated
Symptoms:
- Identical recommendations every request
- No new suggestions
- Stale content
Causes:
- Cache TTL too long
- No new content added
- Viewing history not updating
- Algorithm not diversifying
Solutions:
1. Force Refresh
```javascript
// Web - Force new recommendations
await aiService.getRecommendations({ force_refresh: true });
```
2. Check Content Updates
```bash
# Backend - Check recent content additions
cd backend
poetry run python -c "
from app.models.content import Content
from datetime import datetime, timedelta
import asyncio

async def check():
    week_ago = datetime.utcnow() - timedelta(days=7)
    count = await Content.find(Content.created_at >= week_ago).count()
    print(f'New content last 7 days: {count}')

asyncio.run(check())
"
```
3. Update Preferences to Increase Diversity
```javascript
// Request more diverse recommendations
await aiService.getRecommendations({
  diversity: 0.8, // 0.0-1.0, higher = more diverse
  exclude_watched: true,
});
```
Auto Catch-Up Issues
Issue: "No Content Available"
Symptoms:
- Error: "No content available for catch-up"
- Catch-up button disabled
- Empty summary
Causes:
- Channel not live
- Insufficient transcript data (<5 minutes)
- Transcription service not enabled
- Channel not configured for catch-up
Solutions:
1. Verify Channel is Live
```bash
# Check live channel status
curl http://localhost:8000/api/v1/live/channels | jq '.[] | select(.is_live == true)'
```
2. Wait for More Content
Minimum required: 5 minutes of live content
Optimal: 15-30 minutes for quality summaries
3. Check Transcription Service
```bash
# Backend - Verify transcription enabled
cd backend
poetry run python -c "
from app.core.config import settings
print(f'Transcription enabled: {settings.ENABLE_LIVE_TRANSCRIPTION}')
print(f'Transcription service: {settings.TRANSCRIPTION_SERVICE}')
"
```
4. Enable Channel for Catch-Up (Admin)
```bash
# Update channel configuration
curl -X PATCH http://localhost:8000/api/v1/admin/live/channels/{channel_id} \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"enable_catchup": true, "enable_transcription": true}'
```
Issue: Summary Too Brief or Generic
Symptoms:
- Very short summary (< 50 words)
- Lacks detail
- Generic statements
- Missing key moments
Causes:
- Insufficient source material
- Poor transcript quality
- Model selection (Haiku vs Sonnet)
- Prompt not optimized
Solutions:
1. Wait for More Content
Brief summary: 5-10 minutes content
Detailed summary: 15-30 minutes content
Comprehensive: 30+ minutes content
2. Request Detailed Mode (if available)
```bash
curl -X GET "http://localhost:8000/api/v1/live/{channel_id}/catchup?detail=high" \
  -H "Authorization: Bearer $TOKEN"
```
3. Check Transcript Quality
```bash
# Backend - Review transcript
cd backend
poetry run python scripts/view_channel_transcript.py --channel-id <channel_id>
```
4. Upgrade to Sonnet for Better Summaries (Admin)
```text
# Update LLM config to use Sonnet for catch-up
# Edit Google Cloud Secret: LLM_MODEL_CONFIG
# Change feature_model_mapping.catch_up to "claude-sonnet-4-5"
```
General AI Feature Issues
Issue: Features Loading Slowly
Symptoms:
- Spinner shows for >5 seconds
- Timeout errors
- User frustration
Causes:
- LLM API latency
- Network congestion
- High server load
- Large request payloads
Solutions:
1. Check Network Connection
```bash
# Test API connectivity (assumes a local curl-format.txt with timing variables)
curl -w "@curl-format.txt" -o /dev/null -s http://localhost:8000/health
```
2. Monitor LLM API Latency
```bash
# Backend - Check Prometheus metrics
curl http://localhost:9090/metrics | grep llm_request_duration_seconds
```
3. Optimize Request Payload
```javascript
// Reduce query length if too long
const optimizedQuery = query.substring(0, 500); // Max 500 chars
```
4. Enable Caching
```bash
# Backend - Verify caching enabled
cd backend
grep "ENABLE_LLM_CACHE" .env
# Should be: ENABLE_LLM_CACHE=true
```
Issue: "Service Unavailable" Error
Symptoms:
- HTTP 503 errors
- "AI service temporarily down" message
- Intermittent failures
Causes:
- LLM API outage
- Rate limit exceeded (global)
- Circuit breaker open
- Backend service down
Solutions:
1. Check LLM Service Status
```bash
# Anthropic Status Page
open https://status.anthropic.com
# OpenAI Status Page
open https://status.openai.com
```
2. Verify Circuit Breaker State
```bash
# Backend logs
cd backend
tail -f logs/backend.log | grep "circuit_breaker"
```
3. Reset Circuit Breaker (Admin)
```bash
# Reset circuit breaker
curl -X POST http://localhost:8000/api/v1/admin/llm/circuit-breaker/reset \
  -H "Authorization: Bearer $ADMIN_TOKEN"
```
4. Check Backend Service
```bash
# Health check
curl http://localhost:8000/health
# Service status
systemctl status bayit-backend-production
```
Error Codes Reference
Credit System Errors
| Code | Message | Cause | Solution |
|---|---|---|---|
| 402 | Insufficient credits | Balance too low | Check balance, wait for refill |
| 409 | Concurrent credit deduction | Race condition | Retry automatically handled |
| 422 | Invalid credit amount | Negative or zero amount | Check request payload |
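The table above can be mirrored in a small client-side dispatch helper. A minimal sketch; the function name and action strings are illustrative, not part of any Bayit+ SDK:

```python
def credit_error_action(status: int) -> str:
    """Map a credit-system HTTP status to the recovery action from the table."""
    actions = {
        402: "check balance, wait for refill",  # insufficient credits
        409: "retry",                           # concurrent deduction; safe to retry
        422: "fix payload",                     # amount must be a positive integer
    }
    return actions.get(status, "unhandled")

print(credit_error_action(409))  # retry
```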
Rate Limiting Errors
| Code | Message | Cause | Solution |
|---|---|---|---|
| 429 | Rate limit exceeded | Too many requests | Wait for reset (shown in headers) |
| 429 | Feature rate limit exceeded | Feature-specific limit hit | Wait or use different feature |
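On a 429 the reset time is shown in the response headers. A minimal sketch of reading it, assuming the standard `Retry-After` header; confirm the exact header names Bayit+ returns against the AI API Reference:

```python
def retry_delay_seconds(headers: dict, default: float = 30.0) -> float:
    """Seconds to wait before retrying, based on a 429 response's headers."""
    value = headers.get("Retry-After")
    if value is None:
        return default  # fall back to the guide's 30-second suggestion
    try:
        return max(0.0, float(value))
    except ValueError:
        return default  # Retry-After may be an HTTP-date; fall back to the default

print(retry_delay_seconds({"Retry-After": "12"}))  # 12.0
```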
LLM API Errors
| Code | Message | Cause | Solution |
|---|---|---|---|
| 500 | LLM request failed | LLM API error | Check LLM service status, retry |
| 503 | Service temporarily unavailable | Circuit breaker open | Wait 60s for circuit breaker reset |
| 504 | LLM request timeout | Request too slow | Reduce request complexity |
Debug Logging
Enable Debug Logging (Web)
```javascript
// Chrome DevTools Console
localStorage.setItem('debug', 'bayit:ai:*');
location.reload();
```
Enable Debug Logging (Backend)
```bash
# .env
LOG_LEVEL=debug
AI_AGENT_LOG_LEVEL=debug

# Restart backend
poetry run uvicorn app.main:app --reload
```
View Logs
Web:
Open the Chrome DevTools Console; all AI-related logs are displayed automatically.
Backend:
```bash
cd backend
tail -f logs/backend.log | grep "AI"
```
Structured Logging Query:
```bash
# Query logs with jq
cat logs/backend.log | jq 'select(.component == "ai_service")'
```
Performance Optimization
Client-Side
1. Enable Request Caching
```javascript
// Cache search results for 5 minutes (300000 ms)
const cacheKey = `search:${hashQuery(query)}`;
const cached = localStorage.getItem(cacheKey);
if (cached && Date.now() - JSON.parse(cached).timestamp < 300000) {
  return JSON.parse(cached).results;
}
// Cache miss: fetch fresh results, then store them with a timestamp
const results = await searchWithAI(query);
localStorage.setItem(cacheKey, JSON.stringify({ results, timestamp: Date.now() }));
return results;
```
2. Debounce Search Input
```javascript
// Wait 500ms after the user stops typing
const debouncedSearch = debounce(searchWithAI, 500);
```
3. Lazy Load Components
```javascript
// Lazy load AI components
const AISearchModal = lazy(() => import('./components/ai/AISearchModal'));
```
Server-Side
1. Enable Redis Caching
```bash
# .env
REDIS_ENABLE=true
LLM_CACHE_TTL=3600
```
2. Use Haiku for High-Volume Operations
```python
# For search and recommendations, use Haiku
model = "claude-haiku-3-5-20241022"  # $1/M tokens vs $15/M
```
3. Batch Requests
```python
# Process multiple items in one LLM request
results = await llm_service.batch_analyze(items)
```
Getting Help
Documentation
- AI Features Overview - Technical overview
- AI API Reference - Complete API docs
- Beta 500 User Manual - End-user guide
- LLM Configuration - Model setup
Support Channels
For Users:
- Email: support@bayitplus.com
- Community: community.bayitplus.com
- GitHub Issues (for developers)
For Developers:
- Backend Issues: https://github.com/bayit-plus/backend/issues
- Frontend Issues: https://github.com/bayit-plus/web/issues
- Documentation Issues: docs@bayitplus.com
Emergency Contact:
- Critical production issues: emergency@bayitplus.com
- On-call engineer: +1-XXX-XXX-XXXX
Information to Provide
When reporting issues, include:
User Information
- User ID
- Email address
- Beta enrollment status
Issue Details
- Feature used (AI Search, Recommendations, Catch-Up)
- Error message (exact text)
- Timestamp (with timezone)
- Steps to reproduce
Technical Context
- Platform (Web, iOS, Android, tvOS)
- Browser/OS version
- Network conditions
- Request/response payloads (if available)
Logs (if available)
- Browser console logs
- Backend logs (for admins)
- Network HAR file
Preventive Measures
For Users
- Monitor Credit Balance - Check regularly to avoid interruptions
- Use Specific Queries - More specific = better results
- Build Viewing History - Watch content to improve recommendations
- Provide Feedback - Help improve AI accuracy with thumbs up/down
For Developers
- Enable Monitoring - Prometheus + Grafana dashboards
- Set Budget Alerts - Alert when AI costs exceed thresholds
- Test Edge Cases - Low credits, no history, empty results
- Implement Retries - Auto-retry failed requests with exponential backoff
- Cache Aggressively - Reduce LLM API calls by 40-60%
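The retry bullet above can be sketched as a capped exponential backoff with jitter; the base delay, cap, and jitter range here are illustrative choices, not Bayit+ policy:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay before retry number `attempt` (0-based): base * 2^attempt, capped, jittered."""
    delay = min(cap, base * (2 ** attempt))
    return delay * random.uniform(0.5, 1.0)  # jitter spreads out synchronized retries

# Retries 0..3 wait at most 1s, 2s, 4s, 8s respectively
for attempt in range(4):
    print(f"attempt {attempt}: wait up to {min(60.0, 2.0 ** attempt):.0f}s")
```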
For Admins
- Monitor Error Rates - Set alerts for >5% error rate
- Review Cost Daily - Track AI spending trends
- Audit Credit Usage - Identify unusual patterns
- Keep Metadata Updated - Run weekly enrichment jobs
- Test Failover - Verify fallback strategies work
Related Documentation
- AI API Reference - API error codes and responses
- Beta 500 User Manual - Feature usage guide
- LLM Configuration - Model and fallback config
- Credit System - Credit architecture
Document Status: ✅ Complete Last Updated: 2026-01-30 Maintained by: Support Team Next Review: 2026-02-28