Troubleshooting & Support
Information to Provide When Seeking Help
When reporting issues, please include as much of the following as possible. It's okay if you can't provide everything, especially in production scenarios where the trigger might be unknown. Sharing most of this information will help us assist you more effectively.
1. LiteLLM Configuration File
Your config.yaml file, with sensitive values such as API keys redacted. If the number of workers is not set in the config, mention it separately.
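For reference, a minimal redacted config might look like this sketch; the model alias and settings are illustrative placeholders, not values from your deployment:

```yaml
model_list:
  - model_name: gpt-4o                     # illustrative alias
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY   # reference env vars rather than pasting keys

general_settings:
  master_key: "sk-REDACTED"                # redact real keys before sharing
```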
2. Initialization Command
The command used to start LiteLLM (e.g., `litellm --config config.yaml --num_workers 8 --detailed_debug`).
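If the exact start command is unknown (common in production), one way to recover it on a Linux host is to inspect the running process; the bracket trick keeps grep from matching itself:

```bash
# Show the full command line of the running LiteLLM process
ps aux | grep '[l]itellm'
```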
3. LiteLLM Version
- Current version
- Version when the issue first appeared (if different)
- If you upgraded, the version you changed from and the version you changed to
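To capture the exact version, `pip show litellm` works in any Python environment; the CLI also exposes a version flag in recent releases:

```bash
pip show litellm | grep -i '^version'   # installed package version
litellm --version                       # CLI version flag
```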
4. Environment Variables
Non-sensitive environment variables not covered by your config (e.g., `NUM_WORKERS`, `LITELLM_LOG`, `LITELLM_MODE`, `HOST`, `PORT`, `REDIS_HOST`, `REDIS_PORT`). Do not include passwords or API keys.
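One way to collect these without leaking secrets is to filter the environment, as in this sketch (the variable list and the secret filter are illustrative; adjust both to your setup):

```bash
# Print the relevant variables, then drop anything that looks like a secret
env | grep -E '^(NUM_WORKERS|LITELLM_LOG|LITELLM_MODE|HOST|PORT|REDIS_HOST|REDIS_PORT)=' \
    | grep -viE 'key|password|secret|token'
```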
5. Server Specifications
- Testing/Development: CPU cores, RAM, OS, Python version
- Production (if different): CPU cores, RAM, deployment method, number of instances/replicas
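On a Linux host, a few standard commands gather most of these details (adapt for macOS or containerized deployments):

```bash
nproc              # CPU cores
free -h            # RAM
uname -srm         # OS and kernel
python3 --version  # Python version
```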
6. Database and Redis Usage
- Database: Are you using a database (`DATABASE_URL` set)? If so, include the database type and version.
- Redis: Are you using Redis? If so, include the Redis version and configuration type (Standalone/Cluster/Sentinel).
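Assuming `redis-cli` and `psql` are installed and the usual connection variables are set, version details can be pulled like this (swap in your own client for other databases):

```bash
# Redis version and mode (standalone / cluster / sentinel)
redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" INFO server | grep -E 'redis_version|redis_mode'

# Database server version, assuming DATABASE_URL points at Postgres
psql "$DATABASE_URL" -c 'SELECT version();'
```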
7. Endpoints
The endpoint(s) where you're seeing issues (e.g., `/chat/completions`, `/embeddings`).
8. Request Example
A realistic example of the request causing issues, including expected vs. actual response and any error messages.
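For instance, a minimal reproduction against a locally running proxy might look like the sketch below; the port, model name, and key are placeholders for your own values:

```bash
curl -sS http://localhost:4000/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-REDACTED' \
  -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "hello"}]
      }'
```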
9. Error Logs, Stack Traces, and Metrics
Full error logs, stack traces, and screenshots of service metrics (CPU, memory, request rates, etc.) that might help diagnose the issue.
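If the proxy runs in a container, the standard log commands capture recent output; the container and deployment names here are hypothetical, so substitute your own:

```bash
docker logs --tail 500 litellm-proxy     # Docker; 'litellm-proxy' is a hypothetical name
kubectl logs deploy/litellm --tail=500   # Kubernetes; the 'litellm' deployment is hypothetical
```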
Support Channels
Community Discord
Community Slack
Our numbers: +1 (770) 8783-106 / +1 (412) 618-6238
Our emails: ishaan@berri.ai / krrish@berri.ai