We probe 9 major LLM API providers every 60 seconds from Cloudflare's global edge network. For OpenAI and Anthropic, we check their official Statuspage APIs. For the other providers, we perform HTTP health checks — any response with a status code below 500 (even 401 or 429) shows the API answered, so we count the service as reachable and operational.
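The classification rule above can be sketched as a small function. This is an illustrative sketch, not our production code — `classifyStatus` and the example endpoint are assumed names:

```typescript
// Hedged sketch of the health-check rule: any HTTP status below 500
// proves the API answered, so it counts as operational.
type ProbeResult = "operational" | "down";

function classifyStatus(httpStatus: number): ProbeResult {
  // 2xx/3xx: healthy. 401/429: the service answered (auth/rate-limit),
  // so it is still reachable. 5xx: the service itself is failing.
  return httpStatus < 500 ? "operational" : "down";
}

// Hypothetical usage from an edge probe:
// const res = await fetch("https://api.mistral.ai/v1/models");
// const result = classifyStatus(res.status);
```

The design choice is deliberate: a 401 or 429 still demonstrates the provider's API servers are up and responding, which is what a reachability probe measures.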
We monitor OpenAI (GPT-4, GPT-4o), Anthropic (Claude), Google Gemini, Groq, Mistral AI, xAI (Grok), Cohere, Perplexity, and Together AI. We track real-time status, response latency, and historical uptime for each provider.
Because probes run every 60 seconds from Cloudflare's edge network, the status you see is near real-time — at most about a minute old. We retain 90 days of historical uptime data for each provider.
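To show how a historical uptime figure could be derived from stored probe results, here is a minimal sketch — the `Probe` shape and `uptimePercent` helper are assumptions for illustration, not our actual schema:

```typescript
// Hedged sketch: uptime percentage over a window of stored probes.
interface Probe {
  timestamp: number; // Unix epoch seconds of the probe
  ok: boolean;       // true if the probe classified the service as operational
}

function uptimePercent(probes: Probe[]): number {
  if (probes.length === 0) return 100; // assumption: no data reports as fully up
  const up = probes.filter((p) => p.ok).length;
  return (up / probes.length) * 100;
}
```

At one probe per minute, a 90-day window holds roughly 129,600 samples per provider, so even a single missed probe moves the percentage only slightly.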