# Rate Limits

## Rate limit tiers
| Plan | Assessments/min | API requests/min | Batch size |
|---|---|---|---|
| Starter | 100 | 200 | 50 |
| Growth | 1,000 | 2,000 | 500 |
| Scale | 10,000 | 20,000 | 1,000 |
| Enterprise | Custom | Custom | Custom |
Rate limits apply per organization, not per API key. Multiple keys from the same org share the quota.
## Rate limit headers
Every API response includes rate limit information:
```
X-Govern-RateLimit-Limit: 1000
X-Govern-RateLimit-Remaining: 847
X-Govern-RateLimit-Reset: 1744470240
X-Govern-RateLimit-Window: 60
```

| Header | Description |
|---|---|
| `X-Govern-RateLimit-Limit` | Requests allowed per window |
| `X-Govern-RateLimit-Remaining` | Requests remaining in current window |
| `X-Govern-RateLimit-Reset` | Unix timestamp when window resets |
| `X-Govern-RateLimit-Window` | Window duration in seconds |
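These headers can drive proactive client-side pacing: pause until the reset timestamp once the remaining count reaches zero, instead of waiting for a 429. A minimal sketch, assuming only the header names documented above (`parseRateLimit` and `msUntilReset` are illustrative helpers, not part of any SDK):

```typescript
interface RateLimitInfo {
  limit: number;      // X-Govern-RateLimit-Limit
  remaining: number;  // X-Govern-RateLimit-Remaining
  reset: number;      // X-Govern-RateLimit-Reset (Unix seconds)
  window: number;     // X-Govern-RateLimit-Window (seconds)
}

// Read the rate limit headers off any API response.
function parseRateLimit(headers: Headers): RateLimitInfo {
  return {
    limit: Number(headers.get('X-Govern-RateLimit-Limit') ?? 0),
    remaining: Number(headers.get('X-Govern-RateLimit-Remaining') ?? 0),
    reset: Number(headers.get('X-Govern-RateLimit-Reset') ?? 0),
    window: Number(headers.get('X-Govern-RateLimit-Window') ?? 60),
  };
}

// Milliseconds to pause before the next request: zero while quota remains,
// otherwise sleep until the window resets.
function msUntilReset(info: RateLimitInfo, nowMs: number = Date.now()): number {
  if (info.remaining > 0) return 0;
  return Math.max(0, info.reset * 1000 - nowMs);
}
```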
## Handling 429 Too Many Requests
```typescript
async function assessWithRetry(input: AssessInput, maxRetries = 3): Promise<AssessmentResult> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('https://api.govern.archetypal.ai/v1/assessments', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(input),
    });

    if (response.status === 429) {
      const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
      await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
      continue;
    }

    return response.json();
  }
  throw new Error('Rate limit exceeded after retries');
}
```

## The SDK handles this automatically
The TypeScript, Python, and Go SDKs include built-in retry logic with exponential backoff for 429 responses:
```typescript
const govern = new GovernClient({
  apiKey: process.env.GOVERN_API_KEY,
  orgId: process.env.GOVERN_ORG_ID,
  maxRetries: 3,         // default: 3
  retryBackoffMs: 1000,  // default: 1000
});
```

## Probe telemetry rate limits
The GOVERN Probe has separate, higher limits for telemetry endpoints:
| Endpoint | Limit |
|---|---|
| `POST /api/govern/probe/telemetry` | 100 req/s, 500 events/batch |
| `POST /api/govern/probe/heartbeat` | 10 req/min per probe |
| `GET /api/govern/probe/policy-sync` | 10 req/min per probe |
These limits are per probe instance, not per organization.
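A probe with a large event backlog has to respect the 500-events-per-batch cap on `POST /api/govern/probe/telemetry`. A minimal sketch of splitting a backlog into compliant batches — the `ProbeEvent` shape is an assumption for illustration, not the real telemetry schema, and `chunkEvents` is a hypothetical helper:

```typescript
// Assumed event shape for illustration only.
interface ProbeEvent {
  timestamp: number;
  kind: string;
}

// Split a backlog into batches of at most batchSize events each.
// A real probe would then POST each batch to /api/govern/probe/telemetry,
// staying under the 100 req/s endpoint limit.
function chunkEvents<T>(events: T[], batchSize = 500): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < events.length; i += batchSize) {
    batches.push(events.slice(i, i + batchSize));
  }
  return batches;
}
```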
## Burst capacity
Rate limits use a token bucket algorithm. You can briefly exceed the per-minute rate (up to 2x) as long as your average over 5 minutes stays within your plan limit. This handles normal traffic spikes without triggering 429s.
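The behavior above can be illustrated with a small token bucket: the bucket holds up to 2x the per-minute quota, refills at the steady plan rate, and admits a request only while a token is available. This is an illustrative sketch of the general algorithm, not the actual GOVERN implementation:

```typescript
class TokenBucket {
  private tokens: number;

  constructor(
    private ratePerSec: number,       // steady refill rate (plan limit / 60)
    private capacity: number,         // burst headroom (2x the per-minute quota here)
    private lastRefillMs: number = 0, // timestamp of the last refill
  ) {
    this.tokens = capacity;
  }

  // Admit a request at time nowMs; false corresponds to a 429.
  tryConsume(nowMs: number): boolean {
    const elapsedSec = (nowMs - this.lastRefillMs) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefillMs = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// E.g. a Growth plan (1,000 assessments/min) would refill at 1000/60
// tokens per second with a burst capacity of 2,000.
const bucket = new TokenBucket(1000 / 60, 2000);
```

Because refill is continuous, a burst that drains the full 2,000 tokens is absorbed without 429s as long as sustained traffic stays at or below the steady rate.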
## Requesting higher limits
Contact support at govern.archetypal.ai/support or email support@archetypal.ai to request higher rate limits or a custom plan.