# Rate Limits
PreflightAPI enforces two types of throttling: rate limits (requests per 60-second window) and monthly quotas (total calls per billing period). Both depend on your subscription plan.
## Limits by Plan
| Plan | Rate Limit | Monthly Quota |
|---|---|---|
| Student Pilot | 10 requests / 60 sec | 500 calls |
| Private Pilot | 60 requests / 60 sec | 25,000 calls |
| Commercial Pilot | 300 requests / 60 sec | 250,000 calls |
Rate limits are enforced on a sliding 60-second window per subscription key. If you exceed the limit, further requests in that window are rejected with `429 Too Many Requests` until the window resets.
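If you want to stay under the limit client-side, one option is a simple sliding-window throttle of your own. The sketch below is illustrative (the `SlidingWindowLimiter` class is not part of the API) and assumes the Private Pilot limit of 60 requests per 60 seconds.

```typescript
// Minimal client-side sliding-window limiter (illustrative, not part of the API).
// It delays calls so no more than `limit` requests start within any 60-second window.
class SlidingWindowLimiter {
  private timestamps: number[] = []

  constructor(
    private limit: number,     // e.g. 60 for the Private Pilot plan
    private windowMs = 60_000, // 60-second window, matching the gateway
  ) {}

  async acquire(): Promise<void> {
    // Drop timestamps that have fallen out of the window.
    const now = Date.now()
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs)

    if (this.timestamps.length >= this.limit) {
      // Wait until the oldest request in the window expires, then re-check.
      const waitMs = this.windowMs - (now - this.timestamps[0])
      await new Promise((resolve) => setTimeout(resolve, waitMs))
      return this.acquire()
    }

    this.timestamps.push(Date.now())
  }
}

// Usage: await limiter.acquire() before each request.
const limiter = new SlidingWindowLimiter(60)
```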
## Monthly Quotas
In addition to per-minute rate limits, each plan has a monthly quota that caps the total number of API calls in a billing period. Quota counters reset at the start of each monthly billing cycle.
- When you hit your monthly quota, all further requests return `403 Forbidden` until the quota resets.
- You can track your current usage on the dashboard overview page.
- Upgrading your plan immediately increases both your rate limit and monthly quota.
## Rate Limit Headers
Every API response includes headers that let you monitor your rate limit usage in real time:
```http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 58
```

| Header | Description | Present On |
|---|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the current 60-second window | Every response |
| `X-RateLimit-Remaining` | Requests remaining before you hit the rate limit | Every response |
| `Retry-After` | Seconds to wait before retrying | 429 responses only |
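For illustration, these headers can be read directly off a `fetch` response. The sketch below reuses the METARs endpoint and subscription-key header from the retry example later on this page.

```typescript
// Read the rate limit headers from any response (sketch; endpoint and key header
// taken from the retry example later in this guide).
const response = await fetch(
  'https://preflightapi-apim-service-test.azure-api.net/api/v1/metars/KJFK',
  { headers: { 'Ocp-Apim-Subscription-Key': process.env.PREFLIGHT_API_KEY! } },
)

const limit = Number(response.headers.get('X-RateLimit-Limit'))
const remaining = Number(response.headers.get('X-RateLimit-Remaining'))
console.log(`Rate limit: ${remaining}/${limit} requests left in this window`)

if (response.status === 429) {
  // Retry-After is only present on 429 responses.
  console.log(`Retry after ${response.headers.get('Retry-After')} seconds`)
}
```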
## Exceeding Limits
### Rate limit exceeded (429)
When you exceed your per-minute rate limit, the API gateway returns `429 Too Many Requests` with a `Retry-After` header indicating how many seconds to wait. The response body uses the gateway error format:
```json
{
  "statusCode": 429,
  "message": "Rate limit is exceeded. Try again in 52 seconds."
}
```

### Monthly quota exceeded (403)
When you exhaust your monthly quota, the API gateway returns `403 Forbidden`. The quota resets at the start of your next billing cycle. The response body uses the same gateway format:
```json
{
  "statusCode": 403,
  "message": "Out of call volume quota. Quota will be replenished in 06:23:15."
}
```

Both 429 and quota-exceeded 403 responses use the gateway error format (`statusCode` + `message`), not the backend error format. See the error handling guide for details on distinguishing error formats.
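To make the distinction concrete, here is one way you might branch on these two gateway errors. The `GatewayError` interface mirrors the bodies shown above; the handling logic itself is only a sketch, not a prescribed pattern.

```typescript
// Gateway error body shape, as shown in the examples above.
interface GatewayError {
  statusCode: number
  message: string
}

async function handleGatewayErrors(response: Response): Promise<Response> {
  if (response.status === 429) {
    const body = (await response.json()) as GatewayError
    const retryAfter = response.headers.get('Retry-After')
    throw new Error(`Rate limited: ${body.message} (retry after ${retryAfter}s)`)
  }
  if (response.status === 403) {
    // Quota-exceeded 403s use the gateway format; other errors may not.
    const body = (await response.json()) as GatewayError
    throw new Error(`Quota exhausted: ${body.message}`)
  }
  return response
}
```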
## Response Caching
GET responses are cached at the API gateway to reduce latency. Cache duration varies by data type. Cached responses are identical to fresh responses and still count toward your rate limit and monthly quota.
| Endpoint Category | Cache Duration |
|---|---|
| Real-time weather (METARs, PIREPs) | 2 minutes |
| Performance calculations | 2 minutes |
| Forecasts & NOTAMs (TAFs, AIRMETs, SIGMETs, G-AIRMETs, NOTAMs) | 5 minutes |
| Winds aloft | 5 minutes |
| Presigned URLs (airport diagrams, chart supplements) | 10 minutes |
| Static / NASR data (airports, frequencies, airspace, obstacles) | 15 minutes |
Only GET requests are cached. POST endpoints are never cached.
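As an illustration of the "cache locally" advice below, a small in-memory cache keyed by URL can mirror the gateway TTLs. The `cachedGet` helper and its parameters are illustrative choices, not part of the API.

```typescript
// Minimal in-memory GET cache (sketch). Pick a TTL that matches the gateway
// cache duration for the endpoint category, e.g. 2 minutes for METARs.
const cache = new Map<string, { body: unknown; expiresAt: number }>()

async function cachedGet(url: string, ttlMs: number): Promise<unknown> {
  const hit = cache.get(url)
  if (hit && hit.expiresAt > Date.now()) {
    return hit.body // Served locally; does not count against rate limit or quota.
  }

  const response = await fetch(url, {
    headers: { 'Ocp-Apim-Subscription-Key': process.env.PREFLIGHT_API_KEY! },
  })
  const body = await response.json()
  cache.set(url, { body, expiresAt: Date.now() + ttlMs })
  return body
}

// METARs are cached at the gateway for 2 minutes, so mirror that locally.
const metar = await cachedGet(
  'https://preflightapi-apim-service-test.azure-api.net/api/v1/metars/KJFK',
  2 * 60_000,
)
```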
## Monitoring Your Usage
- Dashboard — The dashboard overview shows your current monthly usage and remaining quota at a glance.
- Response headers — Check `X-RateLimit-Remaining` after each request to track your real-time rate limit usage (a short sketch follows this list).
- Proactive alerts — If you're consistently hitting your limits, consider upgrading your plan for higher throughput.
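Here is a minimal sketch of acting on that header proactively. The `throttleIfNearLimit` helper, the threshold of 5 remaining requests, and the one-second pause are illustrative choices, not values defined by the API.

```typescript
// Pause briefly when the current window is nearly exhausted.
// Helper name, threshold, and pause duration are illustrative choices.
async function throttleIfNearLimit(response: Response): Promise<void> {
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'))
  if (!Number.isNaN(remaining) && remaining <= 5) {
    // Give the sliding 60-second window time to roll forward before the next call.
    await new Promise((resolve) => setTimeout(resolve, 1_000))
  }
}
```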
## Best Practices
- Cache locally — Store responses on your side to avoid redundant requests. Match the cache TTL to the gateway cache duration for optimal freshness.
- Use exponential backoff — When you receive a `429`, wait for the `Retry-After` duration before retrying. Use exponential backoff with jitter to avoid thundering herds.
- Monitor headers — Check `X-RateLimit-Remaining` to proactively slow down before hitting the rate limit.
- Batch where possible — Some endpoints accept multiple identifiers in a single call (e.g., fetching METARs for multiple ICAO codes). Use these to reduce the number of requests.
## Retry with Exponential Backoff
Here's a reusable fetch wrapper that automatically retries on 429 responses with exponential backoff and jitter:
```typescript
async function fetchWithRetry(
  url: string,
  options: RequestInit,
  maxRetries = 3,
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options)
    if (response.status !== 429) {
      return response
    }
    if (attempt === maxRetries) {
      throw new Error('Rate limit exceeded after max retries')
    }
    // Use Retry-After header if available, otherwise exponential backoff
    const retryAfter = response.headers.get('Retry-After')
    const baseDelay = retryAfter
      ? parseInt(retryAfter, 10) * 1000
      : Math.pow(2, attempt) * 1000
    // Add random jitter (0-500ms) to prevent thundering herd
    const jitter = Math.random() * 500
    await new Promise((resolve) => setTimeout(resolve, baseDelay + jitter))
  }
  throw new Error('Unreachable')
}

// Usage
const response = await fetchWithRetry(
  'https://preflightapi-apim-service-test.azure-api.net/api/v1/metars/KJFK',
  {
    headers: {
      'Ocp-Apim-Subscription-Key': process.env.PREFLIGHT_API_KEY!,
    },
  },
)
const data: Metar = await response.json()
```