Rate Limits

Rate limits are restrictions that our API imposes on the number of times a user or client can access our services within a specified period of time.

Rate limits are essential for maintaining API reliability and fairness. We implement them for several key reasons:

  • Protection against service disruption - Rate limits prevent malicious actors from overwhelming our API with excessive requests that could degrade performance or cause outages. This safeguards the stability of the Newtrition Data platform for all users.

  • Fair resource allocation - Without rate limits, a single user making excessive requests could consume disproportionate server resources, negatively impacting response times for other users. Rate limits ensure equitable access to our API across all customers.

  • Infrastructure optimization - Rate limits help us manage server load effectively. As API usage grows, they prevent sudden traffic spikes from overwhelming our infrastructure and help maintain consistent performance standards.

Our rate limiting system uses a sliding window approach that measures requests per minute (RPM). Rate limits are applied per user account and span across all access tokens associated with that account. Different API endpoints have varying rate limits based on their computational complexity and resource requirements.

The sliding window resets continuously rather than at fixed intervals. For example, if you have a 60 RPM limit:

  • At 10:00:00, you can make 60 requests
  • At 10:00:30, you can make additional requests as long as the total number of requests over the preceding 60 seconds stays at or below your limit
  • The window continuously slides forward, maintaining the rate limit over any 60-second period
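The sliding-window behavior described above can be modeled client-side with a short sketch. This is an illustrative model of how a 60 RPM limit behaves, not part of the Newtrition Data API itself; the class and method names are assumptions:

```javascript
// Minimal sketch of a sliding-window rate limiter: a request is allowed
// only if fewer than `limit` requests occurred in the last `windowMs`.
class SlidingWindowLimiter {
  constructor(limit, windowMs = 60_000) {
    this.limit = limit;        // max requests per window (e.g. 60 RPM)
    this.windowMs = windowMs;  // window length in milliseconds
    this.timestamps = [];      // timestamps of recent requests
  }

  // Returns true if a request at time `nowMs` is within the limit.
  allow(nowMs = Date.now()) {
    // Drop timestamps that have fallen out of the sliding window.
    this.timestamps = this.timestamps.filter(t => nowMs - t < this.windowMs);
    if (this.timestamps.length >= this.limit) {
      return false; // window is full; request would exceed the limit
    }
    this.timestamps.push(nowMs);
    return true;
  }
}
```

Because the window slides continuously, capacity frees up one request at a time as old requests age out, rather than all at once at a fixed boundary.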

Rate limits vary by API endpoint to reflect their different resource requirements:

Endpoint Category       Rate Limit (RPM)   Examples
Food Lookup             60                 /food/:id
Search                  120                /search
Nutrition Evaluation    60                 /evaluate/nutrition-summary, /evaluate/macro-distribution, /evaluate/healthy-eating-index

You can monitor your rate limit status through HTTP response headers included with every API response:

Header Name             Sample Value   Description
X-RateLimit-Limit       60             Maximum number of requests allowed per minute for this endpoint
X-RateLimit-Used        15             Number of requests used in the current window
X-RateLimit-Remaining   45             Number of requests remaining in the current window
X-RateLimit-Reset       1703123456     Unix timestamp (seconds) when the rate limit window resets
HTTP/1.1 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Used: 15
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1703123456
Content-Type: application/json
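These headers can be read programmatically from any fetch response. The following helper is an illustrative sketch (the function name and returned field names are assumptions, not part of our SDK); the header names match the table above:

```javascript
// Illustrative helper: extract rate limit state from a fetch Headers object.
function readRateLimit(headers) {
  return {
    limit: Number(headers.get('X-RateLimit-Limit')),         // max requests per minute
    used: Number(headers.get('X-RateLimit-Used')),           // requests consumed this window
    remaining: Number(headers.get('X-RateLimit-Remaining')), // requests left this window
    // X-RateLimit-Reset is a Unix timestamp in seconds; convert to a Date.
    resetAt: new Date(Number(headers.get('X-RateLimit-Reset')) * 1000),
  };
}
```

For example, `readRateLimit(response.headers)` after a fetch call yields an object you can log or feed into usage alerts.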

When you exceed your rate limit, the API returns a 429 Too Many Requests status code with details about when you can retry:

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Please retry after 30 seconds.",
    "details": {
      "limit": 60,
      "used": 60,
      "reset_at": "2024-12-01T10:01:00Z"
    }
  }
}

The response will also include an X-Retry-After header indicating the number of seconds to wait before retrying.

One effective way to handle rate limit errors is to automatically retry requests with exponential backoff:

async function apiRequestWithBackoff(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);
      if (response.status !== 429) {
        return response; // success or a non-rate-limit error: hand back to the caller
      }
      if (attempt === maxRetries) {
        throw new Error('Max retries exceeded');
      }
      // Use X-Retry-After header if available, otherwise exponential backoff
      const retryAfter = response.headers.get('X-Retry-After');
      const delay = retryAfter
        ? parseInt(retryAfter, 10) * 1000
        : Math.pow(2, attempt) * 1000 + Math.random() * 1000; // random jitter
      await new Promise(resolve => setTimeout(resolve, delay));
    } catch (error) {
      // Network failures are retried as well, until maxRetries is reached
      if (attempt === maxRetries) {
        throw error;
      }
    }
  }
}

This approach provides several benefits:

  • Automatic recovery from rate limit errors without crashes or missing data
  • Respects server guidance by using the X-Retry-After header when available
  • Progressive delays allow quick retries initially, with longer delays for persistent issues
  • Random jitter prevents all clients from retrying simultaneously

Caching API responses is an effective strategy to reduce request frequency and stay within rate limits.

For detailed caching implementation guidance and legal requirements, see our Caching API Responses documentation.
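As a starting point, even a small in-memory cache with a time-to-live (TTL) can absorb repeated lookups before they become API calls. The sketch below is a minimal illustration, not the approach mandated by the caching documentation; the class name and TTL values are assumptions:

```javascript
// Minimal in-memory cache with per-entry expiry, keyed by request URL or
// any other string. Expired entries are treated as misses.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;       // how long entries stay valid, in milliseconds
    this.entries = new Map(); // key -> { value, expiresAt }
  }

  get(key, nowMs = Date.now()) {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= nowMs) {
      this.entries.delete(key); // expired or absent: report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key, value, nowMs = Date.now()) {
    this.entries.set(key, { value, expiresAt: nowMs + this.ttlMs });
  }
}
```

Checking the cache before each request turns repeated lookups of the same food or search query into zero-cost hits, which directly reduces your RPM consumption.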

Track your API usage patterns by monitoring the X-RateLimit-Used and X-RateLimit-Remaining headers in every response. Consider implementing automated alerts when you reach 80% of your rate limit to prevent unexpected service interruptions.
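An 80% alert can be derived directly from the headers described above. The following is an illustrative sketch (the function name, threshold default, and warn callback are assumptions):

```javascript
// Warn when usage in the current window crosses a threshold fraction of
// the limit. Returns true if the warning fired, false otherwise.
function checkUsage(headers, threshold = 0.8, warn = console.warn) {
  const limit = Number(headers.get('X-RateLimit-Limit'));
  const used = Number(headers.get('X-RateLimit-Used'));
  const ratio = used / limit;
  if (ratio >= threshold) {
    warn(`Rate limit at ${Math.round(ratio * 100)}% (${used}/${limit})`);
    return true;
  }
  return false;
}
```

Calling `checkUsage(response.headers)` after each request gives an early signal well before the API starts returning 429 responses.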

When rate limit errors occur, implement exponential backoff retry logic that respects the X-Retry-After header. Always provide meaningful feedback to users during delays and maintain comprehensive logs to identify usage patterns that consistently trigger rate limits.

Implement response caching to reduce API calls. Review your application’s request patterns regularly to identify opportunities for efficiency improvements that keep you comfortably within rate limits.

Our rate limits are designed to accommodate typical usage patterns while protecting service stability. If the current rate limits interfere with your legitimate use case, please contact our support team at support@newtrition.com. We’ll work with you to find a suitable solution that meets your needs while maintaining service reliability for all users.

When contacting support, please include:

  • Your use case and expected request patterns
  • Current rate limits that are causing issues
  • Approximate request volume requirements
  • Any relevant business context