Rate Limits & Tiered Access
IPBot provides tiered rate limits based on your authentication level. This guide covers rate limit tiers, rate limit headers, caching strategies, and best practices for high-volume usage.
Rate Limit Tiers
| Tier | Rate Limit | How to Get |
|---|---|---|
| Anonymous | 60 req/min | No authentication required |
| Free | 200 req/min | Sign in with GitHub at /login |
| Pro | 600 req/min | Contact us for enterprise access |
Getting an API Key
1. Visit ipbot.com/login and sign in with GitHub
2. Access your Dashboard to view your API key
3. Copy your API key (shown once after account creation)
4. Create additional keys as needed
Using Your API Key
Include the X-API-Key header in your requests:
```shell
curl -H "X-API-Key: ipb_free_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  https://api.ipbot.com/8.8.8.8
```

Rate Limit Headers
Every response includes rate limit headers:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 195
X-RateLimit-Reset: 1704067260
X-RateLimit-Tier: free
Content-Type: application/json
```

| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed per minute |
| X-RateLimit-Remaining | Requests remaining in current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
| X-RateLimit-Tier | Your current tier (anonymous, free, pro) |
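These headers can drive proactive client-side throttling, pausing before a 429 ever occurs. A minimal sketch (the helper name and the threshold of 5 remaining requests are illustrative choices, not part of the API):

```javascript
// Decide how long to pause based on the rate limit headers above.
// Accepts any object with a get(name) method, such as a fetch Headers instance.
function rateLimitDelay(headers, threshold = 5) {
  const remaining = parseInt(headers.get("X-RateLimit-Remaining"), 10);
  const reset = parseInt(headers.get("X-RateLimit-Reset"), 10);

  // Plenty of budget left: no need to wait
  if (remaining > threshold) return 0;

  // Close to the limit: wait until the window resets
  const waitMs = reset * 1000 - Date.now();
  return Math.max(waitMs, 0);
}
```

A caller would sleep for `rateLimitDelay(response.headers)` milliseconds before issuing the next request.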
Rate Limit Exceeded Response
When you exceed your rate limit, the API returns HTTP 429:
```http
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704067320
X-RateLimit-Tier: anonymous
Content-Type: application/json

{
  "error": "Rate limit exceeded",
  "code": "RATE_LIMITED",
  "details": {
    "limit": 60,
    "remaining": 0,
    "reset_at": "2026-01-07T12:02:00Z",
    "upgrade_url": "https://ipbot.com/pricing"
  }
}
```

Caching Strategies
IP geolocation data is relatively stable: geographic assignments change infrequently, and even security signals update on the scale of hours rather than seconds. Proper caching dramatically reduces API calls while maintaining data freshness.
Cache TTL Recommendations
| Data Type | Recommended TTL | Rationale |
|---|---|---|
| Geolocation (country, city) | 24-48 hours | Rarely changes |
| ASN/Organization | 12-24 hours | Occasionally updated |
| Security/Risk Score | 1-4 hours | Updates more frequently |
| Threat Lists | 30-60 minutes | Real-time threat data |
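The table above can be encoded as a small lookup helper. A sketch (the category keys are illustrative, and the values use the low end of each recommended range):

```javascript
// Cache TTLs in seconds, taken from the low end of each range recommended above.
const TTL_SECONDS = {
  geolocation: 24 * 3600, // 24-48 hours
  asn: 12 * 3600, // 12-24 hours
  security: 1 * 3600, // 1-4 hours
  threat_lists: 30 * 60, // 30-60 minutes
};

function ttlFor(dataType) {
  // Fall back to the shortest TTL for unknown categories
  return TTL_SECONDS[dataType] ?? 30 * 60;
}
```

Falling back to the shortest TTL for unrecognized categories errs on the side of freshness.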
Endpoint Caching Behavior
GET /{ip} - Cacheable (default 24h)
Static IP lookups return consistent results and include cache-friendly headers:
```http
Cache-Control: public, max-age=86400
ETag: "abc123def456"
```

GET / - Not Cacheable
Auto-detection of client IP cannot be cached as it varies per request:
```http
Cache-Control: no-store, no-cache, must-revalidate
```

Implementation Examples
In-Memory Cache (Node.js)
```javascript
class IPCache {
  constructor(options = {}) {
    this.cache = new Map();
    this.defaultTTL = options.ttl || 86400000; // 24 hours
    this.maxSize = options.maxSize || 10000;
  }

  get(ip) {
    const entry = this.cache.get(ip);
    if (!entry) return null;

    if (Date.now() > entry.expires) {
      this.cache.delete(ip);
      return null;
    }

    return entry.data;
  }

  set(ip, data, ttl = this.defaultTTL) {
    // Evict oldest entries if at capacity
    if (this.cache.size >= this.maxSize) {
      const oldestKey = this.cache.keys().next().value;
      this.cache.delete(oldestKey);
    }

    this.cache.set(ip, {
      data,
      expires: Date.now() + ttl,
    });
  }

  // Variable TTL based on risk level
  setWithRiskTTL(ip, data) {
    const riskScore = data.security?.risk_score || 0;
    let ttl;

    if (riskScore >= 70) {
      ttl = 1800000; // 30 minutes for high-risk
    } else if (riskScore >= 40) {
      ttl = 3600000; // 1 hour for medium-risk
    } else {
      ttl = 86400000; // 24 hours for low-risk
    }

    this.set(ip, data, ttl);
  }
}

// Usage
const ipCache = new IPCache({ maxSize: 50000 });

async function getIPData(ip) {
  // Check cache first
  const cached = ipCache.get(ip);
  if (cached) return cached;

  // Fetch from API
  const response = await fetch(`https://api.ipbot.com/${ip}`);
  const data = await response.json();

  // Cache with risk-aware TTL
  ipCache.setWithRiskTTL(ip, data);

  return data;
}
```

Redis Cache (Node.js)
```javascript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL);

async function getIPDataWithRedis(ip) {
  const cacheKey = `ipbot:${ip}`;

  // Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Fetch from API
  const response = await fetch(`https://api.ipbot.com/${ip}`);
  const data = await response.json();

  // Determine TTL based on risk
  const riskScore = data.security?.risk_score || 0;
  const ttl = riskScore >= 70 ? 1800 : riskScore >= 40 ? 3600 : 86400;

  // Store in Redis
  await redis.setex(cacheKey, ttl, JSON.stringify(data));

  return data;
}
```

Python Cache with TTL
```python
from datetime import datetime, timedelta
import threading

import requests

class IPBotCache:
    def __init__(self, default_ttl=86400):
        self._cache = {}
        self._lock = threading.RLock()
        self.default_ttl = default_ttl

    def get(self, ip: str) -> dict | None:
        with self._lock:
            if ip not in self._cache:
                return None

            entry = self._cache[ip]
            if datetime.now() > entry['expires']:
                del self._cache[ip]
                return None

            return entry['data']

    def set(self, ip: str, data: dict, ttl: int = None) -> None:
        if ttl is None:
            ttl = self.default_ttl

        with self._lock:
            self._cache[ip] = {
                'data': data,
                'expires': datetime.now() + timedelta(seconds=ttl)
            }

    def set_with_risk_ttl(self, ip: str, data: dict) -> None:
        risk_score = data.get('security', {}).get('risk_score', 0)

        if risk_score >= 70:
            ttl = 1800  # 30 minutes
        elif risk_score >= 40:
            ttl = 3600  # 1 hour
        else:
            ttl = 86400  # 24 hours

        self.set(ip, data, ttl)

# Global cache instance
cache = IPBotCache()

def get_ip_data(ip: str) -> dict:
    # Check cache
    cached = cache.get(ip)
    if cached:
        return cached

    # Fetch from API
    response = requests.get(f'https://api.ipbot.com/{ip}')
    data = response.json()

    # Cache with risk-aware TTL
    cache.set_with_risk_ttl(ip, data)

    return data
```

Handling Rate Limits Gracefully
Exponential Backoff
When rate limited, use exponential backoff to retry:
```javascript
async function fetchWithBackoff(ip, maxRetries = 5) {
  let retries = 0;
  let delay = 1000; // Start with 1 second

  while (retries < maxRetries) {
    const response = await fetch(`https://api.ipbot.com/${ip}`);

    if (response.status === 429) {
      // Check Retry-After header
      const retryAfter = response.headers.get("Retry-After");
      if (retryAfter) {
        delay = parseInt(retryAfter, 10) * 1000;
      }

      console.log(`Rate limited. Retrying in ${delay}ms...`);
      await sleep(delay);

      // Exponential backoff with jitter
      delay = Math.min(delay * 2, 60000) + Math.random() * 1000;
      retries++;
      continue;
    }

    if (!response.ok) {
      throw new Error(`API error: ${response.status}`);
    }

    return response.json();
  }

  throw new Error("Max retries exceeded");
}

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```

Request Queue with Rate Limiting
For high-volume applications, implement a request queue:
```javascript
class RateLimitedQueue {
  constructor(requestsPerSecond = 15) {
    this.queue = [];
    this.processing = false;
    this.interval = 1000 / requestsPerSecond;
    this.lastRequest = 0;
  }

  async add(ip) {
    return new Promise((resolve, reject) => {
      this.queue.push({ ip, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;

    this.processing = true;

    while (this.queue.length > 0) {
      const now = Date.now();
      const timeSinceLastRequest = now - this.lastRequest;

      if (timeSinceLastRequest < this.interval) {
        await sleep(this.interval - timeSinceLastRequest);
      }

      const { ip, resolve, reject } = this.queue.shift();

      try {
        const response = await fetch(`https://api.ipbot.com/${ip}`);
        const data = await response.json();
        this.lastRequest = Date.now();
        resolve(data);
      } catch (error) {
        reject(error);
      }
    }

    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(15); // 15 requests per second

const results = await Promise.all([
  queue.add("8.8.8.8"),
  queue.add("1.1.1.1"),
  queue.add("9.9.9.9"),
]);
```

Circuit Breaker Pattern
Prevent cascading failures with a circuit breaker:
```javascript
class CircuitBreaker {
  constructor(options = {}) {
    this.failureThreshold = options.failureThreshold || 5;
    this.resetTimeout = options.resetTimeout || 60000;
    this.failures = 0;
    this.state = "CLOSED"; // CLOSED, OPEN, HALF_OPEN
    this.nextAttempt = 0;
  }

  async execute(fn) {
    if (this.state === "OPEN") {
      if (Date.now() < this.nextAttempt) {
        throw new Error("Circuit breaker is OPEN");
      }
      this.state = "HALF_OPEN";
    }

    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failures = 0;
    this.state = "CLOSED";
  }

  onFailure() {
    this.failures++;

    if (this.failures >= this.failureThreshold) {
      this.state = "OPEN";
      this.nextAttempt = Date.now() + this.resetTimeout;
    }
  }
}

// Usage
const breaker = new CircuitBreaker();

async function getIPSafe(ip) {
  return breaker.execute(async () => {
    const response = await fetch(`https://api.ipbot.com/${ip}`);
    if (!response.ok) throw new Error(`API error: ${response.status}`);
    return response.json();
  });
}
```

Best Practices for High-Volume Usage
1. Batch Requests Where Possible
If you need to look up multiple IPs, batch them to reduce overhead:
```javascript
async function batchLookup(ips) {
  // Use cache for known IPs
  const results = {};
  const uncached = [];

  for (const ip of ips) {
    const cached = cache.get(ip);
    if (cached) {
      results[ip] = cached;
    } else {
      uncached.push(ip);
    }
  }

  // Fetch uncached IPs in parallel (respecting rate limits)
  const fetched = await Promise.all(
    uncached.map((ip) =>
      fetchWithBackoff(ip).then((data) => {
        cache.set(ip, data);
        return { ip, data };
      }),
    ),
  );

  for (const { ip, data } of fetched) {
    results[ip] = data;
  }

  return results;
}
```

2. Implement Request Deduplication
Prevent duplicate requests for the same IP:
```javascript
class DedupedFetcher {
  constructor() {
    this.pending = new Map();
  }

  async fetch(ip) {
    // Return existing promise if request in flight
    if (this.pending.has(ip)) {
      return this.pending.get(ip);
    }

    // Create new request
    const promise = fetch(`https://api.ipbot.com/${ip}`)
      .then((r) => r.json())
      .finally(() => {
        this.pending.delete(ip);
      });

    this.pending.set(ip, promise);
    return promise;
  }
}

const fetcher = new DedupedFetcher();

// These will share the same request
const [result1, result2] = await Promise.all([
  fetcher.fetch("8.8.8.8"),
  fetcher.fetch("8.8.8.8"),
]);
```

3. Use Conditional Requests
If you’ve cached data, use ETags for efficient revalidation:
```javascript
async function fetchWithETag(ip, cachedData, etag) {
  const headers = {};
  if (etag) {
    headers["If-None-Match"] = etag;
  }

  const response = await fetch(`https://api.ipbot.com/${ip}`, { headers });

  if (response.status === 304) {
    // Data hasn't changed, use cached version
    return { data: cachedData, etag };
  }

  const newETag = response.headers.get("ETag");
  const data = await response.json();

  return { data, etag: newETag };
}
```

4. Monitor Your Usage
Track your API usage to stay within limits:
```javascript
class UsageMonitor {
  constructor() {
    this.requests = [];
    this.window = 60000; // 1 minute
  }

  record() {
    const now = Date.now();
    this.requests.push(now);

    // Clean old entries
    this.requests = this.requests.filter((t) => now - t < this.window);
  }

  getRate() {
    const now = Date.now();
    const recentRequests = this.requests.filter((t) => now - t < this.window);
    return recentRequests.length;
  }

  shouldThrottle(limit = 180) {
    // Throttle at 90% of the free-tier limit (200 req/min)
    return this.getRate() >= limit;
  }
}

const monitor = new UsageMonitor();

async function monitoredFetch(ip) {
  if (monitor.shouldThrottle()) {
    console.warn("Approaching rate limit, slowing down...");
    await sleep(1000);
  }

  monitor.record();
  return fetch(`https://api.ipbot.com/${ip}`);
}
```

CDN and Edge Caching
For global applications, consider caching at the edge:
Cloudflare Workers Example
```javascript
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const ip = url.pathname.slice(1);

    // Check KV cache
    const cached = await env.IP_CACHE.get(ip);
    if (cached) {
      return new Response(cached, {
        headers: { "Content-Type": "application/json", "X-Cache": "HIT" },
      });
    }

    // Fetch from origin
    const response = await fetch(`https://api.ipbot.com/${ip}`);
    const data = await response.text();

    // Cache for 24 hours
    await env.IP_CACHE.put(ip, data, { expirationTtl: 86400 });

    return new Response(data, {
      headers: { "Content-Type": "application/json", "X-Cache": "MISS" },
    });
  },
};
```

Vercel Edge Config
```javascript
import { get } from "@vercel/edge-config";

export const config = { runtime: "edge" };

export default async function handler(request) {
  const ip = new URL(request.url).searchParams.get("ip");

  // Try edge config cache
  const cached = await get(`ip:${ip}`);
  if (cached) {
    return Response.json(cached);
  }

  const response = await fetch(`https://api.ipbot.com/${ip}`);
  return response;
}
```

Troubleshooting
Common Issues
429 Too Many Requests
- Implement caching to reduce requests
- Add exponential backoff for retries
- Check if you’re making duplicate requests
Slow Response Times
- Enable HTTP keep-alive
- Use connection pooling
- Consider edge caching for global traffic
Inconsistent Data
- Check cache TTL settings
- Verify you’re not caching error responses
- Use ETag validation for stale data
Debug Headers
Add debug headers to your requests for troubleshooting:
```javascript
const response = await fetch(`https://api.ipbot.com/${ip}`, {
  headers: {
    "X-Request-ID": crypto.randomUUID(),
    "X-Client-Version": "1.0.0",
  },
});

console.log("Request ID:", response.headers.get("X-Request-ID"));
console.log(
  "Rate Limit Remaining:",
  response.headers.get("X-RateLimit-Remaining"),
);
```

Summary
Effective rate limit management combines:
- Caching: Store results locally with appropriate TTLs
- Backoff: Handle 429 responses with exponential retry
- Queuing: Control request rate for high-volume workloads
- Monitoring: Track usage to prevent hitting limits
For most applications, a simple cache with a 24-hour TTL will reduce API calls by 90% or more while maintaining data accuracy.
Resources:
- API Reference - Complete endpoint documentation
- Code Examples - Integration samples
- IP Reputation API - Risk scoring details