
Rate Limits & Tiered Access

IPBot provides tiered rate limits based on your authentication level. This guide covers rate limit tiers, rate limit headers, caching strategies, and best practices for high-volume usage.

| Tier      | Rate Limit  | How to Get                       |
| --------- | ----------- | -------------------------------- |
| Anonymous | 60 req/min  | No authentication required       |
| Free      | 200 req/min | Sign in with GitHub at /login    |
| Pro       | 600 req/min | Contact us for enterprise access |
To get a Free tier API key:

  1. Visit ipbot.com/login and sign in with GitHub
  2. Access your Dashboard to view your API key
  3. Copy your API key (shown once after account creation)
  4. Create additional keys as needed

Include the X-API-Key header in your requests:

curl -H "X-API-Key: ipb_free_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
https://api.ipbot.com/8.8.8.8
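The same header can be set from JavaScript. A minimal sketch (the `authorizedOptions` helper is illustrative, not part of an official SDK):

```javascript
// Hypothetical helper mirroring the curl example: build fetch options
// carrying the API key header.
function authorizedOptions(apiKey) {
  return { headers: { "X-API-Key": apiKey } };
}

// Usage sketch (network call):
// const res = await fetch("https://api.ipbot.com/8.8.8.8",
//   authorizedOptions(process.env.IPBOT_API_KEY));
```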

Every response includes rate limit headers:

HTTP/1.1 200 OK
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 195
X-RateLimit-Reset: 1704067260
X-RateLimit-Tier: free
Content-Type: application/json
| Header                | Description                              |
| --------------------- | ---------------------------------------- |
| X-RateLimit-Limit     | Maximum requests allowed per minute      |
| X-RateLimit-Remaining | Requests remaining in current window     |
| X-RateLimit-Reset     | Unix timestamp when the window resets    |
| X-RateLimit-Tier      | Your current tier (anonymous, free, pro) |
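These headers can be read programmatically so a client can slow down before hitting the limit. A minimal sketch (the `parseRateLimit` helper is an assumption, not an official client function); it accepts any object with a `get(name)` method, such as the `Headers` of a fetch response:

```javascript
// Hypothetical helper: collect the rate limit headers into a plain object.
function parseRateLimit(headers) {
  return {
    limit: Number(headers.get("X-RateLimit-Limit")),
    remaining: Number(headers.get("X-RateLimit-Remaining")),
    // X-RateLimit-Reset is a Unix timestamp in seconds
    reset: new Date(Number(headers.get("X-RateLimit-Reset")) * 1000),
    tier: headers.get("X-RateLimit-Tier"),
  };
}

// Usage sketch:
// const res = await fetch("https://api.ipbot.com/8.8.8.8");
// const { remaining, reset } = parseRateLimit(res.headers);
// if (remaining < 10) console.warn(`Low quota; window resets at ${reset}`);
```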

When you exceed your rate limit, the API returns HTTP 429:

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704067320
X-RateLimit-Tier: anonymous
Content-Type: application/json
{
  "error": "Rate limit exceeded",
  "code": "RATE_LIMITED",
  "details": {
    "limit": 60,
    "remaining": 0,
    "reset_at": "2026-01-07T12:02:00Z",
    "upgrade_url": "https://ipbot.com/pricing"
  }
}
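The `reset_at` field tells you exactly how long to pause before retrying. A small sketch for computing the wait (the `msUntilReset` helper is illustrative):

```javascript
// Hypothetical helper: given the parsed 429 error body, compute how long
// to wait (in ms) before retrying; zero if the window has already reset.
function msUntilReset(errorBody, now = Date.now()) {
  const resetAt = Date.parse(errorBody.details.reset_at);
  return Math.max(0, resetAt - now);
}
```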

IP geolocation data is inherently stable. Geographic assignments change infrequently, and even security signals update slowly. Proper caching dramatically reduces API calls while maintaining data freshness.

| Data Type                   | Recommended TTL | Rationale               |
| --------------------------- | --------------- | ----------------------- |
| Geolocation (country, city) | 24-48 hours     | Rarely changes          |
| ASN/Organization            | 12-24 hours     | Occasionally updated    |
| Security/Risk Score         | 1-4 hours       | Updates more frequently |
| Threat Lists                | 30-60 minutes   | Real-time threat data   |
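The table above can be expressed as a lookup. A sketch using midpoint values from the suggested ranges (the specific numbers and key names are assumptions; tune them for your workload):

```javascript
// Recommended TTLs in seconds, one per data type from the table above.
// Values are midpoints of the suggested ranges; adjust as needed.
const RECOMMENDED_TTL = {
  geolocation: 36 * 3600,  // 24-48 hours
  asn: 18 * 3600,          // 12-24 hours
  security: 2 * 3600,      // 1-4 hours
  threat_lists: 45 * 60,   // 30-60 minutes
};

function ttlFor(dataType) {
  return RECOMMENDED_TTL[dataType] ?? 24 * 3600; // default to 24 hours
}
```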

GET /{ip} - Cacheable (default 24h)

Static IP lookups return consistent results and include cache-friendly headers:

Cache-Control: public, max-age=86400
ETag: "abc123def456"
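A client-side cache can honor the server's TTL directly by parsing `max-age` out of `Cache-Control`. A minimal sketch (the `maxAgeSeconds` helper is illustrative):

```javascript
// Hypothetical helper: extract max-age from a Cache-Control value so a
// local cache can mirror the server's TTL; null when no max-age is set.
function maxAgeSeconds(cacheControl) {
  const match = /max-age=(\d+)/.exec(cacheControl || "");
  return match ? Number(match[1]) : null;
}
```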

GET / - Not Cacheable

Auto-detection of client IP cannot be cached as it varies per request:

Cache-Control: no-store, no-cache, must-revalidate
In-memory cache (JavaScript):

class IPCache {
  constructor(options = {}) {
    this.cache = new Map();
    this.defaultTTL = options.ttl || 86400000; // 24 hours
    this.maxSize = options.maxSize || 10000;
  }

  get(ip) {
    const entry = this.cache.get(ip);
    if (!entry) return null;
    if (Date.now() > entry.expires) {
      this.cache.delete(ip);
      return null;
    }
    return entry.data;
  }

  set(ip, data, ttl = this.defaultTTL) {
    // Evict oldest entries if at capacity
    if (this.cache.size >= this.maxSize) {
      const oldestKey = this.cache.keys().next().value;
      this.cache.delete(oldestKey);
    }
    this.cache.set(ip, {
      data,
      expires: Date.now() + ttl,
    });
  }

  // Variable TTL based on risk level
  setWithRiskTTL(ip, data) {
    const riskScore = data.security?.risk_score || 0;
    let ttl;
    if (riskScore >= 70) {
      ttl = 1800000; // 30 minutes for high-risk
    } else if (riskScore >= 40) {
      ttl = 3600000; // 1 hour for medium-risk
    } else {
      ttl = 86400000; // 24 hours for low-risk
    }
    this.set(ip, data, ttl);
  }
}

// Usage
const ipCache = new IPCache({ maxSize: 50000 });

async function getIPData(ip) {
  // Check cache first
  const cached = ipCache.get(ip);
  if (cached) return cached;

  // Fetch from API
  const response = await fetch(`https://api.ipbot.com/${ip}`);
  const data = await response.json();

  // Cache with risk-aware TTL
  ipCache.setWithRiskTTL(ip, data);
  return data;
}
Redis cache (Node.js):

import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL);

async function getIPDataWithRedis(ip) {
  const cacheKey = `ipbot:${ip}`;

  // Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Fetch from API
  const response = await fetch(`https://api.ipbot.com/${ip}`);
  const data = await response.json();

  // Determine TTL based on risk
  const riskScore = data.security?.risk_score || 0;
  const ttl = riskScore >= 70 ? 1800 : riskScore >= 40 ? 3600 : 86400;

  // Store in Redis
  await redis.setex(cacheKey, ttl, JSON.stringify(data));
  return data;
}
Python in-memory cache:

from datetime import datetime, timedelta
import threading

import requests


class IPBotCache:
    def __init__(self, default_ttl=86400):
        self._cache = {}
        self._lock = threading.RLock()
        self.default_ttl = default_ttl

    def get(self, ip: str) -> dict | None:
        with self._lock:
            if ip not in self._cache:
                return None
            entry = self._cache[ip]
            if datetime.now() > entry['expires']:
                del self._cache[ip]
                return None
            return entry['data']

    def set(self, ip: str, data: dict, ttl: int | None = None) -> None:
        if ttl is None:
            ttl = self.default_ttl
        with self._lock:
            self._cache[ip] = {
                'data': data,
                'expires': datetime.now() + timedelta(seconds=ttl)
            }

    def set_with_risk_ttl(self, ip: str, data: dict) -> None:
        risk_score = data.get('security', {}).get('risk_score', 0)
        if risk_score >= 70:
            ttl = 1800  # 30 minutes
        elif risk_score >= 40:
            ttl = 3600  # 1 hour
        else:
            ttl = 86400  # 24 hours
        self.set(ip, data, ttl)


# Global cache instance
cache = IPBotCache()

def get_ip_data(ip: str) -> dict:
    # Check cache
    cached = cache.get(ip)
    if cached:
        return cached
    # Fetch from API
    response = requests.get(f'https://api.ipbot.com/{ip}')
    data = response.json()
    # Cache with risk-aware TTL
    cache.set_with_risk_ttl(ip, data)
    return data

When rate limited, use exponential backoff to retry:

async function fetchWithBackoff(ip, maxRetries = 5) {
  let retries = 0;
  let delay = 1000; // Start with 1 second

  while (retries < maxRetries) {
    const response = await fetch(`https://api.ipbot.com/${ip}`);

    if (response.status === 429) {
      // Check Retry-After header
      const retryAfter = response.headers.get("Retry-After");
      if (retryAfter) {
        delay = parseInt(retryAfter, 10) * 1000;
      }
      console.log(`Rate limited. Retrying in ${delay}ms...`);
      await sleep(delay);
      // Exponential backoff with jitter
      delay = Math.min(delay * 2, 60000) + Math.random() * 1000;
      retries++;
      continue;
    }

    if (!response.ok) {
      throw new Error(`API error: ${response.status}`);
    }
    return response.json();
  }
  throw new Error("Max retries exceeded");
}

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

For high-volume applications, implement a request queue:

class RateLimitedQueue {
  constructor(requestsPerSecond = 15) {
    this.queue = [];
    this.processing = false;
    this.interval = 1000 / requestsPerSecond;
    this.lastRequest = 0;
  }

  async add(ip) {
    return new Promise((resolve, reject) => {
      this.queue.push({ ip, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const now = Date.now();
      const timeSinceLastRequest = now - this.lastRequest;
      if (timeSinceLastRequest < this.interval) {
        await sleep(this.interval - timeSinceLastRequest);
      }

      const { ip, resolve, reject } = this.queue.shift();
      try {
        const response = await fetch(`https://api.ipbot.com/${ip}`);
        const data = await response.json();
        this.lastRequest = Date.now();
        resolve(data);
      } catch (error) {
        reject(error);
      }
    }
    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(15); // 15 requests per second
const results = await Promise.all([
  queue.add("8.8.8.8"),
  queue.add("1.1.1.1"),
  queue.add("9.9.9.9"),
]);

Prevent cascading failures with a circuit breaker:

class CircuitBreaker {
  constructor(options = {}) {
    this.failureThreshold = options.failureThreshold || 5;
    this.resetTimeout = options.resetTimeout || 60000;
    this.failures = 0;
    this.state = "CLOSED"; // CLOSED, OPEN, HALF_OPEN
    this.nextAttempt = 0;
  }

  async execute(fn) {
    if (this.state === "OPEN") {
      if (Date.now() < this.nextAttempt) {
        throw new Error("Circuit breaker is OPEN");
      }
      this.state = "HALF_OPEN";
    }
    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failures = 0;
    this.state = "CLOSED";
  }

  onFailure() {
    this.failures++;
    if (this.failures >= this.failureThreshold) {
      this.state = "OPEN";
      this.nextAttempt = Date.now() + this.resetTimeout;
    }
  }
}

// Usage
const breaker = new CircuitBreaker();

async function getIPSafe(ip) {
  return breaker.execute(async () => {
    const response = await fetch(`https://api.ipbot.com/${ip}`);
    if (!response.ok) throw new Error(`API error: ${response.status}`);
    return response.json();
  });
}

If you need to look up multiple IPs, batch them to reduce overhead:

async function batchLookup(ips) {
  // Serve known IPs from the cache
  const results = {};
  const uncached = [];

  for (const ip of ips) {
    const cached = ipCache.get(ip);
    if (cached) {
      results[ip] = cached;
    } else {
      uncached.push(ip);
    }
  }

  // Fetch uncached IPs in parallel (respecting rate limits)
  const fetched = await Promise.all(
    uncached.map((ip) =>
      fetchWithBackoff(ip).then((data) => {
        ipCache.set(ip, data);
        return { ip, data };
      }),
    ),
  );

  for (const { ip, data } of fetched) {
    results[ip] = data;
  }
  return results;
}
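For very large IP lists, even a parallel fetch of the uncached entries can burst past a per-minute limit; chunking the input keeps each burst bounded. A sketch (the `chunk` helper and the chunk size are assumptions; tune per tier):

```javascript
// Hypothetical helper: split a large IP list into fixed-size chunks so
// each burst of parallel lookups stays within the rate limit.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch: process 50 IPs per burst, pausing between bursts
// (batchLookup and sleep are defined above).
// for (const group of chunk(allIPs, 50)) {
//   Object.assign(results, await batchLookup(group));
//   await sleep(1000);
// }
```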

Prevent duplicate requests for the same IP:

class DedupedFetcher {
  constructor() {
    this.pending = new Map();
  }

  async fetch(ip) {
    // Return existing promise if request in flight
    if (this.pending.has(ip)) {
      return this.pending.get(ip);
    }

    // Create new request
    const promise = fetch(`https://api.ipbot.com/${ip}`)
      .then((r) => r.json())
      .finally(() => {
        this.pending.delete(ip);
      });
    this.pending.set(ip, promise);
    return promise;
  }
}

const fetcher = new DedupedFetcher();

// These will share the same request
const [result1, result2] = await Promise.all([
  fetcher.fetch("8.8.8.8"),
  fetcher.fetch("8.8.8.8"),
]);

If you’ve cached data, use ETags for efficient revalidation:

async function fetchWithETag(ip, cachedData, etag) {
  const headers = {};
  if (etag) {
    headers["If-None-Match"] = etag;
  }

  const response = await fetch(`https://api.ipbot.com/${ip}`, { headers });

  if (response.status === 304) {
    // Data hasn't changed, use cached version
    return { data: cachedData, etag };
  }

  const newETag = response.headers.get("ETag");
  const data = await response.json();
  return { data, etag: newETag };
}

Track your API usage to stay within limits:

class UsageMonitor {
  constructor() {
    this.requests = [];
    this.window = 60000; // 1 minute
  }

  record() {
    const now = Date.now();
    this.requests.push(now);
    // Clean old entries
    this.requests = this.requests.filter((t) => now - t < this.window);
  }

  getRate() {
    const now = Date.now();
    return this.requests.filter((t) => now - t < this.window).length;
  }

  shouldThrottle(limit = 540) {
    // Warn at 90% of the Pro limit (600 req/min); pass your own
    // tier's threshold if you are on the Anonymous or Free tier
    return this.getRate() >= limit;
  }
}

const monitor = new UsageMonitor();

async function monitoredFetch(ip) {
  if (monitor.shouldThrottle()) {
    console.warn("Approaching rate limit, slowing down...");
    await sleep(1000);
  }
  monitor.record();
  return fetch(`https://api.ipbot.com/${ip}`);
}

For global applications, consider caching at the edge:

Cloudflare Workers with KV:

export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const ip = url.pathname.slice(1);

    // Check KV cache
    const cached = await env.IP_CACHE.get(ip);
    if (cached) {
      return new Response(cached, {
        headers: { "Content-Type": "application/json", "X-Cache": "HIT" },
      });
    }

    // Fetch from origin
    const response = await fetch(`https://api.ipbot.com/${ip}`);
    const data = await response.text();

    // Cache for 24 hours
    await env.IP_CACHE.put(ip, data, { expirationTtl: 86400 });
    return new Response(data, {
      headers: { "Content-Type": "application/json", "X-Cache": "MISS" },
    });
  },
};
Vercel Edge Functions:

import { get } from "@vercel/edge-config";

export const config = { runtime: "edge" };

export default async function handler(request) {
  const ip = new URL(request.url).searchParams.get("ip");

  // Try Edge Config cache
  const cached = await get(`ip:${ip}`);
  if (cached) {
    return Response.json(cached);
  }

  const response = await fetch(`https://api.ipbot.com/${ip}`);
  return response;
}

429 Too Many Requests

  • Implement caching to reduce requests
  • Add exponential backoff for retries
  • Check if you’re making duplicate requests

Slow Response Times

  • Enable HTTP keep-alive
  • Use connection pooling
  • Consider edge caching for global traffic

Inconsistent Data

  • Check cache TTL settings
  • Verify you’re not caching error responses
  • Use ETag validation for stale data
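The second point deserves emphasis: caching a 429 or 5xx body means serving errors long after the incident has passed. A minimal guard (the `cacheIfOk` helper is illustrative; `cache` is any Map-like store):

```javascript
// Hypothetical guard: only store successful responses, so an error body
// never lingers in the cache.
function cacheIfOk(cache, ip, response, body) {
  if (response.ok) {
    cache.set(ip, body);
    return true;
  }
  return false; // 4xx/5xx: do not cache
}
```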

Add debug headers to your requests for troubleshooting:

const response = await fetch(`https://api.ipbot.com/${ip}`, {
  headers: {
    "X-Request-ID": crypto.randomUUID(),
    "X-Client-Version": "1.0.0",
  },
});

console.log("Request ID:", response.headers.get("X-Request-ID"));
console.log(
  "Rate Limit Remaining:",
  response.headers.get("X-RateLimit-Remaining"),
);

Effective rate limit management combines:

  1. Caching: Store results locally with appropriate TTLs
  2. Backoff: Handle 429 responses with exponential retry
  3. Queuing: Control request rate for high-volume workloads
  4. Monitoring: Track usage to prevent hitting limits

For most applications, implementing a simple cache with 24-hour TTL will reduce API calls by 90%+ while maintaining data accuracy.

Resources: