
Fetch with in-memory cache


A production-ready TypeScript utility that wraps fetch with an in-memory cache using Map, supports TTL-based expiration, and deduplicates concurrent requests to the same URL.

Tags: fetch, cache, performance, api

Code

```typescript
interface CacheEntry<T> {
  data: T;
  expiresAt: number;
}

interface FetchCacheOptions {
  ttl?: number;
  bypassCache?: boolean;
}

const cache = new Map<string, CacheEntry<unknown>>();
const inFlightRequests = new Map<string, Promise<unknown>>();

function getCacheKey(url: string, init?: RequestInit): string {
  const method = init?.method ?? 'GET';
  const body = init?.body ? String(init.body) : '';
  return `${method}:${url}:${body}`;
}

function isExpired<T>(entry: CacheEntry<T>): boolean {
  return Date.now() > entry.expiresAt;
}

function cleanExpiredEntries(): void {
  const now = Date.now();
  for (const [key, entry] of cache.entries()) {
    if (now > entry.expiresAt) {
      cache.delete(key);
    }
  }
}

// Periodic sweep of expired entries. In Node, consider calling .unref()
// on this timer so it does not keep the process alive on its own.
setInterval(cleanExpiredEntries, 60000);

export async function fetchWithCache<T>(
  url: string,
  init?: RequestInit,
  options: FetchCacheOptions = {}
): Promise<T> {
  const { ttl = 300000, bypassCache = false } = options;
  const cacheKey = getCacheKey(url, init);

  if (!bypassCache) {
    const cached = cache.get(cacheKey) as CacheEntry<T> | undefined;
    if (cached && !isExpired(cached)) {
      return cached.data;
    }
  }

  const existingRequest = inFlightRequests.get(cacheKey);
  if (existingRequest) {
    return existingRequest as Promise<T>;
  }

  const fetchPromise = (async (): Promise<T> => {
    try {
      const response = await fetch(url, init);

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      const data: T = await response.json();

      cache.set(cacheKey, {
        data,
        expiresAt: Date.now() + ttl,
      });

      return data;
    } finally {
      inFlightRequests.delete(cacheKey);
    }
  })();

  inFlightRequests.set(cacheKey, fetchPromise);

  return fetchPromise;
}

export function clearCache(urlPattern?: string): void {
  if (!urlPattern) {
    cache.clear();
    return;
  }

  for (const key of cache.keys()) {
    if (key.includes(urlPattern)) {
      cache.delete(key);
    }
  }
}

export function getCacheStats(): { size: number; keys: string[] } {
  return {
    size: cache.size,
    keys: Array.from(cache.keys()),
  };
}

// Usage example:
// interface User {
//   id: number;
//   name: string;
//   email: string;
// }
//
// const user = await fetchWithCache<User>(
//   'https://api.example.com/users/1',
//   { headers: { 'Authorization': 'Bearer token' } },
//   { ttl: 60000 }
// );
```

How It Works

This utility addresses three common challenges when working with fetch: caching responses to avoid redundant network requests, expiring stale entries after a configurable TTL, and collapsing duplicate concurrent requests into a single network call. The implementation uses TypeScript generics to ensure type safety throughout the request lifecycle.

The caching mechanism uses a Map to store responses with their expiration timestamps. Each cache entry holds the response data and an expiresAt timestamp calculated from the configurable TTL (time-to-live). The cache key is generated from the HTTP method, URL, and request body, so POST requests with different payloads are cached separately (note that non-string bodies such as FormData stringify to generic values like "[object FormData]", so distinct string bodies are assumed). A background cleanup interval runs every 60 seconds to remove expired entries, preventing unbounded memory growth in long-running applications.

Request deduplication is achieved through a separate Map that tracks in-flight requests. When a request is initiated, the promise is stored in inFlightRequests. If another call comes in for the same cache key while the first request is still pending, the existing promise is returned instead of initiating a new network request. The finally block ensures the in-flight entry is cleaned up regardless of success or failure, preventing memory leaks and stale state.
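The deduplication step can be sketched on its own, with a fake loader standing in for fetch (the `load` and `inFlight` names and the counter are illustrative, not part of the utility above):

```typescript
// Minimal sketch of in-flight deduplication: concurrent calls for the
// same key share one pending promise instead of issuing a second request.
const inFlight = new Map<string, Promise<string>>();
let networkCalls = 0;

function load(key: string): Promise<string> {
  const existing = inFlight.get(key);
  if (existing) return existing; // reuse the pending request

  const request = (async () => {
    networkCalls++;            // simulated network hit
    await Promise.resolve();   // stand-in for network latency
    return `data for ${key}`;
  })().finally(() => inFlight.delete(key)); // cleanup on success or failure

  inFlight.set(key, request);
  return request;
}

// Two concurrent calls for the same key share one underlying request:
// a and b are the very same promise object, and networkCalls stays at 1.
const a = load('/users/1');
const b = load('/users/1');
```

Once the promise settles, the `finally` handler removes the entry, so a later call for the same key starts a fresh request rather than receiving a stale one.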

The generic type parameter <T> flows through the entire function, from the return type to the cache entry typing. Consumers supply T once at the call site and get a typed result back without casting. The bypassCache option forces a fresh fetch when needed, which is useful for user-initiated refreshes or when you know the data has changed.

Use this pattern for read-heavy APIs where data doesn't change frequently, such as user profiles, configuration, or reference data. Avoid it for real-time data, write operations, or when you need request-specific headers that affect the response (unless you include those headers in the cache key). For server-side rendering or when memory is constrained, consider using an LRU cache implementation or external caching layer instead.
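For the memory-constrained case, a bounded LRU can be sketched by exploiting Map's insertion-order iteration; this `LRUCache` class is illustrative only, not part of the utility above:

```typescript
// Minimal LRU sketch: Map iterates in insertion order, so re-inserting
// on access keeps the least recently used entry at the front for eviction.
class LRUCache<K, V> {
  private map = new Map<K, V>();

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    // Re-insert to mark this entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used (first) entry.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }

  has(key: K): boolean {
    return this.map.has(key);
  }
}
```

Swapping the utility's unbounded Map for something like this caps memory at a fixed number of entries, at the cost of occasionally re-fetching data that was evicted while still fresh.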