A fully featured Least Recently Used (LRU) cache implementation in TypeScript with O(1) get/set/delete operations, optional per-entry time-to-live, and a customizable eviction callback.
interface CacheEntry<V> {
value: V;
expiresAt: number | null;
}
interface LRUCacheOptions<K, V> {
maxSize: number;
defaultTTL?: number;
onEvict?: (key: K, value: V, reason: 'capacity' | 'expired' | 'manual') => void;
}
class LRUCache<K, V> {
private cache: Map<K, CacheEntry<V>>;
private readonly maxSize: number;
private readonly defaultTTL: number | null;
private readonly onEvict?: (key: K, value: V, reason: 'capacity' | 'expired' | 'manual') => void;
constructor(options: LRUCacheOptions<K, V>) {
    if (!Number.isInteger(options.maxSize) || options.maxSize <= 0) {
      throw new Error('maxSize must be a positive integer');
    }
this.cache = new Map();
this.maxSize = options.maxSize;
this.defaultTTL = options.defaultTTL ?? null;
this.onEvict = options.onEvict;
}
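  /** Returns the value and marks the key as most recently used; undefined if missing or expired. */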
get(key: K): V | undefined {
const entry = this.cache.get(key);
if (!entry) {
return undefined;
}
if (this.isExpired(entry)) {
this.deleteEntry(key, entry, 'expired');
return undefined;
}
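    // Delete and re-insert so the key moves to the end of the Map's
    // iteration order, marking it most recently used.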
this.cache.delete(key);
this.cache.set(key, entry);
return entry.value;
}
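  /** Inserts or updates a key, evicting LRU entries if at capacity. An optional ttl overrides defaultTTL. */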
set(key: K, value: V, ttl?: number): this {
const existingEntry = this.cache.get(key);
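    // Remove any existing entry first so re-insertion refreshes its recency.
    // Overwriting a key is not an eviction, so onEvict is not fired here.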
if (existingEntry) {
this.cache.delete(key);
}
    // Evict least recently used entries until there is room for the new one.
    // Map iteration follows insertion order, so the first key is always the LRU.
    while (this.cache.size >= this.maxSize) {
      const oldestKey = this.cache.keys().next().value;
      if (oldestKey === undefined) {
        break; // defensive guard; prevents an infinite loop if no key is found
      }
      const oldestEntry = this.cache.get(oldestKey)!;
      this.deleteEntry(oldestKey, oldestEntry, 'capacity');
    }
    const effectiveTTL = ttl ?? this.defaultTTL;
    const entry: CacheEntry<V> = {
      value,
      // Compare against null explicitly so an explicit TTL of 0 expires
      // immediately rather than being treated as "no expiry".
      expiresAt: effectiveTTL !== null ? Date.now() + effectiveTTL : null
    };
this.cache.set(key, entry);
return this;
}
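  /** Removes a key, firing onEvict with reason 'manual'. Returns true if the key was present. */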
delete(key: K): boolean {
const entry = this.cache.get(key);
if (!entry) {
return false;
}
this.deleteEntry(key, entry, 'manual');
return true;
}
has(key: K): boolean {
const entry = this.cache.get(key);
if (!entry) {
return false;
}
if (this.isExpired(entry)) {
this.deleteEntry(key, entry, 'expired');
return false;
}
return true;
}
clear(): void {
if (this.onEvict) {
for (const [key, entry] of this.cache) {
this.onEvict(key, entry.value, 'manual');
}
}
this.cache.clear();
}
get size(): number {
return this.cache.size;
}
keys(): K[] {
this.pruneExpired();
return Array.from(this.cache.keys());
}
values(): V[] {
this.pruneExpired();
return Array.from(this.cache.values()).map(entry => entry.value);
}
entries(): [K, V][] {
this.pruneExpired();
    // The explicit tuple annotation is needed; TS would otherwise infer (K | V)[].
    return Array.from(this.cache.entries()).map(([key, entry]): [K, V] => [key, entry.value]);
}
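  /** Reads a value without refreshing its recency; expired entries are still pruned. */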
peek(key: K): V | undefined {
const entry = this.cache.get(key);
if (!entry) {
return undefined;
}
if (this.isExpired(entry)) {
this.deleteEntry(key, entry, 'expired');
return undefined;
}
return entry.value;
}
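  /** True if the entry carries a deadline that is now in the past. */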
private isExpired(entry: CacheEntry<V>): boolean {
return entry.expiresAt !== null && Date.now() > entry.expiresAt;
}
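  /** Removes an entry and notifies the onEvict callback, if any. */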
private deleteEntry(key: K, entry: CacheEntry<V>, reason: 'capacity' | 'expired' | 'manual'): void {
this.cache.delete(key);
this.onEvict?.(key, entry.value, reason);
}
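  /** Drops every expired entry. Deleting during iteration is safe for JavaScript Maps. */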
private pruneExpired(): void {
for (const [key, entry] of this.cache) {
if (this.isExpired(entry)) {
this.deleteEntry(key, entry, 'expired');
}
}
}
}
const cache = new LRUCache<string, { data: string }>({
maxSize: 3,
defaultTTL: 5000,
onEvict: (key, value, reason) => {
console.log(`Evicted "${key}" due to ${reason}:`, value);
}
});
cache.set('user:1', { data: 'Alice' });
cache.set('user:2', { data: 'Bob' });
cache.set('user:3', { data: 'Charlie' });
console.log('Get user:1:', cache.get('user:1'));
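// The cache is full, so adding a fourth key evicts the LRU entry: 'user:2',
// since 'user:1' was just refreshed by get().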
cache.set('user:4', { data: 'Diana' });
console.log('Current keys:', cache.keys());
console.log('Cache size:', cache.size);
cache.set('temp', { data: 'Temporary' }, 100);
setTimeout(() => {
console.log('After 150ms, temp exists:', cache.has('temp'));
}, 150);

This LRU cache implementation leverages JavaScript's Map data structure, which preserves insertion order during iteration. That property is what lets every core operation run in O(1). When we access an entry via get(), we delete it and re-insert it, moving it to the end of the iteration order (most recently used). When eviction is needed, we simply take the first key from the Map iterator, which is the least recently used entry.
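To see the trick in isolation, here is a minimal standalone sketch using a plain Map (no cache class involved):

// Map iteration follows insertion order, and delete + set moves a key
// to the end, which is exactly the recency bookkeeping the cache needs.
const recency = new Map<string, number>([['a', 1], ['b', 2], ['c', 3]]);

// "Touch" key 'a': delete and re-insert to mark it most recently used.
const touched = recency.get('a')!;
recency.delete('a');
recency.set('a', touched);

console.log([...recency.keys()]);         // ['b', 'c', 'a'], so 'b' is now the LRU
console.log(recency.keys().next().value); // 'b': the LRU key, found in O(1)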
The cache supports optional TTL (Time-To-Live) per entry, allowing fine-grained control over expiration. You can set a default TTL in the constructor and override it per-entry when calling set(). Expiration checks happen lazily during get(), has(), and iteration methods rather than using timers, which keeps the implementation simple and avoids memory leaks from abandoned timers. The peek() method allows checking a value without updating its recency, useful for debugging or conditional logic.
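For example, using the cache class above (the key names are illustrative):

const ttlCache = new LRUCache<string, string>({ maxSize: 10, defaultTTL: 60_000 });

ttlCache.set('session:42', 'alice');      // uses the 60s default TTL
ttlCache.set('otp:42', '914682', 30_000); // per-entry override: 30 seconds

// peek() reads without refreshing recency, so the entry's position in the
// eviction order is unchanged.
console.log(ttlCache.peek('otp:42')); // '914682'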
The onEvict callback provides visibility into cache behavior, receiving the key, value, and eviction reason ('capacity', 'expired', or 'manual'). This is invaluable for logging, metrics, cleanup of resources held by cached values, or implementing write-through patterns. The callback fires synchronously during eviction, so keep handlers lightweight to maintain O(1) performance characteristics.
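One common use is releasing resources held by evicted values. A sketch, where the Connection interface is a hypothetical stand-in for any disposable handle:

interface Connection { close(): void; }

const pool = new LRUCache<string, Connection>({
  maxSize: 50,
  onEvict: (key, conn, reason) => {
    // Runs synchronously on every eviction path: capacity, expiry, or manual.
    conn.close();
    console.log(`closed ${key} (${reason})`);
  }
});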
Edge cases are handled carefully: the constructor validates that maxSize is a positive integer, calling get() or has() on an expired entry triggers cleanup and returns undefined or false respectively, and setting an existing key replaces its value and refreshes its recency without firing onEvict (an overwrite is not an eviction). The clear() method invokes onEvict for all entries, treating it as a manual bulk deletion. Note that size returns the raw Map size without pruning expired entries, for performance; use keys().length if you need the exact count of live entries.
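A small illustration of the size versus keys().length distinction (timings are illustrative):

const counts = new LRUCache<string, number>({ maxSize: 5 });
counts.set('short-lived', 1, 10); // 10ms TTL

setTimeout(() => {
  console.log(counts.size);          // 1: raw Map size, expired entry not yet pruned
  console.log(counts.keys().length); // 0: keys() prunes expired entries first
}, 50);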
Use this pattern when you need bounded memory usage with predictable eviction behavior, such as caching API responses, memoizing expensive computations, or managing connection pools. Avoid it when entries have complex interdependencies (where evicting one should invalidate others), when you need disk persistence, or when the working set significantly exceeds maxSize causing constant churn. For very high-throughput scenarios, consider whether the synchronous onEvict callback might become a bottleneck.
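As one concrete fit, here is a sketch of memoizing an expensive pure function, where fib is just a stand-in for any costly computation:

const memo = new LRUCache<number, number>({ maxSize: 1000 });

function fib(n: number): number {
  const hit = memo.get(n); // get() also refreshes recency, which suits memoization
  if (hit !== undefined) return hit;
  const result = n < 2 ? n : fib(n - 1) + fib(n - 2);
  memo.set(n, result);
  return result;
}

console.log(fib(40)); // 102334155, each distinct n computed only once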