Caching

Caching is the art of remembering things so you don't have to go look them up again. It trades memory (RAM) for speed (lower latency).

The Problem

Databases store data on disk (hard drives). Reading from disk is slow, and the network round trip to the database adds even more delay. A typical DB query takes 50-200 ms.

The Solution

We put a "Cache" (like Redis or Memcached) in front of the DB. Caches store data in RAM (memory). Reading from RAM takes < 1 ms.
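
One common way to wire this up is the cache-aside pattern: check the cache first, and only fall back to the database on a miss. Here is a minimal Python sketch, assuming a local Redis reachable via the redis-py client; query_database() is a hypothetical stand-in for the slow query:

```python
import json

import redis  # third-party client: pip install redis

r = redis.Redis(host="localhost", port=6379)

def query_database(user_id):
    """Hypothetical stand-in for the slow 50-200 ms database query."""
    return {"id": user_id, "name": "Alice"}  # placeholder row

def get_user_profile(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)                # RAM lookup: < 1 ms
    if cached is not None:             # Hit: skip the database entirely
        return json.loads(cached)
    profile = query_database(user_id)  # Miss: pay the slow disk/network cost
    r.set(key, json.dumps(profile), ex=300)  # remember it (5-minute TTL)
    return profile
```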

🎒 The Backpack Analogy

Imagine your Math Book is at the library (the Database). It takes 30 minutes to go get it.
If you put the book in your Backpack (the Cache), it takes 3 seconds to grab it.
But your backpack is small... you can't fit every book in the world in it.

đŸ•šī¸ Latency Race: RAM vs Disk

Experiment: Request "User Profile ID: 101" twice.
1st time: the Cache is empty (a Miss). 2nd time: the Cache has it (a Hit).

[Interactive demo: you (the client) request the profile from a Redis cache (initially empty) backed by the main database on disk, with a timer showing the time taken.]
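
You can recreate the race offline with a toy simulation. The dictionary below stands in for Redis, and the sleep is an illustrative stand-in for a ~100 ms database read, not a real measurement:

```python
import time

DISK_LATENCY = 0.100   # ~100 ms: pretend database read
cache = {}             # pretend Redis: a plain in-memory dict

def fetch_profile(user_id):
    start = time.perf_counter()
    if user_id in cache:                 # Hit: answered from RAM
        result, source = cache[user_id], "cache HIT"
    else:                                # Miss: go to the "disk"
        time.sleep(DISK_LATENCY)
        result = {"id": user_id, "name": "User 101"}
        cache[user_id] = result          # remember it for next time
        source = "cache MISS"
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{source}: {elapsed_ms:.1f} ms")
    return result

fetch_profile(101)   # 1st time: MISS, ~100 ms
fetch_profile(101)   # 2nd time: HIT, well under 1 ms
```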

đŸ—‘ī¸ Eviction Policies (Taking out the trash)

Your cache has limited space (RAM is expensive). What do you delete when it gets full?

LRU (Least Recently Used)

Delete the item that hasn't been used for the longest time.
"Nobody asked for 'Page 5' in a week? Throw it out."

TTL (Time To Live)

Give every item an expiration date (e.g., 5 minutes). When the time is up, it self-destructs, which keeps data from getting too stale.
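
A toy TTL cache that stores an expiry timestamp alongside each value (Redis supports this natively, e.g. SET key value EX 300). The class and its 5-minute default are illustrative:

```python
import time

class TTLCache:
    def __init__(self, ttl_seconds=300):   # default: 5 minutes
        self.ttl = ttl_seconds
        self.items = {}                    # key -> (value, expires_at)

    def put(self, key, value):
        self.items[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self.items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:  # expired: self-destruct
            del self.items[key]
            return None
        return value
```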

LFU (Least Frequently Used)

Delete the item that is used the least often overall, even if it was used recently.
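
A simple LFU sketch that counts accesses and evicts the least-hit key on overflow. The linear-scan eviction is for clarity; real LFU caches use smarter bookkeeping:

```python
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.values = {}
        self.counts = {}   # key -> number of accesses

    def get(self, key):
        if key not in self.values:
            return None
        self.counts[key] += 1              # every read bumps the frequency
        return self.values[key]

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            # Evict the least frequently used key, even if recently touched
            victim = min(self.counts, key=self.counts.get)
            del self.values[victim]
            del self.counts[victim]
        self.values[key] = value
        self.counts.setdefault(key, 0)
```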