Database Caching is a million-dollar technique you can’t ignore.
It improves your project's data-retrieval performance by reducing trips to the slower underlying storage layer.
So - what's the catch?
There are multiple strategies, and you have to choose the right one for the job.
✅ Cache-Aside Strategy
In this strategy, the cache sits next to the database.
Here’s how it works:
- When there is a request for data, the application first checks the cache
- If there’s a cache hit, the application returns from the cache
- In case of a cache miss, the application queries the DB and returns the data
- The application also stores the missing data in the cache for future requests
Pros: Great for read-heavy workloads. Also, better resiliency: a cache failure doesn't cripple the system, because the application can still read directly from the database.
Cons: Potential inconsistency between the cache and the database.
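The flow above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the plain dicts stand in for a real cache (e.g. Redis) and database, and the key names are made up.

```python
# Stand-ins for a real cache and database (hypothetical data).
cache = {}
db = {"user:1": {"name": "Ada"}}

def get_user(key):
    """Cache-aside read: the application checks the cache first."""
    value = cache.get(key)
    if value is not None:       # cache hit: return straight from the cache
        return value
    value = db.get(key)         # cache miss: query the database
    if value is not None:
        cache[key] = value      # store it for future requests
    return value
```

The key point: the *application* owns the logic of checking the cache and falling back to the database; the cache itself is passive.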
✅ Read-Through Strategy
The cache sits between the application and the database.
Here’s what happens in read-through:
- The application goes to the cache for any read request.
- If there’s a cache hit, data is returned from the cache and that’s the end of the flow.
- In case of a cache miss, the cache gets the missing data from the database and returns it to the application.
Pros: The application doesn’t have to worry about fetching from the database or the cache. The cache takes care of it.
Cons: Potential inconsistency between the cache and the DB, plus every brand-new read request must still go to the database.
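One way to see the difference from cache-aside is that the miss-handling logic moves out of the application and into the cache. A rough sketch, where the cache is constructed with a loader function that knows how to reach the (hypothetical) database:

```python
class ReadThroughCache:
    """On a miss, the cache itself fetches from the DB via its loader."""

    def __init__(self, loader):
        self._store = {}
        self._loader = loader   # how the cache reaches the database

    def get(self, key):
        if key in self._store:            # cache hit
            return self._store[key]
        value = self._loader(key)         # cache miss: the cache, not the
        self._store[key] = value          # application, queries the DB
        return value

# Usage: the application only ever talks to the cache.
db = {"post:7": "Hello, world"}
cache = ReadThroughCache(loader=db.get)
```

The application calls `cache.get(...)` and never touches the database directly, which is exactly the "the cache takes care of it" property described above.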
✅ Write-Around Strategy
Same as Cache-Aside, but with explicit handling of write operations.
In this strategy, all writes go to the database and the data that is read goes to the cache.
For a cache miss, the application reads from the DB and updates the cache for the next time.
Great for cases where data is only written once and rarely updated (like a blog post or a static website).
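Extending the earlier cache-aside sketch with a write path shows the idea: writes bypass the cache entirely, and only data that gets read ends up cached. Again, the dicts are stand-ins for real systems.

```python
cache = {}
db = {}

def write(key, value):
    """Write-around: writes go straight to the DB, skipping the cache."""
    db[key] = value
    cache.pop(key, None)    # drop any stale cached copy of this key

def read(key):
    """Reads follow the cache-aside pattern."""
    if key in cache:
        return cache[key]
    value = db.get(key)
    if value is not None:
        cache[key] = value  # only data that is actually read enters the cache
    return value
```

Because rarely-updated data (like a published blog post) is written once and read many times, the one extra DB round trip on the first read is a small price for never caching data nobody asks for.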
✅ Write-Through Strategy
Write-through tries to solve the problems with read-through.
Instead of writing to the DB, the application first writes to the cache.
And the cache immediately writes to the DB.
The word "immediately" is the key here.
Pros: Cache will always have any written data. New read requests won’t experience a delay while the cache requests the data from the main DB.
Cons: Extra write latency because the data must go to the cache and then to the DB.
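A minimal sketch of that synchronous write path, with a dict standing in for the backing database:

```python
class WriteThroughCache:
    """Writes land in the cache and are immediately pushed to the DB."""

    def __init__(self, db):
        self._store = {}
        self._db = db

    def put(self, key, value):
        self._store[key] = value   # write to the cache first...
        self._db[key] = value      # ...then synchronously to the DB,
                                   # before the write is acknowledged

    def get(self, key):
        return self._store.get(key)
```

Both writes happen in the same call, which is why the cache and DB stay consistent, and also why every write pays the latency of two stores.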
✅ Write-Back Strategy
It’s a variation of the write-through strategy.
With one key difference…
In the write-back, the application writes directly to the cache.
However, the cache doesn’t immediately write to the database but after a delay.
Pros: The strain on the database is reduced for write-heavy workloads. Requests to the DB are batched, and the overall write performance is improved.
Cons: In case of a cache failure, any writes that haven't been flushed to the database yet are lost.
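The deferred flush can be sketched by tracking "dirty" keys. In practice the flush would run on a timer or a size threshold; here it is a manual method, and the dict again stands in for a real database.

```python
class WriteBackCache:
    """Writes land in the cache; dirty keys are flushed to the DB later."""

    def __init__(self, db):
        self._store = {}
        self._db = db
        self._dirty = set()

    def put(self, key, value):
        self._store[key] = value
        self._dirty.add(key)       # marked for a later flush, not written now

    def get(self, key):
        return self._store.get(key)

    def flush(self):
        """Batch all deferred writes to the DB in one pass."""
        for key in self._dirty:
            self._db[key] = self._store[key]
        self._dirty.clear()
```

The window between `put` and `flush` is exactly where the data-loss risk lives: if the cache dies before flushing, the dirty keys never reach the database.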