What is a Java LRU cache?
By storing frequently accessed resources and files in a closer location, caching can significantly increase the speed and performance of your databases and applications. Efficiently using cache memory will reduce the strain on your network and improve scalability and availability. But how do you decide which items should be stored in the cache?
There are many caching policies to choose from, including LRU, the policy behind a Java LRU cache. So what is a Java LRU cache, and how does it work?
What is a Java LRU cache? How does a Java LRU cache work?
A Java LRU (Least Recently Used) cache is a cache implemented in the Java programming language that uses the LRU cache policy. By design, caches have a fixed size and can only store a limited number of elements. This means that developers need a cache policy, also called an eviction policy, that decides which items to purge from the cache when it fills up.
LRU is a cache policy that evicts the least recently used items first, once the cache has reached its maximum size. The LRU policy is an alternative to other cache policies such as LFU (Least Frequently Used), which removes the items from the cache that are least often used.
Using LRU as your cache policy has both pros and cons. LRU caches are best if you expect the entire contents of the cache to gradually change. For example, a user scrolling through a photo album may go back a few images to look at them again, but is less likely to go back to the very beginning of the album.
On the other hand, LRU is a poor cache policy if you need to access certain items infrequently on a recurring basis. If there are too many items in the cache, these infrequently accessed items may be targeted for eviction, even if you plan on accessing them in the future. In this situation, an alternative cache policy such as LFU could help preserve these items in the cache.
Implementing a Java LRU cache requires keeping one extra piece of information for each item in the cache: the time at which the item was most recently accessed. In practice, rather than storing explicit timestamps, we can encode this recency by position, using data structures such as queues and hash tables (a minimal Java sketch follows the list):
- The queue stores items in order of most recent access: whenever an item is accessed, it moves to the front of the queue.
- When we need to evict items from the cache, we can delete them quickly by removing the item(s) at the tail end of the queue, since those are the least recently used.
- The hash table associates each item in the cache with its position in the queue, so that we can quickly locate it, move it to the front of the queue, or delete it.
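To make this concrete, here is a minimal sketch in Java. It builds on LinkedHashMap, which internally pairs a hash table with a doubly linked list, so a single class can play both roles described above. The LruCache name and the capacity of 2 are illustrative choices, not part of any standard API:

import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache built on LinkedHashMap. With accessOrder = true,
// the map keeps entries ordered from least to most recently accessed,
// serving as the "queue"; hash table lookups are built in.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true enables LRU ordering
        this.maxSize = maxSize;
    }

    // Called after each insertion; returning true evicts the eldest
    // (least recently used) entry once the cache exceeds its maximum size.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);          // touch 1 so it becomes most recently used
        cache.put(3, "three"); // evicts 2, the least recently used entry
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}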
Java LRU caches and Redis
Redis is an open-source, in-memory data structure store used to implement NoSQL key-value databases, caches, and message brokers. If you decide to use Redis as a cache, you can select either an LRU or LFU cache policy. Redis includes a sophisticated implementation of LRU caching: you can evict the least recently used key across the entire keyspace (the allkeys-lru policy), or only among keys that have a defined expiration (the volatile-lru policy).
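These policies are enabled in the Redis server configuration by capping memory use and choosing an eviction policy. A minimal redis.conf sketch might look like the following (the 100mb limit is just an illustrative value):

# Cap Redis memory use, then evict keys using LRU across the whole keyspace
maxmemory 100mb
maxmemory-policy allkeys-lru

# Alternatively, evict only among keys that have an expiration set:
# maxmemory-policy volatile-lru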
While Redis has many advantages, it doesn't work with programming languages such as Java out of the box. For this reason, many Java developers install a third-party Redis Java client such as Redisson to lower the Redis learning curve.
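Before using Redisson's collections, you need a RedissonClient instance. The following minimal setup sketch assumes a Redis server listening at the default local address; it produces the redisson object (and the imports) used in the example below:

import java.util.concurrent.TimeUnit;
import org.redisson.Redisson;
import org.redisson.api.RMapCache;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

// Point the client at a single Redis server (adjust the address as needed)
Config config = new Config();
config.useSingleServer().setAddress("redis://127.0.0.1:6379");
RedissonClient redisson = Redisson.create(config);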
Redisson comes with many familiar Java objects and collections, including the RMapCache interface, which provides a built-in Java LRU cache for Redis. Below is an example of how to use RMapCache for Java LRU caches in Redis:
RMapCache<String, String> map = redisson.getMapCache("anyMap");
// limit the map to a maximum of 10 entries; excess entries are evicted via LRU
map.setMaxSize(10);
map.put("1", "2");
// the optional arguments give this entry a time to live (TTL) of 1 second
map.put("3", "3", 1, TimeUnit.SECONDS);
The setMaxSize() method sets the maximum size of the RMapCache, and evicts superfluous elements using the LRU cache policy. The put() method inserts items into the cache; it may include optional arguments defining the item's time to live (TTL) in the cache.
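Since RMapCache extends the standard java.util.Map contract, entries can be read back and removed with the familiar map methods. Continuing from the snippet above:

// RMapCache supports the usual java.util.Map operations
String value = map.get("1");  // returns "2" while the entry is cached
map.remove("1");              // explicit removal
// the entry under "3" expires automatically once its 1-second TTL elapses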