I was looking at the problem of implementing an LRU cache: once the cache is full, the least recently used item is evicted and replaced by the new item.
I have two implementations in mind:
1) Create two maps, which look something like this:
std::map<timestamp, K> time_to_key
std::map<K, std::pair<timestamp, V>> LRUCache
To insert a new element, we put the current timestamp and the value into LRUCache. When the cache is full, we evict the least recent element by finding the smallest timestamp in time_to_key and removing the corresponding key from LRUCache. Inserting a new item is O(1), but updating a timestamp is O(n), because we need to search time_to_key for the entry corresponding to the key being accessed.
2) Keep a linked list in which the least recently used item sits at the head and new items are added at the tail. When an item arrives that is already present in the cache, the node corresponding to that item's key is moved to the tail of the list. Inserting a new item is O(1), updating a timestamp is again O(n) (because we have to find the node before moving it to the tail), and deleting an element is O(1).
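Roughly, what I have in mind for the second design is something like this (names are just illustrative, to make the idea concrete; note the O(n) scans):

```java
import java.util.AbstractMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;

// Sketch of design 2: a plain linked list, LRU at the head, MRU at the tail.
// Lookup and "move to tail" are O(n) because we must scan for the key.
class ListLruCache<K, V> {
    private final int capacity;
    private final LinkedList<Map.Entry<K, V>> list = new LinkedList<>();

    ListLruCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        Iterator<Map.Entry<K, V>> it = list.iterator();
        while (it.hasNext()) {
            Map.Entry<K, V> e = it.next();
            if (e.getKey().equals(key)) {
                it.remove();
                list.addLast(e); // most recently used moves to the tail
                return e.getValue();
            }
        }
        return null;
    }

    void put(K key, V value) {
        // remove any existing entry for this key (O(n) scan)
        list.removeIf(e -> e.getKey().equals(key));
        if (list.size() == capacity) {
            list.removeFirst(); // evict the least recently used (head)
        }
        list.addLast(new AbstractMap.SimpleEntry<>(key, value));
    }
}
```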
Now I have the following questions:
Which one of these implementations is better for an LRU cache?
Is there any other way to implement an LRU cache?
In Java, should I use HashMap to implement the LRU cache?
I have seen questions like "implement a generic LRU cache", and also questions like "implement an LRU cache". Is a generic LRU cache different from an LRU cache?
Thanks in advance!!!
EDIT:
Another way (the easiest way) to implement an LRU cache in Java is by using LinkedHashMap and overriding the boolean removeEldestEntry(Map.Entry eldest) method.
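A minimal sketch of that approach (the max of 5 entries follows the problem statement below; the class name is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap with accessOrder = true gives LRU iteration order, and
// removeEldestEntry evicts automatically once MAX_ENTRIES is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_ENTRIES = 5;

    LruCache() {
        super(16, 0.75f, true); // true = access order instead of insertion order
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MAX_ENTRIES;
    }
}
```

Putting a sixth entry silently drops the least recently used one, so no explicit eviction code is needed at the call site.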
Problem statement: create an LRU cache that stores Employee objects (max 5 objects) and find out who logged in first and after ….
I'd like to expand on some of these suggestions with two possible implementations: one that is not thread safe, and one that might be.
Here is the simplest version, with a unit test that shows it works.
First the non-concurrent version:
The true flag tracks the access order of gets and puts (see the JavaDocs). Without the true flag in the constructor, removeEldestEntry would just implement a FIFO cache (see the notes below on FIFO and removeEldestEntry).
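A minimal sketch of such a non-concurrent version, assuming a small wrapper around LinkedHashMap (the names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Non-concurrent LRU cache: a LinkedHashMap in access order whose
// removeEldestEntry hook enforces a fixed limit.
class LruSimpleCache<K, V> {
    private final Map<K, V> map;

    LruSimpleCache(final int limit) {
        // 16 and 0.75f are LinkedHashMap's defaults; true = access order
        map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > limit;
            }
        };
    }

    public void put(K key, V value) { map.put(key, value); }
    public V get(K key) { return map.get(key); }
    public int size() { return map.size(); }
}
```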
Here is the test that proves it works as an LRU cache:
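A small self-contained check along those lines (it builds the cache inline so it can run on its own; the limit of 3 is arbitrary):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Verifies LRU behaviour: after touching an entry, it is no longer the
// eviction candidate; the least recently used entry is evicted instead.
public class LruCacheTest {
    public static void main(String[] args) {
        final int limit = 3;
        Map<Integer, Integer> cache = new LinkedHashMap<Integer, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > limit;
            }
        };
        cache.put(0, 0);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.get(0);    // touch 0 so it becomes the most recently used
        cache.put(3, 3); // evicts 1, the least recently used
        if (cache.get(1) != null) throw new AssertionError("1 should have been evicted");
        if (cache.get(0) != 0) throw new AssertionError("0 should have survived");
        if (cache.get(2) != 2 || cache.get(3) != 3) throw new AssertionError("2 and 3 should be present");
        System.out.println("LRU behaviour verified");
    }
}
```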
Now for the concurrent version...
You can see why I covered the non-concurrent version first. The above attempts to create some stripes to reduce lock contention: it hashes the key and then uses that hash to look up the actual cache. This makes the size limit more of a suggestion/rough guess, within a fair amount of error depending on how well your keys' hash algorithm spreads them.
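A sketch of the striping idea, assuming N independently synchronized LRU maps selected by key hash (the names and the hash-spreading step are illustrative, not the original code):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Striped LRU cache: each stripe is its own synchronized LRU map, so threads
// hitting different stripes do not contend. The overall limit is approximate
// (numStripes * limitPerStripe), as the text notes.
class LruStripedCache<K, V> {
    private final Map<K, V>[] stripes;

    @SuppressWarnings("unchecked")
    LruStripedCache(int numStripes, final int limitPerStripe) {
        stripes = new Map[numStripes];
        for (int i = 0; i < numStripes; i++) {
            // synchronizedMap also guards get(), which matters here because
            // a get() in access order mutates the map's internal ordering
            stripes[i] = Collections.synchronizedMap(
                new LinkedHashMap<K, V>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > limitPerStripe;
                    }
                });
        }
    }

    private Map<K, V> stripeFor(K key) {
        int h = key.hashCode();
        h ^= (h >>> 16); // spread the hash a little before picking a stripe
        return stripes[Math.abs(h % stripes.length)];
    }

    public void put(K key, V value) { stripeFor(key).put(key, value); }
    public V get(K key) { return stripeFor(key).get(key); }
}
```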
For O(1) access we need a hash table, and to maintain order we can use a doubly linked list (DLL). The basic algorithm: from the page number we can get to the DLL node via the hash table. If the page exists, we move its node to the head of the DLL; otherwise we insert the node into both the DLL and the hash table. If the DLL is full, we remove the least recently used node from the tail.
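The steps above can be sketched (here in Java, with illustrative names) as:

```java
import java.util.HashMap;
import java.util.Map;

// Hash table + doubly linked list: the map gives O(1) lookup of a node,
// the list keeps usage order (MRU at the head, LRU at the tail).
class DllLruCache<K, V> {
    private final class Node {
        final K key; V value; Node prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node> map = new HashMap<>();
    private final Node head = new Node(null, null); // sentinel: MRU side
    private final Node tail = new Node(null, null); // sentinel: LRU side

    DllLruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    private void unlink(Node n) { n.prev.next = n.next; n.next.prev = n.prev; }

    private void addFirst(Node n) {
        n.next = head.next; n.prev = head;
        head.next.prev = n; head.next = n;
    }

    public V get(K key) {
        Node n = map.get(key);
        if (n == null) return null;
        unlink(n);
        addFirst(n); // move to the head: most recently used
        return n.value;
    }

    public void put(K key, V value) {
        Node n = map.get(key);
        if (n != null) {      // existing page: update and move to the head
            n.value = value;
            unlink(n);
            addFirst(n);
            return;
        }
        if (map.size() == capacity) {
            Node lru = tail.prev; // evict the least recently used, at the tail
            unlink(lru);
            map.remove(lru.key);
        }
        n = new Node(key, value);
        map.put(key, n);
        addFirst(n);
    }
}
```

All operations are O(1): one hash lookup plus a constant number of pointer updates.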
Here is an implementation based on a doubly linked list and unordered_map in C++.
Normally, LRU caches are represented as FIFO structures: a single queue of elements. If the one provided by your standard library doesn't allow you to remove objects from the middle (for example, to put them on the top), then you may have to roll your own.
If you want an LRU cache, the simplest option in Java is LinkedHashMap. Its default behaviour is FIFO; however, you can change it to "access order", which makes it an LRU cache.
Note: I am using the constructor which changes the collection from insertion order to access order.
From the Javadoc
When accessOrder is true, the LinkedHashMap re-arranges the order of the map whenever you get() an entry which is not the last one. This way, the oldest entry is the least recently used.
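A quick demonstration of that re-ordering, using the three-argument constructor with accessOrder = true:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// After get("a"), "a" moves to the end of the iteration order, so the
// first entry in the map is always the least recently used one.
public class AccessOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // "a" becomes the most recently used
        System.out.println(map.keySet()); // prints [b, c, a]
    }
}
```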
Extending Peter Lawrey's answer
The removeEldestEntry(java.util.Map.Entry) method may be overridden to impose a policy for removing stale mappings automatically when new mappings are added to the map. So LinkedHashMap's FIFO behaviour can be overridden to make it an LRU cache.