Best way to implement LRU cache

Posted 2019-02-01 01:27

I was looking at the problem of implementing an LRU cache: once the cache is full, the least recently used item is evicted to make room for the new item.

I have two implementations in mind:

1) Create two maps that look something like this:

std::map<timestamp, K> time_to_key
std::map<K, std::pair<timestamp, V>> LRUCache

To insert a new element, we put the current timestamp and value into LRUCache. When the cache is full, we evict the least recent element by taking the smallest timestamp in time_to_key and removing the corresponding key from LRUCache. Since both structures are ordered maps, inserting a new item is O(log n), and updating a timestamp on access is also O(log n): the old timestamp is stored alongside the value in LRUCache, so it can be removed from time_to_key directly rather than searched for linearly.
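
For concreteness, here is a rough Java sketch of this idea (all names are illustrative; a TreeMap stands in for std::map, and a logical counter stands in for wall-clock timestamps). Because the timestamp is stored next to the value, no linear search is needed and every operation is O(log n):

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of approach 1. The TreeMap orders keys by last-access time; the
// HashMap holds, for each key, its value together with its current timestamp.
class TwoMapLRU<K, V> {

    private static class Stamped<V> {
        long time;
        V value;
        Stamped(long time, V value) { this.time = time; this.value = value; }
    }

    private final int capacity;
    private long clock = 0; // logical timestamp, bumped on every access
    private final TreeMap<Long, K> timeToKey = new TreeMap<>();
    private final Map<K, Stamped<V>> cache = new HashMap<>();

    TwoMapLRU(int capacity) { this.capacity = capacity; }

    V get(K key) {
        Stamped<V> s = cache.get(key);
        if (s == null) return null;
        timeToKey.remove(s.time); // O(log n): the old timestamp is stored with the value
        s.time = ++clock;
        timeToKey.put(s.time, key);
        return s.value;
    }

    void put(K key, V value) {
        Stamped<V> existing = cache.get(key);
        if (existing != null) {
            timeToKey.remove(existing.time);
        } else if (cache.size() >= capacity) {
            // Evict the least recently used entry: the smallest timestamp.
            Map.Entry<Long, K> eldest = timeToKey.pollFirstEntry();
            cache.remove(eldest.getValue());
        }
        Stamped<V> s = new Stamped<>(++clock, value);
        cache.put(key, s);
        timeToKey.put(s.time, key);
    }
}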

2) Keep a linked list in which the least recently used item sits at the head and new items are appended at the tail. When an item arrives that is already in the cache, its node is moved to the tail of the list. Inserting a new item is O(1) and evicting from the head is O(1), but updating on access is O(n) with the list alone, because we first have to search the list for the node (moving it to the tail once found is O(1)). Pairing the list with a hash map from key to node removes that search, as sketched below.
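
A minimal sketch of that combination (a HashMap from key to node plus a hand-rolled doubly linked list; all names are illustrative), under which get, put, and eviction are all O(1):

import java.util.HashMap;
import java.util.Map;

// Sketch of approach 2 plus a key->node index: every operation is O(1).
class ListLRU<K, V> {
    private class Node {
        K key; V value; Node prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node> index = new HashMap<>();
    private Node head, tail; // head = least recently used, tail = most recent

    ListLRU(int capacity) { this.capacity = capacity; }

    V get(K key) {
        Node n = index.get(key);
        if (n == null) return null;
        moveToTail(n); // O(1): unlink and relink, no search required
        return n.value;
    }

    void put(K key, V value) {
        Node n = index.get(key);
        if (n != null) { n.value = value; moveToTail(n); return; }
        if (index.size() >= capacity) {
            index.remove(head.key); // evict the least recently used node
            unlink(head);
        }
        n = new Node(key, value);
        index.put(key, n);
        append(n);
    }

    private void moveToTail(Node n) { if (n != tail) { unlink(n); append(n); } }

    private void unlink(Node n) {
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
        n.prev = n.next = null;
    }

    private void append(Node n) {
        if (tail == null) { head = tail = n; return; }
        tail.next = n; n.prev = tail; tail = n;
    }
}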

Now I have the following questions:

  1. Which of these implementations is better for an LRU cache?

  2. Is there any other way to implement an LRU cache?

  3. In Java, should I use a HashMap to implement the LRU cache?

  4. I have seen questions asking to implement a generic LRU cache, and also questions asking to implement an LRU cache. Is a generic LRU cache different from an LRU cache?

Thanks in advance!

EDIT:

Another (and the easiest) way to implement an LRU cache in Java is to use LinkedHashMap and override its protected boolean removeEldestEntry(Map.Entry<K, V> eldest) method.

Tags: java c++ caching
9 answers
Answer #2 · 2019-02-01 02:09

LinkedHashMap lets you override the removeEldestEntry method so that whenever a put is executed you can decide whether to remove the eldest entry, which allows you to implement an LRU. Note that the map must be constructed with accessOrder = true (the three-argument constructor); otherwise entries are ordered by insertion and you get FIFO eviction rather than LRU.

myLRUCache = new LinkedHashMap<Long, String>(16, 0.75f, true) { // accessOrder = true is required for LRU ordering
    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, String> eldest) {
        // Evict the least recently accessed entry once the cache exceeds 1000 entries.
        return size() > 1000;
    }
};
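
Because accessOrder is true, a get() counts as a use of an entry, so the entry removed by a later put is the least recently accessed one rather than merely the oldest inserted.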
Answer #3 (by Evening l夕情丶) · 2019-02-01 02:09
import java.util.HashMap;
import java.util.Map;

class LRU {
    // Maps each element to a monotonically increasing "age";
    // the smallest age marks the least recently used element.
    Map<String, Integer> ageMap = new HashMap<>();
    int age = 1;

    void addElementWithAge(String element) {
        // Adding (or re-adding) an element stamps it with the current age.
        ageMap.put(element, age);
        age++;
    }

    String getLeastRecent() {
        // Linear scan for the smallest age: eviction is O(n).
        String element = null;
        Integer oldestAge = null;
        for (Map.Entry<String, Integer> entry : ageMap.entrySet()) {
            if (oldestAge == null || entry.getValue() < oldestAge) {
                oldestAge = entry.getValue();
                element = entry.getKey();
            }
        }
        return element;
    }

    public static void main(String[] args) {
        LRU obj = new LRU();
        obj.addElementWithAge("M1");
        obj.addElementWithAge("M2");
        obj.addElementWithAge("M3");
        obj.addElementWithAge("M4");
        obj.addElementWithAge("M1"); // touching M1 again makes M2 the least recent
        System.out.println(obj.getLeastRecent()); // prints M2
    }
}
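
Note that this is essentially approach 1 from the question with a logical counter standing in for the timestamp: adding an element is O(1), but getLeastRecent() scans the whole map, so eviction is O(n) in the cache size.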
Answer #4 (by 孤傲高冷的网名) · 2019-02-01 02:12

If the cache allows concurrent access, the problem of designing a good LRU cache boils down to two questions:

a) Can we avoid taking a mutex while updating the two structures (the cache structure and the LRU structure)?

b) Does the read (get) operation of the cache require a mutex?

In more detail: say we implement this using java.util.concurrent.ConcurrentHashMap (cache structure) and java.util.concurrent.ConcurrentLinkedQueue (LRU structure).

a) We would have to lock both structures for edit operations: addEntry(), removeEntry(), evictEntries(), and so on.

b) Slow writes might be tolerable, but the problem is that even the read (get) operation has to lock both structures, because under an LRU strategy a get must move the entry to the front of the queue (assuming entries are evicted from the end of the queue).

Using efficient concurrent structures like ConcurrentHashMap and the wait-free ConcurrentLinkedQueue and then putting a lock over them defeats their whole purpose.

I implemented an LRU cache with the same approach; however, the LRU structure was updated asynchronously, which eliminated the need for any mutex while accessing these structures. The LRU ordering is an internal detail of the cache and can be implemented in any manner without affecting the cache's users. A sketch of the asynchronous idea follows.
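
This is a minimal sketch of the idea, not the actual implementation (all names are illustrative): reads and writes touch only the ConcurrentHashMap and a wait-free queue that records accesses, while a single background thread privately owns the LRU ordering, so no reader or writer ever takes a lock.

import java.util.LinkedHashMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch: readers and writers use only lock-free structures; one drainer
// thread privately owns the LRU ordering, so no mutex is ever taken.
class AsyncLRU<K, V> {
    private final int capacity;
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final ConcurrentLinkedQueue<K> accessLog = new ConcurrentLinkedQueue<>();
    // Touched by the drainer thread only, so a plain access-ordered
    // LinkedHashMap is safe here despite not being thread-safe itself.
    private final LinkedHashMap<K, Boolean> lruOrder = new LinkedHashMap<>(16, 0.75f, true);

    AsyncLRU(int capacity) {
        this.capacity = capacity;
        Thread drainer = new Thread(this::drainForever, "lru-drainer");
        drainer.setDaemon(true);
        drainer.start();
    }

    V get(K key) {
        V v = cache.get(key);
        if (v != null) accessLog.offer(key); // record the access, lock-free
        return v;
    }

    void put(K key, V value) {
        cache.put(key, value);
        accessLog.offer(key);
    }

    private void drainForever() {
        while (true) {
            K key = accessLog.poll();
            if (key == null) { Thread.yield(); continue; }
            if (cache.containsKey(key)) lruOrder.put(key, Boolean.TRUE);
            while (lruOrder.size() > capacity) {
                K eldest = lruOrder.keySet().iterator().next(); // least recently used
                lruOrder.remove(eldest);
                cache.remove(eldest); // eviction applied back to the shared map
            }
        }
    }
}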

Later, I also read about ConcurrentLinkedHashMap

https://code.google.com/p/concurrentlinkedhashmap/

It seems to be attempting the same thing. I have not used it, but it might be a good fit.
