Best way to implement LRU cache

Posted 2019-02-01 01:27

I was looking at the problem of LRU cache implementation where, once the cache is full, the least recently used item is evicted and replaced by the new item.

I have two implementations in mind:

1). Create two maps that look something like this:

std::map<Timestamp, K> time_to_key;
std::map<K, std::pair<Timestamp, V>> LRUCache;

To insert a new element, we put the current timestamp and value into LRUCache. When the cache is full, we evict the least recent element by finding the smallest timestamp in time_to_key and removing the corresponding key from LRUCache. Inserting a new item is O(1); updating the timestamp is O(n) (because we need to search time_to_key for the entry whose value is the given key).
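Option 1 can be sketched in Java with TreeMap standing in for std::map. Note that, since the key-to-value map also stores each key's timestamp, the old entry in the timestamp map can be found in O(log n) rather than by a linear search. The names here (TwoMapLru, the tick counter used as a unique timestamp) are illustrative, not from the question:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of option 1: two maps, one ordered by timestamp for eviction.
// "tick" is a hypothetical logical clock that keeps timestamps unique.
public class TwoMapLru<K, V> {
    private final int capacity;
    private long tick = 0;
    private final TreeMap<Long, K> timeToKey = new TreeMap<>();
    private final Map<K, Long> keyToTime = new HashMap<>();
    private final Map<K, V> values = new HashMap<>();

    public TwoMapLru(int capacity) { this.capacity = capacity; }

    public void put(K key, V value) {
        touch(key);
        values.put(key, value);
        if (values.size() > capacity) {
            // Evict the entry with the smallest (oldest) timestamp
            K eldest = timeToKey.pollFirstEntry().getValue();
            keyToTime.remove(eldest);
            values.remove(eldest);
        }
    }

    public V get(K key) {
        if (values.containsKey(key)) touch(key);
        return values.get(key);
    }

    // O(log n): the old timestamp is stored per key, so no linear search is needed
    private void touch(K key) {
        Long old = keyToTime.get(key);
        if (old != null) timeToKey.remove(old);
        long now = tick++;
        keyToTime.put(key, now);
        timeToKey.put(now, key);
    }
}
```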

2). Keep a linked list in which the least recently used item is at the head and new items are added at the tail. When an item arrives that is already in the cache, the node for that item's key is moved to the tail of the list. Inserting a new item is O(1), updating the timestamp is again O(n) (because we first need to find the item's node in the list), and deleting an element is O(1).
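The O(n) update in option 2 disappears if the list is paired with a hash map that points directly at each node, which is the standard O(1) LRU design (one of the answers below implements the same idea in C++). A minimal Java sketch, with hypothetical names (DllLru, Node):

```java
import java.util.HashMap;
import java.util.Map;

// Option 2 with the O(n) lookup removed: a hash map points straight at
// each list node, so moving a node to the most-recently-used end is O(1).
public class DllLru<K, V> {
    private static final class Node<K, V> {
        K key; V value; Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity; // assumed >= 1
    private final Map<K, Node<K, V>> index = new HashMap<>();
    private Node<K, V> head, tail; // head = most recent, tail = least recent

    public DllLru(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        Node<K, V> node = index.get(key);
        if (node == null) return null;
        moveToHead(node);
        return node.value;
    }

    public void put(K key, V value) {
        Node<K, V> node = index.get(key);
        if (node != null) { node.value = value; moveToHead(node); return; }
        node = new Node<>(key, value);
        index.put(key, node);
        addToHead(node);
        if (index.size() > capacity) {   // evict the least recently used node
            index.remove(tail.key);
            removeNode(tail);
        }
    }

    private void addToHead(Node<K, V> node) {
        node.next = head;
        if (head != null) head.prev = node;
        head = node;
        if (tail == null) tail = node;
    }

    private void removeNode(Node<K, V> node) {
        if (node.prev != null) node.prev.next = node.next; else head = node.next;
        if (node.next != null) node.next.prev = node.prev; else tail = node.prev;
        node.prev = node.next = null;
    }

    private void moveToHead(Node<K, V> node) {
        removeNode(node);
        addToHead(node);
    }
}
```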

Now I have the following questions:

  1. Which one of these implementations is better for an LRU cache?

  2. Is there any other way to implement an LRU cache?

  3. In Java, should I use a HashMap to implement the LRU cache?

  4. I have seen questions asking to implement a generic LRU cache, and also questions asking to implement an LRU cache. Is a generic LRU cache different from an LRU cache?

Thanks in advance!!!

EDIT:

Another (and the easiest) way to implement an LRU cache in Java is to use LinkedHashMap and override its boolean removeEldestEntry(Map.Entry eldest) method.

Tags: java c++ caching

9 Answers

再贱就再见
Answer 2 · 2019-02-01 01:50

Problem Statement:

Create an LRU cache storing Employee objects (max 5 objects) and find out who logged in first, and so on.

package com.test.example.dto;

import java.sql.Timestamp;
/**
 * 
 * @author Vaquar.khan@gmail.com
 *
 */
public class Employee implements Comparable<Employee> {
    private int     id;
    private String  name;
    private int     age;
    private Timestamp loginTime ;

public int getId() {
    return id;
}

public void setId(int id) {
    this.id = id;
}

public String getName() {
    return name;
}

public void setName(String name) {
    this.name = name;
}

public int getAge() {
    return age;
}

public void setAge(int age) {
    this.age = age;
}

public Timestamp getLoginTime() {
    return loginTime;
}

public void setLoginTime(Timestamp loginTime) {
    this.loginTime = loginTime;
}

@Override
public String toString() {
    return "Employee [id=" + id + ", name=" + name + ", age=" + age + ", loginTime=" + loginTime + "]";
}

Employee(){}

public Employee(int id, String name, int age, Timestamp loginTime) {
    super();
    this.id = id;
    this.name = name;
    this.age = age;
    this.loginTime = loginTime;
}

@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + age;
    result = prime * result + id;
    result = prime * result + ((loginTime == null) ? 0 : loginTime.hashCode());
    result = prime * result + ((name == null) ? 0 : name.hashCode());
    return result;
}

@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null) return false;
    if (getClass() != obj.getClass()) return false;
    Employee other = (Employee) obj;
    if (age != other.age) return false;
    if (id != other.id) return false;
    if (loginTime == null) {
        if (other.loginTime != null) return false;
    } else if (!loginTime.equals(other.loginTime)) return false;
    if (name == null) {
        if (other.name != null) return false;
    } else if (!name.equals(other.name)) return false;
    return true;
}

@Override
public int compareTo(Employee emp) {
    if (emp.getLoginTime().before( this.loginTime) ){
        return 1;
    } else if (emp.getLoginTime().after(this.loginTime)) {
        return -1;
    }
    return 0;
}


}

LRUObjectCacheExample

package com.test.example;

import java.sql.Timestamp;
import java.util.Calendar;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Map.Entry;
import com.test.example.dto.Employee;
/**
 * 
 * @author Vaquar.khan@gmail.com
 *
 */
public class LRUObjectCacheExample {


    LinkedHashMap<Employee, Boolean>    lruCacheLinkedQueue;

public LRUObjectCacheExample(int capacity) {
    lruCacheLinkedQueue = new LinkedHashMap<Employee, Boolean>(capacity, 1.0f, true) {
        /**
         * 
         */
        private static final long   serialVersionUID    = 1L;

        @Override
        protected boolean removeEldestEntry(
                //calling map's entry method
                Map.Entry<Employee, Boolean> eldest) {
            return this.size() > capacity;
        }
    };
}

void addDataIntoCache(Employee employee) {
    lruCacheLinkedQueue.put(employee, true);
    displayLRUQueue();
}

boolean checkIfDataPresentIntoLRUCache(Employee employee) {
    return lruCacheLinkedQueue.get(employee) != null;
}

// The map is keyed by Employee, not by id, so look the entry up by id first
void deletePageNo(int id) {
    Employee toRemove = null;
    for (Employee employee : lruCacheLinkedQueue.keySet()) {
        if (employee.getId() == id) {
            toRemove = employee;
            break;
        }
    }
    if (toRemove != null) {
        lruCacheLinkedQueue.remove(toRemove);
    }
    displayLRUQueue();
}

 void displayLRUQueue() {
    System.out.print("-------------------------------------------------------"+"\n");
    System.out.print("Data into LRU Cache : ");
    for (Entry<Employee, Boolean> mapEntry : lruCacheLinkedQueue.entrySet()) {
        System.out.print("[" + mapEntry.getKey() + "]");
    }
    System.out.println("");
}

public static void main(String args[]) {
    Employee employee1 = new Employee(1,"Shahbaz",29, getCurrentTimeStamp());
    Employee employee2 = new Employee(2,"Amit",35,getCurrentTimeStamp());
    Employee employee3 = new Employee(3,"viquar",36,getCurrentTimeStamp());
    Employee employee4 = new Employee(4,"Sunny",20,getCurrentTimeStamp());
    Employee employee5 = new Employee(5,"sachin",28,getCurrentTimeStamp());
    Employee employee6 = new Employee(6,"Sneha",25,getCurrentTimeStamp());
    Employee employee7 = new Employee(7,"chantan",19,getCurrentTimeStamp());
    Employee employee8 = new Employee(8,"nitin",22,getCurrentTimeStamp());
    Employee employee9 = new Employee(9,"sanuj",31,getCurrentTimeStamp());
    //
    LRUObjectCacheExample lru = new LRUObjectCacheExample(5);
    lru.addDataIntoCache(employee5);//sachin
    lru.addDataIntoCache(employee4);//Sunny
    lru.addDataIntoCache(employee3);//viquar
    lru.addDataIntoCache(employee2);//Amit
    lru.addDataIntoCache(employee1);//Shahbaz -----capacity reached
    //
    lru.addDataIntoCache(employee6);//Sneha
    lru.addDataIntoCache(employee7);//chantan
    lru.addDataIntoCache(employee8);//nitin
    lru.addDataIntoCache(employee9);//sanuj
    //
    lru.deletePageNo(3);
    lru.deletePageNo(4);
}

private static Timestamp getCurrentTimeStamp(){
    return new Timestamp(Calendar.getInstance().getTimeInMillis());
}

}

Results:

Data into LRU Cache :
[Employee [id=1, name=Shahbaz, age=29, loginTime=2015-10-15 18:47:28.1]]
[Employee [id=6, name=Sneha, age=25, loginTime=2015-10-15 18:47:28.125]]
[Employee [id=7, name=chantan, age=19, loginTime=2015-10-15 18:47:28.125]]
[Employee [id=8, name=nitin, age=22, loginTime=2015-10-15 18:47:28.125]]
[Employee [id=9, name=sanuj, age=31, loginTime=2015-10-15 18:47:28.125]]
疯言疯语
Answer 3 · 2019-02-01 01:56

I'd like to expand on some of these suggestions with two possible implementations: one not thread safe, and one that might be.

Here is the simplest version, with a unit test that shows it works.

First the non-concurrent version:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruSimpleCache<K, V> implements LruCache <K, V>{

    private final Map<K, V> map;


    public LruSimpleCache (final int limit) {
           map = new LinkedHashMap <K, V> (16, 0.75f, true) {
               @Override
               protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
                   return super.size() > limit;
               }
           };
    }
    @Override
    public void put ( K key, V value ) {
        map.put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map.get(key);
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        V value =  map.get ( key );
        if (value!=null) {
            map.remove ( key );
            map.put(key, value);
        }
        return value;
    }

    @Override
    public void remove ( K key ) {
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }


}

The true flag makes the map track access order for gets and puts. See the JavaDocs. Without the true flag in the constructor, removeEldestEntry would just implement a FIFO cache (see the notes below on FIFO and removeEldestEntry).

Here is the test that proves it works as an LRU cache:

public class LruSimpleTest {

    // Minimal assertion helper: throws when a check fails
    static boolean die ( String msg ) {
        throw new AssertionError ( msg );
    }

    static boolean die () {
        return die ( "died" );
    }

    @Test
    public void test () {
        LruCache <Integer, Integer> cache = new LruSimpleCache<> ( 4 );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();


        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();


        if ( !ok ) die ();

    }
}

Now for the concurrent version...

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LruSimpleConcurrentCache<K, V> implements LruCache<K, V> {

    final CacheMap<K, V>[] cacheRegions;


    private static class CacheMap<K, V> extends LinkedHashMap<K, V> {
        private final ReadWriteLock readWriteLock;
        private final int limit;

        CacheMap ( final int limit, boolean fair ) {
            super ( 16, 0.75f, true );
            this.limit = limit;
            readWriteLock = new ReentrantReadWriteLock ( fair );

        }

        protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
            return super.size () > limit;
        }


        @Override
        public V put ( K key, V value ) {
            readWriteLock.writeLock ().lock ();

            V old;
            try {

                old = super.put ( key, value );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return old;

        }


        @Override
        public V get ( Object key ) {
            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.get ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        @Override
        public V remove ( Object key ) {

            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.remove ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public V getSilent ( K key ) {
            readWriteLock.writeLock ().lock ();

            V value;

            try {

                value = this.get ( key );
                if ( value != null ) {
                    this.remove ( key );
                    this.put ( key, value );
                }
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public int size () {
            readWriteLock.readLock ().lock ();
            int size = -1;
            try {
                size = super.size ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return size;
        }

        public String toString () {
            readWriteLock.readLock ().lock ();
            String str;
            try {
                str = super.toString ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return str;
        }


    }

    public LruSimpleConcurrentCache ( final int limit, boolean fair ) {
        int cores = Runtime.getRuntime ().availableProcessors ();
        int stripeSize = cores < 2 ? 4 : cores * 2;
        cacheRegions = new CacheMap[ stripeSize ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    public LruSimpleConcurrentCache ( final int concurrency, final int limit, boolean fair ) {

        cacheRegions = new CacheMap[ concurrency ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    private int stripeIndex ( K key ) {
        // Mask off the sign bit so a negative hashCode cannot produce a negative index
        int hashCode = key.hashCode () * 31;
        return ( hashCode & 0x7FFFFFFF ) % cacheRegions.length;
    }

    private CacheMap<K, V> map ( K key ) {
        return cacheRegions[ stripeIndex ( key ) ];
    }

    @Override
    public void put ( K key, V value ) {

        map ( key ).put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map ( key ).get ( key );
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        return map ( key ).getSilent ( key );

    }

    @Override
    public void remove ( K key ) {
        map ( key ).remove ( key );
    }

    @Override
    public int size () {
        int size = 0;
        for ( CacheMap<K, V> cache : cacheRegions ) {
            size += cache.size ();
        }
        return size;
    }

    public String toString () {

        StringBuilder builder = new StringBuilder ();
        for ( CacheMap<K, V> cache : cacheRegions ) {
            builder.append ( cache.toString () ).append ( '\n' );
        }

        return builder.toString ();
    }


}

You can see why I cover the non-concurrent version first. The version above creates stripes to reduce lock contention: it hashes the key and uses that hash to pick the stripe holding the actual cache. This makes the limit more of a suggestion or rough target, with an error margin that depends on how well spread your keys' hash algorithm is.
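The effect of striping can be illustrated with a toy count of how keys distribute across stripes (StripeDemo is a hypothetical standalone example, assuming sequential integer keys and a sign-masked modulo for stripe selection):

```java
// Demonstrates why striping makes the total limit approximate:
// each stripe enforces limit/stripes locally, so how close the global
// size tracks the nominal limit depends on how evenly keys hash.
public class StripeDemo {
    public static void main(String[] args) {
        int stripes = 4;
        int[] counts = new int[stripes];
        for (int key = 0; key < 1000; key++) {
            int hash = Integer.hashCode(key) * 31;
            counts[(hash & 0x7FFFFFFF) % stripes]++;
        }
        for (int i = 0; i < stripes; i++) {
            System.out.println("stripe " + i + ": " + counts[i]);
        }
    }
}
```

With sequential keys the spread is perfectly even (250 per stripe); skewed hash codes would crowd some stripes and starve others.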

Anthone
Answer 4 · 2019-02-01 02:01

For O(1) access we need a hash table, and to maintain order we can use a doubly linked list (DLL). The basic algorithm: from the page number, the hash table gives us the DLL node. If the page exists, we move its node to the head of the DLL; otherwise we insert a new node into both the DLL and the hash table. If the DLL is full, we remove the least recently used node from the tail (and its entry from the hash table).

Here is an implementation based on doubly linked list and unordered_map in C++.

#include <iostream>
#include <unordered_map>
#include <utility>

using namespace std;

// List node class
class Node
{
    public:
    int data;
    Node* next;
    Node* prev;
};

//Doubly linked list
class DLList
{
    public:

    DLList()
    {
        head = NULL;
        tail = NULL;
        count = 0;
    }

    ~DLList() {}

    Node* addNode(int val);
    void print();
    int removeTail();   // returns the value of the evicted node
    void moveToHead(Node* node);

    int size()
    {
        return count;
    }

    private:
    Node* head;
    Node* tail;
    int count;
};

// Function to add a node at the head (most recently used end)

Node* DLList::addNode(int val)
{
    Node* temp = new Node();

    temp->data = val;
    temp->next = NULL;
    temp->prev = NULL;

    if ( tail == NULL )
    {
        tail = temp;
        head = temp;
    }
    else
    {
        head->prev = temp;
        temp->next = head;
        head = temp;
    }

    count++;

    return temp;
}

void DLList::moveToHead(Node* node)
{
    if (head == node)
        return;

    node->prev->next = node->next;

    if (node->next != NULL)
    {
        node->next->prev = node->prev;
    }
    else
    {
        tail = node->prev;
    }

    node->next = head;
    node->prev = NULL;
    head->prev = node;
    head = node;
}

int DLList::removeTail()
{
    count--;
    int evicted = tail->data;

    if (head == tail)
    {
        delete head;
        head = NULL;
        tail = NULL;
    }
    else
    {
        Node* del = tail;
        tail = del->prev;
        tail->next = NULL;
        delete del;
    }

    return evicted;
}

void DLList::print()
{
    Node* temp = head;

    while (temp != NULL)
    {
        cout << temp->data << " ";
        temp = temp->next;
    }
    cout << endl;
}

class LRUCache
{
    public:
        LRUCache(int aCacheSize);
        void fetchPage(int pageNumber);

    private:
        int cacheSize;
        DLList dlist;
        unordered_map< int, Node* > directAccess;
};

LRUCache::LRUCache(int aCacheSize):cacheSize(aCacheSize) { }

void LRUCache::fetchPage(int pageNumber)
{
    unordered_map< int, Node* >::const_iterator it = directAccess.find(pageNumber);

    if (it != directAccess.end())
    {
        // Cache hit: mark the page as most recently used
        dlist.moveToHead(it->second);
    }
    else
    {
        // Cache miss: evict the tail when the cache is full, and remove
        // the stale entry from the hash table as well
        if (dlist.size() >= cacheSize)
        {
            directAccess.erase(dlist.removeTail());
        }

        Node* node = dlist.addNode(pageNumber);

        directAccess.insert(pair< int, Node* >(pageNumber, node));
    }

    dlist.print();
}

int main()
{
    LRUCache lruCache(10);

    lruCache.fetchPage(5);
    lruCache.fetchPage(7);
    lruCache.fetchPage(15);
    lruCache.fetchPage(34);
    lruCache.fetchPage(23);
    lruCache.fetchPage(21);
    lruCache.fetchPage(7);
    lruCache.fetchPage(32);
    lruCache.fetchPage(34);
    lruCache.fetchPage(35);
    lruCache.fetchPage(15);
    lruCache.fetchPage(37);
    lruCache.fetchPage(17);
    lruCache.fetchPage(28);
    lruCache.fetchPage(16);

    return 0;
}
Bombasti
Answer 5 · 2019-02-01 02:02

Normally, LRU caches are represented as a single queue of elements ordered by recency of use. If the queue provided by your standard library doesn't allow you to remove objects from the middle, for example to move them to the front, then you may have to roll your own.

戒情不戒烟
Answer 6 · 2019-02-01 02:06

If you want an LRU cache, the simplest option in Java is LinkedHashMap. The default behaviour is FIFO, but you can change it to "access order", which makes it an LRU cache.

public static <K,V> Map<K,V> lruCache(final int maxSize) {
    return new LinkedHashMap<K, V>(maxSize*4/3, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxSize;
        }
    };
}

Note: I am using the constructor which switches the collection from insertion order to access order.

From the Javadoc

public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder)
Constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.
Parameters:
initialCapacity - the initial capacity
loadFactor - the load factor
accessOrder - the ordering mode - true for access-order, false for insertion-order

When accessOrder is true, the LinkedHashMap re-arranges the order of the map whenever you get() an entry that is not the last one.

This way the oldest entry is the least recently used one.
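For instance, a quick demonstration of the access-order behaviour (LruDemo is an illustrative driver; the lruCache helper is repeated inside it so the example is self-contained):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // Access-ordered LinkedHashMap with a size cap, as described above
    static <K, V> Map<K, V> lruCache(final int maxSize) {
        return new LinkedHashMap<K, V>(maxSize * 4 / 3, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = lruCache(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");    // touch "a" so it becomes most recently used
        cache.put("d", 4); // evicts "b", the least recently used entry
        System.out.println(cache.keySet()); // prints [c, a, d]
    }
}
```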

爷的心禁止访问
Answer 7 · 2019-02-01 02:07

Extending Peter Lawrey's answer

The removeEldestEntry(java.util.Map.Entry) method may be overridden to impose a policy for removing stale mappings automatically when new mappings are added to the map.

So LinkedHashMap's FIFO behavior can be overridden to make it LRU:

public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    private int cacheSize;

    public LRUCache(int cacheSize) {
        super(cacheSize * 4 / 3, 0.75f, true);
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Use > rather than >= so the cache can actually hold cacheSize entries
        return size() > cacheSize;
    }

    public static void main(String args[]) {
        LRUCache<Integer, Integer> lruCache = new LRUCache<>(5);

        lruCache.put(1, 1);
        lruCache.put(2, 2);
        lruCache.put(3, 3);
        lruCache.put(1, 4);
        lruCache.put(2, 5);
        lruCache.put(7, 6);

        System.out.println(lruCache.keySet());

        lruCache.put(1, 4);
        lruCache.put(2, 5);

        System.out.println(lruCache.keySet());
    }
} 
查看更多