Question:
Is there a technique that lets me specify a number n such that when the (n + 1)th entry is inserted, the oldest entry is removed first, ensuring that the size of the hashtable never exceeds n?
Answer 1:
LinkedHashMap does exactly that; see the Javadoc for the removeEldestEntry method.
Something like this should do the trick, removing the oldest inserted entry once the map grows past N:
Map<K, V> map = new LinkedHashMap<K, V>() {
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > N;
    }
};
You can also remove the least recently accessed entry instead, by enabling access order via the three-argument constructor:
Map<K, V> map = new LinkedHashMap<K, V>(16, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > N;
    }
};
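A minimal, self-contained sketch of the insertion-order variant above, assuming String keys, Integer values, and a limit of N = 3 (the class name and limit are chosen for illustration only):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedMapDemo {
    static final int N = 3; // size limit, chosen for this sketch

    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<String, Integer>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                // Called after every put/putAll; returning true evicts the eldest entry.
                return size() > N;
            }
        };
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.put("d", 4); // "a" was inserted first, so it is evicted here
        System.out.println(map.keySet()); // [b, c, d]
    }
}
```

Note that removeEldestEntry is consulted on every insertion, so the map never holds more than N entries after a put returns.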
Answer 2:
You're looking for an LRU cache, perhaps? Here's a blog post on one based on LinkedHashMap.
Answer 3:
You may want to consider using Apache Commons Collections, which has several LRU implementations (such as LRUMap). Otherwise, you can easily write a similar wrapper around the standard library collections; I don't think there's one you can use directly.
Answer 4:
You could use a double-ended queue (Deque) and simply remove the first item when you're at the max count.
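A minimal sketch of that idea: a Deque alone gives ordering but not keyed lookup, so this pairs an ArrayDeque (tracking insertion order) with a HashMap (for O(1) lookups). The class and method names here are illustrative, not from the answer:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative bounded map: the deque records insertion order,
// the hash map holds the actual entries.
public class BoundedMap<K, V> {
    private final int maxSize;
    private final Deque<K> order = new ArrayDeque<>();
    private final Map<K, V> entries = new HashMap<>();

    public BoundedMap(int maxSize) {
        this.maxSize = maxSize;
    }

    public void put(K key, V value) {
        if (!entries.containsKey(key)) {
            if (entries.size() >= maxSize) {
                // Evict the oldest key before admitting a new one.
                entries.remove(order.removeFirst());
            }
            order.addLast(key);
        }
        entries.put(key, value);
    }

    public V get(K key) {
        return entries.get(key);
    }

    public int size() {
        return entries.size();
    }
}
```

Compared to the LinkedHashMap approach, this is more code for the same behavior, but it makes the eviction mechanism explicit.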
Answer 5:
If you have concurrency needs, don't try to solve this problem yourself. Guava's CacheBuilder has a maximumSize() method that lets you limit the size of a map, although as I understand it, old entries may be evicted before you actually reach the limit.
There's an interesting page on the data structure's design, which should impress on the reader how difficult it would be to do any better than Google's implementation. :)
Answer 6:
If you're caching, you could use a WeakHashMap or WeakReferences and then not have to worry about the size of the cache.
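A minimal WeakHashMap sketch of that idea. Entries whose keys are no longer strongly referenced become eligible for removal after garbage collection; the timing is entirely up to the GC, so the size is bounded by reachability rather than by a fixed n (the names below are chosen for illustration):

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheDemo {
    public static void main(String[] args) {
        Map<Object, String> cache = new WeakHashMap<>();

        Object liveKey = new Object();     // strongly referenced: entry stays in the map
        cache.put(liveKey, "kept");
        cache.put(new Object(), "doomed"); // no strong reference to this key

        // Hint only: the "doomed" entry may be cleared any time after its key
        // becomes unreachable; there is no guarantee of when.
        System.gc();

        System.out.println(cache.get(liveKey)); // "kept" — key is still strongly reachable
    }
}
```

This trades the predictable size bound of the other answers for automatic, memory-pressure-driven cleanup, so it fits caches where stale entries are cheap to recompute.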