Is there a technique by which I can specify a number n such that, when the (n + 1)th entry is inserted, the oldest entry is removed first, ensuring that the size of the hashtable never exceeds n?
You may want to consider using Apache Commons Collections, which has several LRU implementations (e.g. LRUMap). Otherwise, you can easily write a similar wrapper around the standard library collections; I don't think there's one you can use directly.
You're looking for an LRU cache, perhaps? Here's a blog post on one based on LinkedHashMap.
If you're caching you could use a WeakHashMap or WeakReference and then not have to worry about the size of the cache.
You could use a double-ended queue (Deque) alongside the map, and remove the first (oldest) item whenever you're at the maximum count.
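A minimal sketch of that idea, pairing a `HashMap` with an `ArrayDeque` of keys in insertion order (the class name `BoundedMap` and its API are illustrative, not from the original answer):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: when inserting a new key would exceed maxSize,
// the key at the head of the deque (the oldest inserted) is evicted first.
class BoundedMap<K, V> {
    private final int maxSize;
    private final Map<K, V> map = new HashMap<>();
    private final Deque<K> order = new ArrayDeque<>();

    BoundedMap(int maxSize) { this.maxSize = maxSize; }

    void put(K key, V value) {
        if (!map.containsKey(key)) {
            if (map.size() >= maxSize) {
                // Evict the oldest entry before admitting the new one.
                map.remove(order.pollFirst());
            }
            order.addLast(key);
        }
        map.put(key, value);
    }

    V get(K key) { return map.get(key); }

    int size() { return map.size(); }
}
```

Note that this variant evicts by insertion age only; re-putting an existing key does not refresh its position, so it is FIFO eviction rather than true LRU.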
If you have concurrency needs, don't try to solve this problem yourself. Guava's CacheBuilder has a .maximumSize() method that allows you to limit the size of a map, although it's my understanding that old entries may be cleaned out before you actually reach the limit.
There's an interesting page on the data structure's design, which should impress on the reader how difficult it would be to do any better than Google's implementation. :)
LinkedHashMap does exactly that; see the Javadoc for the removeEldestEntry method.
Something like this should do the trick; it will remove the oldest-inserted entry:
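The original snippet isn't shown on this page; a minimal sketch of the usual `removeEldestEntry` override (the limit `MAX_ENTRIES` is an illustrative value for n):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class InsertionOrderCache<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_ENTRIES = 100; // illustrative limit "n"

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true tells LinkedHashMap to drop its eldest entry
        // whenever an insertion pushes the size past the limit.
        return size() > MAX_ENTRIES;
    }
}
```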
You can also remove the oldest-accessed entry instead, by requesting access order in the constructor:
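Again the original code isn't shown; a sketch using the three-argument `LinkedHashMap` constructor with `accessOrder = true` (the class name `LruCache` is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruCache(int maxEntries) {
        // accessOrder = true: iteration order (and hence the "eldest"
        // entry) is least-recently-accessed rather than insertion order.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

With access order enabled, calling `get` refreshes an entry, so the entry evicted is the one that has gone longest without being read or written.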