I need to store a one-to-one mapping. The dataset consists of a large number (10M+) of key-value pairs of the same kind. In Java, for example, one could store such data in a single HashMap instance.
The first way to do this is to store lots of key-value pairs, like this:
SET map:key1 value1
...
SET map:key900000 value900000
GET map:key1
The second option is to use a single "Hash":
HSET map key1 value1
...
HSET map key900000 value900000
HGET map key1
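To make the keyspace difference concrete, here is a toy sketch (plain Python dicts standing in for the Redis keyspace, not a real client; the sample keys and counts are illustrative): the first approach creates one top-level key per pair, while the hash approach creates a single top-level key.

```python
# Toy model of the Redis keyspace using plain dicts (no real Redis involved);
# the five sample pairs here are illustrative only.
pairs = {f"key{i}": f"value{i}" for i in range(1, 6)}

# Approach 1: one top-level key per pair ("SET map:key1 value1", ...)
keyspace_plain = {f"map:{k}": v for k, v in pairs.items()}

# Approach 2: a single top-level hash key ("HSET map key1 value1", ...)
keyspace_hash = {"map": dict(pairs)}

print(len(keyspace_plain))  # 5 top-level keys
print(len(keyspace_hash))   # 1 top-level key
```

Both lookups return the same value, but the second keyspace holds only one top-level key regardless of how many pairs are stored.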
Redis Hashes have some convenient commands (HMSET, HMGET, HGETALL, etc.), and they don't pollute the keyspace, so this looks like the better option. However, are there any performance or memory considerations when using this approach?
Yes, as Itamar Haber says, you should look at the Redis memory optimization guide. But you should also keep a few things in mind:

- hash-max-zipmap-entries and hash-max-zipmap-value are the settings to tune if memory is the main target. Be sure to understand what hash-max-zipmap-entries and hash-max-zipmap-value mean, and take some time to read about the ziplist encoding.
- A single hash with 10M+ keys would be too slow to access (and far too large to keep the memory-efficient ziplist encoding), so you should break the one HSET into several slots. For example, set hash-max-zipmap-entries to 10,000; then, to store 10M+ keys, you need 1,000+ hash keys with 10,000 fields each. As a rough example, pick the slot with crc32(key) % maxHsets.

It may also be useful to read about:
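The crc32-based bucketing could be sketched as follows (a minimal sketch: the bucket count, key prefix, and function name are assumptions for illustration, not part of the answer):

```python
import zlib

# Assumed bucket count: ~10M keys / 10,000 fields per hash = 1,000 buckets.
NUM_BUCKETS = 1000

def bucket_key(key: str, prefix: str = "map", num_buckets: int = NUM_BUCKETS) -> str:
    """Map a logical key to one of num_buckets hash keys via crc32(key) % num_buckets."""
    slot = zlib.crc32(key.encode("utf-8")) % num_buckets
    return f"{prefix}:{slot}"

# With a real client the calls would look roughly like (not executed here):
#   r.hset(bucket_key("key1"), "key1", "value1")
#   r.hget(bucket_key("key1"), "key1")

print(bucket_key("key1"))  # e.g. "map:<slot>" for some slot in 0..999
```

Because crc32 is deterministic, reads always compute the same bucket that the write used, so no extra lookup table is needed.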