Segmenting Redis By Database

Posted 2019-05-04 09:02

Question:

By default, Redis is configured with 16 databases, numbered 0-15. Is this simply a form of namespacing, or are there performance implications of segregating by database?

For example, if I use the default database (0) and I have 10 million keys, best practices suggest that using the keys command to find keys by wildcard pattern will be inefficient. But what if I store my major keys (say, the first 4 segments of my 8-segment keys) in a separate database (say database 3), resulting in a much smaller subset of keys there? Will Redis see these as a smaller set of keys, or do all keys across all databases appear as one giant index of keys?

More explicitly put, in terms of time complexity, if my databases look like this:

  • Database 0: 10,000,000 keys
  • Database 3: 10,000 keys

will the time complexity of a keys call against Database 3 be O(10M) or O(10k)?
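For illustration, here is a minimal sketch of that setup using the redis-py client; the host, port, and key names are placeholders invented for this example, not anything from the original question.

```python
import redis

db0 = redis.Redis(host="localhost", port=6379, db=0)   # the large keyspace
db3 = redis.Redis(host="localhost", port=6379, db=3)   # the small "major key" database

# Store a full 8-segment key in database 0 ...
db0.set("a:b:c:d:e:f:g:h", "value")
# ... and only its 4-segment prefix in database 3.
db3.set("a:b:c:d", 1)

# keys only scans the dictionary of the database this connection has selected,
# so this call would walk the ~10k keys in database 3, not the ~10M in database 0.
print(db3.keys("a:b:*"))
```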

Thanks for your time.

Answer 1:

Redis has a separate dictionary for each database. In your example, the keys call against database 3 will be O(10k).

That said, using keys is against best practice. Using multiple databases for the same application is also against best practice. If you want to iterate over keys, you should index them in an application-specific way; a sorted set is a good way to build such an index.
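As a rough sketch of that suggestion (assuming the redis-py client; the index key name idx:keys, the sample keys, and the ASCII-only assumption are mine, not the original answer's):

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def save(key, value):
    """Write the data key and record its name in a sorted-set index."""
    r.set(key, value)
    # Score 0 for every member, so the sorted set orders members lexicographically.
    r.zadd("idx:keys", {key: 0})

save("user:100:profile:name", "Alice")
save("user:100:profile:email", "alice@example.com")
save("user:200:profile:name", "Bob")

# Prefix lookup via the index instead of scanning the whole keyspace with keys.
# The "\xff" upper bound works for ASCII key names.
prefix = "user:100:"
matches = r.zrangebylex("idx:keys", "[" + prefix, "[" + prefix + "\xff")
print(matches)  # the two user:100:* key names
```

ZRANGEBYLEX is O(log(N)+M), so the lookup cost depends on the index size and the number of matches rather than on the total keyspace.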

References:

  1. The structure redisServer has an array of redisDb. See redisServer in redis.h.
  2. Each redisDb has its own dictionary object. See redisDb in redis.h.
  3. The keys command operates on the dictionary of the currently selected database.