Locking and Redis

Posted 2020-05-23 16:29

We have 75 (and growing) servers that need to share data via Redis. Ideally, all 75 servers would write to two keys in Redis using INCRBYFLOAT operations. We anticipate eventually having millions of daily writes and billions of daily reads on these two keys. The data must be persistent.
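
Concretely, each server's write path would look roughly like the sketch below (the redis-py client is assumed; the key names metrics:total_a / metrics:total_b and the connection details are placeholders, not part of the original post):

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def record_sample(value_a: float, value_b: float) -> None:
        # Each INCRBYFLOAT is a single command, executed atomically by
        # the server, so all 75 servers can call this concurrently
        # without any client-side coordination.
        r.incrbyfloat("metrics:total_a", value_a)
        r.incrbyfloat("metrics:total_b", value_b)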

We're concerned that Redis locking might force write operations to be retried repeatedly when many clients simultaneously attempt to increment the same key.

Questions:

  • Are multiple simultaneous INCRBYFLOAT operations on a single key a bad idea under very heavy load?
  • Should we instead have an external process "summarize" separate keys and write the two totals? (This introduces another point of failure.)
  • Will reads of those two keys block while writes are in progress?

2 Answers
SAY GOODBYE
#2 · 2020-05-23 17:19

Since Redis is single-threaded, writes will generally block reads, so you will probably want to use master-slave replication to route writes to the master and serve reads from replicas.
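
A rough sketch of that split, assuming the redis-py client and hypothetical hostnames redis-master / redis-replica:

    import redis

    master = redis.Redis(host="redis-master", port=6379)    # all writes go here
    replica = redis.Redis(host="redis-replica", port=6379)  # reads are served here

    master.incrbyfloat("metrics:total_a", 1.5)

    # Note: Redis replication is asynchronous, so a read from the
    # replica may briefly lag behind the latest write to the master.
    value = float(replica.get("metrics:total_a") or 0.0)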

Alternatively, you can consider Apache ZooKeeper for this; it provides reliable cluster coordination without a single point of failure (unlike a single Redis instance).

放我归山
#3 · 2020-05-23 17:21

Redis does not lock. It is single-threaded, so each command runs atomically and there are no race conditions. Reads and writes do not block each other.

You can run millions of INCRBYFLOAT operations against the same key without any problems. There is no need for an external summarizing process, and reading those keys poses no problems either.
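
As a quick illustration of that atomicity (a sketch assuming redis-py against a local instance, not code from the original thread): ten threads hammering one key never lose an update, because each INCRBYFLOAT executes as a single atomic server-side command.

    import threading
    import redis

    r = redis.Redis(host="localhost", port=6379)
    r.set("counter", 0)

    def worker():
        # 1000 increments of 0.5 per thread; no lock is needed because
        # INCRBYFLOAT is atomic on the server.
        for _ in range(1000):
            r.incrbyfloat("counter", 0.5)

    threads = [threading.Thread(target=worker) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(float(r.get("counter")))  # 10 * 1000 * 0.5 = 5000.0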

That said, millions of updates concentrated on just two keys sounds unusual. If you can explain your use case, there may be a better way to model it within Redis.
