How to resume hash slots of a particular node in Redis Cluster

Published 2019-06-01 07:08

So I am testing out Redis Cluster. I have a setup with 3 masters and 3 slaves. Now, if a node suffers a hard failure (both the master and its slave go down), the cluster is still functional, barring the hash slots served by the failed node. While testing such a scenario, I see that reads/writes on keys belonging to those hash slots fail with exceptions, which is expected (I'm using jedis, btw). However, since I am using Redis Cluster as a cache, I would like those hash slots to be served by some other node. This functionality doesn't seem to be present in the redis-trib utility.
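For context, a key maps to a slot via CRC16(key) mod 16384, so you can check whether a failing key falls in the dead node's slot range. Below is a minimal sketch in Python (the CRC16 here is the XMODEM variant Redis Cluster uses; the example keys are just illustrations):

```python
# Compute the Redis Cluster hash slot for a key, so you can tell which
# keys land on the failed node's slot range.

def crc16_xmodem(data: bytes) -> int:
    # CRC16/XMODEM: polynomial 0x1021, initial value 0.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # If the key contains a non-empty {...} hash tag, only that part is hashed.
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("foo"))  # 12182, matching the redirect shown in the cluster tutorial
print(hash_slot("{user1000}.following") == hash_slot("{user1000}.followers"))  # True
```

Keys sharing a hash tag (the `{...}` part) always land on the same slot, which is why multi-key operations require them.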

I cannot reshard the cluster to move these hash slots, as ./redis-trib.rb reshard fails with [ERR] Not all #{ClusterHashSlots} slots are covered by nodes. I also cannot remove the node from the cluster, as ./redis-trib.rb del-node fails with [ERR] Node #{node} is not empty! Reshard data away and try again. What is the best way, then, to deal with a scenario where I cannot bring my original node back up but want its hash slots served by some other node (assuming I am fine with losing the data on the old node)? Ideally, I'd like something like removing that node (master and slave) from the cluster and assigning its hash slots to some other node.

1 Answer
放我归山
Answered 2019-06-01 07:57

This fixes the cluster by assigning all the slots that were served by the failed node to some reachable node. The underlying mechanism is the CLUSTER ADDSLOTS command, but it is tedious to do by hand, so I suggest this tool developed by our team.
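If you prefer to do it manually, the idea is: read the slot ranges the surviving masters still cover (e.g. from CLUSTER SLOTS), compute the gaps, and run CLUSTER ADDSLOTS for the missing slots on a node you choose. A sketch of the offline part, assuming the failed master owned slots 5461-10922 (the covered ranges below are made up for illustration):

```python
# Given the slot ranges still covered by live masters (as reported by
# CLUSTER SLOTS), compute the uncovered slots and batch them into
# argument lists for CLUSTER ADDSLOTS.

def missing_slots(covered, total=16384):
    # CLUSTER SLOTS reports inclusive (start, end) ranges.
    covered_set = set()
    for start, end in covered:
        covered_set.update(range(start, end + 1))
    return sorted(set(range(total)) - covered_set)

def addslots_batches(slots, batch_size=1000):
    # Send the slots in chunks to keep each command line manageable:
    #   redis-cli -h <host> -p <port> CLUSTER ADDSLOTS <slot> <slot> ...
    return [slots[i:i + batch_size] for i in range(0, len(slots), batch_size)]

covered = [(0, 5460), (10923, 16383)]   # illustrative: slots 5461-10922 are lost
gaps = missing_slots(covered)
print(len(gaps))                        # 5462 slots to reassign
print(len(addslots_batches(gaps)))      # 6 batches
```

Note that, as in the question, any data on the failed node is gone; this only restores slot coverage so the cluster serves reads/writes again.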

Usage (in shell):

# it requires Python 2.7; install it via pip
pip install redis-trib

# suppose one of the accessible nodes is serving at 172.0.0.1:7000
# start a cluster-mode Redis that is not involved in any cluster
# suppose its address is 172.0.0.5:8000
redis-trib.py rescue --existing-addr 172.0.0.1:7000 --new-addr 172.0.0.5:8000

After that, the new node will serve all of the failed slots, and the cluster state will return to ok.
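To verify the recovery, check that cluster_state is ok and all 16384 slots are assigned, e.g. via redis-cli CLUSTER INFO. A small parsing sketch (the sample output below is illustrative, not captured from a real cluster):

```python
# Parse the key:value lines of CLUSTER INFO output to confirm recovery.
sample = """cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_known_nodes:6
"""

def parse_cluster_info(text):
    info = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        info[key] = value.strip()
    return info

info = parse_cluster_info(sample)
print(info["cluster_state"])           # ok
print(info["cluster_slots_assigned"])  # 16384
```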
