Bulk ingest into Redis

Posted 2019-02-01 10:02

I'm trying to load a large piece of data into Redis as fast as possible.

My data looks like:

771240491921 SOME;STRING;ABOUT;THIS;LENGTH
345928354912 SOME;STRING;ABOUT;THIS;LENGTH

There is a ~12-digit number on the left and a variable-length string on the right. The number on the left will be the key and the string on the right will be the value.

With an out-of-the-box Redis install and an uncompressed plain-text file of this data, I can load about a million records per minute. I need to load about 45 million, which would take about 45 minutes. 45 minutes is too long.

Are there standard performance tweaks for this kind of bulk load? Would I get better performance by sharding across separate instances?

Tags: redis
2 Answers
啃猪蹄的小仙女 · 2019-02-01 10:31

I like what Salvatore proposed, but here is one more very straightforward way: generate a feed of commands for the cli, e.g.

SET xxx yyy
SET xxx yyy
SET xxx yyy

Pipe it into redis-cli on a server close to you. Then do a SAVE, shut the instance down, and move the resulting data file to the destination server.
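
For illustration, a minimal Python sketch of that feed generation (the file name data.txt and the script name are my assumptions, not something from the question):

# gen_set_feed.py -- sketch only: assumes the input file "data.txt" holds one
# "<key> <value>" record per line, split on the first space.
import sys

with open("data.txt") as f:
    for line in f:
        line = line.rstrip("\n")
        if not line:
            continue
        key, _, value = line.partition(" ")
        # One plain-text SET command per record, ready to be piped into redis-cli.
        sys.stdout.write(f"SET {key} {value}\n")

Then something like python gen_set_feed.py | redis-cli replays it; this naive form only works because the sample values contain no spaces or quotes.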

Summer. ? 凉城 · 2019-02-01 10:48

The fastest way to do this is the following: generate the Redis protocol out of this data. The documentation for generating the Redis protocol is on the redis.io site; it is a trivial protocol. Once you have that, just call the file appendonly.log and start Redis in append-only mode.
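
As a sketch of what "generate the protocol" means (the helper and file names below are illustrative assumptions, not part of the original answer): each command is written as a RESP array, i.e. *<argc> followed by a $<length>/payload pair per argument, every element terminated by \r\n.

# gen_resp.py -- sketch only: writes SET commands in the Redis protocol (RESP)
# so the resulting file can be used as appendonly.log or streamed over a socket.
def resp_command(*args: str) -> bytes:
    """Encode one command as a RESP array of bulk strings."""
    out = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode("utf-8")
        out.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(out)

with open("data.txt") as f, open("appendonly.log", "wb") as aof:
    for line in f:
        line = line.rstrip("\n")
        if not line:
            continue
        key, _, value = line.partition(" ")
        aof.write(resp_command("SET", key, value))

The same output could equally be streamed straight to the server over a socket, which is what the netcat variant below does.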

You can even do a FLUSHALL command first and then push the data into your server with netcat, redirecting the output to /dev/null.

This will be super fast: there is no RTT to wait for, it's just bulk loading of data.

A less hackish way: just insert things 1,000 at a time using pipelining. It's almost as fast as generating the protocol, but much cleaner :)
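
A rough sketch of that pipelined variant using the redis-py client (the client choice, batch handling, and file name are my assumptions; the answer only says "pipelining, 1,000 at a time"):

# pipeline_load.py -- sketch only: batches of 1000 SETs per pipeline flush.
import redis

BATCH_SIZE = 1000
r = redis.Redis(host="localhost", port=6379)
pipe = r.pipeline(transaction=False)  # plain pipelining, no MULTI/EXEC wrapper

count = 0
with open("data.txt") as f:
    for line in f:
        line = line.rstrip("\n")
        if not line:
            continue
        key, _, value = line.partition(" ")
        pipe.set(key, value)
        count += 1
        if count % BATCH_SIZE == 0:
            pipe.execute()   # one round trip flushes the whole batch
pipe.execute()               # flush whatever is left in the last partial batch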
