I am looking at Redis to provide intermediate cache storage, with a lot of computation around set operations such as intersection and union.
I have looked at the Redis website and found that Redis is not designed to take advantage of a multi-core CPU. My question is: why is that?
Also, if that is the case, how can we achieve 100% utilization of CPU resources with Redis on a multi-core machine?
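For reference, this is the kind of server-side set computation I have in mind, sketched here with the Jedis Java client (key names and values are made up for illustration):

```java
import java.util.Set;
import redis.clients.jedis.Jedis;

public class SetOpsExample {
    public static void main(String[] args) {
        // Hypothetical keys; SINTER and SUNION are standard Redis set commands.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.sadd("users:likes:redis", "alice", "bob");
            jedis.sadd("users:likes:java", "bob", "carol");

            // Both operations are computed inside the Redis server, not in the client.
            Set<String> both = jedis.sinter("users:likes:redis", "users:likes:java");   // members in both sets
            Set<String> either = jedis.sunion("users:likes:redis", "users:likes:java"); // members in either set

            System.out.println(both);
            System.out.println(either);
        }
    }
}
```

These operations run inside the Redis server process, which is why I want to understand how Redis uses the available cores.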
The Redis server is single-threaded, but you can still reach 100% utilization of CPU resources by running multiple Redis nodes (masters and/or slaves).
Read operations can be scaled with a Redis master/slave configuration using a single master: one CPU core serves the master node and the remaining cores serve slaves.
Write operations can be scaled with a Redis cluster configuration using multiple masters: several CPU cores serve master nodes and the remaining cores serve slaves.
Redisson is a Redis Java client that provides full support for Redis Cluster. It works with AWS ElastiCache and Azure Redis Cache, and includes master/slave discovery and topology updates.
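To make the two setups above concrete, here is a minimal Redisson sketch. Addresses, ports and key names are placeholders, and the exact RSet method names should be checked against the Redisson Javadoc:

```java
import java.util.Set;

import org.redisson.Redisson;
import org.redisson.api.RSet;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.redisson.config.ReadMode;

public class RedissonScalingSketch {
    public static void main(String[] args) {
        // Master/slave setup: writes go to the single master, reads are
        // spread over the slaves (e.g. one redis-server process per core).
        Config config = new Config();
        config.useMasterSlaveServers()
              .setMasterAddress("redis://127.0.0.1:6379")
              .addSlaveAddress("redis://127.0.0.1:6380", "redis://127.0.0.1:6381")
              .setReadMode(ReadMode.SLAVE);

        // To scale writes instead, a cluster setup with multiple masters would be used:
        // config.useClusterServers()
        //       .addNodeAddress("redis://127.0.0.1:7000", "redis://127.0.0.1:7001");

        RedissonClient redisson = Redisson.create(config);

        RSet<String> a = redisson.getSet("tags:a");
        RSet<String> b = redisson.getSet("tags:b");
        a.add("redis");
        b.add("redis");

        // Set intersection computed on the server side.
        Set<String> common = a.readIntersection("tags:b");
        System.out.println(common);

        redisson.shutdown();
    }
}
```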
It is a design decision.
A reason for choosing an event-driven approach is that synchronization between threads comes at a cost at both the software level (code complexity) and the hardware level (context switching). Add to this that the bottleneck of Redis is usually the network, not the CPU. On the other hand, a single-threaded architecture has its own benefits (for example, the guarantee of atomicity).
Therefore, an event loop seems like a good design for an efficient and scalable system like Redis.
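As a small illustration of the atomicity point: because the server executes each command in its single thread, concurrent clients never need client-side locking for something like a counter. A sketch with the Jedis client (host, port and key name are assumptions):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class AtomicityDemo {
    public static void main(String[] args) throws InterruptedException {
        try (JedisPool pool = new JedisPool("localhost", 6379)) {
            try (Jedis setup = pool.getResource()) {
                setup.del("counter");
            }
            ExecutorService executor = Executors.newFixedThreadPool(8);
            for (int i = 0; i < 1000; i++) {
                executor.submit(() -> {
                    try (Jedis jedis = pool.getResource()) {
                        // Each INCR runs atomically inside the single Redis thread.
                        jedis.incr("counter");
                    }
                });
            }
            executor.shutdown();
            executor.awaitTermination(1, TimeUnit.MINUTES);
            try (Jedis jedis = pool.getResource()) {
                // Always "1000": no lost updates, no client-side locking needed.
                System.out.println(jedis.get("counter"));
            }
        }
    }
}
```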
The Redis approach to scaling over multiple cores is sharding, most commonly together with Twemproxy.
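With Twemproxy the application connects to the proxy as if it were a single Redis server, while the proxy shards keys over several redis-server processes (for example, one per core). A rough sketch with Jedis; the host and the 22121 port are simply the values used in Twemproxy's own examples, and the `{...}` hash tags assume the proxy is configured with `hash_tag: "{}"`:

```java
import java.util.Set;
import redis.clients.jedis.Jedis;

public class TwemproxySketch {
    public static void main(String[] args) {
        // The client talks to Twemproxy, which distributes keys across the shards.
        try (Jedis jedis = new Jedis("127.0.0.1", 22121)) {
            jedis.sadd("{article}:1:viewers", "alice", "bob");
            jedis.sadd("{article}:2:viewers", "bob", "carol");

            // Caveat for this use case: multi-key commands such as SINTER generally
            // only work through the proxy when all keys hash to the same backend shard,
            // which is what the shared "{article}" hash tag is for here. Check the
            // Twemproxy command-support notes before relying on this.
            Set<String> both = jedis.sinter("{article}:1:viewers", "{article}:2:viewers");
            System.out.println(both);
        }
    }
}
```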
However, if for some reason you still want a multi-threaded approach, take a look at Thredis, but make sure you understand the implications of what its author did (you cannot use it as a replication master, for instance).