If Redis is already a part of the stack, why is Memcached still used alongside Redis?

Posted 2019-03-07 10:45

Redis can do everything that Memcached provides (LRU cache, item expiry, and, as of version 3.x — currently in beta — clustering, which can also be achieved with tools like twemproxy). The performance is similar too. Moreover, Redis adds persistence, so you do not need to warm the cache after a server restart.

There are several older answers comparing Redis and Memcached, some of which favor Redis as a replacement for Memcached (if Redis is already present in the stack).

Despite this, when studying the stacks of large web-scale companies such as Instagram, Pinterest, and Twitter, I found that they use both Memcached and Redis for different purposes, and do not use Redis for primary caching. The primary cache is still Memcached; Redis is used for logical caching built on its data structures.

As of 2014, why is Memcached still worth the pain of being added as an extra component to your stack when you already have Redis, which can do everything Memcached can? What favorable points incline architects and engineers to still include Memcached alongside an existing Redis deployment?

Update:

For our platforms, we have completely discarded Memcached and use Redis for both plain and logical caching requirements. It is highly performant, flexible, and reliable.

Some example scenarios:

  • Listing all cached keys that match a specific pattern, and reading or deleting their values. Very easy in Redis, not (easily) doable in Memcached; see the sketch after this list.
  • Storing a payload larger than 1 MB: easy in Redis, but requires slab-size tweaks in Memcached, which have performance side effects of their own.
  • Easy snapshots of the current cache content.
  • Redis Cluster is production-ready, as are the language drivers, so clustered deployment is easy too.
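
As a concrete illustration of the first scenario, here is a minimal sketch using the redis-py client against a local Redis instance; the cache:user:* pattern and connection details are assumptions made for the example.

```python
# Sketch: list, read, and delete cached keys matching a pattern with redis-py.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# SCAN iterates the keyspace incrementally, unlike the blocking KEYS command.
for key in r.scan_iter(match="cache:user:*", count=100):
    value = r.get(key)      # read the cached value
    print(key, "->", value)
    r.delete(key)           # or drop it to invalidate
```

Memcached offers no comparable server-side way to iterate keys by pattern, which is what makes the same task awkward there.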

2 Answers

Answer by 甜甜的少女心 · 2019-03-07 10:53

The main use case I see today for Memcached over Redis is the superior memory efficiency you should be able to get with plain HTML fragment caching (or similar applications). If you need to store different fields of your objects in different Memcached keys, then Redis hashes are going to be more memory efficient, but when you have a large number of key -> simple_string pairs, Memcached should be able to give you more items per megabyte.

Other good points about Memcached:

  • It is a very simple piece of code, so if you just need the functionality it provides, it is a reasonable alternative, I guess, but I never used it in production.
  • It is multi-threaded, so if you need to scale in a single-box setup it is a good fit, and you need to talk to just one instance.

I believe that Redis as a cache makes more and more sense as people move towards intelligent caching, or when they try to preserve the structure of the cached data via Redis data structures.

Comparison between Redis LRU and memcached LRU.

Neither Memcached nor Redis performs true LRU eviction; both only approximate it.

Memcached eviction is per size class and depends on the implementation details of its slab allocator. For example, if you want to add an item that fits in a given size class, Memcached will try to remove expired or not-recently-used items in that class, rather than making a global attempt to find the best eviction candidate regardless of its size.

Redis instead tries to pick a good eviction candidate when the maxmemory limit is reached, considering all objects regardless of size class, but it can only provide an approximately good object, not necessarily the object with the greatest idle time.

Redis does this by sampling a few objects and picking the one that has been idle (not accessed) for the longest time. Since Redis 3.0 (currently in beta) the algorithm was improved to also maintain a pool of good candidates across evictions, so the approximation has improved. In the Redis documentation you can find a description and graphs with details about how it works.
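
For reference, the approximated-LRU behaviour described above is driven by a handful of configuration settings. A minimal sketch with redis-py, where the 100 MB limit and the sample size of 5 are arbitrary example values:

```python
# Sketch: configure Redis to act as an LRU cache (values are example choices).
import redis

r = redis.Redis(host="localhost", port=6379)

r.config_set("maxmemory", "100mb")               # memory cap that triggers eviction
r.config_set("maxmemory-policy", "allkeys-lru")  # evict using the approximated LRU
r.config_set("maxmemory-samples", 5)             # keys sampled per eviction decision

print(r.config_get("maxmemory*"))                # inspect the resulting settings
```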

Why memcached has a better memory footprint than Redis for simple string -> string maps.

Redis is a more complex piece of software, so values in Redis are stored in a way more similar to objects in a high-level programming language: they have an associated type, encoding, and reference counting for memory management. This keeps Redis's internal structure clean and manageable, but carries an overhead compared to Memcached, which deals only with strings.

When Redis starts to be more memory efficient

Redis is able to store small aggregate data types in a special memory-saving way. For example, a small Redis hash representing an object is stored internally not as a hash table but as a single binary blob. So setting multiple fields per object into a hash is more efficient than storing N separate keys in Memcached.

You can, of course, store an object in Memcached as a single JSON (or binary-encoded) blob, but unlike Redis this will not allow you to fetch or update individual fields.
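
A small sketch contrasting the two approaches with redis-py; the user:1000 key and its fields are hypothetical examples:

```python
# Sketch: per-field access via a Redis hash vs. an opaque blob (Memcached-style).
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Hash: small hashes use a compact internal encoding, and single fields can be
# read or updated without rewriting the whole object.
r.hset("user:1000", mapping={"name": "alice", "visits": 1, "plan": "free"})
r.hincrby("user:1000", "visits", 1)   # touch one field only
print(r.hget("user:1000", "plan"))    # fetch one field only

# Blob: the whole object must be fetched, decoded, modified, and re-stored.
r.set("user:1000:blob", json.dumps({"name": "alice", "visits": 1, "plan": "free"}))
obj = json.loads(r.get("user:1000:blob"))
obj["visits"] += 1
r.set("user:1000:blob", json.dumps(obj))
```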

The advantage of Redis in the context of intelligent caching.

Because of Redis data structures, the usual Memcached pattern of destroying cached objects on invalidation and recreating them from the DB later is a primitive way of using Redis.

For example, imagine you need to cache the latest N news items posted to Hacker News in order to populate the "Newest" section of the site. With Redis you keep a list (capped to M items) into which the newest items are pushed. If you use another store for your data and Redis as a cache, you populate both views (Redis and the DB) whenever a new item is posted. There is no cache invalidation.

However, the application can always include logic so that if the Redis list is found to be empty, for example after a restart, the initial view is re-created from the DB.
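
A sketch of this capped-list pattern with redis-py; load_latest_from_db() is a hypothetical helper standing in for the primary-store query, and the cap of 100 items is arbitrary:

```python
# Sketch: keep the newest item IDs in a capped Redis list, rebuilding from the
# DB only when the list turns out to be empty (e.g. after a cold start).
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
MAX_ITEMS = 100  # arbitrary cap for the example


def load_latest_from_db(limit):
    """Hypothetical helper: return the newest item IDs from the primary store."""
    raise NotImplementedError


def on_new_item(item_id):
    # Update the cached view whenever a new item is posted: no invalidation needed.
    r.lpush("news:newest", item_id)
    r.ltrim("news:newest", 0, MAX_ITEMS - 1)  # keep only the newest MAX_ITEMS


def get_newest():
    ids = r.lrange("news:newest", 0, MAX_ITEMS - 1)
    if not ids:  # empty view, e.g. right after startup: rebuild it from the DB
        ids = load_latest_from_db(MAX_ITEMS)
        if ids:
            r.rpush("news:newest", *ids)  # IDs arrive newest-first, so RPUSH keeps order
    return ids
```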

With intelligent caching it is possible to cache more efficiently with Redis than with Memcached, but not all problems are suitable for this pattern. For example, HTML fragment caching may not benefit from this technique.

Answer by 手持菜刀,她持情操 · 2019-03-07 11:11

Habits are hard to break :)

Seriously though, there are two main reasons - to my understanding - why Memcached is still used:

  1. Legacy - there are developers who are comfortable and familiar with Memcached, as well as applications that support it. This also means that it is a mature and well-tested technology.
  2. Scaling - standard Memcached is easily horizontally scalable, whereas Redis (until and excluding the soon-to-be-released v3) requires more work to that end (i.e. sharding).

However:

  1. Re. legacy - given Redis' robustness (data structures, commands, persistence...), its active development, and clients in every conceivable language, new applications are usually developed with it.
  2. Re. scaling - besides the upcoming v3, there are solutions that can make scaling much easier. For example, Redis Cloud offers seamless scaling without data loss or service interruption. Another popular approach to scaling/sharding Redis is twemproxy.