Redis: Amazon EC2 vs ElastiCache

Published 2019-03-08 22:39

Question:

I want to host a Redis server myself. I compared EC2 to ElastiCache, and I would like to know what the disadvantages of EC2 are.

An EC2 tiny instance costs about as much as the ElastiCache tiny instance but has roughly 400 MB more RAM. Why should I use ElastiCache instead of setting up my own Redis server on the EC2 tiny instance?

Answer 1:

Since I'm lazy, I'd choose ElastiCache over EC2 so that I can avoid some of the operational aspects of managing a Redis instance. With Redis on EC2, you are responsible for scaling, updating, monitoring, and maintaining both the host and the Redis instance. If you're fine dealing with the operational aspects of Redis, then it shouldn't be a problem. A lot of folks overlook the cost of running a Redis instance themselves. Unless you're well seasoned with Redis, I'd consider ElastiCache. I've been using it and have been pretty happy with it so far.

Now, EC2 makes sense when you need custom configurations of Redis that aren't supported by ElastiCache.

~~Additionally, if you need to connect to your Redis instance from outside of the AWS environment, ElastiCache will be a problem, as you can't use redis-cli to connect to a Redis instance running in ElastiCache from outside.~~

Update: Accessing ElastiCache Resources from Outside AWS
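For context on that update: ElastiCache nodes live inside a VPC, so one common pattern is to forward a local port to the cluster endpoint (for example through an SSH tunnel via a bastion host or a NAT instance, as the linked AWS guide describes) and then connect as if the cache were local. A minimal sketch with redis-py, assuming a hypothetical tunnel that maps localhost:6380 to the cluster endpoint:

```python
# Sketch: connect to an ElastiCache node through a local tunnel.
# Assumes an SSH tunnel (or NAT port forwarding) already maps
# localhost:6380 to the (hypothetical) cluster endpoint inside the VPC.
import redis

r = redis.Redis(host="localhost", port=6380)
r.set("greeting", "hello from outside AWS")
print(r.get("greeting"))  # b'hello from outside AWS'
```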

Lastly, if you plan on being on the bleeding edge of Redis, it makes more sense to run your own. But then again, you own the operational bits: monitoring, patching, and so on.



Answer 2:

tl;dr: ElastiCache forces you to use a single instance of Redis, which is sub-optimal.

The long version:

I realize this is an old post (2 years at the time of this writing) but I think it's important to note a point I don't see here.

On ElastiCache your Redis deployment is managed by Amazon. This means you're stuck with however they choose to run your Redis.

Redis uses a single thread of execution for reads/writes. This ensures consistency without locking; not having to manage locks and latches is a major performance asset. The unfortunate consequence, though, is that if your instance has more than one vCPU, the extra vCPUs go unused. This is the case for every ElastiCache instance type with more than one vCPU.

The default ElastiCache instance size is cache.r3.large, which has two cores.

In fact, there are a number of instance sizes with multiple vCPUs. Lots of opportunity for this issue to manifest.

It seems Amazon is already aware of this issue, but they seem a bit dismissive of it.

The part that makes this especially relevant to this question is that on your own EC2 instance (since you're managing the deployment) you can implement multi-tenancy: many instances of the redis process listening on different ports. By choosing which port to read from or write to in the application, based on a hash of the record's key, you can use all of your vCPUs.
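As an illustration, here is a minimal client-side sketch of that idea in Python with redis-py; the ports and the hashing scheme are assumptions for the example, not something ElastiCache or the answer prescribes:

```python
# Sketch: client-side multi-tenancy with several redis processes on one host,
# each listening on its own (hypothetical) port, one per vCPU.
import zlib
import redis

PORTS = [6379, 6380, 6381, 6382]
clients = [redis.Redis(host="localhost", port=p) for p in PORTS]

def client_for(key: str) -> redis.Redis:
    """Pick the redis process responsible for this key by hashing it."""
    return clients[zlib.crc32(key.encode()) % len(clients)]

# Reads and writes for the same key always land on the same process.
client_for("user:42").set("user:42", "some value")
value = client_for("user:42").get("user:42")
```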

As a side note: a Redis ElastiCache deployment on a multi-core machine will generally underperform a memcached ElastiCache deployment of the same instance size. With multi-tenancy, Redis tends to be the winner.

Update:

Amazon now provides a separate metric for your Redis engine's CPU, EngineCPUUtilization. You no longer need to estimate it with the shoddy multiplication, but multi-tenancy is still not implemented.
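For reference, a hedged sketch of reading that metric from CloudWatch with boto3; the cluster id below is a hypothetical placeholder:

```python
# Sketch: read the EngineCPUUtilization metric for an ElastiCache node.
# "my-redis-0001-001" is a hypothetical CacheClusterId.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ElastiCache",
    MetricName="EngineCPUUtilization",
    Dimensions=[{"Name": "CacheClusterId", "Value": "my-redis-0001-001"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                 # 5-minute datapoints
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}%')
```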



Answer 3:

Another point: ElastiCache is dynamic. You can decrease or increase the memory you use on demand, or even shut the cache down (and save money) if your performance metrics are in the green.
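As a hedged illustration of that flexibility, scaling a node type or removing a cluster can be done through the API; the identifiers below are hypothetical, and the exact calls depend on whether cluster mode is enabled:

```python
# Sketch: scale an ElastiCache cluster vertically or remove it entirely.
# "my-redis" is a hypothetical CacheClusterId.
import boto3

elasticache = boto3.client("elasticache")

# Scale vertically by changing the node type (applied immediately here).
elasticache.modify_cache_cluster(
    CacheClusterId="my-redis",
    CacheNodeType="cache.m5.large",
    ApplyImmediately=True,
)

# Or shut the cache down entirely when it is no longer needed.
elasticache.delete_cache_cluster(CacheClusterId="my-redis")
```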



Answer 4:

ElastiCache Pros and Cons:

Pros
- AWS-managed service, so you just use Redis in your application without the management overhead (leave it to AWS).
- Flexible instance types suited for in-memory databases.
- If there is any issue with a node, AWS takes care of it (failover, node replacement, maintenance, etc.).
- HIPAA-compliant service.
- Redis only: ElastiCache has its own backup implementation for cases where low memory would not allow a normal BGSAVE.
- Supports snapshot creation on a regular schedule (see the sketch at the end of this answer).
- Easily scalable both horizontally and vertically (cluster mode enabled supports up to 250 shards).
- The configuration endpoint never changes, so nothing in the application needs to change on failover (unless you are using individual node endpoints).

Cons:
- As the service is AWS-managed, there is little scope for performance tuning (only through parameter groups), since you do not get OS-level access.
- Some instance families (e.g. x1) are not available.
- Limited customization, e.g. you cannot change the password (Redis AUTH) once the cluster is created.
- Regular maintenance may be required, which can coincide with production-critical times, so that is one more thing to worry about.
- Not every maintenance event triggers a notification, which can cause unexpected disruption.
- Launching a cluster can take a long time (depending on node type and node count).
- May prove to be expensive.

Custom EC2 Installation Pros and Cons:

Pros
- Extra freedom to optimize and customize.
- Maintenance on your own schedule.
- Use whatever resources you need.

Cons
- Need custom logic for maintenance, scaling, recovery from failure, backups, etc.
- Increased operational overhead.

The list is long but these should cover significant differences.
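As a small illustration of the snapshot point above, here is a hedged sketch of creating a manual snapshot via boto3; both names are hypothetical placeholders:

```python
# Sketch: create a manual snapshot of an ElastiCache Redis node.
# "my-redis-0001-001" and "my-redis-backup-2019-03-08" are hypothetical names.
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_snapshot(
    CacheClusterId="my-redis-0001-001",
    SnapshotName="my-redis-backup-2019-03-08",
)
```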



Answer 5:

t2.micro vs cache.t2.micro

t2.micro: 1 GiB of RAM
cache.t2.micro: 0.555 GiB of RAM

But on the t2.micro you also need an OS, and most operating systems need about 512 MiB of that.

The t2.micro might win only on network performance. You can run benchmarks and compare.
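If you do want to compare the two, here is a rough round-trip benchmark sketch with redis-py (redis-benchmark is the usual tool, but this keeps the example self-contained); both hostnames are hypothetical placeholders:

```python
# Sketch: measure average SET/GET round-trip latency against two Redis hosts.
import time
import redis

def benchmark(host: str, port: int = 6379, n: int = 10_000) -> float:
    """Return the average latency per operation in microseconds."""
    r = redis.Redis(host=host, port=port)
    start = time.perf_counter()
    for i in range(n):
        r.set(f"bench:{i}", "x")
        r.get(f"bench:{i}")
    return (time.perf_counter() - start) / (2 * n) * 1e6

# Hypothetical endpoints: a self-managed EC2 Redis and an ElastiCache node.
for host in ("ec2-redis.internal", "my-cache.xxxxxx.0001.use1.cache.amazonaws.com"):
    print(host, f"{benchmark(host):.1f} µs/op")
```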