Memcache-based message queue?

Posted 2020-05-27 04:07

I'm working on a multiplayer game, and it needs a message queue (i.e., messages in, messages out, no duplicates, and no lost messages, assuming there are no unexpected cache evictions). Here are the memcache-based queues I'm aware of:

I learned the concept of the memcache queue from this blog post:

All messages are saved with an integer as the key. One key holds the index of the next message to write, and another holds the index of the oldest message in the queue. These are accessed with the increment/decrement method since it's atomic. There are also two keys that act as locks: they get incremented, and if the return value is 1 the process has the lock; otherwise it keeps incrementing. Once the process is finished, it sets the value back to 0. Simple but effective. One caveat is that the integer will overflow, so there is some logic in place that resets the used keys to 1 once we get close to that limit. Since the increment operation is atomic, the lock is only needed if two or more memcaches are used (for redundancy), to keep them in sync.
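
Here is a rough sketch of that scheme in Python using the python-memcached client. The key names (q_tail, q_head, q_lock) and the push/pop helpers are my own illustration, not from the post, and the overflow reset and empty-queue check are omitted:

    # Rough sketch of the scheme described above (python-memcached client).
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    # Counters have to exist before incr() will work; add() is a no-op
    # if the key is already present.
    for key in ('q_tail', 'q_head', 'q_lock'):
        mc.add(key, 0)

    def acquire_lock():
        # incr is atomic: the process that sees 1 holds the lock;
        # everyone else keeps incrementing until the holder resets it.
        while mc.incr('q_lock') != 1:
            pass

    def release_lock():
        mc.set('q_lock', 0)

    def push(message):
        acquire_lock()
        try:
            idx = mc.incr('q_tail')          # next free slot
            mc.set('msg:%d' % idx, message)  # store the message under its index
        finally:
            release_lock()

    def pop():
        acquire_lock()
        try:
            idx = mc.incr('q_head')          # oldest unread slot
            message = mc.get('msg:%d' % idx)
            mc.delete('msg:%d' % idx)
            return message
        finally:
            release_lock()

Note that the busy-wait lock and the missing overflow reset are exactly the caveats the post mentions, and eviction of any of the counter keys would break the queue.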

My question is, is there a memcache-based message queue service that can run on App Engine?

5 Answers
别忘想泡老子 · answered 2020-05-27 04:48

I would be very careful about using the Google App Engine memcache in this way. You are right to worry about "unexpected cache evictions".

Google expects you to use memcache for caching data, not for storing it; they don't guarantee to keep data in the cache. From the GAE documentation:

By default, items never expire, though items may be evicted due to memory pressure.
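
To make the implication concrete, here is a small sketch with the GAE Python memcache API (the key name is made up): get() can return None at any time, even for items that never expired, so a queue built directly on memcache has to tolerate lost entries.

    from google.appengine.api import memcache

    memcache.set('queue_item:42', 'payload')  # stored... for now

    value = memcache.get('queue_item:42')
    if value is None:
        # The item never expired, but it may have been evicted under memory
        # pressure. The queue either tolerates the loss or falls back to a
        # durable store such as the datastore.
        raise LookupError('queue item was evicted from memcache')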

Edit: There's always Amazon's Simple Queue Service. However, it may not meet your price/performance requirements either, because:

  1. There is the latency of calling from Google's servers to Amazon's.
  2. You end up paying twice for the data traffic - paying for it to leave Google and then paying again for it to go into Amazon.
走好不送 · answered 2020-05-27 04:52

Until Google implements a proper job queue, why not use the datastore? As others have said, memcache is just a cache and could lose queue items (which would be bad).

The datastore should be more than fast enough for what you need. You would just have a simple Job model (sketched below), which would also be more flexible than memcache since you're not limited to key/value pairs.
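
A minimal sketch of such a Job model using the old GAE Python db API; the model name, fields, and the crude claiming flag are my own assumptions, not from the answer:

    from google.appengine.ext import db

    class Job(db.Model):
        payload = db.TextProperty()                  # the queued message
        created = db.DateTimeProperty(auto_now_add=True)
        claimed = db.BooleanProperty(default=False)  # crude "in progress" flag

    def enqueue(message):
        Job(payload=message).put()

    def dequeue():
        # Oldest unclaimed job first (needs a composite index on
        # claimed + created). Claiming is racy as written; real code would
        # use a transaction or some other guard against concurrent workers.
        job = Job.all().filter('claimed =', False).order('created').get()
        if job is None:
            return None
        job.claimed = True
        job.put()
        return job.payload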

兄弟一词,经得起流年. · answered 2020-05-27 04:58

If you're happy with the possibility of losing data, by all means go ahead. Bear in mind, though, that although memcache generally has lower latency than the datastore, like anything else, it will suffer if you have a high rate of atomic operations you want to execute on a single element. This isn't a datastore problem - it's simply a problem of having to serialize access.

Failing that, Amazon's SQS seems like a viable option.

放荡不羁爱自由 · answered 2020-05-27 04:58

Why not use the Task Queue:
https://developers.google.com/appengine/docs/python/taskqueue/
https://developers.google.com/appengine/docs/java/taskqueue/

It seems to solve the problem without the risk of message loss that a memcache-based queue carries (see the sketch below).
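
For example, a push-queue sketch with the GAE Python taskqueue API; the '/worker' URL, the 'message' parameter, and the webapp2 handler are illustrative assumptions:

    from google.appengine.api import taskqueue
    import webapp2

    class WorkerHandler(webapp2.RequestHandler):
        def post(self):
            message = self.request.get('message')
            # process the message here; a non-2xx response makes the
            # queue retry the task instead of dropping it

    app = webapp2.WSGIApplication([('/worker', WorkerHandler)])

    def send(message):
        # Tasks are stored durably and retried until the handler succeeds,
        # unlike memcache entries, which can be evicted at any time.
        taskqueue.add(url='/worker', params={'message': message})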

做个烂人 · answered 2020-05-27 05:04

I have started a Simple Python Memcached Queue; it might be useful: http://bitbucket.org/epoz/python-memcache-queue/
