MongoDB Best way to pair and delete sequential data

Posted 2019-04-15 01:08

Question:

Okay, so let's say I'm making a game of blind war! Users A & B each have x amount of soldiers.

There are currently 0 DB docs.

User A sends 50 soldiers, creating a DB doc. User B sends 62 soldiers after user A!

This creates a second DB doc.

I need the most effective/scalable way to look up user A's doc, compare it to user B's doc, and then delete both docs (after returning the result, of course)!

Here's the problem: I could potentially have 10,000+ users sending soldiers at roughly the same time! How can I successfully complete the above process without overlapping?

I'm using the MEAN stack for development, so I'm not limited to doing this in the database, but obviously the web app has to be 100% secure!

If you need any additional info or explanation, please let me know and I'll update this question.

-Thanks

Answer 1:

One thing that comes to mind here is that you may not need to do all the work you think you need to; your problem can probably be solved with a little help from TTL indexes and possibly capped collections. Consider the following entries:

{ "_id" : ObjectId("531cf5f3ba53b9dd07756bb7"), "user" : "A", "units" : 50 }
{ "_id" : ObjectId("531cf622ba53b9dd07756bb9"), "user" : "B", "units" : 62 }

So there are two entries, and you got each _id value back when you inserted. At the start, "A" had no one to play against, but the entry for "B" will play against the one before it.

ObjectId values are monotonic, which means that the "next" one along is always greater in value than the last. So with the inserted data, just do this:

db.moves.find({ 
    _id: { $lt: ObjectId("531cf622ba53b9dd07756bb9") }, 
    user: { $ne: "B" } 
}).sort({ _id: -1 }).limit(1)

That gives the "move" inserted immediately before the current move that was just made, because anything previously inserted will have an _id with a lesser value; sorting on _id in descending order picks the closest preceding entry. You also make sure that you are not "playing" against the user's own move, and of course you limit the result to one document only.

So the "moves" will be forever moving forward, When the next insert is made by user "C" they get the "move" from user "B", and then user "A" would get the "move" from user "C", and so on.

All that "could" happen here is that "B" make the next "move" in sequence, and you would pick up the same document as in the last request. But that is a point for your "session" design, to store the last "result" and make sure that you didn't get the same thing back, and as such, deal with that however you want to in your design.

That should be enough to "play" with. But let's get to your "deletion" part.

Naturally you "think" you want to delete things, but back to my initial "helpers": this should not be necessary. From the above, deletion becomes only a matter of "cleaning up" so your collection does not grow to massive proportions.

If you applied a TTL index, in much the same way as this tutorial explains, your collection entries would be cleaned up for you and removed after a certain period of time.
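As a sketch of what that could look like in the mongo shell (the one-hour expiry is a placeholder assumption; note that TTL indexes require a Date field, so each move would need to store one, here a hypothetical createdAt):

```javascript
// Store an explicit Date with each move; TTL cannot expire on _id alone.
db.moves.insert({ user: "A", units: 50, createdAt: new Date() })

// Background task removes documents ~3600 seconds after createdAt.
db.moves.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })
```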

Also, and especially considering that we are using the increasing nature of the _id key and that this is more or less a "queue" in nature, you could possibly make this a capped collection. That way you set a maximum size for how many "moves" you will keep at any given time.
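A capped collection has to be created explicitly before the first insert; a minimal mongo shell sketch (the size and max values are placeholder assumptions to tune for your workload):

```javascript
// Cap the collection at 10 MB or 50,000 documents, whichever is hit
// first; the oldest entries are overwritten automatically.
db.createCollection("moves", { capped: true, size: 10485760, max: 50000 })
```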

Either way, you get something that only "grows" to a certain size and will be automatically cleaned up for you, should activity slow down a bit. And that's going to keep all of the operations fast. (Note that you cannot combine the two on the same collection: TTL indexes are not supported on capped collections, because documents cannot be removed from a capped collection individually. Pick whichever mechanism fits better.)

The bottom line is that the concurrency of "deletes" you were worried about has been removed by "removing" the need to delete the documents that were just played. The query keeps it simple, and the TTL index or capped collection looks after your data management for you.

So there you have my take on a very concurrent game of "Blind War".