I've read this nice post from Jonathan Oliver about handling out-of-order events:
http://blog.jonathanoliver.com/cqrs-out-of-sequence-messages-and-read-models/
The solution that we use is to dequeue a message and to place it in a “holding table” until all messages with a previous sequence have been received. When all previous messages have been received, we take all of the messages out of the holding table and run them in sequence through the appropriate handlers. Once all of the handlers have been executed successfully, we remove the messages from the holding table and commit the updates to the read models.
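To make the holding-table idea concrete, here is a minimal sketch in Python, assuming each event carries a contiguous sequence number; `HoldingTable` and the handler wiring are illustrative names, not from the original post.

```python
# Minimal sketch of the holding-table idea, assuming each event carries
# a contiguous sequence number. All names here are illustrative.

class HoldingTable:
    def __init__(self, handlers):
        self.held = {}            # sequence number -> waiting event
        self.next_expected = 1    # the next sequence number we may apply
        self.handlers = handlers  # callables that update the read models

    def receive(self, event):
        self.held[event["sequence"]] = event
        self._flush()

    def _flush(self):
        # Apply events in order for as long as the sequence is unbroken;
        # a gap (a missing earlier message) leaves everything held back.
        while self.next_expected in self.held:
            event = self.held.pop(self.next_expected)
            for handle in self.handlers:
                handle(event)     # commit the read-model update here
            self.next_expected += 1

table = HoldingTable(handlers=[lambda e: print("applied", e["sequence"])])
for seq in (2, 3, 1):             # messages arrive out of order
    table.receive({"sequence": seq})
# prints: applied 1, applied 2, applied 3
```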
This works for us because the domain publishes events and marks them with the appropriate sequence number. Without this, the solution described above would be much more difficult, if not impossible.
This solution uses a relational database as the persistence mechanism, but we're not using any of the relational aspects of the storage engine. At the same time, there's a caveat in all of this.
If messages 2, 3, and 4 arrive but message 1 never does, we don't apply any of them. This scenario should only happen if there's an error processing message 1 or if message 1 somehow gets lost. Fortunately, it's easy enough to correct any errors in our message handlers and re-run the messages, or, in the case of a lost message, to rebuild the read models from the event store directly.
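In that lost-message case, the rebuild amounts to wiping the read models and replaying the entire stream through the same handlers. A small sketch, where `event_store.read_all()` is an assumed interface rather than anything from the post:

```python
# Hypothetical rebuild: clear the read models, then replay every stored
# event through the same handlers. event_store.read_all() is an assumed
# interface that yields events in sequence order.

def rebuild_read_models(event_store, read_models, handlers):
    for model in read_models:
        model.clear()                      # start each read model empty
    for event in event_store.read_all():   # the full stream, in order
        for handle in handlers:
            handle(event)
```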
I've got a few questions, particularly about his point that we can always ask the event store for missing events.
- Does the write side of CQRS have to expose a service for the read side to "demand" a replay of events? For example, if event 1 was not received but events 2, 4, and 3 have been, can we ask the event store, through a service, to republish events starting from 1?
- Is this service the responsibility of the write side of CQRS?
- How do we re-build the read model using this?
If you have a sequence number, then you can detect that the current event is out of order, e.g. currentEventNumber != lastReceivedEventNumber + 1.
Once you've detected that, you just throw an exception. If your subscriber has a retry mechanism, it will try to process this event again in a second or so. There is a pretty good chance that in the meantime the earlier events will have been processed and the sequence will be correct. This is a solution for when out-of-order events happen rarely.
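A rough sketch of that detect-and-retry idea; `OutOfOrderError`, the event shape, and `apply_to_read_model` are assumed names:

```python
# Raise on a sequence gap and let the subscriber's retry mechanism
# redeliver the event; all names here are illustrative.

class OutOfOrderError(Exception):
    pass

last_received_event_number = 0

def apply_to_read_model(event):
    ...  # update the read model for this event

def on_event(event):
    global last_received_event_number
    if event["sequence"] != last_received_event_number + 1:
        # Earlier events haven't been processed yet; fail so the
        # messaging infrastructure retries this event in a second or so.
        raise OutOfOrderError(
            f"expected {last_received_event_number + 1}, got {event['sequence']}"
        )
    apply_to_read_model(event)
    last_received_event_number = event["sequence"]
```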
If you face this situation regularly, you need to implement a global locking mechanism that allows certain events to be processed sequentially.
For example, we were using sp_getapplock in MSSQL to achieve global "critical section" behaviour in certain situations. Apache ZooKeeper offers a framework for even more complicated scenarios, where multiple parts of a distributed application require something more than just a simple lock.
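As a sketch only, the sp_getapplock pattern might look like this from Python via pyodbc; the connection string, resource name, and timeout are placeholders, and production code should also check the procedure's return status:

```python
# Global "critical section" via MSSQL's sp_getapplock, sketched with pyodbc.
# Workers requesting the same @Resource block until the lock is released.
import pyodbc

def process_with_global_lock(conn_str, event, handler):
    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        # Exclusive, session-scoped application lock; waits up to 10 seconds.
        cursor.execute(
            "EXEC sp_getapplock @Resource = ?, @LockMode = 'Exclusive', "
            "@LockOwner = 'Session', @LockTimeout = 10000",
            "read-model-critical-section",
        )
        try:
            handler(event)   # only one worker runs this at a time
        finally:
            cursor.execute(
                "EXEC sp_releaseapplock @Resource = ?, @LockOwner = 'Session'",
                "read-model-critical-section",
            )
```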
What you are describing here is event sourcing (ES): the events emitted by the command model are stored to persistent storage.
Stored events can then be replayed by event type, command model id (aggregate root id), or command model type (aggregate root type). That's the advantage of having ES: you can even replay those events later to produce a new type of query model.
The ES approach can also be used to get a UnitOfWork-scoped application transaction: at commit, the emitted events are persisted and distributed to the event listeners (the query-model maintenance service). Validation at the commit stage should include a check for concurrent access, via the sequence number in the db.
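A toy sketch of that commit step, assuming an event store exposing `last_sequence` and `append`; `ConcurrencyError` and everything else here are illustrative:

```python
# UnitOfWork-scoped commit with an optimistic concurrency check on the
# aggregate's sequence number; the event_store interface is assumed.

class ConcurrencyError(Exception):
    pass

class UnitOfWork:
    def __init__(self, event_store, listeners):
        self.event_store = event_store
        self.listeners = listeners
        self.pending = []   # (aggregate_id, expected_sequence, event)

    def emit(self, aggregate_id, expected_sequence, event):
        self.pending.append((aggregate_id, expected_sequence, event))

    def commit(self):
        for aggregate_id, expected_sequence, event in self.pending:
            # Reject if another writer appended to this aggregate since
            # we loaded it (the sequence-number check in the db).
            if self.event_store.last_sequence(aggregate_id) != expected_sequence:
                raise ConcurrencyError(aggregate_id)
            self.event_store.append(aggregate_id, event)
        # Only once everything is persisted, distribute to the listeners
        # that maintain the query models.
        for _, _, event in self.pending:
            for listener in self.listeners:
                listener(event)
        self.pending.clear()
```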
Another alternative would be to feed the service that you're reading events from (S1) in such a way that it can only produce in-order events to your service (S2).
For example, if you have loads of events coming in for many different sessions, put an ordering service (O1) at the front, responsible for ordering. It ensures that only one event per session gets passed to S1, and only when both S1 and S2 have processed it successfully does O1 allow a new event for that session to pass to S1. Throw in a bit of queuing too, for performance; a sketch of O1 follows.
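One way to sketch O1, with in-memory queues per session; `submit`, `ack`, and the rest are assumed names, not a definitive implementation:

```python
# Toy ordering service (O1): hold events per session and release the next
# one only after the previous was acknowledged by both S1 and S2.
from collections import defaultdict, deque

class OrderingService:
    def __init__(self, send_to_s1):
        self.send_to_s1 = send_to_s1      # callable feeding service S1
        self.queues = defaultdict(deque)  # session id -> waiting events
        self.in_flight = set()            # sessions with an event in S1/S2

    def submit(self, session_id, event):
        self.queues[session_id].append(event)
        self._dispatch(session_id)

    def ack(self, session_id):
        # Call this once S1 *and* S2 have both processed the event.
        self.in_flight.discard(session_id)
        self._dispatch(session_id)

    def _dispatch(self, session_id):
        # Pass at most one event per session downstream at a time.
        if session_id not in self.in_flight and self.queues[session_id]:
            self.in_flight.add(session_id)
            self.send_to_s1(session_id, self.queues[session_id].popleft())
```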