I'm receiving events from an Event Hub using EventProcessorHost and an IEventProcessor class (call it MyEventProcessor). I scale this out to two servers by running my EPH on both servers, having them connect to the hub using the same ConsumerGroup but unique hostNames (using the machine name).
The problem is: at random hours of the day/night, the app logs this:
Exception information:
Exception type: ReceiverDisconnectedException
Exception message: New receiver with higher epoch of '186' is created hence current receiver with epoch '186' is getting disconnected. If you are recreating the receiver, make sure a higher epoch is used.
at Microsoft.ServiceBus.Common.ExceptionDispatcher.Throw(Exception exception)
at Microsoft.ServiceBus.Common.Parallel.TaskHelpers.EndAsyncResult(IAsyncResult asyncResult)
at Microsoft.ServiceBus.Messaging.IteratorAsyncResult`1.StepCallback(IAsyncResult result)
This Exception occurs at the same time as a LeaseLostException, thrown from MyEventProcessor's CloseAsync method when it tries to checkpoint. (Presumably Close is being called because of the ReceiverDisconnectedException?)
I think this is occurring due to Event Hubs' automatic lease management when scaling out to multiple machines. But I'm wondering if I need to do something different to make it work more cleanly and avoid these Exceptions? Eg: something with epochs?
TLDR: This behavior is absolutely normal.
Why can't lease management be smooth & exception-free: to give the developer more control over the situation.
The really long story - all the way from the basics
`EventProcessorHost` (hereby `EPH` - very similar to what the `__consumer_offsets` topic does for Kafka consumers - a partition-ownership & checkpoint store) is written by the Microsoft Azure EventHubs team themselves, to translate all of the EventHubs partition-receiver details into a simple `onReceive(Events)` callback.

`EPH` is used to address 2 general, major, well-known problems while reading out of a high-throughput partitioned stream like EventHubs:
**Fault-tolerant receive pipeline** - for example, a simpler version of the problem: if the host running a `PartitionReceiver` dies and comes back, it needs to resume processing from where it left off. To remember the last successfully processed `EventData`, `EPH` uses the blob supplied to the `EPH` constructor to store the checkpoints - whenever the user invokes `context.CheckpointAsync()`. Eventually, when the host process dies (for example, it abruptly reboots or hits a hardware fault and never comes back), any `EPH` instance can pick up this task and resume from that `Checkpoint`.
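To make the checkpoint idea concrete, here is a minimal sketch - plain Python, not the EPH API; the class and method names are invented for illustration - of a per-partition checkpoint store that a new owner consults to resume:

```python
class CheckpointStore:
    """In-memory stand-in for the per-partition checkpoint blobs."""

    def __init__(self):
        self._checkpoints = {}  # partition_id -> last processed offset

    def checkpoint(self, partition_id, offset):
        # conceptually what context.CheckpointAsync() persists
        self._checkpoints[partition_id] = offset

    def resume_offset(self, partition_id):
        # a new owner starts right after the last checkpointed offset
        return self._checkpoints.get(partition_id, -1) + 1


store = CheckpointStore()
store.checkpoint("0", 41)        # the old host processed up to offset 41, then died
print(store.resume_offset("0"))  # any other EPH instance resumes at 42
```

Any instance that later acquires the partition only needs the blob contents, not the dead host, which is what makes the pipeline fault tolerant.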
**Balance/distribute partitions across `EPH` instances** - say there are 10 partitions and 2 `EPH` instances processing events from these 10 partitions: we need a way to divide the partitions across the instances (the `PartitionManager` component of the `EPH` library does this). We use the Azure Storage blob lease-management feature to implement this. As of version `2.2.10`, to simplify the problem, `EPH` assumes that all partitions are loaded equally.

With this, let's try to see what's going on. To start with, in the above example of 10 event hub partitions and 2 `EPH` instances processing events out of them:
`EPH` instance `EPH1` started first, alone; as part of start-up it created receivers on all 10 partitions and is processing events. During start-up, `EPH1` announces that it owns all these 10 partitions by acquiring leases on 10 storage blobs representing the 10 event hub partitions (with a standard nomenclature, which `EPH` internally creates in the storage account from the `StorageConnectionString` passed to the ctor). Leases are acquired for a set time, after which the `EPH` instance loses ownership of the partition. `EPH1` continually announces - once in a while - that it still owns those partitions, by renewing the leases on the blobs. The frequency of renewal, along with other useful tuning, can be configured through `PartitionManagerOptions`.
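The blob-lease mechanics above can be sketched as a deterministic toy model (plain Python with a logical clock, not the Azure Storage API; `duration` loosely plays the role of the lease duration you can tune via `PartitionManagerOptions`):

```python
class BlobLease:
    """Toy model of an Azure Storage blob lease: held by one owner
    until it expires, and kept alive only by periodic renewal."""

    def __init__(self, duration):
        self.duration = duration
        self.owner = None
        self.expires_at = 0

    def try_acquire(self, host, now):
        # free, expired, or already ours -> acquisition succeeds
        if self.owner is None or now >= self.expires_at or self.owner == host:
            self.owner = host
            self.expires_at = now + self.duration
            return True
        return False

    def renew(self, host, now):
        # only the current owner can renew, and only before expiry
        if self.owner == host and now < self.expires_at:
            self.expires_at = now + self.duration
            return True
        return False


lease = BlobLease(duration=30)
print(lease.try_acquire("EPH1", now=0))   # True  - EPH1 owns the partition
print(lease.renew("EPH1", now=20))        # True  - renewed, now expires at 50
print(lease.try_acquire("EPH2", now=40))  # False - lease still held by EPH1
print(lease.try_acquire("EPH2", now=60))  # True  - EPH1 stopped renewing, so it lost ownership
```

The last line is the crux: the moment an owner stops renewing (crash, network outage), the lease simply expires and anyone else may take it.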
`EPH2` starts up - and you supplied the same Azure Storage account as `EPH1` to the ctor of `EPH2` as well. Right now it has 0 partitions to process. So, to achieve balance of partitions across `EPH` instances, it will go ahead and download the list of all lease blobs, which holds the owner-to-partitionId mapping. From this it will STEAL leases for its fair share of the partitions - which is 5 in our example - and will announce that information on those lease blobs. As part of this, `EPH2` reads the latest checkpoint written for each partition it wants to steal the lease for, and goes ahead and creates the corresponding `PartitionReceiver`s with the EPOCH set to the same value as the one in the `Checkpoint`.
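The fair share `EPH2` aims for is just an even split of partitions over hosts - a sketch of the arithmetic only; EPH's actual `PartitionManager` logic is more involved:

```python
def fair_share(partition_count, host_count):
    """Even split of partitions across hosts; the first `extra`
    hosts end up owning one partition more than the rest."""
    base, extra = divmod(partition_count, host_count)
    return [base + (1 if i < extra else 0) for i in range(host_count)]


print(fair_share(10, 2))  # [5, 5]    - each EPH instance settles on 5
print(fair_share(10, 3))  # [4, 3, 3] - uneven counts are unavoidable here
```

So in the running example, `EPH2` keeps stealing until it owns 5 of the 10 partitions.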
As a result, `EPH1` will lose ownership of these 5 partitions and will run into different errors based on the exact state it is in:

- If `EPH1` is actually in the `PartitionReceiver.Receive()` call while `EPH2` is creating the `PartitionReceiver` on the same partition, `EPH1` will experience the `ReceiverDisconnectedException`. This will eventually invoke `IEventProcessor.Close(CloseReason=LeaseLost)`. Note that the probability of hitting this specific exception is higher if the messages being received are larger or the `PrefetchCount` is smaller, as in both cases the receiver would be performing more aggressive I/O.
- If `EPH1` is in the middle of checkpointing or renewing the lease while `EPH2` steals the lease, the `EventProcessorOptions.ExceptionReceived` event handler will be signaled with a `LeaseLostException` (with a `409 Conflict` error on the lease blob), which also eventually invokes `IEventProcessor.Close(CloseReason=LeaseLost)`.
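The service-side epoch rule behind the `ReceiverDisconnectedException` can be sketched like this (illustrative Python, not the service implementation; note that the log in the question shows two equal epochs of `186`, so an equal-or-higher epoch is assumed to win):

```python
class PartitionEndpoint:
    """Toy model of the epoch rule: at most one epoch receiver per
    partition per consumer group; a new receiver with an equal or
    higher epoch kicks out the currently connected one."""

    def __init__(self):
        self.receiver = None  # (host, epoch)

    def create_receiver(self, host, epoch):
        disconnected = None
        if self.receiver is not None:
            old_host, old_epoch = self.receiver
            if epoch < old_epoch:
                raise RuntimeError("receiver with lower epoch rejected")
            disconnected = old_host  # gets ReceiverDisconnectedException
        self.receiver = (host, epoch)
        return disconnected


ep = PartitionEndpoint()
ep.create_receiver("EPH1", epoch=186)
print(ep.create_receiver("EPH2", epoch=186))  # EPH1 - exactly the situation in the log
```

This is why the disconnect shows up on `EPH1` as an exception rather than a graceful handoff: the service enforces single ownership, and the losing receiver only finds out when it is cut off.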
**Why can't lease management be smooth & exception-free:**

To keep the consumer simple and error-free, lease-management-related exceptions could have been swallowed by `EPH` and not notified to the user code at all. However, we realized that throwing the `LeaseLostException` could empower customers to find interesting bugs in the `IEventProcessor.ProcessEvents()` callback - for which the symptom would be frequent partition moves:

- `EPH1` fails to renew leases and comes back up - and imagine if the n/w of this machine stays flaky for a day: the `EPH` instances are going to play ping-pong with partitions! This machine will continuously try to steal the lease from the other machine - which is legitimate from `EPH`'s point of view, but is a total disaster for the user of `EPH`, as it completely interferes with the processing pipeline. `EPH` would see exactly a `ReceiverDisconnectedException` when the n/w comes back up on this flaky machine! We think the best, and in fact the only, way is to enable the developer to smell this!
- `ProcessEvents` logic throws unhandled exceptions which are fatal and bring down the whole process - for example, a poison event. This partition is going to move around a lot.
- Something else (like an automated clean-up script) is - by mistake - operating on the same storage account that `EPH` is also using.
- An outage in the Azure data center where a specific EventHub partition is located - say, a n/w incident. Partitions are going to move around across `EPH` instances.

Basically, in the majority of situations, it would be tricky for us to detect the difference between these situations and a legitimate lease loss due to balancing, and we want to delegate control of these situations to the developer.
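Given that, a reasonable user-side stance (a hypothetical helper, not part of EPH) is to log lease-driven disconnects as routine rebalancing and alert only on everything else - frequent "rebalance" entries then become the smell the answer describes:

```python
class ReceiverDisconnectedError(Exception):
    pass  # stand-in for the SDK's ReceiverDisconnectedException


class LeaseLostError(Exception):
    pass  # stand-in for the SDK's LeaseLostException


ROUTINE = (ReceiverDisconnectedError, LeaseLostError)


def classify(exc):
    """Routine partition move vs. something worth paging on."""
    return "rebalance" if isinstance(exc, ROUTINE) else "alert"


print(classify(ReceiverDisconnectedError()))   # rebalance
print(classify(RuntimeError("poison event")))  # alert
```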
Refer to the blog from our PM Dan for a general overview.