There's a standard pattern for events in .NET - they use a delegate type that takes a plain object called sender and then the actual "payload" in a second parameter, which should be derived from EventArgs.
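For concreteness, here's a minimal sketch of that standard shape (Button and Click are just illustrative names, not anything from the framework):

using System;

public class Button
{
    public event EventHandler Click;

    protected virtual void OnClick(EventArgs e)
    {
        // Copy to a local first so a subscriber can't unsubscribe
        // between the null check and the invocation.
        EventHandler handler = Click;
        if (handler != null)
            handler(this, e);   // sender first, payload second
    }
}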
The rationale for the second parameter being derived from EventArgs seems pretty clear (see the .NET Framework Standard Library Annotated Reference): it is intended to ensure binary compatibility between event sinks and sources as the software evolves. For every event, even if it only has one argument, we derive a custom event arguments class with a single property containing that argument, so that we retain the ability to add more properties to the payload in future versions without breaking existing client code. That's very important in an ecosystem of independently developed components.
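A sketch of that convention (NameChangedEventArgs is a made-up example):

using System;

public class NameChangedEventArgs : EventArgs
{
    public NameChangedEventArgs(string newName)
    {
        NewName = newName;
    }

    // Version 1 exposes a single property...
    public string NewName { get; private set; }

    // ...and version 2 can add more properties here without touching
    // the delegate type, so existing subscribers keep working.
}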
But I find that the same goes for zero arguments. This means that if I have an event that has no arguments in my first version, and I write:
public event EventHandler Click;
... then I'm doing it wrong. If in some future version I change the delegate type so that it carries a new class as its payload:
public class ClickEventArgs : EventArgs { ... }
... I will break binary compatibility with my clients. The compiled client ends up bound to a specific overload of the compiler-generated accessor method add_Click that takes an EventHandler, and if I change the delegate type then it can't find that overload, so there's a MissingMethodException.
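To make the breakage concrete, here's a sketch of the two versions side by side (ButtonV1 and ButtonV2 stand in for version 1 and version 2 of the same class):

using System;

public class ClickEventArgs : EventArgs { }

// Version 1 of the component: the compiler emits an accessor with the
// signature add_Click(EventHandler), and clients bind to exactly that.
public class ButtonV1
{
    public event EventHandler Click;
}

// Version 2 of the same class: the accessor's signature becomes
// add_Click(EventHandler<ClickEventArgs>), so a client compiled against
// version 1 fails with MissingMethodException when it subscribes.
public class ButtonV2
{
    public event EventHandler<ClickEventArgs> Click;
}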
Okay, so what if I use the handy generic version?
public event EventHandler<EventArgs> Click;
No, still wrong, because an EventHandler<ClickEventArgs> is not an EventHandler<EventArgs>.
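The type parameter of EventHandler<TEventArgs> is invariant, so the two constructed types are simply unrelated as far as the binder is concerned:

using System;

class InvarianceDemo
{
    static void Main()
    {
        EventHandler<EventArgs> general = (sender, e) => { };

        // Does not compile if uncommented - there is no conversion
        // between the two constructed delegate types:
        // EventHandler<ClickEventArgs> specific = general;

        general(null, EventArgs.Empty);
    }
}

And even if such a conversion existed, changing the event's declared delegate type would still change the add_Click signature, so the binary break would remain.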
So to get the benefit of EventArgs, you have to derive from it rather than using it directly as-is. If you don't, you may as well not be using it (it seems to me).
Then there's the first argument, sender. It seems to me like a recipe for unholy coupling. An event firing is essentially a function call. Should the function, generally speaking, have the ability to dig back through the stack and find out who the caller was, and adjust its behaviour accordingly? Should we mandate that interfaces look like this?
public interface IFoo
{
    void Bar(object caller, int actualArg1, ...);
}
After all, the implementor of Bar might want to know who the caller was, so they can query for additional information! I hope you're puking by now. Why should it be any different for events?
So even if I am prepared to take the pain of making a standalone EventArgs-derived class for every event I declare, just to make it worth my while using EventArgs at all, I definitely would prefer to drop the object sender argument.
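For what it's worth, the deviation I have in mind would look something like this (a sketch only - ClickEventArgs is the payload class from earlier):

using System;

public class ClickEventArgs : EventArgs { }

// One EventArgs-derived payload per event, but no sender parameter.
public delegate void ClickEventHandler(ClickEventArgs e);

public class Button
{
    public event ClickEventHandler Click;

    protected virtual void OnClick(ClickEventArgs e)
    {
        ClickEventHandler handler = Click;
        if (handler != null)
            handler(e);   // subscribers get the payload, not the source
    }
}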
Visual Studio's autocompletion feature doesn't seem to care what delegate you use for an event - you can type += [hit Space, Return] and it writes a handler method for you that matches whatever delegate it happens to be.
So what value would I lose by deviating from the standard pattern?
As a bonus question, will C#/CLR 4.0 do anything to change this, perhaps via contravariance in delegates? I attempted to investigate this but hit another problem. I originally included this aspect of the question in that other question, but it caused confusion there. And it seems a bit much to split this up into a total of three questions...
Update:
Turns out I was right to wonder about the effects of contravariance on this whole issue!
As noted elsewhere, the new compiler rules leave a hole in the type system that blows up at runtime. The hole has effectively been plugged by defining EventHandler<T> differently to Action<T>.
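The difference is the variance annotation. The declarations are along these lines (the EventArgs constraint on EventHandler<TEventArgs> existed in .NET 4; I believe it was dropped in later versions):

// Contravariant: an Action<object> is implicitly an Action<string>.
public delegate void Action<in T>(T obj);

// Invariant: no conversions between constructed EventHandler types.
public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e)
    where TEventArgs : EventArgs;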
So for events, to avoid that type hole you should not use Action<T>. That doesn't mean you have to use EventHandler<TEventArgs>; it just means that if you use a generic delegate type, don't pick one that is enabled for contravariance.
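To illustrate the hole as I understand it: contravariance lets the compiler accept a combination of delegates whose runtime types differ, and Delegate.Combine (which is what the compiler-generated add accessor uses for a field-like event) then rejects that at runtime:

using System;

class VarianceHoleDemo
{
    static void Main()
    {
        Action<string> specific = s => Console.WriteLine(s.Length);
        Action<object> general = o => Console.WriteLine(o);

        // This compiles, because contravariance converts Action<object>
        // to Action<string> - but that conversion is a plain reference
        // conversion, so the runtime type of 'general' is still
        // Action<object>. Delegate.Combine requires both operands to
        // have the same runtime type, so this line throws
        // ArgumentException at runtime.
        Action<string> combined = specific + general;

        combined("hello");
    }
}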