A monad is a mathematical structure that is heavily used in (pure) functional programming, most notably in Haskell. However, there are many other mathematical structures available, like for example applicative functors, strong monads, or monoids. Some are more specific, some are more generic. Yet monads are much more popular. Why is that?
One explanation I came up with is that they are a sweet spot between genericity and specificity: monads capture enough assumptions about the data to apply the algorithms we typically use, and the data we usually have fulfills the monadic laws.
Another explanation could be that Haskell provides syntax for monads (do-notation), but not for the other structures, which means Haskell programmers (and thus functional programming researchers) are intuitively drawn towards monads, even where a more generic or more specific (efficient) function would work as well.
First, I think that it is not quite true that monads are much more popular than anything else; both Functor and Monoid have many instances that are not monads. But they are both very specific; Functor provides mapping, Monoid concatenation. Applicative is the one class that I can think of that is probably underused given its considerable power, due largely to its being a relatively recent addition to the language.
But yes, monads are extremely popular. Part of that is the do notation: a lot of Monoids provide Monad instances that merely append values to a running accumulator (essentially an implicit Writer); the blaze-html library is a good example. The reason, I think, is the power of the type signature
(>>=) :: Monad m => m a -> (a -> m b) -> m b
While fmap and mappend are useful, what they can do is fairly narrowly constrained. bind, however, can express a wide variety of things. It is, of course, canonized in the IO monad, perhaps the best pure functional approach to IO before streams and FRP (and still useful beside them for simple tasks and for defining components). But it also provides implicit state (Reader/Writer/ST), which can avoid some very tedious variable passing. The various state monads, especially, are important because they provide a guarantee that state is single-threaded, allowing mutable structures in pure (non-IO) code before fusion. But bind has some more exotic uses, such as flattening nested data structures (the List and Set monads), both of which are quite useful in their place (and I usually see them used desugared, calling liftM or (>>=) explicitly, so it is not a matter of do notation). So while Functor and Monoid (and the somewhat rarer Foldable, Alternative, Traversable, and others) provide a standardized interface to a fairly straightforward function, Monad's bind is considerably more flexible.
In short, I think that all your reasons have some role; the popularity of monads is due to a combination of historical accident (do notation and the late definition of Applicative), their combination of power and generality (relative to functors, monoids, and the like), and their understandability (relative to arrows).
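As a small illustration of that flexibility (the helper functions here are hypothetical, not from the answer): the very same (>>=) gives short-circuiting failure in the Maybe monad and flattening of nested structure in the list monad.

```haskell
-- Halve a number, failing (Nothing) on odd input.
halfIfEven :: Int -> Maybe Int
halfIfEven n = if even n then Just (n `div` 2) else Nothing

-- Map each element to a sublist; bind concatenates the results.
neighbors :: Int -> [Int]
neighbors n = [n - 1, n + 1]

-- Just 12 >>= halfIfEven >>= halfIfEven                == Just 3
-- Just 12 >>= halfIfEven >>= halfIfEven >>= halfIfEven == Nothing
-- [10, 20] >>= neighbors                               == [9,11,19,21]
```

The same operator, applied at two different types, expresses two quite different control-flow patterns.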
If a type m :: * -> * has a Monad instance, you get Turing-complete composition of functions with type a -> m b. This is a fantastically useful property. You get the ability to abstract various Turing-complete control flows away from specific meanings. It's a minimal composition pattern that supports abstracting any control flow for working with types that support it.
Compare this to Applicative, for instance. There, you get only composition patterns with computational power equivalent to a push-down automaton. Of course, it's true that more types support composition with more limited power. And it's true that when you limit the power available, you can do additional optimizations. These two reasons are why the Applicative class exists and is useful. But things that can be instances of Monad usually are, so that users of the type can perform the most general operations possible with the type.
Edit: By popular demand, here are some functions using the Monad class. Combining those with do syntax (or the raw >>= operator) gives you name binding, indefinite looping, and complete boolean logic. That's a well-known set of primitives sufficient to give Turing completeness. Note how all the functions have been lifted to work on monadic values, rather than simple values. All monadic effects are bound only when necessary: only the effects from the chosen branch of ifM are bound into its final value. Both *&& and *|| ignore their second argument when possible. And so on.
Now, those type signatures may not involve functions for every monadic operand, but that's just a cognitive simplification. There would be no semantic difference, ignoring bottoms, if all the non-function arguments and results were changed to () -> m a. It's just friendlier to users to optimize that cognitive overhead out.
Now, let's look at what happens to those functions with the Applicative interface. Well, uh, ifA gets the same type signature as ifM. But there's a really big problem here already: the effects of both x and y are bound into the composed structure, regardless of which one's value is selected.
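The definitions under discussion are not reproduced here; the following is a plausible sketch, assuming the standard formulations (ifM binds the condition and then runs only the chosen branch, while the Applicative ifA always combines the effects of both branches):

```haskell
import Control.Monad (liftM)

-- Bind the condition, then run only the selected branch's effects.
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM c x y = c >>= \b -> if b then x else y

-- Short-circuiting boolean operators over monadic values.
(*&&) :: Monad m => m Bool -> m Bool -> m Bool
x *&& y = ifM x y (return False)

(*||) :: Monad m => m Bool -> m Bool -> m Bool
x *|| y = ifM x (return True) y

-- Needs only a lifted 'not'; a bare Functor would suffice.
notM :: Monad m => m Bool -> m Bool
notM = liftM not

-- The Applicative analogue: both branches' effects are always bound.
ifA :: Applicative f => f Bool -> f a -> f a -> f a
ifA c x y = (\b x' y' -> if b then x' else y') <$> c <*> x <*> y
```

With Maybe, for example, ifM (Just True) (Just 1) Nothing is Just 1, but ifA (Just True) (Just 1) Nothing is Nothing, because ifA binds the effects of the untaken branch too.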
Well, ok, that seems like it'd be fine, except for the fact that it's an infinite loop, because ifA will always execute both branches... Except it's not even that close. pure x has the type f a. whileA p step <$> step x has the type f (f a). This isn't even an infinite loop; it's a compile error. Let's try again.
Well, shoot. We don't even get that far.
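For contrast with the failed attempts, the monadic loop composes fine; the two Applicative attempts, sketched in comments below, are the ones rejected as just described (again assuming the standard definitions):

```haskell
-- Monadic while-loop: bind decides at run time whether to keep looping.
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM c x y = c >>= \b -> if b then x else y

whileM :: Monad m => (a -> m Bool) -> (a -> m a) -> a -> m a
whileM p step x = ifM (p x) (step x >>= whileM p step) (return x)

-- First Applicative attempt: rejected, because the branches have
-- types f (f a) and f a, which do not unify.
--   whileA p step x = ifA (p x) (whileA p step <$> step x) (pure x)

-- Second attempt: rejected too; whileA p step :: a -> f a, so (<*>)
-- selects the function Applicative ((->) r), not f.
--   whileA p step x = ifA (p x) (whileA p step <*> step x) (pure x)
```

For example, whileM (\n -> Just (n < 10)) (\n -> Just (n + 3)) 0 evaluates to Just 12.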
whileA p step has the type a -> f a. If you try to use it as the first argument to <*>, it grabs the Applicative instance for the top type constructor, which is (->), not f. Yeah, this isn't gonna work either.
In fact, the only function from my Monad examples that would work with the Applicative interface is notM. That particular function works just fine with only a Functor interface, in fact. The rest? They fail.
Of course it's to be expected that you can write code using the Monad interface that you can't with the Applicative interface. It is strictly more powerful, after all. But what's interesting is what you lose. You lose the ability to compose functions that change what effects they have based on their input. That is, you lose the ability to write certain control-flow patterns that compose functions with types a -> f b.
Turing-complete composition is exactly what makes the Monad interface interesting. If it didn't allow Turing-complete composition, it would be impossible for you, the programmer, to compose together IO actions in any particular control flow that wasn't nicely prepackaged for you. It was the fact that you can use the Monad primitives to express any control flow that made the IO type a feasible way to manage the IO problem in Haskell.
Many more types than just IO have semantically valid Monad interfaces. And it happens that Haskell has the language facilities to abstract over the entire interface. Due to those factors, Monad is a valuable class to provide instances for, when possible. Doing so gets you access to all the existing abstract functionality provided for working with monadic types, regardless of what the concrete type is.
So if Haskell programmers seem to always care about Monad instances for a type, it's because it's the most generically useful instance that can be provided.
Monads are special because of do notation, which lets you write imperative programs in a functional language. Monad is the abstraction that allows you to splice together imperative programs from smaller, reusable components (which are themselves imperative programs). Monad transformers are special because they represent enhancing an imperative language with new features.
I suspect that the disproportionately large attention given to this one particular type class (Monad) over the many others is mainly a historical fluke. People often associate IO with Monad, although the two are independently useful ideas (as are list reversal and bananas). Because IO is magical (having an implementation but no denotation) and Monad is often associated with IO, it's easy to fall into magical thinking about Monad.
(Aside: it's questionable whether IO even is a monad. Do the monad laws hold? What do the laws even mean for IO, i.e., what does equality mean? Note the problematic association with the state monad.)
Well, first let me explain what the role of monads is: monads are very powerful, in a certain sense: you can pretty much express anything using a monad. Haskell as a language doesn't have things like action loops, exceptions, mutation, goto, etc. Monads can be expressed within the language (so they are not special) and make all of these reachable.
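To make that concrete, here is one such control structure, exception-style failure, built entirely inside the language with the Either monad (a sketch; the helper names are made up for illustration):

```haskell
-- Exception-like failure as an ordinary value: no language support needed.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

calc :: Int -> Int -> Int -> Either String Int
calc a b c = do
  q1 <- safeDiv a b   -- a Left here aborts the rest, like a thrown exception
  q2 <- safeDiv q1 c
  return (q1 + q2)

-- calc 100 5 2 == Right 30
-- calc 1 0 9   == Left "division by zero"
```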
There is a positive and a negative side to this. It's positive that you can express all those control structures you know from imperative programming, and a whole bunch of them you don't. I have just recently developed a monad that lets you reenter a computation somewhere in the middle with a slightly changed context. That way you can run a computation, and if it fails, you just try again with slightly adjusted values. Furthermore, monadic actions are first class, and that's how you build things like loops or exception handling. While while is primitive in C, in Haskell it's actually just a regular function.
The negative side is that monads give you pretty much no guarantees whatsoever. They are so powerful that you are allowed to do whatever you want, to put it simply. In other words, just as you know from imperative languages, it can be hard to reason about code by just looking at it.
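For example, a C-style while can be written as an ordinary library function over monadic actions (a sketch; countTo is a hypothetical driver showing it in use):

```haskell
import Data.IORef

-- 'while' as a plain library function, not a language primitive.
while :: Monad m => m Bool -> m () -> m ()
while cond body = do
  b <- cond
  if b then body >> while cond body else return ()

-- Count up to a limit using the home-made loop.
countTo :: Int -> IO Int
countTo limit = do
  r <- newIORef 0
  while ((< limit) <$> readIORef r) (modifyIORef r (+ 1))
  readIORef r

-- countTo 5 yields 5
```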
The more general abstractions are more general in the sense that they allow some concepts to be expressed which you can't express as monads. But that's only part of the story. Even for monads you can use a style known as applicative style, in which you use the applicative interface to compose your program from small isolated parts. The benefit is that you can reason about code by just looking at it, and you can develop components without having to pay attention to the rest of your system.
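A small illustration of that applicative style (a hypothetical example): the shape of the expression alone fixes which effects run, so each component can be read in isolation.

```haskell
data User = User { name :: String, age :: Int }
  deriving (Show, Eq)

-- Each field is validated independently; the possible failure effect is
-- fully determined by the static shape of the expression below.
mkUser :: Maybe String -> Maybe Int -> Maybe User
mkUser n a = User <$> n <*> a

-- mkUser (Just "ada") (Just 36) == Just (User "ada" 36)
-- mkUser Nothing (Just 36)      == Nothing
```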