class Applicative f => Monad f where
    return :: a -> f a
    (>>=)  :: f a -> (a -> f b) -> f b
(<*>) can be derived from pure and (>>=):

fs <*> as =
  fs >>= (\f -> as >>= (\a -> pure (f a)))
For the line

fs >>= (\f -> as >>= (\a -> pure (f a)))

I am confused by the usage of >>=. I think it takes a functor f a and a function, then returns another functor f b. But in this expression, I feel lost.
Let's start with the type we're implementing:
(<*>) :: Monad f => f (a -> b) -> f a -> f b
(The normal type of <*> of course has an Applicative constraint, but here we're trying to use Monad to implement Applicative.)
So in fs <*> as = _, fs is an "f of functions" (f (a -> b)), and as is an "f of a's". We'll start by binding fs:
(<*>) :: Monad f => f (a -> b) -> f a -> f b
fs <*> as
  = fs >>= _
If you actually compile that, GHC will tell us what type the hole (_) has:
foo.hs:4:12: warning: [-Wtyped-holes]
    • Found hole: _ :: (a -> b) -> f b
      Where: ‘a’, ‘f’, ‘b’ are rigid type variables bound by
        the type signature for:
          (Main.<*>) :: forall (f :: * -> *) a b.
                        Monad f =>
                        f (a -> b) -> f a -> f b
        at foo.hs:2:1-45
That makes sense. Monad's >>= takes an f a on the left and a function a -> f b on the right, so by binding an f (a -> b) on the left, the function on the right gets to receive an (a -> b) function "extracted" from fs. And provided we can write a function that can use that to return an f b, then the whole bind expression will return the f b we need to meet the type signature for <*>.
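Before filling the hole, here's a tiny concrete illustration of that "extraction" at the Maybe monad (my own example, not part of the derivation):

-- Binding a Maybe (Int -> Int): the continuation receives the bare function.
example :: Maybe Int
example = Just (+ 1) >>= \f -> Just (f 41)   -- evaluates to Just 42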
So it'll look like:
(<*>) :: Monad f => f (a -> b) -> f a -> f b
fs <*> as
  = fs >>= (\f -> _)
What can we do there? We've got f :: a -> b, and we've still got as :: f a, and we need to make an f b. If you're used to Functor that's obvious; just fmap f as. Monad implies Functor, so this does in fact work:
(<*>) :: Monad f => f (a -> b) -> f a -> f b
fs <*> as
  = fs >>= (\f -> fmap f as)
It's also, I think, a much easier way to understand the way Applicative can be implemented generically using the facilities from Monad.
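If it helps, here's that fmap-based version as a small self-contained check (my own example; the local operator shadows the real <*>, so I hide the Prelude one):

import Prelude hiding ((<*>))

(<*>) :: Monad f => f (a -> b) -> f a -> f b
fs <*> as = fs >>= (\f -> fmap f as)

-- A couple of spot checks (results match the library (<*>)):
--   Just (+1)    <*> Just 41   ==>  Just 42
--   [(+1), (*2)] <*> [10, 20]  ==>  [11,21,20,40]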
So why is your example written using another >>= and pure instead of just fmap? It's kind of harkening back to the days when Monad did not have Applicative and Functor as superclasses. Monad always "morally" implied both of these (since you can implement Applicative and Functor using only the features of Monad), but Haskell didn't always require there to be these instances, which leads to books, tutorials, blog posts, etc. explaining how to implement these using only Monad. The example line given is simply inlining the definition of fmap in terms of >>= and pure (return)¹.
I'll continue to unpack as if we didn't have fmap, so that it leads to the version you're confused by.
If we're not going to use fmap to combine f :: a -> b and as :: f a, then we'll need to bind as so that we have an expression of type a to apply f to:
(<*>) :: Monad f => f (a -> b) -> f a -> f b
fs <*> as
  = fs >>= (\f -> as >>= (\a -> _))
Inside that hole we need to make an f b, and we have f :: a -> b and a :: a. f a gives us a b, so we'll need to call pure to turn that into an f b:
(<*>) :: Monad f => f (a -> b) -> f a -> f b
fs <*> as
  = fs >>= (\f -> as >>= (\a -> pure (f a)))
So that's what this line is doing.
- Binding fs :: f (a -> b) to get access to an f :: a -> b
- Inside the function that has access to f, it's binding as to get access to a :: a
- Inside the function that has access to a (which is still inside the function that has access to f as well), call f a to make a b, and call pure on the result to make it an f b
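As a sanity check of those three steps, here's a hand evaluation at the list monad (my own worked example; for lists, xs >>= k is concatMap k xs and pure x is [x]):

--   [(+1),(*2)] <*> [10,20]
-- = [(+1),(*2)] >>= \f -> [10,20] >>= \a -> pure (f a)
-- = ([10,20] >>= \a -> pure ((+1) a)) ++ ([10,20] >>= \a -> pure ((*2) a))
-- = [11,21] ++ [20,40]
-- = [11,21,20,40]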
¹ You can implement fmap using >>= and pure as fmap f xs = xs >>= (\x -> pure (f x)), which is also fmap f xs = xs >>= pure . f. Hopefully you can see that the inner bind of your example is simply inlining the first version.
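Spelled out as an equational step (my own gloss on that footnote), inlining fmap turns the fmap-based definition into exactly the line from the question:

--   fs >>= (\f -> fmap f as)
-- = fs >>= (\f -> as >>= (\a -> pure (f a)))   -- inline fmap f as = as >>= (\a -> pure (f a))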
Applicative is a Functor. Monad is also a Functor. We can see the "functorial values" as standing for computations of those values (like IO a, Maybe a, [] a, etc.).
Both fs and as are your functorial values, and bind ((>>=), or in do notation <-) "gets" the carried values "in" the functor. Bind belongs to Monad.
What we can implement in Monad with (using return as just a synonym for pure)

do { f <- fs ;        -- fs >>= ( \ f ->        -- fs :: F (a -> b)   -- f :: a -> b
     a <- as ;        --   as >>= ( \ a ->      -- as :: F a          -- a :: a
     return (f a)     --     return (f a) ) )   -- f a :: b
   }                                            --            :: F b
(or, with MonadComprehensions, [ f a | f <- fs, a <- as ]) is what we get from the Applicative's <*>, which expresses the same computation combination but without the full power of Monad. The difference is that with Applicative, as is not dependent on the value f there, "produced by" the computation fs. Monad allows such dependency, as in

[ bar x y | x <- xs, y <- foo x ]

but Applicative forbids it.
With Applicative, all the "computations" (like fs or as) must be known "in advance"; with Monad they can be calculated based on the results of the previous "computation steps" (as foo x is doing: for each value x that the computation xs will produce, a new computation foo x will be (purely) calculated, which will produce some y(s) in its turn).
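Here's a small list-monad sketch of that difference (xs and foo are hypothetical names, echoing the comprehension above):

xs :: [Int]
xs = [1, 2, 3]

foo :: Int -> [Int]      -- the next computation is built from the previous result
foo x = replicate x x

dependent :: [Int]       -- needs Monad: foo x depends on x
dependent = [ y | x <- xs, y <- foo x ]     -- [1,2,2,3,3,3]

independent :: [Int]     -- no such dependency: Applicative is enough
independent = (+) <$> xs <*> [10, 20]       -- [11,21,12,22,13,23]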
If you want to see how the types are aligned in the >>= expressions, here's your expression with its subexpressions named, so they can be annotated with their types:
exp = fs >>= g                      -- fs >>=
  where g f = xs >>= h              --   ( \ f -> xs >>=
          where h x = return (f x)  --     ( \ x -> pure (f x) ) )

x            :: a
f            :: a -> b
f x          :: b
return (f x) :: F b
h            :: a -> F b            -- (>>=) :: F a -> (a -> F b) -> F b
xs           :: F a                 --           xs       h
                                    --           <-----
xs >>= h     :: F b
g f          :: F b
g            :: (a -> b) -> F b     -- (>>=) :: F (a->b) -> ((a->b) -> F b) -> F b
fs           :: F (a -> b)          --           fs              g
                                    --           <----------
fs >>= g     :: F b
exp          :: F b
and the types of the two (>>=) applications fit:

(fs :: F (a -> b)) >>= (g :: (a -> b) -> F b)  ::  F b
(xs :: F a       ) >>= (h :: a -> F b       )  ::  F b
Thus, the overall type is indeed

foo :: F (a -> b) -> F a -> F b
foo fs xs = fs >>= g                -- foo = (<*>)
  where g f = xs >>= h
          where h x = return (f x)
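Plugging in concrete monads shows this foo behaving exactly like the library (<*>) (my own GHCi spot checks):

-- foo (Just (+1)) (Just 41)   ==>  Just 42
-- foo (Just (+1)) Nothing     ==>  Nothing
-- foo [(+1),(*2)] [10,20]     ==>  [11,21,20,40]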
In the end, we can see monadic bind as an implementation of do, and treat the do notation abstractly, axiomatically, as consisting of the lines of the form

do {  a <- F a ;
      b <- F b ;
      ......
      n <- F n ;
      return (foo a b .... n) }

(with a, b, ..., n denoting values, and F a, F b, ..., F n the computations producing them), such that it describes the overall combined computation of the type F t, where foo :: a -> b -> ... -> n -> t. And when none of the <-'s right-hand side expressions depends on any preceding left-hand side's variable, it's not essentially Monadic, but just an Applicative computation that this do block is describing.
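To make that concrete, here's a two-line do block whose right-hand sides don't mention any earlier variable, together with its Applicative-only reading (a small sketch of my own):

import Control.Applicative (liftA2)

pairSum :: Maybe Int
pairSum = do
  a <- Just 1
  b <- Just 2              -- does not depend on a
  return (a + b)           -- Just 3

pairSum' :: Maybe Int
pairSum' = liftA2 (+) (Just 1) (Just 2)   -- the same computation, no Monad power needed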
Because of the Monad laws it is enough to define the meaning of do blocks with just two <- lines. For Functors, just one <- line is allowed (fmap f xs = do { x <- xs; return (f x) }).
Thus, Functors/Applicative Functors/Monads are EDSLs, embedded domain-specific languages, because the computation-descriptions are themselves values of our language (those to the right of the arrows in do notation).
Lastly, a types mandala for you:
M a
(a -> M b)
M (M b)
M b
This contains three in one:
F a              A a              M a
a  ->  b         A (a -> b)       a -> M b
--------------   --------------   -----------------
F b              A b              M b
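One way to read that mandala in code (my own gloss): bind is fmap followed by join, taking M a through M (M b) to M b.

import Control.Monad (join)

bind :: Monad m => m a -> (a -> m b) -> m b
bind ma k = join (fmap k ma)   -- fmap k :: m a -> m (m b), join :: m (m b) -> m b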
You can define (<*>) in terms of (>>=) and return because all monads are applicative functors. You can read more about this in the Functor-Applicative-Monad Proposal. In particular, pure = return and (<*>) = ap is the shortest way to achieve an Applicative definition given an existing Monad definition.
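As a sketch of that shortcut, here's how it looks for a hypothetical Box type (with today's class hierarchy, where Applicative is a superclass of Monad, the instances are written in this order):

import Control.Monad (ap)

newtype Box a = Box a deriving Show

instance Functor Box where
  fmap f (Box a) = Box (f a)

instance Applicative Box where
  pure  = Box
  (<*>) = ap               -- the (<*>) = ap shortcut

instance Monad Box where
  Box a >>= k = k a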
See the type signatures for (<*>), ap and (>>=):
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
ap :: Monad m => m (a -> b) -> m a -> m b
(>>=) :: Monad m => m a -> (a -> m b) -> m b
The type signatures for (<*>) and ap are nearly equivalent. Since ap is written using do notation, it is equivalent to some use of (>>=). I'm not sure this helps, but I find the definition of ap readable. Here's a rewrite:
ap m1 m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }

≡ ap m1 m2 = do
      x1 <- m1
      x2 <- m2
      return (x1 x2)

≡ ap m1 m2 =
      m1 >>= \x1 ->
      m2 >>= \x2 ->
      return (x1 x2)

≡ ap m1 m2 = m1 >>= \x1 -> m2 >>= \x2 -> return (x1 x2)

≡ ap mf ma = mf >>= (\f -> ma >>= (\a -> pure (f a)))
Which is your definition. You could show that this definition upholds the applicative functor laws, since not everything defined in terms of (>>=) and return does that.
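For instance, here's a sketch of the identity law for this definition, checked using only the monad laws (my own derivation):

--   pure id <*> v
-- = return id >>= \f -> v >>= \a -> return (f a)
-- = v >>= \a -> return (id a)       -- left identity: return x >>= k  =  k x
-- = v >>= return                    -- id a = a
-- = v                               -- right identity: m >>= return  =  m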