`ap` doesn't have a documented spec, and its definition reads with a comment pointing out that it could be `(<*>)`, but isn't for practical reasons:
ap :: (Monad m) => m (a -> b) -> m a -> m b
ap m1 m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }
-- Since many Applicative instances define (<*>) = ap, we
-- cannot define ap = (<*>)
So I assume the `ap` in the `(<*>) = ap` law is shorthand for "the right-hand side of `ap`", and the law actually expresses a relationship between `(>>=)`, `return`, and `(<*>)`, right? Otherwise the law is meaningless.
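Reading it that way, the law with `ap`'s right-hand side inlined becomes an equation stated purely in `Monad` vocabulary. A small sketch (the names `lawRHS` and `checkMaybe` are just local illustrations, not anything from `base`):

```haskell
-- The (<*>) = ap law with ap's definition inlined: it relates (<*>)
-- to (>>=) and return, with no mention of ap itself.
lawRHS :: Monad m => m (a -> b) -> m a -> m b
lawRHS mf mx = mf >>= \f -> mx >>= \x -> return (f x)

-- Sanity check with Maybe, where the law certainly holds.
checkMaybe :: Bool
checkMaybe = (Just (+ 1) <*> Just 41) == lawRHS (Just (+ 1)) (Just (41 :: Int))
```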
The context is me thinking about `Validation` and how unsatisfying it is that it can't seem to have a lawful `Monad` instance. I'm also thinking about `ApplicativeDo` and how that transformation sort of lets us recover from the practical effects of a `Monad` instance for `Validation`; what I most often want to do is accumulate errors as far as possible, but still be able to use bind when necessary. We actually export a `bindV` function which we need to use just about everywhere; it's all kind of absurd. The only practical consequence of the lawlessness I can think of is that we accumulate different or fewer errors depending on what sort of composition we use (or how our program might theoretically be transformed by rewrite rules, though I'm not sure why applicative composition would ever get converted to monadic).
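To make the tension concrete, here is a minimal sketch of the kind of `Validation` type in question. The names `Validation`, `bindV`, and `apV` are assumptions for illustration (cf. the `validation` package on Hackage):

```haskell
-- A minimal error-accumulating Validation type.
data Validation e a = Failure e | Success a
  deriving (Eq, Show)

instance Functor (Validation e) where
  fmap _ (Failure e) = Failure e
  fmap f (Success a) = Success (f a)

-- The Applicative accumulates errors from both sides.
instance Semigroup e => Applicative (Validation e) where
  pure = Success
  Failure e1 <*> Failure e2 = Failure (e1 <> e2)
  Failure e1 <*> Success _  = Failure e1
  Success _  <*> Failure e2 = Failure e2
  Success f  <*> Success a  = Success (f a)

-- Bind must short-circuit: once we hit a Failure there is no value to
-- feed to the continuation, so the second computation's errors are
-- never seen.
bindV :: Validation e a -> (a -> Validation e b) -> Validation e b
bindV (Failure e) _ = Failure e
bindV (Success a) f = f a

-- ap written with bindV drops the right-hand errors, so it disagrees
-- with the accumulating (<*>) above: (<*>) = ap cannot hold.
apV :: Validation e (a -> b) -> Validation e a -> Validation e b
apV mf ma = mf `bindV` \f -> ma `bindV` \a -> Success (f a)
```

With two failures, `(<*>)` yields `Failure ["a","b"]` while `apV` on the same arguments yields only `Failure ["a"]` — exactly the "different or fewer errors depending on composition" effect.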
EDIT: The documentation for the same laws in `Monad` is more extensive:

> Furthermore, the Monad and Applicative operations should relate as follows:
>
>     pure = return
>     (<*>) = ap
>
> The above laws imply:
>
>     fmap f xs = xs >>= return . f
>     (>>) = (*>)
"The above laws imply"... so is the idea here that these are the real laws we care about? But now I'm left trying to understand these in the context of `Validation`. The first law would hold, and the second could obviously be made to hold if we just define `(>>) = (*>)`.
But the documentation for `Monad` surprisingly says nothing at all (unless I'm just missing it) about how `(>>)` should relate to `(>>=)`. Presumably we want

    a >> b = a >>= \_ -> b

...and `(>>)` is included in the class so that it can be overridden for efficiency; this requirement just never quite made it into the docs.
So if that's the case, then I guess the way `Monad` and `Applicative` relate is actually something like:

    return = pure
    xs >>= return . f = fmap f xs
    a >>= \_ -> b = fmap (const id) a <*> b
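These three equations can be checked mechanically for any given instance. A quick sanity check using the ordinary list instances (`xs`, `ys`, and `f` are arbitrary local test values):

```haskell
-- Check the three relating laws above for the list Monad/Applicative.
checkLaws :: Bool
checkLaws = and
  [ (return 1 :: [Int]) == pure 1
  , (xs >>= return . f) == fmap f xs
  , (xs >>= \_ -> ys) == (fmap (const id) xs <*> ys)
  ]
  where
    xs = [1, 2, 3] :: [Int]
    ys = "ab"
    f = (* 10)
```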
Speaking pedantically, I'd say the opposite: because `ap` is definitionally equal to its right-hand side, saying `(<*>) = ap` is exactly the same as saying `m1 <*> m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }`. It's just the normal first step of dealing with equalities like that: expanding the definitions.

Reply to the comment:
> Then the law would change or be removed too. Just as when/if `join` is added to `Monad`, the current definition will become a law instead.

Do you mean it would be impossible to define `ap` or the law in this way? If `ap`, then you are correct: it would have the wrong type. But stating the law like this would be fine.

Every `Monad` gives rise to an `Applicative`, and for that induced `Applicative`, `(<*>) = ap` will hold definitionally. But given two structures - `Monad m` and `Applicative m` - there is no guarantee that these structures agree without the two laws `(<*>) = ap` and `pure = return`. For example, take the 'regular' `Monad` instance for lists, and the zip-list `Applicative` instance. While there is nothing fundamentally 'wrong' about a `Monad` and an `Applicative` instance disagreeing, it would probably be confusing to most users, and so it's prohibited by the `Monad` laws.

tl;dr The laws in question serve to ensure that `Monad` and `Applicative` agree in an intuitively obvious way.
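The regular-list vs. zip-list disagreement can be seen directly. A small sketch (the particular functions and numbers are arbitrary):

```haskell
import Control.Applicative (ZipList (..))
import Control.Monad (ap)

-- The 'regular' list Applicative agrees with ap from the list Monad
-- (Cartesian product), while ZipList is a different, also-lawful
-- Applicative structure on the same underlying type.
main :: IO ()
main = do
  let fs = [(+ 1), (* 2)] :: [Int -> Int]
      xs = [10, 20]
  print (fs <*> xs)                               -- [11,21,20,40]
  print (fs `ap` xs)                              -- [11,21,20,40]
  print (getZipList (ZipList fs <*> ZipList xs))  -- [11,40]
```

So if lists had the ZipList `Applicative` alongside the usual `Monad`, `(<*>) = ap` would fail even though each structure is lawful on its own.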