Applicative is a Monoidal Functor:

    mappend :: f          -> f   -> f
    ($)     :: (a -> b)   -> a   -> b
    (<*>)   :: f (a -> b) -> f a -> f b

But I don't see any reference to Monoid in the definition of the Applicative typeclass. Could you tell me why?
Definition:

    class Functor f => Applicative (f :: * -> *) where
      pure :: a -> f a
      (<*>) :: f (a -> b) -> f a -> f b
      GHC.Base.liftA2 :: (a -> b -> c) -> f a -> f b -> f c
      (*>) :: f a -> f b -> f b
      (<*) :: f a -> f b -> f a
      {-# MINIMAL pure, ((<*>) | liftA2) #-}
No mention of that structural Monoid appears in this definition, but when you do

    > ("ab",(+1)) <*> ("cd", 5)
    ("abcd",6)

you can clearly see the use of a structural Monoid (the `String` in `(,) String`) in this instance of Applicative.
Another example to show that a "structural Monoid" is used:

    Prelude Data.Monoid> (2::Integer,(+1)) <*> (1::Integer,5)

    <interactive>:35:1: error:
        • Could not deduce (Monoid Integer) arising from a use of ‘<*>’
          from the context: Num b
            bound by the inferred type of it :: Num b => (Integer, b)
            at <interactive>:35:1-36
        • In the expression: (2 :: Integer, (+ 1)) <*> (1 :: Integer, 5)
          In an equation for ‘it’:
              it = (2 :: Integer, (+ 1)) <*> (1 :: Integer, 5)
The monoid that's referred to with “monoidal functor” is not a `Monoid` monoid, i.e. a value-level monoid. It's a type-level monoid instead: namely, the boring product monoid, whose unit is the type `()` and whose operation is the pair type constructor `(,)`. (You may notice that this is not strictly speaking a monoid; it's only one if you consider `((a,b),c)` and `(a,(b,c))` as the same type. They are, sure enough, isomorphic.)

To see what this has to do with `Applicative`, resp. monoidal functors, we need to write the class in other terms.
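The class definition that originally appeared here was lost in extraction. A sketch of one common rendering (the class name `Monoidal` and the method names `unit`/`fzip` are my choice of naming, not a standard API), together with the translations to and from `Applicative`:

```haskell
-- A monoidal-functor class: the type-level monoid ((), (,)) lifted into f.
-- The names Monoidal, unit and fzip are assumptions, not a standard API.
class Functor f => Monoidal f where
  unit :: f ()                    -- the lifted type-level mempty
  fzip :: f a -> f b -> f (a, b)  -- the lifted type-level (<>)

-- Applicative operations from Monoidal ...
pure' :: Monoidal f => a -> f a
pure' x = fmap (const x) unit

ap' :: Monoidal f => f (a -> b) -> f a -> f b
ap' ff fx = fmap (\(g, x) -> g x) (fzip ff fx)

-- ... and Monoidal operations from Applicative.
unit' :: Applicative f => f ()
unit' = pure ()

fzip' :: Applicative f => f a -> f b -> f (a, b)
fzip' fa fb = (,) <$> fa <*> fb

-- An example instance, to make the sketch concrete:
instance Monoidal Maybe where
  unit = Just ()
  fzip (Just a) (Just b) = Just (a, b)
  fzip _        _        = Nothing
```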
It's a simple exercise to define a generic instance of the standard `Applicative` class in terms of `Monoidal`, and vice versa.
Regarding `("ab",(+1)) <*> ("cd", 5)`: that doesn't have much to do with `Applicative` in general, but only with the writer applicative specifically. The instance is the one for pairs, `instance Monoid a => Applicative ((,) a)`, with `pure x = (mempty, x)` and `(u, f) <*> (v, x) = (u <> v, f x)`.


I wanted to complement Conor McBride's (pigworker) instructive answer with some more examples of
`Monoid`s found in `Applicative`s. It has been observed that the `Applicative` instance of some functors resembles a corresponding `Monoid` instance; for example, the instances for lists, `Maybe`, `Either` and `State` mirror monoids on `Nat`, `Bool`, `Maybe` and `Endo` respectively. Following Conor's comment, we can understand why we have these correspondences. We use the following observations:

- The *shape* of an `Applicative` container forms a `Monoid` under the application operation `<*>`.
- The shape of a functor `F` is given by `F 1` (where `1` denotes the unit type `()`).

For each of the
`Applicative` functors listed above, we compute their shape by instantiating the functor with the unit element. We get that:

- `List` has the shape of `Nat`: `[()]` is isomorphic to `Nat`.
- `Maybe` has the shape of `Bool`: `Maybe ()` is isomorphic to `Bool`.
- `Either` has the shape of `Maybe`: `Either e ()` is isomorphic to `Maybe e`.
- `State` has the shape of `Endo`: `State s ()`, i.e. `s -> ((), s)`, is isomorphic to `s -> s`, i.e. `Endo s`.

The types of the shapes match precisely the types underlying the `Monoid`s listed in the beginning. One thing still puzzles me: some of these types admit multiple `Monoid` instances (e.g., `Bool` can be made into a `Monoid` as `All` or `Any`) and I'm not entirely sure why we get one of the instances and not the other. My guess would be that this is related to the applicative laws and how they interact with the other component of the container – its positions.


Perhaps the monoid you're looking for is this one.
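The definition that originally followed was lost in extraction; it can be sketched as below (a rendering with the modern `Semigroup` step; per the comment mentioned next, the same construction lives in the reducers library, and since base 4.12 in `Data.Monoid`, under the name `Ap`):

```haskell
-- The monoid lurking behind any Applicative: lift a monoid m through f,
-- combining effects with <*> and results with the underlying (<>).
newtype AppM f m = AppM { getAppM :: f m }

instance (Applicative f, Semigroup m) => Semigroup (AppM f m) where
  AppM fx <> AppM fy = AppM ((<>) <$> fx <*> fy)

instance (Applicative f, Monoid m) => Monoid (AppM f m) where
  mempty = AppM (pure mempty)
```

For `f = []`, note that the shapes multiply: `length (getAppM (AppM xs <> AppM ys)) == length xs * length ys`, which is the multiplicative monoid on lengths discussed below.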
As a comment below observes, it can be found in the reducers library under the name `Ap`. It's fundamental to `Applicative`, so let's unpack it.

Note, in particular, that because `()` is trivially a `Monoid`, `AppM f ()` is a `Monoid`, too. And that's the monoid lurking behind `Applicative f`.

We could have insisted on `Monoid (f ())` as a superclass of `Applicative`, but that would have fouled things up royally. The monoid underlying `Applicative []` is multiplication of natural numbers, whereas the ‘obvious’ monoidal structure for lists is concatenation, which specialises to addition of natural numbers.


Mathematics warning. Dependent types warning. Fake Haskell warning.
One way to see what's going on is to consider those Applicatives which happen to be containers in the dependently typed sense of Abbott, Altenkirch and Ghani. We'll have these in Haskell sometime soon. I'll just pretend the future has arrived.
The data structure `(s <| p)` is characterised by

- a type `s` of *shapes*, which tell you what the container looks like, and
- a shape-indexed family `p` of *positions*, which tell you for a given shape where you can put data.

To give data for such a structure is to pick a shape, then fill all the positions with data.
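The definition that originally appeared here was lost in extraction; in the fake dependent Haskell that the warning above announces, it might be rendered along these lines (notation assumed, not compilable):

```haskell
-- Fake Haskell: a container is a shape together with a function
-- assigning data to every position of that shape.
data (s :: *) <| (p :: s -> *) :: * -> * where
  (:<|:) :: pi (a :: s) -> (p a -> x) -> (s <| p) x
```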
The container presentation of `[]` is `Nat <| Fin`, where `Nat` is the type of natural numbers and `Fin n` is a type with exactly `n` values. That is, the shape of a list is its length, and that tells you how many elements you need to fill up the list.

You can find the shapes for a Haskell
`Functor f` by taking `f ()`. By making the data trivial, the positions don't matter. Constructing the GADT of positions generically in Haskell is rather more difficult.

Parametricity tells us that a polymorphic function between containers, `forall x. (s <| p) x -> (s' <| p') x`,
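The `Nat` and `Fin` definitions referred to above are, unlike most of this answer, real compilable Haskell (a sketch; only the names `Nat` and `Fin` come from the text):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- The shape type of lists: natural numbers.
data Nat = Z | S Nat

-- Fin n: the positions of a list of length n; it has exactly n values.
data Fin (n :: Nat) where
  FZ :: Fin ('S n)            -- position 0, available when the length is >= 1
  FS :: Fin n -> Fin ('S n)   -- the successor position

-- Convert a position to an Int, to see which index it denotes.
posToInt :: Fin n -> Int
posToInt FZ     = 0
posToInt (FS i) = 1 + posToInt i
```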
must be given by

- a function `f :: s -> s'` mapping input shapes to output shapes, and
- a function `g :: pi (a :: s) -> p' (f a) -> p a` mapping (for a given input shape) output positions back to the input positions where the output element will come from.

(Secretly, those of us who have had our basic Hancock training also think of "shapes" as "commands" and "positions" as "valid responses". A morphism between containers is then exactly a "device driver". But I digress.)
Thinking along similar lines, what does it take to make a container `Applicative`? For starters, we need `pure :: x -> (s <| p) x`, which is equivalently a polymorphic function into `s <| p` from the identity container (one shape, one position). That has to be given by a shape map `f :: () -> s` and a position map sending every position of the output shape back to the single input position, where `f = const neutral` for some `neutral :: s`.

Now, what about `<*>`
? Again, parametricity tells us two things. Firstly, the only useful data for calculating the output shapes are the two input shapes: we must have a function `outShape :: s -> s -> s`. Secondly, the only way we can fill an output position with a `y` is to pick a position from the first input to find a function in `x -> y` and then a position in the second input to obtain its argument. That is, we can always identify the pair of input positions which determine the output in an output position.
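For the list container, where shapes are lengths, these data are concrete enough to check in ordinary Haskell (the names `neutral` and `outShape` follow the answer; using `Int` for the shape type is a simplification):

```haskell
-- Shapes of [] are lengths. pure makes a singleton, so:
neutral :: Int
neutral = length (pure () :: [()])   -- = 1

-- <*> pairs every function with every argument, so shapes multiply:
outShape :: Int -> Int -> Int
outShape = (*)

-- The shape of fs <*> xs is outShape (length fs) (length xs):
shapeLaw :: [a -> b] -> [a] -> Bool
shapeLaw fs xs = length (fs <*> xs) == outShape (length fs) (length xs)
```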
The applicative laws tell us that `neutral` and `outShape` must obey the monoid laws, and that, moreover, we can lift monoids: a `Monoid y` induces a `Monoid ((s <| p) y)`, with `mempty = pure mempty` and `mappend = liftA2 mappend`.

There's something more to say here, but for that, I need to contrast two operations on containers.
Composition

The composition of two containers nests one inside the other: a shape of the composite is an outer shape together with a choice of inner shape for each outer position, and a position of the composite is a dependent pair (`Sigma` being the type of dependent pairs) of an outer position and an inner position at the inner shape chosen there.

What on earth does that mean? Or, in Hancock's command-response terms: issue a command and, on seeing its response, issue a follow-up command; the overall response pairs the two. Or, more blatantly: the second command gets to depend on the first response.
The `join` of a `Monad` flattens a composition. Lurking behind it is not just a monoid on shapes, but an integration operator. That is, `join` requires a map turning an entire shape-of-shapes, `(s <| p) s`, into a single shape in `s`.
Your free monad gives you strategy trees, where you can use the result of one command to choose the rest of your strategy. As if you're interacting at a 1970s teletype.
Meanwhile...
Tensor

The tensor (also due to Hancock) of two containers is given by pairing the shapes and pairing the positions: a shape of `(s <| p) >< (s' <| p')` is a pair of shapes, and a position is a pair of one position from each component. That is: issue both commands before seeing either response. Or: choose both dimensions, then fill in an entry at every pair of coordinates. Or: `[] >< []` is the type of rectangular matrices: the ‘inner’ lists must all have the same length.

The latter is a clue to why
`><` is very hard to get your hands on in Haskell, but easy in the dependently typed setting.

Like composition, tensor is a monoid with the identity functor as its neutral element. If we replace the composition underlying `Monad` by tensor, we get a class whose `join`-like operation flattens a tensor: `mystery :: (f >< f) x -> f x`. But whatever can
`mystery` be? It's not a mystery, because we know there's a rather rigid way to make polymorphic functions between containers. There must be a function combining the two input shapes into the output shape, and a function sending each output position back to a pair of input positions, one from each side; and those are exactly what we said determined `<*>` earlier.

`Applicative` is the notion of effectful programming generated by tensor, where `Monad` is generated by composition. The fact that you don't get to/need to wait for the outer response to choose the inner command is why `Applicative` programs are more readily parallelizable.

Seeing
`[] >< []` as rectangular matrices tells us why `<*>` for lists is built on top of multiplication.

The free applicative functor is the free monoid with knobs on. For containers, its shapes are *lists* of shapes, and a position supplies one position for each shape in the list. So a "command" is a big list of commands, like a deck of punch cards. You don't get to see any output before you choose your card deck. The "response" is your lineprinter output. It's the 1960s.
So there you go. The very nature of `Applicative`, tensor not composition, demands an underlying monoid, and a recombination of elements compatible with monoids.