Applicative functors are well-known and well-loved among Haskellers, for their ability to apply functions in an effectful context.
In category-theoretic terms, it can be shown that the methods of Applicative:

pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b

are equivalent to having a Functor f with the operations:

unit :: f ()
(**) :: (f a, f b) -> f (a,b)
the idea being that to write pure you just replace the () in unit with the given value, and to write (<*>) you squish the function and argument into a tuple and then map a suitable application function over it.
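Concretely, under that correspondence the Applicative methods come out as something like this sketch ((**) here is the pair-based operation above, not Prelude's exponentiation):

pure a    = fmap (const a) unit
ff <*> fa = fmap (\(f, a) -> f a) ((**) (ff, fa))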
Moreover, this correspondence turns the Applicative laws into natural monoidal-ish laws about unit and (**), so in fact an applicative functor is precisely what a category theorist would call a lax monoidal functor (lax because (**) is merely a natural transformation and not an isomorphism).
Okay, fine, great. This much is well-known. But that's only one family of lax monoidal functors – those respecting the monoidal structure of the product. A lax monoidal functor involves two choices of monoidal structure, in the source and destination: here's what you get if you turn product into sum:
class PtS f where
  unit :: f Void
  (**) :: f a -> f b -> f (Either a b)

-- some example instances

instance PtS Maybe where
  unit = Nothing
  Nothing ** Nothing = Nothing
  Just a  ** Nothing = Just (Left a)
  Nothing ** Just b  = Just (Right b)
  Just a  ** Just b  = Just (Left a) -- ick, but it does satisfy the laws

instance PtS [] where
  unit = []
  xs ** ys = map Left xs ++ map Right ys
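For instance, with the list instance, ** just tags each side and appends (a quick GHCi check, assuming Prelude's Floating (**) is hidden so the class operator is in scope):

ghci> ([1,2] :: [Int]) ** "ab"
[Left 1,Left 2,Right 'a',Right 'b']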
It seems like turning sum into other monoidal structures is made less interesting by unit :: Void -> f Void being uniquely determined, so you really have more of a semigroup going on. But still:
- Are other lax monoidal functors like the above studied or useful?
- Is there a neat alternative presentation for them like the Applicative one?
The "neat alternative presentation" for
Applicative
is based on the following two equivalenciesThe trick to get this "neat alternative presentation" for
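Presumably something along the lines of this sketch, with each presentation's methods definable from the other ((**) is the pair-based operation from the question):

-- pure from unit, and unit from pure:
pure a = fmap (const a) unit
unit   = pure ()

-- (<*>) from (**), and (**) from (<*>):
ff <*> fa     = fmap (\(f, a) -> f a) ((**) (ff, fa))
(**) (fa, fb) = (,) <$> fa <*> fb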
The trick to get this "neat alternative presentation" for Applicative is the same as the trick for zipWith - replace explicit types and constructors in the interface with things that the type or constructor can be passed into to recover what the original interface was.
The operation unit :: f () is replaced with pure, which we can substitute the type () and the constructor () :: () into to recover unit.
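In other words, something like:

-- pure :: a -> f a
-- substituting the type () and the value () :: () recovers unit:
unit :: Applicative f => f ()
unit = pure ()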
And similarly (though not as straightforward) for substituting the type (a,b) and the constructor (,) :: a -> b -> (a,b) into liftA2 to recover **.
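Spelled out, that is roughly the following, using the question's uncurried signature for (**) and assuming Prelude's (**) is hidden:

-- import Control.Applicative (liftA2)
-- liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
(**) :: Applicative f => (f a, f b) -> f (a, b)
(**) (fa, fb) = liftA2 (,) fa fb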
Applicative then gets the nice <*> operator by lifting function application ($) :: (a -> b) -> a -> b into the functor.
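Roughly, with the Prelude name hidden so it can be redefined:

-- ($) :: (a -> b) -> a -> b, lifted into f:
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
ff <*> fa = liftA2 ($) ff fa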
To find a "neat alternative presentation" for PtS we need to find:

- something we can substitute the type Void into to recover unit
- something we can substitute the type Either a b and the constructors Left :: a -> Either a b and Right :: b -> Either a b into to recover **

(If you notice that we already have something the constructors Left and Right can be passed to, you can probably figure out what we can replace ** with without following the steps I used; I didn't notice this until after I solved it.)

unit
This immediately gets us an alternative to unit for sums:
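Presumably a single value that exists at every result type (the name none below is just a placeholder); substituting Void for a gives back unit:

none :: f a
unit :: f Void
unit = none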
operator

We'd like to find an alternative to (**). There is an alternative to sums like Either that allows them to be written as functions of products. It shows up as the visitor pattern in object-oriented programming languages where sums don't exist. It's what you would get if you changed the order of either's arguments and partially applied them.
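A sketch of that encoding (the names Sum, toSum and toEither are illustrative; it needs RankNTypes):

{-# LANGUAGE RankNTypes #-}

type Sum a b = forall c. (a -> c) -> (b -> c) -> c

-- either :: (a -> c) -> (b -> c) -> Either a b -> c, arguments reordered:
toSum :: Either a b -> Sum a b
toSum e = \onLeft onRight -> either onLeft onRight e

toEither :: Sum a b -> Either a b
toEither s = s Left Right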
We can see that Either a b ≅ Sum a b. This allows us to rewrite the type for (**) as, roughly, f a -> f b -> f (Sum a b), i.e. f a -> f b -> f (forall c. (a -> c) -> (b -> c) -> c).
Now it's clear what ** does. It delays fmap-ing something onto both of its arguments, and combines the results of those two mappings. If we introduce a new operator, <||> :: f c -> f c -> f c, which simply assumes that the fmap-ing was done already, then we can see that ** is definable in terms of <||> and fmap. Or, back in terms of Either:
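That is, presumably:

fa ** fb = fmap Left fa <||> fmap Right fb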
So we can express everything PtS can express with the following class, and everything that could implement PtS can implement the following class:
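Something like the following sketch (the class and method names are placeholders, chosen to line up with the pieces above):

class Functor f => PtS' f where
  none   :: f a
  (<||>) :: f a -> f a -> f a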
This is almost certainly the same as the Alternative class, except we didn't require that the Functor be Applicative.

Conclusion
It's just a Functor that is a Monoid for all types. It'd be equivalent to the following:
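Roughly this pseudo-code (the class name is a placeholder; GHC's later QuantifiedConstraints extension can in fact express the constraint):

class (Functor f, forall a. Monoid (f a)) => MonoidalForSums f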
The forall a. Monoid (f a) constraint is pseudo-code; I don't know a way to express constraints like this in Haskell.

My background in category theory is very limited, but FWIW, your PtS class reminds me of the Alternative class, which looks essentially like this:
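That is, essentially (as in Control.Applicative, eliding the some and many methods):

class Applicative f => Alternative f where
  empty :: f a
  (<|>) :: f a -> f a -> f a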
The only problem of course is that Alternative is an extension of Applicative. However, perhaps one can imagine it being presented separately, and the combination with Applicative is then quite reminiscent of a functor with a non-commutative ring-like structure, with the two monoid structures as the operations of the ring? There are also distributivity laws between Applicative and Alternative, IIRC.

Before you can even talk about monoidal functors, you need to make sure you're in a monoidal category. It so happens that Hask is a monoidal category in the following way:
- () as identity
- (,) as bifunctor
- (a,()) ≅ ((),a) ≅ a, and (a,(b,c)) ≅ ((a,b),c)

Like you observed, it's also a monoidal category when you exchange () for Void and (,) for Either.

However, monoidal doesn't get you very far – what makes Hask so powerful is that it's cartesian closed. That gives us currying and related techniques, without which applicative would be pretty much useless.
A monoidal category can be cartesian closed only if its identity is a terminal object, i.e. a type onto which there exists precisely one arrow (of course, we disregard ⊥ here). There is one function A -> () for any type A, namely const (). There is no function A -> Void, however. Instead, Void is the initial object: there exists precisely one arrow from it, namely the function absurd :: Void -> a. Such a monoidal category can't be cartesian closed, then.

Now, of course, you can switch between initial and terminal easily by turning around the arrow direction. That always places you in the dual structure, so we get a cocartesian closed category. But that means you also need to flip the arrows in your monoidal functors. Those are called decisive functors then (and generalise comonads). With Conor's ever-so-amazing naming scheme,