I believe I understand fmap . fmap for Functors, but on functions it's been hurting my head for months now.

I've seen that you can just apply the definition of (.) to (.) . (.), but I've forgotten how to do that.

When I try it myself, it always turns out wrong:
(.) f g = \x -> f (g x)
(.) (.) (.) = \x -> (.) ((.) x)
\x f -> (.) ((.) x) f
\x f y -> (((.) (f y)) x)
\x f y g -> (((.) (f y) g) x)
\x f y g -> ((f (g y)) x)
\x f y g -> ((f (g y)) x) :: t2 -> (t1 -> t2 -> t) -> t3 -> (t3 -> t1) -> t
If "just applying the definition" is the only way of doing it, how did anybody come up with (.) . (.)? There must be some deeper understanding or intuition I'm missing.
Your solution diverges when you introduce y. It should be

(.) (.) (.) = \x -> (.) ((.) x)
\x f -> (.) ((.) x) f
\x f -> (.) x . f
\x f y -> (.) x (f y)
\x f y -> x . f y
\x f y z -> x (f y z)

which matches the original type signature:
(.) . (.) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
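To double-check the endpoint of the expansion, here is a small runnable sketch; the test functions negate and (+) are my choice, not part of the original post:

```haskell
-- (.) . (.) versus its fully expanded lambda form; both feed two
-- arguments to g and post-process the result with f.
compose2 :: (c -> d) -> (a -> b -> c) -> a -> b -> d
compose2 = (.) . (.)

expanded :: (c -> d) -> (a -> b -> c) -> a -> b -> d
expanded = \f g x y -> f (g x y)

main :: IO ()
main = print (compose2 negate (+) 3 4, expanded negate (+) 3 4)
-- prints (-7,-7)
```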
(It's easiest to do the expansion in ghci, where you can check each step with :t expression.)

Edit:
The deeper intuition is this:

(.) is simply defined as

(.) f g = \x -> f (g x)

which we can simplify to

(.) f g x = f (g x)

So when you supply it two arguments, it's curried and still needs another argument to resolve. Each time you use (.) with 2 arguments, you create a "need" for one more argument.

(.) . (.) is of course just (.) (.) (.), so let's expand it, renaming the variables of each copy of the definition so they don't clash:

(.) f0 g0 x0 = f0 (g0 x0)
(.) f1 g1 x1 = f1 (g1 x1)
(.) f2 g2 x2 = f2 (g2 x2)

We can beta-reduce on f0 and g0 (but we don't have an x0!):

(.) (.) (.) = \x0 -> (.) ((.) x0)

Substitute the 2nd expression for f1...

\x0 -> \g1 x1 -> ((.) x0) (g1 x1)

Now it "flips back"! (beta-reduction on f2):

\x0 -> \g1 x1 -> (\g2 x2 -> x0 (g2 x2)) (g1 x1)

This is the interesting step -- x0 is substituted for f2 -- this means that x, which could have been data, is instead a function. This is what (.) . (.) provides -- the "need" for the extra argument.

This is starting to look normal... Let's beta-reduce a last time (on g2):

\x0 -> \g1 x1 -> \x2 -> x0 (g1 x1 x2)

So we're left with simply

\x0 g1 x1 x2 -> x0 (g1 x1 x2)

where the arguments are nicely still in order.
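The end result of the reduction can be sanity-checked directly against (.) . (.); a quick sketch, where the example functions show and (*) are mine:

```haskell
-- The fully reduced form, with its arguments still in order:
-- the outer function, the two-argument function, then the data.
reduced :: (c -> d) -> (a -> b -> c) -> a -> b -> d
reduced = \x0 g1 x1 x2 -> x0 (g1 x1 x2)

main :: IO ()
main = print (reduced show (*) 6 (7 :: Int), ((.) . (.)) show (*) 6 (7 :: Int))
-- prints ("42","42")
```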
It's easiest to write equations, combinators-style, instead of lambda-expressions: a b c = (\x -> ... body ...) is equivalent to a b c x = ... body ..., and vice versa, provided that x does not appear among {a,b,c}. So:

((.) . (.)) f g x y = (.) ((.) f) g x y
                    = ((.) f . g) x y
                    = (.) f (g x) y
                    = (f . g x) y
                    = f (g x y)

You discover this if, given f (g x y), you want to convert it into a combinatory form (get rid of all the parentheses and variable repetitions). Then you apply patterns corresponding to the combinators' definitions, hopefully tracing this derivation backwards. This is much less mechanical/automatic, though.

You can also use your understanding of
fmap . fmap.

If you have two Functors foo and bar, then fmap . fmap takes a function and produces an induced function for the composition of the two Functors.

Now, for any type t, (->) t is a Functor, and the fmap for that Functor is (.).

So (.) . (.) is fmap . fmap for the case where the two Functors are (->) s and (->) t, and thus it "composes" a function f :: a -> b with a function of two arguments, g :: s -> t -> a:

(fmap . fmap) f g :: s -> t -> b

That view also makes it clear that, and how, the pattern extends to functions taking more arguments: fmap . fmap . fmap handles g :: s -> t -> u -> a, etc.
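To make that view concrete, here is a small runnable check (the example functions negate and (+) are mine) that fmap . fmap and (.) . (.) agree when both Functors are function types:

```haskell
-- fmap for the ((->) r) Functor is (.), so fmap . fmap over two
-- function Functors coincides with (.) . (.).
viaFmap :: Int -> Int -> Int
viaFmap = (fmap . fmap) negate (+)   -- negate the final sum

viaDots :: Int -> Int -> Int
viaDots = ((.) . (.)) negate (+)

-- The pattern extends: fmap . fmap . fmap, i.e. (.) . (.) . (.),
-- reaches under three arguments.
via3 :: Int -> Int -> Int -> Int
via3 = ((.) . (.) . (.)) negate (\x y z -> x + y + z)

main :: IO ()
main = print (viaFmap 2 3, viaDots 2 3, via3 1 2 3)
-- prints (-5,-5,-6)
```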
Coming up with (.) . (.) is actually pretty straightforward; it's the intuition behind what it does that is quite tricky to understand.

(.) gets you very far when rewriting expressions into "pipe"-style computations (think of | in the shell). However, it becomes awkward to use once you try to compose a function that takes multiple arguments with a function that only takes one. As an example, let's take a definition of concatMap:

concatMap :: (a -> [b]) -> [a] -> [b]
concatMap f xs = concat (map f xs)

Getting rid of xs is just a standard operation:

concatMap f = concat . map f

However, there's no "nice" way of getting rid of f. This is caused by the fact that map takes two arguments and we'd like to apply concat to its final result.

You can of course apply some pointfree tricks and get away with just (.):

concatMap = (.) concat . map

But alas, readability of this code is mostly gone. Instead, we introduce a new combinator that does exactly what we need: apply the second function to the final result of the first one:

(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(f .: g) x y = f (g x y)

concatMap = concat .: map
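A self-contained sketch of such a combinator, checked against the Prelude's concatMap. The operator name .: is my choice here (a combinator of the same shape is exported by the composition package on Hackage); it is not mandated by the original answer:

```haskell
import Prelude hiding (concatMap)
import qualified Prelude

-- Apply the second function to the final result of the first one.
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(f .: g) x y = f (g x y)

-- concatMap, pointfree and still readable:
concatMap :: (a -> [b]) -> [a] -> [b]
concatMap = concat .: map

main :: IO ()
main = print ( concatMap (replicate 2) "abc"
             , Prelude.concatMap (replicate 2) "abc" )
-- prints ("aabbcc","aabbcc")
```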
Fine, that's it for motivation. Let's get to the pointfree business:

(f .: g) x y = f (g x y)
(f .: g) x y = (f . g x) y
(f .: g) x   = f . g x

Now, here comes the interesting part. This is yet another of the pointfree tricks that usually helps when you get stuck: we rewrite . into its prefix form and try to continue from there:

(f .: g) x = (.) f (g x)
(f .: g) x = ((.) f . g) x
f .: g     = (.) f . g
f .: g     = (.) ((.) f) g
f .: g     = ((.) . (.)) f g
(.:)       = (.) . (.)

As for intuition, there's this very nice article that you should read. I'll paraphrase the part about (.):

Let's think again about what our combinator should do: it should apply f to the result of the result of g. (I've been using "final result" in the part before on purpose; it's really what you get when you fully apply -- modulo unifying type variables with another function type -- the function g. "Result" here is just the application g x for some x.)

What does it mean to apply f to the result of g? Well, once we apply g to some value, we take the result and apply f to it. Sounds familiar: that's exactly what (.) does.

Now, it turns out that the composition of those "apply to the result" combinators is just function composition, that is:

(.:) = (.) . (.)
So, this is what I get when I do a slightly more incremental expansion:

(.) . (.)
= \x -> (.) ((.) x)
= \x g -> (.) x . g
= \x g y -> ((.) x) (g y)
= \x g y -> x . g y
= \x g y z -> x (g y z)

which, according to ghci, has the correct type:

(.) . (.) :: (b -> c) -> (a -> a1 -> b) -> a -> a1 -> c
While I don't know the origins of this combinator, it is likely that it was developed for use in combinatory logic, where you work strictly with combinators, so you can't define things using more convenient lambda expressions. There may be some intuition that goes with figuring these things out, but I haven't found it. Most likely, you would develop some level of intuition if you had to do it enough.