I've been getting into the nitty-gritty of the Haskell type system and trying to get a handle on the fine points of type classes. I've learned a heap, but I'm hitting a wall with the following pieces of code.
Using these Class and Instance definitions:
class Show a => C a where
f :: Int -> a
instance C Integer where
f x = 1
instance C Char where
f x = if x < 10 then 'c' else 'd'
Why is it that this passes the type checker:
g :: C a => a -> Int -> a
g x y = f y
yes :: C a => a -> Int -> String
yes x y = show (g x y)
but this doesn't?
g :: C a => a -> Int -> String
g x y = show (f y)
I find the second alternative much more readable, and it seems to be only a minor difference (note the type signatures). However, trying to get that past the typechecker results in:
*Main> :l typetests.hs
[1 of 1] Compiling Main ( typetests.hs, interpreted )
typetests.hs:11:14:
Ambiguous type variable `a0' in the constraints:
(C a0) arising from a use of `f' at typetests.hs:11:14
(Show a0) arising from a use of `show' at typetests.hs:11:9-12
Probable fix: add a type signature that fixes these type variable(s)
In the first argument of `show', namely `(f y)'
In the expression: show (f y)
In an equation for `g': g x y = show (f y)
Failed, modules loaded: none.
And I don't understand why.
Note: Please don't ask "What are you trying to do?" I would hope it is obvious that I'm just messing around in an abstract context in order to probe the way this language works. I have no goal in mind other than learning something new.
Thanks
This is where a fun toy comes into play. Consider the standard Prelude function asTypeOf.
asTypeOf :: a -> a -> a
asTypeOf = const
It just returns its first argument, no matter what the second argument is. But the type signature on it places the additional constraint that both of its arguments must be the same type.
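For instance (a quick illustration of my own), asTypeOf can pin down an otherwise-defaulted numeric literal:

pinned = 1 `asTypeOf` (2.5 :: Double)   -- pinned is forced to have type Double

Applied to the problem at hand: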
g :: C a => a -> Int -> String
g x y = show (f y `asTypeOf` x)
Now, GHC knows what the type of f y is. It's the same as the type of the first argument to g. Without that information, well, you get the error message you saw. There just wasn't sufficient information to determine the type of f y. And because the type is important (it's used to determine which instance to use for show), GHC needs to know what type you mean to generate code.
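For what it's worth, a quick ghci check (my own, with the class and instances above loaded):

> g 'a' 3
"'c'"
> g (1 :: Integer) 3
"1"

The first argument is never inspected; it is only there to fix the type.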
This is a variant of the notorious show . read problem. The classic version uses
read :: Read a => String -> a
show :: Show a => a -> String
so the composition might seem to be a plain old String transducer
moo :: String -> String
moo = show . read
except that there is no information in the program to determine the type in the middle, hence what to read and then show.
Ambiguous type variable `b' in the constraints:
`Read b' arising from a use of `read' at ...
`Show b' arising from a use of `show' at ...
Probable fix: add a type signature that fixes these type variable(s)
Please note that ghci does a bunch of crazy extra defaulting, resolving ambiguity arbitrarily.
> (show . read) "()"
"()"
Your C class is a variant of Read, except that it decodes an Int instead of reading a String, but the problem is essentially the same.
Type system enthusiasts will note that underconstrained type variables are not per se a big deal. It's ambiguous instance inference that's the issue here. Consider
poo :: String -> a -> a
poo _ = id
qoo :: (a -> a) -> String
qoo _ = ""
roo :: String -> String
roo = qoo . poo
In the construction of roo, it is never determined what the type in the middle must be, nor is roo polymorphic in that type. Type inference neither solves nor generalizes the variable! Even so,
> roo "magoo"
""
it's not a problem, because the construction is parametric in the unknown type. The fact that the type cannot be determined has the consequence that the type cannot matter.
But unknown instances clearly do matter. The completeness result for Hindley-Milner type inference relies on parametricity and is thus lost when we add overloading. Let us not weep for it.
In
g :: C a => a -> Int -> a
g x y = f y
the return type of f y is fixed by the type signature, so that if you call, e.g. g 'a' 3, instance C Char will be used. But in
g :: C a => a -> Int -> String
g x y = show(f y)
there are two constraints on the return type of f: it must be an instance of C (so that f can be used) and of Show (so that show can be used). And that's all! The coincidence of the type variable name a in the definitions of f and g doesn't mean anything. So the compiler has no way to choose between instance C Char and instance C Integer (or any instances defined in other modules, so removing these instances won't make the program compile).
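As an aside, one way to make the String-returning version typecheck is to tie the result of f to the a in the signature, for instance with the ScopedTypeVariables extension (a sketch of one option; the asTypeOf trick above does the same job without the extension):

{-# LANGUAGE ScopedTypeVariables #-}

g :: forall a. C a => a -> Int -> String
g x y = show (f y :: a)   -- f y is forced to have the caller's type a

Now the instance is chosen by the type of the first argument, exactly as in the version that returns a.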