I have a nested type that I want to partially specify using associated type synonyms. Below is some extremely reduced code that demonstrates the problem:
{-# LANGUAGE TypeFamilies,
             FlexibleInstances,
             FlexibleContexts,
             MultiParamTypeClasses #-}

f :: Int -> Int
f x = x+1

data X t i
newtype Z i = Z i deriving (Eq,Show)
newtype Y q = Y (T q)

class Qux m r where
    g :: (T m) -> r

class (Integral (T a)) => Foo a where
    type T a
    -- ... functions

instance (Integral i) => Foo (X (Z i) i) where
    type T (X (Z i) i) = i

instance (Foo q) => Num (Y q) -- where ...

instance (Foo q, Foo (X m' Int)) => Qux (X m' Int) (Y q) where
    g x = fromIntegral $ f $ x
which (even with UndecidableInstances) leads to the compiler error:
Could not deduce (T (X m' Int) ~ Int)
I know that adding this constraint to the instance of Qux makes the compiler happy. However, I know that in my program (T (X arg1 arg2)) = arg2, so I am trying to figure out how to not have to write this constraint.
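For concreteness, this is (roughly) the version with the constraint written out that does compile; it's the boilerplate I'd like to avoid:

    -- Compiles (with UndecidableInstances), but I have to spell out the
    -- T (X m' Int) ~ Int equality by hand:
    instance (Foo q, Foo (X m' Int), T (X m' Int) ~ Int) => Qux (X m' Int) (Y q) where
        g x = fromIntegral $ f $ x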
My question is: can I make Haskell realize that when I write 'Int' as the second parameter to X, that this is (identically) the same thing as the synonym T (X a' Int)? I realize I'm using "special" knowledge about how my instances will look, but I think there should be a way to convey this to the compiler.
Thanks
Since I'm not sure I understand the question yet, I'm going to discuss the code you wrote; perhaps some part of my rambling will either point you in a helpful direction or spark some pointed questions that are possible for me to answer. That is to say: warning! rambling non-answer ahead!
First, let's talk about the `Bar` class.
-- class (i ~ S b) => Bar b i where
--     type S b
--     ...
Since we know the constraint `i ~ S b`, we might as well drop the `i` argument, and I will do so for the rest of the discussion.
class Bar b where type S b
-- or, if the class is empty, just
-- type family S b
-- with no class declaration at all
Here's how the instances for this new class would look:
instance Bar (Z i) where type S (Z i) = i
instance Bar (Z' i) where type S (Z' i) = i
If this is meant to be true for any type constructor, you could consider writing this as one instance instead:
-- instance Bar (f i) where type S (f i) = i
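If you did go that route (uncommenting the generalized instance, which FlexibleInstances permits), `S` would reduce for any applied unary constructor; a tiny, hypothetical illustration:

    -- Assuming:  instance Bar (f i) where type S (f i) = i
    -- then S (Maybe Int) reduces to Int, so this typechecks.
    reduceAny :: S (Maybe Int) -> Int
    reduceAny = id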
Now, let's talk about the `Foo` class. To change it to match with the above, we would write
class Bar (T a) => Foo a where type T a
You've declared two instances:
-- instance (Bar (Z i) i) => Foo (X (Z i) i) where
--     type T (X (Z i) i) = Z i
--
-- instance (Bar (Z' i) i) => Foo (X' (Z' i) i) where
--     type T (X' (Z' i) i) = Z' i
We can strip the second argument to `Bar` as before, but we can also do another thing: since we know there's a `Bar (Z i)` instance (we declared it above!), we can take away the instance constraint.
instance Foo (X (Z i) i) where type T (X (Z i) i) = Z i
instance Foo (X' (Z' i) i) where type T (X' (Z' i) i) = Z' i
Whether you want to keep the `i` argument to `X` or not depends on what `X` is supposed to represent. So far, we haven't changed the semantics of any of the class declarations or data types -- only how they were declared and instanced. Changing `X` may not have the same property; it's hard to say without seeing the definition of `X`. With the data declarations and sufficiently many extensions, the above prelude compiles.
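For reference, here is a minimal, self-contained sketch of what I mean by "the above prelude"; the data declarations for `X`, `X'`, `Z`, and `Z'` are my guesses, since they aren't all in your post:

    {-# LANGUAGE TypeFamilies, FlexibleInstances, FlexibleContexts #-}
    -- (Recent GHCs may additionally want UndecidableSuperClasses for the
    -- Bar (T a) superclass constraint below.)
    module Sketch where

    -- Guessed data declarations; your real X, X', Z, Z' may look different.
    data X  t i
    data X' t i
    newtype Z  i = Z  i deriving (Eq, Show)
    newtype Z' i = Z' i deriving (Eq, Show)

    class Bar b where
        type S b
    instance Bar (Z  i) where type S (Z  i) = i
    instance Bar (Z' i) where type S (Z' i) = i

    class Bar (T a) => Foo a where
        type T a
    instance Foo (X  (Z  i) i) where type T (X  (Z  i) i) = Z  i
    instance Foo (X' (Z' i) i) where type T (X' (Z' i) i) = Z' i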
Now, your complaints:
You say that the following doesn't compile:
class Qux a
instance Foo (X a' Int) => Qux (X a' Int)
instance Foo (X' a' Int) => Qux (X' a' Int)
But, with `UndecidableInstances`, that does compile here. And it makes sense for it to need `UndecidableInstances`: there's nothing to stop somebody from coming along later and declaring an instance like
instance Qux (X Y Int) => Foo (X Y Int)
Then, checking whether `Qux (X Y Int)` had an instance would require checking whether `Foo (X Y Int)` had an instance and vice versa. Loop.
You say, "It also wants the instance constraint `S (T (X a')) ~ Int`, even though I know that in my program, these are always just synonyms." I don't know what the first "it" is -- I can't reproduce this error -- so I can't answer this very well. Also, as I complained before, this constraint is ill-kinded: `X :: (* -> *) -> * -> *`, therefore `X a' :: * -> *`, and `T` expects an argument of kind `*`. So I'm going to assume you meant `S (T (X a' Int)) ~ Int` instead.
Despite these complaints, we can ask why `S (T (X a' Int)) ~ Int` is not provable from the assumptions we have so far. So far, we've only declared `Foo` instances for types that fit the pattern `X (Z i) i` and `X (Z' i) i`. So the type function `T` can only reduce when its argument type fits one of those patterns. The type `X a' Int` doesn't quite fit either of those patterns. We could cram it into the right pattern: we could try reducing with `X (Z Int) Int` instead (say). Then we would find that `T (X (Z Int) Int) ~ Z Int`, and therefore `S (T (X (Z Int) Int)) ~ S (Z Int) ~ Int`.
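As a small sanity check, assuming the declarations sketched above, this function typechecks precisely because of that reduction chain:

    -- T (X (Z Int) Int) reduces to Z Int, and S (Z Int) reduces to Int,
    -- so the argument type here is just Int.
    check :: S (T (X (Z Int) Int)) -> Int
    check = id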
This answers how to fix the type-level reduction, but doesn't explain how to fix whatever code is not building. To do that, we'd have to find out why the type checker needs a coercion from `S (T (X a' Int))` to `Int`, and see if we can convince it to use a more specific (and satisfiable) coercion instead, like `S (T (X (Z Int) Int)) ~ Int`, or, with the suggested generalized `Bar` instance above, `S (T (X (f Int) Int)) ~ Int`. We certainly can't help you with that without having enough code to reproduce your error.
You ask, "can I make Haskell realize that when I write 'Int' as the second parameter to X, that this is (identically) the same thing as the synonym S (T (X a' Int))?". I don't understand this question at all. You want to somehow have a provable equality `X a Int ~ S (T (X a' Int))`? That's the interpretation I get from a literal reading of your question.
In context, I think you may have wanted to ask whether you can get a provable equality `b ~ S (T (X a b))`; in that case, the answer is, "Definitely!". We'll abuse the fact we know above that `b ~ S (Z b)` to reduce this equation to the stronger one `Z b ~ T (X a b)`. Then we can just replace the `Foo` instances above with one that makes this declaration and nothing more:
instance Foo (X a b) where type T (X a b) = Z b
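With that single instance in place (together with the `Bar (Z i)` instance from before), the equality you want falls out of plain type-family reduction; a minimal illustration:

    -- T (X a b) reduces to Z b, and S (Z b) reduces to b, so
    -- S (T (X a b)) ~ b holds with no extra constraints in the signature.
    deduce :: S (T (X a b)) -> b
    deduce = id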
Alternately, we could postulate the other reasonable equation, `T (X a b) ~ a`; then, to prove that `S (T (X a b)) ~ b` we would just need to prove that `S a ~ b` (by reducing `T`), so we could write this other alternate `Foo` instance:
instance (Bar a, S a ~ b) => Foo (X a b) where type T (X a b) = a
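Under this alternative, the same equality is available wherever the instance's context is in scope; a sketch of how it would be used:

    -- Here T (X a b) reduces to a, and the S a ~ b constraint finishes the job.
    -- The X a b argument is only there to pin down a and b at call sites.
    deduce' :: (Bar a, S a ~ b) => X a b -> S (T (X a b)) -> b
    deduce' _ = id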