Given the following Haskell code snapshot:
class Foo a where
  bar :: a -> ...
  quux :: a -> ...
  ...
where the concrete type chosen for a is determined at runtime - the class dispatches on this type.
I'm assuming that the compiler can check the types at compile time and ensure that no invalid type can be dispatched on.
Now if we compare this to dynamic dispatch in Java:
interface Flippable {
    Flippable flip();
}

class Left implements Flippable {
    @Override
    public Right flip() { return new Right(); }
}

class Right implements Flippable {
    @Override
    public Left flip() { return new Left(); }
}

class Demo {
    public static void main(String[] args) {
        Flippable flippable = new Right();
        System.out.println(flippable.flip());
    }
}
Assumptions:
- Haskell can dispatch on the return type as well as on multiple arguments, making its dispatch different from many other languages.
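For instance, by dispatch on the return type I mean something like this, where the instance is chosen purely by the type the caller asks for:

smallestInt :: Int
smallestInt = minBound    -- uses the Bounded Int instance

smallestChar :: Char
smallestChar = minBound   -- uses the Bounded Char instance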
My question is: Is the dispatch of a Haskell TypeClass dynamic?
It depends on what you mean by "dynamic" dispatch. There is no subtyping in Haskell, so your Java example is hard to translate directly.
In a situation where you have a type class, say Show, and want to put elements of different types into the same list, you can use existential quantification:
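For example, something along these lines (the Showable wrapper name is just for illustration):

{-# LANGUAGE ExistentialQuantification #-}

-- Wrap any value that has a Show instance, hiding its concrete type.
data Showable = forall a. Show a => Showable a

instance Show Showable where
  show (Showable x) = show x

-- A list whose elements have different concrete types:
mixed :: [Showable]
mixed = [Showable (42 :: Int), Showable "hello", Showable True]

main :: IO ()
main = mapM_ print mixed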
Here you can say that dispatch happens "dynamically", in much the same way as in C++ or Java: the Show dictionary is carried along with the value, like a vtable in C++ (or the class pointer every object carries in Java). Yet, as @MathematicalOrchid explained, this is usually considered an anti-pattern.
Yet if you want to flip from Left to Right and back, you can state that in the type class definition.
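For instance, one way to express it is with an associated type family (a sketch; the L and R types stand in for your Left and Right):

{-# LANGUAGE TypeFamilies #-}

data L = L deriving Show
data R = R deriving Show

-- The result type of flip' is fixed by the instance,
-- so every call can be resolved statically.
class Flip a where
  type Flipped a
  flip' :: a -> Flipped a

instance Flip L where
  type Flipped L = R
  flip' L = R

instance Flip R where
  type Flipped R = L
  flip' R = L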
In this case the flip' calls are resolved already at compile time. You could instead have
class Flip a where
  flip' :: a -> Flippable
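using existential quantification too; for example, Flippable could be defined along these lines (a sketch, again with the illustrative L and R types):

{-# LANGUAGE ExistentialQuantification #-}

-- Flippable hides the concrete type but carries its Flip dictionary along,
-- together with the Flip class declared just above.
data Flippable = forall a. Flip a => Flippable a

data L = L
data R = R

instance Flip L where
  flip' L = Flippable R

instance Flip R where
  flip' R = Flippable L

-- Each consecutive flip is dispatched through the dictionary stored in Flippable.
flipAgain :: Flippable -> Flippable
flipAgain (Flippable x) = flip' x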
Then consecutive calls will be dispatched dynamically. As always, it depends on your needs. Hopefully this answers your question.
If your code is Haskell 2010, with no language extensions turned on, Haskell actually doesn't support run-time polymorphism at all!
That's quite surprising. Haskell feels like a very polymorphic language. But in fact, all the types can in principle be decided purely at compile-time. (GHC chooses not to, but that's an implementation detail.)
This is exactly the same situation as with C++ templates. When you write something like std::vector<int>, the compiler knows, at compile time, all the types involved. The really surprising thing is just how rarely you actually need true run-time polymorphism!
Now, there are scenarios where you want to run different code based on run-time circumstances. But in Haskell you can do that just by passing a function as an argument; you don't need to create a "class" (in the OOP sense) merely to achieve this.
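For example, a tiny sketch (the greet function and the --shout flag are made up):

import Data.Char (toUpper)
import System.Environment (getArgs)

-- The behaviour to use is just a plain function argument; no class needed.
greet :: (String -> String) -> String -> IO ()
greet render name = putStrLn (render ("Hello, " ++ name))

main :: IO ()
main = do
  args <- getArgs                                      -- known only at run time
  let render = if "--shout" `elem` args then map toUpper else id
  greet render "world"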
Now, if you turn on some language extensions (most conspicuously ExistentialQuantification), then you get true run-time polymorphism. Note that the main reason people seem to do this is so that you can write something like the following.
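Roughly, the idea is an existential AnyFoo wrapper over some Foo class (the method types and instances below are just for illustration):

{-# LANGUAGE ExistentialQuantification #-}

class Foo a where
  foo :: a -> String
  bar :: a -> Int

-- Any type with a Foo instance can be hidden inside AnyFoo.
data AnyFoo = forall a. Foo a => AnyFoo a

instance Foo Int where
  foo n = "Int: " ++ show n
  bar n = n

instance Foo Bool where
  foo b = "Bool: " ++ show b
  bar b = if b then 1 else 0

-- Values of completely different types in one list:
stuff :: [AnyFoo]
stuff = [AnyFoo (3 :: Int), AnyFoo True]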
This is widely considered a Haskell anti-pattern. In particular, when you upcast things in Java to put them into a list, you usually downcast them again later; but the code above offers no possibility of ever downcasting, and you can't use run-time reflection either (Haskell doesn't have that). So really, if you have a list of AnyFoo, the only thing you can do with it is call foo or bar on each element. So... why not just store the results of foo and bar?
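Something like this (the field names are made up):

-- No class and no extensions needed: just store the results up-front.
data AnyFoo = AnyFoo
  { fooResult :: String
  , barResult :: Int
  }

-- Ordinary functions play the role the Foo instances played before.
intFoo :: Int -> AnyFoo
intFoo n = AnyFoo { fooResult = "Int: " ++ show n, barResult = n }

boolFoo :: Bool -> AnyFoo
boolFoo b = AnyFoo { fooResult = "Bool: " ++ show b, barResult = if b then 1 else 0 }

stuff :: [AnyFoo]
stuff = [intFoo 3, boolFoo True]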
It lets you do exactly the same stuff, but doesn't require any non-standard extensions. In fact, in some ways it's actually a bit more flexible: you now don't even need a Foo class, and you don't need to define a new type for every sort of Foo you might have; you just need a function that constructs the AnyFoo data structure for it.