I was just curious about some exact implementation details of lists in Haskell (GHC-specific answers are fine). Are they naive linked lists, or do they have any special optimizations? More specifically:
- Do `length` and `(!!)` (for instance) have to iterate through the list?
- If so, are their values cached in any way (i.e., if I call `length` twice, will it have to iterate both times)?
- Does access to the back of the list involve iterating through the whole list?
- Are infinite lists and list comprehensions memoized? (I.e., for `fib = 1:1:zipWith (+) fib (tail fib)`, will each value be computed recursively, or will it rely on previously computed values?)
Any other interesting implementation details would be much appreciated. Thanks in advance!
Lists have no special operational treatment in Haskell. They are defined just like:
data List a = Nil | Cons a (List a)
Just with some special notation: `[a]` for `List a`, `[]` for `Nil`, and `(:)` for `Cons`. If you defined the same type and redefined all the operations on it, you would get exactly the same performance.
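As a quick sanity check, here is such a hand-rolled list with its own length function (a sketch; `MyList`, `myLength`, and `fromList` are my names, not anything standard):

```haskell
-- A user-defined list, operationally identical to the built-in [a].
data MyList a = Nil | Cons a (MyList a)

-- Equivalent of Prelude's length: walks the spine, one cell at a time.
myLength :: MyList a -> Int
myLength Nil         = 0
myLength (Cons _ xs) = 1 + myLength xs

-- Convert from a built-in list, cell for cell.
fromList :: [a] -> MyList a
fromList = foldr Cons Nil

main :: IO ()
main = print (myLength (fromList [1 .. 10 :: Int]))  -- prints 10
```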
Thus, Haskell lists are singly linked. Because of laziness, they are often used as iterators: `sum [1..n]` runs in constant space, because the unused prefixes of this list are garbage collected as the sum progresses, and the tails aren't generated until they are needed.
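A small illustration of that iterator-like behaviour (the names here are mine):

```haskell
-- Lists as lazy iterators: only the demanded prefix is ever constructed.
squares :: [Integer]
squares = map (^ 2) [1 ..]  -- conceptually infinite

main :: IO ()
main = do
  print (take 5 squares)                 -- forces only five cells: [1,4,9,16,25]
  print (sum [1 .. 1000000 :: Integer])  -- constant space: cells are GC'd as consumed
```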
As for #4: all values in Haskell are memoized, with the exception that functions do not keep a memo table for their arguments. So when you define `fib` like you did, the results will be cached and the nth Fibonacci number will be accessed in O(n) time. However, if you defined it in this apparently equivalent way:
-- Simulate infinite lists as functions from Int
type List a = Int -> a
cons :: a -> List a -> List a
cons x xs n
  | n == 0    = x
  | otherwise = xs (n - 1)
tailF :: List a -> List a
tailF xs n = xs (n + 1)
fib :: List Integer
fib = 1 `cons` (1 `cons` (\n -> fib n + tailF fib n))
(Take a moment to note the similarity to your definition.)
Then the results are not shared, and the nth Fibonacci number will be accessed in O(fib n) (which is exponential) time. You can convince functions to be shared with a memoization library like data-memocombinators.
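If you would rather not pull in a library, the usual hand-rolled trick is to route the function through a shared list, which recovers the sharing of the original list definition (this sketch and the name `memoFib` are mine, not from data-memocombinators):

```haskell
-- Memoize a function on naturals by backing it with a shared list.
-- Indexing matches the fib list above: memoFib 0 == 1, memoFib 1 == 1.
memoFib :: Int -> Integer
memoFib = (fibs !!)
  where
    fibs = map f [0 ..]  -- one shared list; each cell is computed at most once
    f 0 = 1
    f 1 = 1
    f n = memoFib (n - 1) + memoFib (n - 2)

main :: IO ()
main = print (memoFib 30)  -- prints 1346269
```

Because `memoFib` is a constant applicative form, `fibs` is allocated once and shared across all calls; the `(!!)` lookups make this quadratic rather than linear, but it avoids the exponential blowup.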
> If so, are their values cached in any way (i.e., if I call `length` twice, will it have to iterate both times)?
GHC does not perform full Common Subexpression Elimination. For example:
{-# NOINLINE aaaaaaaaa #-}
aaaaaaaaa :: [a] -> Int
aaaaaaaaa x = length x + length x
{-# NOINLINE bbbbbbbbb #-}
bbbbbbbbb :: [a] -> Int
bbbbbbbbb x = l + l where l = length x
main = bbbbbbbbb [1..2000000] `seq` aaaaaaaaa [1..2000000] `seq` return ()
This gives, with `-ddump-simpl`:
Main.aaaaaaaaa [NEVER Nothing] :: forall a_adp. [a_adp] -> GHC.Types.Int
GblId
[Arity 1
 NoCafRefs
 Str: DmdType Sm]
Main.aaaaaaaaa =
  \ (@ a_ahc) (x_adq :: [a_ahc]) ->
    case GHC.List.$wlen @ a_ahc x_adq 0 of ww_anf { __DEFAULT ->
    case GHC.List.$wlen @ a_ahc x_adq 0 of ww1_Xnw { __DEFAULT ->
    GHC.Types.I# (GHC.Prim.+# ww_anf ww1_Xnw)
    }
    }

Main.bbbbbbbbb [NEVER Nothing] :: forall a_ado. [a_ado] -> GHC.Types.Int
GblId
[Arity 1
 NoCafRefs
 Str: DmdType Sm]
Main.bbbbbbbbb =
  \ (@ a_adE) (x_adr :: [a_adE]) ->
    case GHC.List.$wlen @ a_adE x_adr 0 of ww_anf { __DEFAULT ->
    GHC.Types.I# (GHC.Prim.+# ww_anf ww_anf)
    }
Note that `aaaaaaaaa` calls `GHC.List.$wlen` twice. (In fact, because `x` needs to be retained in `aaaaaaaaa`, it is more than 2x slower than `bbbbbbbbb`.)
As far as I know (I don't know how much of this is GHC-specific), `length` and `(!!)` DO have to iterate through the list.
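One consequence worth noting: because `(!!)` only walks the prefix it needs, it works fine even on infinite lists, whereas `length` must traverse everything:

```haskell
main :: IO ()
main = do
  print ([1 ..] !! 5)       -- walks six cells of an infinite list; prints 6
  print (length [1 .. 10])  -- must walk all ten cells; prints 10
  -- print (length [1 ..])  -- would never terminate
```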
I don't think there are any special optimisations for lists, but there is a technique that applies to all datatypes.
If you have something like
foo xs = bar (length xs) ++ baz (length xs)
then `length xs` will be computed twice.
But if instead you have
foo xs = bar len ++ baz len
where len = length xs
then it will only be computed once.
Yes, accessing the back of the list involves iterating through the whole list.
Yes, once part of a named value is computed, it is retained until the name goes out of scope.
(The language doesn't require this, but this is how I understand the implementations behave.)
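You can watch this behaviour with `Debug.Trace` from base: the trace message fires once per thunk forced, so a second traversal of the same named list is silent (the exact behaviour can vary with optimization flags):

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  let xs = map (\x -> trace "forcing" (x * 2)) [1 .. 3 :: Int]
  print (sum xs)  -- forces all three cells: "forcing" printed three times (to stderr)
  print (sum xs)  -- the cells are already evaluated: no further "forcing" output
```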