clojure cons vs conj with lazy-seq

Posted 2020-05-18 11:51

Why does cons work in this context with lazy-seq, but conj doesn't?

This works:

(defn compound-interest [p i]
   (cons p (lazy-seq (compound-interest (* p (+ 1 i)) i))))

This doesn't (it throws a StackOverflowError[1]):

(defn compound-interest2 [p i]
   (conj (lazy-seq (compound-interest2 (* p (+ 1 i)) i)) p))

[1] Oh ya! Asking a question involving a stack overflow on stackoverflow.

1 Answer
Bombasti
2020-05-18 11:56

(conj collection item) adds item to collection. To do that, it needs to realize collection. (I'll explain why below.) So the recursive call happens immediately, rather than being deferred.

(cons item collection) creates a sequence which begins with item, followed by everything in collection. Significantly, it doesn't need to realize collection. So the recursive call will be deferred (because of using lazy-seq) until somebody tries to get the tail of the resulting sequence.
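Concretely, assuming the two definitions from the question, a quick REPL sketch of the difference looks like this:

(take 3 (compound-interest 100 0.05))
;; => (100 105.0 110.25)  only as many steps are realized as take asks for

(compound-interest2 100 0.05)
;; throws java.lang.StackOverflowError right away: each conj forces the
;; next lazy-seq, which calls compound-interest2 again, with no limit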

I'll explain how this works internally:

cons actually returns a clojure.lang.Cons object, which is what lazy sequences are made of. conj returns the same type of collection which you pass it (whether that is a list, vector, or whatever else). conj does this using a polymorphic Java method call on the collection itself. (See line 524 of clojure/src/jvm/clojure/lang/RT.java.)
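For example, a small REPL sketch of that polymorphism:

(conj [1 2] 3)    ; => [1 2 3]   vectors add at the end
(conj '(1 2) 3)   ; => (3 1 2)   lists add at the front

(class (cons 3 [1 2]))   ; => clojure.lang.Cons, whatever the tail was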

What happens when that Java method call happens on the clojure.lang.LazySeq object which is returned by lazy-seq? (How Cons and LazySeq objects work together to form lazy sequences will become clearer below.) Look at line 98 of clojure/src/jvm/clojure/lang/LazySeq.java. Notice it calls a method called seq. This is what realizes the value of the LazySeq (jump to line 55 for the details).
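You can watch that forcing happen from the REPL. A minimal sketch, using realized? (which reports whether a LazySeq's body has run yet):

(def s (lazy-seq (println "forced!") [1 2 3]))

(realized? s)   ; => false  the body hasn't run yet
(conj s 0)      ; prints "forced!", => (0 1 2 3)  conj went through seq
(realized? s)   ; => true   the realized seq is now cached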

So you could say that conj needs to know exactly what kind of collection you passed it, but cons doesn't. cons only requires that its "collection" argument can be used as an ISeq, and a LazySeq already is one, so it never has to be realized just to build the Cons.
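A minimal sketch of that difference:

(def tail (lazy-seq (println "tail realized!") [2 3]))

(def s (cons 1 tail))   ; a LazySeq is already an ISeq, so nothing is forced
(class s)               ; => clojure.lang.Cons
(realized? tail)        ; => false
(next s)                ; prints "tail realized!", => (2 3)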

Note that Cons objects in Clojure are different from "cons cells" in other Lisps. In most Lisps, a "cons" is just an object which holds 2 pointers to other arbitrary objects, so you can use cons cells to build trees and so on. A Clojure Cons takes an arbitrary Object as head and an ISeq as tail. Since Cons itself implements ISeq, you can chain Cons objects into sequences, but the tail can just as well be a list, or a seq over a vector, etc. (Note that a "list" in Clojure is a special type, PersistentList, and is not built from Cons objects.)

clojure.lang.LazySeq also implements ISeq, so it can be used as the tail (the "cdr", in Lisp terms) of a Cons. A LazySeq holds a reference to some code which evaluates to an ISeq of some kind, but it doesn't actually evaluate that code until required; once it has evaluated the code, it caches the returned ISeq and delegates to it.
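For example:

(class '(1 2 3))                  ; => clojure.lang.PersistentList
(class (cons 0 '(1 2 3)))         ; => clojure.lang.Cons
(class (rest (cons 0 '(1 2 3))))  ; => clojure.lang.PersistentList  the tail is still a list
(list? (cons 0 '(1 2 3)))         ; => false  a Cons is a seq, not a list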

...is this all starting to make sense? Do you get the idea of how lazy sequences work? Basically, you start with a LazySeq. When the LazySeq is realized, it evaluates to a Cons, which points to another LazySeq. When that one is realized... you get the idea. So you get a chain of LazySeq objects, each holding (and delegating to) a Cons.
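A rough sketch of that chain, using a hypothetical numbers-from helper shaped like compound-interest:

(defn numbers-from [n]
  (lazy-seq
    (println "realizing" n)
    (cons n (numbers-from (inc n)))))

(def s (numbers-from 0))  ; nothing printed yet: s is an unrealized LazySeq
(first s)                 ; prints "realizing 0", => 0
(second s)                ; prints "realizing 1", => 1
(first s)                 ; prints nothing: the Cons for 0 is cached in the LazySeq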

As for the difference between "conses" and "lists" in Clojure: "lists" (PersistentList objects) contain a cached "length" field, so they can respond to count in O(1) time. This wouldn't work in most other Lisps, where lists are mutable; in Clojure they are immutable, so caching the length is safe.

Cons objects in Clojure don't have a cached length -- if they did, how could they be used to implement lazy (and even infinite) sequences? If you try to take the count of a Cons, it just calls count on its tail, and then increments the result by 1.
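For example:

(count '(1 2 3))            ; => 3   O(1), read from PersistentList's count field
(count (cons 0 '(1 2 3)))   ; => 4   the Cons counts its tail, then adds 1

;; (count (compound-interest 100 0.05)) would never return: counting an
;; infinite lazy sequence means trying to realize all of it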
