Question:
I'm used to lazy evaluation from Haskell, and find myself getting irritated with eager-by-default languages now that I've used lazy evaluation properly. This is actually quite damaging, as the other languages I use make lazy evaluation very awkward, normally involving rolling out custom iterators and so forth. So just by acquiring some knowledge, I've actually made myself less productive in my original languages. Sigh.
But I hear that AST macros offer another clean way of doing the same thing. I've often heard statements like 'Lazy evaluation makes macros redundant' and vice-versa, mostly from sparring Lisp and Haskell communities.
I've dabbled with macros in various Lisp variants. They just seemed like a really organized way of copying and pasting chunks of code around to be handled at compile time. They certainly weren't the holy grail that Lispers like to think they are. But that's almost certainly because I can't use them properly. Of course, having the macro system work on the same core data structure that the language itself is assembled from is really useful, but it's still basically an organized way of copying and pasting code around. I acknowledge that basing a macro system on the same AST as the language, with full runtime alteration, is powerful.
What I want to know is: how can macros be used to concisely and succinctly do what lazy evaluation does? If I want to process a file line by line without slurping up the whole thing, I just return a list that's had a line-reading routine mapped over it. It's the perfect example of DWIM (do what I mean). I don't even have to think about it.
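(For concreteness, here is my own sketch of that line-by-line idiom in an eager language — Rust, my choice, not from the thread — where iterator laziness stands in for Haskell's: each line is only read when the consumer asks for it. The file name in the comment is hypothetical.)

```rust
use std::fs::File;
use std::io::{self, BufRead, BufReader};

// Count non-empty lines without slurping the whole file into memory:
// `lines()` returns an iterator, so each line is read on demand,
// only when the final consumer (`count` here) pulls it.
fn count_nonempty(path: &str) -> io::Result<usize> {
    let reader = BufReader::new(File::open(path)?);
    Ok(reader
        .lines()
        .filter_map(Result::ok)
        .filter(|line| !line.is_empty())
        .count())
}
```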
I clearly don't get macros. I've used them and not been particularly impressed given the hype. So there's something I'm missing that I'm not getting by reading over documentation online. Can someone explain all of this to me?
Answer 1:
Lazy evaluation can substitute for certain uses of macros (those which delay evaluation to create control constructs) but the converse isn't really true. You can use macros to make delayed evaluation constructs more transparent -- see SRFI 41 (Streams) for an example of how: http://download.plt-scheme.org/doc/4.1.5/html/srfi-std/srfi-41/srfi-41.html
On top of this, you could write your own lazy IO primitives as well.
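(A minimal sketch of what such a hand-rolled lazy primitive might look like — my own example, written in Rust rather than Scheme for concreteness: a delay/force promise whose thunk runs at most once, the first time it is forced.)

```rust
// A hand-rolled delay/force: the delayed computation runs at most
// once, on the first force; later forces return the cached value.
struct Promise<T> {
    thunk: Option<Box<dyn FnOnce() -> T>>,
    value: Option<T>,
}

impl<T: Clone> Promise<T> {
    fn delay(f: impl FnOnce() -> T + 'static) -> Self {
        Promise { thunk: Some(Box::new(f)), value: None }
    }

    fn force(&mut self) -> T {
        if self.value.is_none() {
            // take() guarantees the computation can never run twice
            let f = self.thunk.take().expect("thunk already consumed");
            self.value = Some(f());
        }
        self.value.clone().expect("value just filled in")
    }
}
```

The syntactic noise of writing `Promise::delay(|| ...)` by hand at every call site is exactly what the SRFI 41 macros hide in Scheme.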
In my experience, however, pervasively lazy code in a strict language tends to introduce an overhead as compared to pervasively lazy code in a runtime designed to efficiently support it from the start -- which, mind you, is an implementation issue really.
Answer 2:
"Lazy evaluation makes macros redundant"
This is pure nonsense (not your fault; I've heard it before). It's true that you can use macros to change the order, context, etc. of expression evaluation, but that's the most basic use of macros, and it's really not convenient to simulate a lazy language using ad-hoc macros instead of functions. So if you came at macros from that direction, you would indeed be disappointed.
Macros are for extending the language with new syntactic forms. Some of the specific capabilities of macros are:
1. Affecting the order, context, etc. of expression evaluation.
2. Creating new binding forms (i.e., affecting the scope an expression is evaluated in).
3. Performing compile-time computation, including code analysis and transformation.
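(To make capability (1) concrete outside Lisp, here is my own illustrative sketch using Rust's macro_rules! — a far weaker macro system than Lisp's, but enough to show the point: the macro receives its body unevaluated, which no plain function in a strict language can do.)

```rust
// A macro that controls evaluation order: the body is evaluated only
// when the condition is false. A plain function couldn't do this,
// because Rust (like most strict languages) evaluates arguments first.
macro_rules! my_unless {
    ($cond:expr, $body:expr) => {
        if !$cond { Some($body) } else { None }
    };
}
```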
Macros that do (1) can be pretty simple. For example, in Racket, the exception-handling form with-handlers is just a macro that expands into call-with-exception-handler, some conditionals, and some continuation code. It's used like this:
(with-handlers ([(lambda (e) (exn:fail:network? e))
                 (lambda (e)
                   (printf "network seems to be broken\n")
                   (cleanup))])
  (do-some-network-stuff))
The macro implements the notion of "predicate-and-handler clauses in the dynamic context of the exception" based on the primitive call-with-exception-handler, which handles all exceptions at the point they're raised.
A more sophisticated use of macros is an implementation of an LALR(1) parser generator. Instead of a separate file that needs pre-processing, the parser form is just another kind of expression. It takes a grammar description, computes the tables at compile time, and produces a parser function. The action routines are lexically scoped, so they can refer to other definitions in the file or even to lambda-bound variables. You can even use other language extensions in the action routines.
At the extreme end, Typed Racket is a typed dialect of Racket implemented via macros. It has a sophisticated type system designed to match the idioms of Racket/Scheme code, and it interoperates with untyped modules by protecting typed functions with dynamic software contracts (also implemented via macros). It's implemented by a "typed module" macro that expands, type-checks, and transforms the module body as well as auxiliary macros for attaching type information to definitions, etc.
FWIW, there's also Lazy Racket, a lazy dialect of Racket. It's not implemented by turning every function into a macro, but by rebinding lambda, define, and the function application syntax to macros that create and force promises.
In summary, lazy evaluation and macros have a small point of intersection, but they're extremely different things. And macros are certainly not subsumed by lazy evaluation.
Answer 3:
Laziness is denotative, while macros are not.
More precisely, if you add non-strictness to a denotative language, the result is still denotative, but if you add macros, the result isn't denotative.
In other words, the meaning of an expression in a lazy pure language is a function solely of the meanings of the component expressions; while macros can yield semantically distinct results from semantically equal arguments.
In this sense, macros are more powerful, while laziness is correspondingly more well-behaved semantically.
Edit: more precisely, macros are non-denotative except with respect to the identity/trivial denotation (where the notion of "denotative" becomes vacuous).
Answer 4:
Lisp started in the late 1950s. See the paper RECURSIVE FUNCTIONS OF SYMBOLIC EXPRESSIONS AND THEIR COMPUTATION BY MACHINE. Macros were not part of that Lisp. The idea was to compute with symbolic expressions, which can represent all kinds of formulas and programs: mathematical expressions, logical expressions, natural-language sentences, computer programs, ...
Later, Lisp macros were invented; they are an application of the above idea to Lisp itself: macros transform Lisp (or Lisp-like) expressions into other Lisp expressions, using the full Lisp language as the transformation language.
You can imagine that with macros you, as a Lisp user, can implement powerful preprocessors and compilers.
The typical Lisp dialect uses strict evaluation of arguments: all arguments to a function are evaluated before the function is executed. Lisp also has several built-in forms with different evaluation rules; IF is such an example. In Common Lisp, IF is a so-called special operator.
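(Here is my own sketch, in Rust rather than Lisp, of why IF cannot be an ordinary function in a strict language: a function version evaluates both branches before it runs, while a macro receives them unevaluated. The names if_fn, if_m, and fact are invented for the illustration.)

```rust
// if_fn is an ordinary function: BOTH branch arguments are evaluated
// before it runs, which is exactly why IF cannot be a plain function.
fn if_fn<T>(c: bool, then_v: T, else_v: T) -> T {
    if c { then_v } else { else_v }
}

// The macro receives its branches unevaluated and expands to real `if`:
macro_rules! if_m {
    ($c:expr, $t:expr, $e:expr) => {
        if $c { $t } else { $e }
    };
}

// This terminates only because if_m! never touches the untaken branch.
// Written with if_fn, the recursive argument would be evaluated even
// when n == 0, recursing forever (and underflowing n - 1).
fn fact(n: u64) -> u64 {
    if_m!(n == 0, 1, n * fact(n - 1))
}
```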
But we can define a new Lisp-like (sub-)language that uses lazy evaluation, and we can write macros to transform that language into Lisp. This is one application for macros, but far from the only one.
A (relatively old) example of such a Lisp extension, which uses macros to implement a code transformer providing data structures with lazy evaluation, is the SERIES extension to Common Lisp.
Answer 5:
Macros can be used to handle lazy evaluation, but that's just part of it. The main point of macros is that, thanks to them, essentially nothing in the language is fixed.
If programming is like playing with LEGO bricks, with macros you can also change the shape of the bricks or the material they're built with.
Macros are about more than just delayed evaluation. That was already available as fexpr (a macro precursor in the history of Lisp). Macros are about program rewriting, of which fexpr is just a special case.
As an example, consider that in my spare time I'm writing a tiny Lisp-to-JavaScript compiler. Originally (in the JavaScript kernel) I only had lambda with support for &rest arguments. Now there's support for keyword arguments too, and that's because I redefined what lambda means in Lisp itself.
I can now write:
(defun foo (x y &key (z 12) w) ...)
and call the function with
(foo 12 34 :w 56)
When executing that call, in the function body the w parameter will be bound to 56 and the z parameter to 12, because it wasn't passed. I'll also get a runtime error if an unsupported keyword argument is passed to the function. I could even add some compile-time checking by redefining what compiling an expression means (i.e., adding checks that "static" function call forms pass the correct parameters).
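(A rough sketch of the same expansion-time defaulting, transplanted to Rust's macro_rules! — my own example, not the answer's compiler: the macro rewrites the call form, filling in the default z = 12 before any code runs. The names foo and z mirror the answer; the arithmetic body is invented for illustration.)

```rust
// The macro rewrites calls at expansion time, the way the answer's
// redefined `lambda` rewrites call forms in its Lisp.
macro_rules! foo {
    ($x:expr, $y:expr) => {
        foo!($x, $y, z: 12)         // no z given: insert the default
    };
    ($x:expr, $y:expr, z: $z:expr) => {
        ($x + $y) * $z              // the "function body"
    };
}
```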
The central point is that the original (kernel) language had no support for keyword arguments at all, and I was able to add it using the language itself. The result is exactly as if it had been there from the beginning; it's simply part of the language.
Syntax is important (even if it's technically possible to just use a Turing machine). Syntax shapes the thoughts you have. Macros (and read macros) give you total control over the syntax.
A key point is that code-rewriting code is not written in a crippled, dumbed-down, Brainfuck-like language as in C++ template metaprogramming (where just expressing an if is an accomplishment), or in an even dumber, less-than-regexp substitution engine like the C preprocessor.
Code-rewriting code uses the same full-blown (and extensible) language. It's lisp all the way down ;-)
Sure, writing macros is harder than writing regular code; but that's "essential complexity" of the problem, not artificial complexity added because you're forced to use a dumb half-language, as with C++ metaprogramming.
Writing macros is harder because code is a complex thing, and when writing macros you write complex things that themselves build complex things. It's not even uncommon to go up one more level and write macro-generating macros (that's where the old Lisp joke "I'm writing code that writes code that writes code that I'm being paid for" comes from).
But the power of macros is simply boundless.