Question:
There is a whole new paradigm of "functional programming", which requires a total change of thought patterns compared to procedural programming. It uses higher-order functions, purity, monads, etc., which we don't usually see in imperative and object-oriented languages.
My question is how the implementation of these languages differs from that of imperative or object-oriented languages, with respect to, for example, memory management or internals like pointers.
There are functional languages that run on top of the JVM. Does this mean that these languages internally work like the other languages on the JVM?
Answer 1:
Implementations of functional programming languages use a wide range of techniques. An excellent introduction to the implementation of Scheme (a Lisp dialect) is the book Lisp in Small Pieces by Christian Queinnec.
Answer 2:
Code resulting from functional languages uses many features you see to varying degrees in non-functional languages. Garbage collection has passed into general usage. Tail-call optimization is done in GCC and VC++.
Closures, however, are a hallmark of functional programming: you don't see one without the other. If you define "functional languages" to refer only to pure functional languages, the two aren't synonymous, since you also find closures in imperative languages that support functional programming (e.g. JavaScript, and Scheme, which is technically imperative, though the functional paradigm is what's mostly used). Closures might be implemented with a spaghetti stack for the call stack, by copying out local variables when exiting a stack frame, or by allocating local variables on the heap and letting garbage collection take care of them.
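As a small illustration of the heap-allocation strategy, here is a Python sketch (make_counter is a made-up example name): the captured local must outlive its enclosing stack frame, so CPython boxes it in a heap-allocated cell object that the garbage collector eventually reclaims.

```python
def make_counter():
    # 'count' is a local variable of make_counter. Because the inner
    # function closes over it, the runtime cannot discard it when
    # make_counter's frame exits; CPython stores it in a heap-allocated
    # "cell" object shared between the frames.
    count = 0

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

counter = make_counter()
print(counter())  # 1
print(counter())  # 2

# The closure carries its captured environment with it:
print(counter.__closure__[0].cell_contents)  # 2
```

Each call to make_counter creates a fresh cell, so two counters made this way do not share state.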
Once you have closures, anonymous functions are relatively easy (with an interpreter, they're really easy). With a compiler, the function is converted to bytecode at compile time, and the bytecode (rather, the address of the entry point) is associated at runtime with the current environment.
Function composition can rely on anonymous functions. When a compiler encounters a function composition operator f . g, it creates an anonymous function that calls the two arguments f and g, passing the result of one as the argument to the other.
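That desugaring can be sketched in a few lines of Python (compose is a hypothetical name standing in for what the compiler would generate for f . g):

```python
def compose(f, g):
    # What a compiler might emit for the composition operator `f . g`:
    # an anonymous function that applies g first, then passes the
    # result to f.
    return lambda x: f(g(x))

double = lambda x: x * 2
increment = lambda x: x + 1

# increment . double
increment_after_double = compose(increment, double)
print(increment_after_double(5))  # 11, i.e. increment(double(5))
```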
Monads can be implemented in OO languages, they're just not as necessary as they are in pure functional languages. I/O monads aren't anything too special, they just rely on the fact that the underlying platform allows side effects.
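To back up the claim that monads fit in OO languages, here is a minimal sketch of a Maybe-style monad in Python (all class and method names are made up for illustration): bind sequences computations that may fail, short-circuiting once a failure occurs.

```python
class Maybe:
    """A sketch of the Maybe monad: a value that may be absent."""

    def __init__(self, value, present):
        self.value = value
        self.present = present

    @staticmethod
    def just(value):
        # The monadic 'return' / 'unit': wrap a plain value.
        return Maybe(value, True)

    @staticmethod
    def nothing():
        return Maybe(None, False)

    def bind(self, f):
        # The monadic '>>=': run f on the value if present,
        # otherwise propagate the failure unchanged.
        return f(self.value) if self.present else self

def safe_div(x, y):
    return Maybe.nothing() if y == 0 else Maybe.just(x / y)

result = Maybe.just(10).bind(lambda x: safe_div(x, 2)) \
                       .bind(lambda x: safe_div(x, 0))
print(result.present)  # False: the division by zero short-circuits the chain
```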
Answer 3:
I guess there are many aspects that benefit from special attention in functional languages; one that comes to mind is:
Functional languages use recursion a lot, so any implementation should try to optimize this case, e.g. by identifying tail recursion and transforming it into a loop internally (thus saving function-call overhead such as stack save/restore). (http://en.wikipedia.org/wiki/Tail_recursion)
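The transformation can be shown side by side in Python (function names are made up for illustration; note that CPython itself does not perform this optimization, which is exactly why the loop form matters):

```python
def sum_to_recursive(n, acc=0):
    # Tail-recursive form: the recursive call is the very last thing
    # that happens, so no work remains in the caller's frame.
    if n == 0:
        return acc
    return sum_to_recursive(n - 1, acc + n)

def sum_to_loop(n):
    # What tail-call optimization effectively produces: the recursion
    # becomes a loop that reuses a single stack frame.
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

print(sum_to_recursive(100))  # 5050
print(sum_to_loop(100))       # 5050
# The recursive version would blow the stack for large n in CPython,
# while the loop version runs in constant stack space:
print(sum_to_loop(10**6))     # 500000500000
```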
Answer 4:
The implementations of a functional programming language such as Haskell are often very different from those of imperative languages. You can read about one way of doing it here. Even though the paper is several years old, I believe the ideas are still used.
Answer 5:
The biggest difference that comes to mind is that functional languages tend to be designed so that the source code is desugared to a mathematically simple and powerful intermediate language. This language usually contains lambda, function calls, if/else, machine types, something like let, and not a whole lot more. The transformed code is deeply nested, verbose, and not realistically human-readable. The surface syntax is thrown away.
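To make the desugaring idea concrete, here is a toy Python sketch (the term representation and function names are invented for illustration): a let-binding is not primitive in such a core language; it is rewritten into the application of a lambda, and a tiny evaluator suffices for what remains.

```python
# Core terms as tagged tuples:
#   ('lit', n)            a constant
#   ('var', name)         a variable reference
#   ('lam', name, body)   a lambda
#   ('app', f, arg)       a function application

def desugar_let(name, bound_expr, body):
    # let name = bound_expr in body  ==>  (lambda name. body) bound_expr
    return ('app', ('lam', name, body), bound_expr)

def evaluate(term, env=None):
    env = env or {}
    tag = term[0]
    if tag == 'lit':
        return term[1]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        _, var, body = term
        return lambda arg: evaluate(body, {**env, var: arg})
    if tag == 'app':
        _, f, arg = term
        return evaluate(f, env)(evaluate(arg, env))
    raise ValueError(f"unknown term: {term!r}")

# let x = 5 in x
core = desugar_let('x', ('lit', 5), ('var', 'x'))
print(core)           # ('app', ('lam', 'x', ('var', 'x')), ('lit', 5))
print(evaluate(core)) # 5
```

The surface syntax (the let form) disappears entirely; the evaluator only ever sees lambdas, applications, variables, and literals.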
A compiler for a language like this has to do some inlining and a few closure optimizations to produce decent code. (To me these baseline closure optimizations seem nontrivial, escape analysis and so forth, but it might just be lack of familiarity.)
Answer 6:
Everything runs on the same processor (and thus the same assembly instructions), so as long as you go deep enough, everything's the same internally.
Answer 7:
@outis: While the language may support them, closures conflict with the mathematical concept of a function in the same way that side effects do: they allow you to get different results from the same arguments. That makes closures procedural, rather than functional.
That said, there are efficiency arguments which favor closures over globals (especially in the context of compiler implementation). [But I know of functional languages which do not provide closures directly, even though "work alikes" can be implemented.]
(However, currying is similar to closures and does not suffer from this conflict, and is indeed routinely present in functional languages.)
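Following that distinction, here is a minimal Python sketch of currying (curried_add is a made-up name): each partial application captures only its own immutable argument, so the same arguments always yield the same result.

```python
def curried_add(x):
    # A two-argument addition expressed as a chain of one-argument
    # functions. The inner function depends only on the arguments
    # supplied so far, never on mutable external state.
    return lambda y: x + y

add5 = curried_add(5)
print(add5(3))            # 8
print(add5(10))           # 15
print(curried_add(1)(2))  # 3
```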
Anyway, in my opinion, functional programming languages are languages that make great efforts to make computations representable as if they were mathematical functions. This means that optimizations are slanted toward optimizing functions.
Hypothetically, at least, functional languages allow the machine to work on deeper abstractions than would be useful for a purely procedural approach.