I'm reading 'Functional Programming' by Tomas Petricek & Jon Skeet and I understand the difference between declarative & imperative programming.
What I was wondering is how the primitive operators & functions are implemented: are declarative languages constructed from imperative operators & functions?
Cheers
AWC
If I understand your question correctly, I don't think that is a hard and fast rule. For example, you can use a functional language like Lisp to create an interpreter for itself. In this case, the primitives are implemented in a functional manner (because Lisp is a functional language).
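To make that concrete, here is a minimal sketch of the idea: a tiny expression evaluator written in Haskell with no mutation anywhere, so the interpreter's own primitives are purely functional. (The `Expr` type and the operators it supports are made up for illustration.)

```haskell
-- A tiny expression evaluator whose primitives are themselves
-- purely functional: no assignment or mutation anywhere.
data Expr
  = Num Integer
  | Var String
  | Add Expr Expr
  | If Expr Expr Expr          -- non-zero condition means "true"
  | Let String Expr Expr       -- let x = e in body
  deriving Show

type Env = [(String, Integer)]

eval :: Env -> Expr -> Integer
eval _   (Num n)     = n
eval env (Var x)     = maybe (error ("unbound: " ++ x)) id (lookup x env)
eval env (Add a b)   = eval env a + eval env b
eval env (If c t e)  = if eval env c /= 0 then eval env t else eval env e
eval env (Let x e b) = eval ((x, eval env e) : env) b

-- eval [] (Let "x" (Num 2) (Add (Var "x") (Num 3)))  ==>  5
```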
Also, if you have a language that is Turing-complete, you can use it to implement a parser/interpreter/compiler for any other language. There are imperative Turing-complete languages, and functional/declarative Turing-complete languages.
But all code eventually comes down to assembly or machine code, which is inherently imperative. So what I said above is true in theory, but not quite in practice. :)
As an interesting historical aside, LISP began as a completely theoretical construct: a mathematical notation for computer programs. It remained theoretical until LISP's eval function was implemented in machine code by Steve Russell on an IBM 704:
According to Paul Graham in Hackers & Painters, p. 185, McCarthy said: "Steve Russell said, look, why don't I program this eval..., and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he *compiled the eval in my paper into IBM 704 machine code*, fixing bugs, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today..." (emphasis mine)
So once again, the subtleties between theory and practice. :)
The low-level machine (CPU, assembly-language level) is imperative, so obviously at some point the implementation will have to take that into account. However, implementing certain kinds of functional languages, like Haskell, takes some very non-obvious approaches to create a run-time with decent performance.
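For a taste of why: Haskell is non-strict, so the run-time evaluates expressions only on demand, something commodity hardware has no direct support for. A small sketch:

```haskell
-- 'nats' is an infinite list, but 'take 5' demands only the first
-- five cells, so this terminates. Supporting this demand-driven
-- evaluation efficiently on imperative hardware (thunks, graph
-- reduction, in-place update of evaluated nodes) is the hard part.
nats :: [Integer]
nats = 0 : map (+ 1) nats

main :: IO ()
main = print (take 5 nats)   -- prints [0,1,2,3,4]
```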
Strangely enough, most imperative-language compilers go through a phase where all the code is transformed to be more declarative, for example static single assignment (SSA) form, in which every variable is assigned exactly once.
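A sketch of that correspondence (names made up): the imperative loop in the comment below is, after SSA conversion, essentially a tail-recursive function in which every assignment becomes a fresh immutable binding.

```haskell
-- The imperative loop
--     s = 0; i = 0;
--     while (i < n) { s = s + i; i = i + 1; }
-- becomes, after SSA conversion, essentially this tail-recursive
-- function: each mutable variable turns into a function argument,
-- and the phi-nodes at the loop head become the recursive call.
sumBelow :: Int -> Int
sumBelow n = go 0 0
  where
    go s i
      | i < n     = go (s + i) (i + 1)
      | otherwise = s

-- sumBelow 5  ==>  0 + 1 + 2 + 3 + 4  ==>  10
```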
Here is an example of compiling Scheme (functional) directly to C code (imperative):
- The 90 minute Scheme to C compiler
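To give the flavour of the idea (this is only a toy sketch, nothing like the real compiler, which handles closures, continuations, and tail calls), here is a Haskell function that translates a tiny functional expression type into a C expression string:

```haskell
-- Translate a tiny functional expression type to a C expression string.
data Expr
  = Num Int
  | Var String
  | Add Expr Expr
  | If0 Expr Expr Expr        -- if e == 0 then t else f

toC :: Expr -> String
toC (Num n)     = show n
toC (Var x)     = x
toC (Add a b)   = "(" ++ toC a ++ " + " ++ toC b ++ ")"
toC (If0 c t f) = "((" ++ toC c ++ ") == 0 ? " ++ toC t ++ " : " ++ toC f ++ ")"

-- toC (If0 (Var "x") (Num 1) (Add (Var "x") (Num 2)))
--   ==>  "((x) == 0 ? 1 : (x + 2))"
```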
Your question is a bit unclear. Under the hood, processor instructions are imperative in nature. If your code is supposed to run on a von Neumann machine, it will eventually be executed as imperative code.
It may be possible to build a machine (with some specific architecture) that inherently supports those operations. In fact, Lisp machines were designed to help run Lisp programs. While I'm not familiar with the hardware characteristics of the LispM, it probably does qualify as providing some primitive operations at a more declarative level.
Are declarative languages constructed from imperative operators & functions?
Sometimes. It is a property of the implementation, not the language.
In the 1980s, lots of people compiled functional programs to graphs, then rewrote the graphs to simpler graphs. This computation usually involved updating the graphs in place, but otherwise it was about as declarative as you could wish for. To learn more, look up "graph reduction" or read "The four-stroke reduction engine" by Chris Clack and Simon Peyton Jones.
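To give a feel for "updating the graph in place", here is a hand-rolled thunk in Haskell; this is only a sketch of the principle, since real implementations do it at the level of heap nodes rather than with an explicit `IORef`:

```haskell
import Data.IORef

-- A graph node is either an unevaluated suspension or a value.
data Thunk a = Thunk (IORef (Either (IO a) a))

suspend :: IO a -> IO (Thunk a)
suspend act = Thunk <$> newIORef (Left act)

-- Forcing a thunk evaluates it at most once: the first force
-- overwrites the node in place with its result (the one imperative
-- step), and every later force just reads the cached value.
force :: Thunk a -> IO a
force (Thunk ref) = do
  node <- readIORef ref
  case node of
    Right v  -> return v
    Left act -> do
      v <- act
      writeIORef ref (Right v)   -- update the graph node in place
      return v
```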
Eventually compiler writers figured out ways to get better performance by compiling functional programs directly to native machine code. If the native machine is a typical commodity machine, this means typical imperative operations. However, if you look up the pioneering work of Professor Arvind at MIT, his group designed and built dataflow machines where the fundamental computational operations are more declarative in nature. It was great work, but all the special-purpose architectures that flourished in the 1980s have been made extinct by the great Microsoft/Intel virtuous cycle (more software -> more PCs sold -> cheaper processors -> more PCs sold -> ... -> $300 netbooks that do really cool stuff).
The implementation is what's hidden under the hood, and it can be built with any paradigm.

A declarative program is just data for some more-or-less "universal" imperative implementation/VM.
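For instance, here is a minimal sketch of that idea in Haskell: the "program" is a table of rules (pure data), and a small generic engine interprets it. All the names are made up for illustration.

```haskell
-- The "declarative program" is just a table of rows with no
-- dependencies between them; the "VM" is one small generic engine
-- that interprets it.
data Rule = Rule
  { matches :: String -> Bool   -- when does this row apply?
  , handler :: String           -- which handler takes the message?
  }

rules :: [Rule]                 -- the declarative part: pure data
rules =
  [ Rule (== "ping")             "pong-handler"
  , Rule (\m -> take 1 m == "a") "a-handler"
  , Rule (const True)            "default-handler"   -- catch-all row
  ]

-- The generic engine: the first matching row wins.
route :: String -> String
route msg = head [ handler r | r <- rules, matches r msg ]

-- route "ping"   ==>  "pong-handler"
-- route "apple"  ==>  "a-handler"
```

Note that swapping in a different engine (a parallel one, say) would not touch the table at all, which is the point made below.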
Pluses:

Specifying just data, in some fixed (and checked) format, is simpler and less error-prone than directly specifying a variant of some imperative algorithm. Some complex specifications simply can't be written directly, only in some DSL form.

The best and most frequently used data structures in DSLs are sets and tables, because there are no dependencies between elements/rows. And when you have no dependencies, you have freedom to modify and ease of support. (Compare, for example, modules with classes: with modules you are happy, while with classes you get the fragile base class problem.)

All the benefits of declarativeness and DSLs follow immediately from the benefits of those data structures (tables and sets).

Another plus: you can change the implementation of the declarative language's VM, as long as the DSL is more-or-less abstract (well designed). Make a parallel implementation, for example, or port it to another OS, etc. Any well-specified, modular, isolating interface or protocol gives you that kind of freedom and ease of support.
Minuses:

You guessed right: a generic imperative algorithm/VM implementation (parameterized by a DSL) may in some cases be slower and/or more memory-hungry than a specific one.

If those cases are rare, just forget about it and let it be slow. If they are frequent, you can always extend your DSL/VM for that case, probably slowing down all the other cases somewhat, sure...

P.S. Frameworks are halfway between a DSL and imperative code, and like all halfway solutions... they combine the deficiencies, not the benefits: they are not so safe AND not so fast. :) Look at jack-of-all-trades Haskell: it's halfway between strong, simple ML and flexible, metaprogramming-friendly Prolog, and... what a monster it is. You can look at Prolog as a Haskell with boolean-only functions/predicates. And see how simple its flexibility is compared to Haskell's...