
Is it possible to build a comparatively fast untyped lambda calculus machine?

Published 2019-03-08 01:09

Question:

Pure untyped lambda calculus is a powerful concept. However, building a machine or interpreter for real-world use is often described as (close to) impossible. I want to investigate this. Is it theoretically possible to build a comparatively fast untyped lambda calculus machine?

By comparatively fast I generally mean comparable to modern Turing-like architectures for a similar range of tasks, within a similar amount of resources (gates, operations, physical space, power use, etc).

I place no limitations on the implementation and architectural layers of the machine, except that it must be physically and somewhat realistically realizable in some way. There are no restrictions on how IO is handled either.

  • If possible, what are the main challenges?
  • If impossible, why and how?
  • What is the state of research in this area?
  • Which fields and subjects are most relevant?

How much is known about the feasibility of a computer architecture based around lambda calculus?

Questions covering similar ground:

  • Machine model for functional programming
  • Historical reasons for adoption of the Turing machine as the primary model

Answer 1:

First, it is possible to compile the lambda calculus efficiently to machine code even on existing architectures. After all, Scheme is the lambda calculus plus a bit extra, and it can be compiled efficiently. However, Scheme & co. implement the lambda calculus under strict evaluation. It is also possible to compile the lambda calculus under non-strict evaluation efficiently! For background, see SPJ's two books: http://research.microsoft.com/en-us/um/people/simonpj/papers/papers.html
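To make the strict vs. non-strict distinction concrete, here is a minimal Haskell sketch of an untyped lambda calculus evaluator with both a call-by-value and a call-by-name reduction function. The term representation, function names, and the simplifying assumption about unique bound variables are my own for illustration; this is not drawn from any real compiler.

```haskell
-- A tiny untyped lambda calculus evaluator, sketching the difference
-- between strict (call-by-value) and non-strict (call-by-name) evaluation.
module Main where

data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving Show

-- Substitution is simplified: it assumes bound variable names are
-- globally unique, which is enough for this example.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   | x == y    = s
                    | otherwise = Var y
subst x s (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x s b)
subst x s (App f a)             = App (subst x s f) (subst x s a)

-- Call-by-value: reduce the argument to a value before substituting.
evalCBV :: Term -> Term
evalCBV (App f a) =
  case evalCBV f of
    Lam x b -> evalCBV (subst x (evalCBV a) b)
    f'      -> App f' (evalCBV a)
evalCBV t = t

-- Call-by-name: substitute the unevaluated argument; it is only
-- reduced if and when it is actually needed.
evalCBN :: Term -> Term
evalCBN (App f a) =
  case evalCBN f of
    Lam x b -> evalCBN (subst x a b)
    f'      -> App f' a
evalCBN t = t

main :: IO ()
main = do
  -- (\x. \y. y) applied to anything discards its first argument.
  let discardFirst = App (Lam "x" (Lam "y" (Var "y"))) (Var "unused")
  print (evalCBV discardFirst)
  print (evalCBN discardFirst)
</```

Both strategies give the same normal form here, but under call-by-name the discarded argument is never evaluated at all, which is the property lazy compilers exploit.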

On the other hand, it is also true that if we built hardware designed for functional languages, we could compile code to that hardware and do very well indeed. The best recent work on this that I know of is the Reduceron: http://www.cs.york.ac.uk/fp/reduceron/

The key to the Reduceron's performance, which is quite compelling, is that it is built around parallel graph reduction and aims to exploit the opportunities for parallelism made explicit by the reduction of lambda calculus expressions.
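To illustrate the core idea of graph reduction that such hardware builds on, here is a minimal Haskell sketch of reduction over a term graph with sharing: nodes sit behind mutable references, so once a shared subterm has been reduced, every reference sees the result and the work is never repeated. This is only an illustration of the technique, not the Reduceron's actual design; the node types, the primitive addition, and the function names are my own, and the parallel aspects of the Reduceron are not modelled here.

```haskell
-- A minimal sequential graph reducer with sharing (illustrative only).
import Data.IORef

data Node
  = NInt Int
  | NAdd (IORef Node) (IORef Node)   -- a primitive, to make results visible
  | NInd (IORef Node)                -- indirection left behind by an update

-- Reduce a node to an integer, overwriting shared nodes in place.
reduce :: IORef Node -> IO Int
reduce ref = do
  node <- readIORef ref
  case node of
    NInt n   -> return n
    NInd r   -> reduce r
    NAdd a b -> do
      x <- reduce a
      y <- reduce b
      let n = x + y
      writeIORef ref (NInt n)        -- update: later references reuse the result
      return n

main :: IO ()
main = do
  -- Build the graph for: let shared = 1 + 2 in shared + shared.
  one    <- newIORef (NInt 1)
  two    <- newIORef (NInt 2)
  shared <- newIORef (NAdd one two)
  top    <- newIORef (NAdd shared shared)
  print =<< reduce top               -- prints 6; "1 + 2" is computed only once
```

In this sketch the two operands of the top-level addition are independent subgraphs, which is the kind of structure that reduction of functional programs exposes and that parallel reduction machines aim to exploit.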