Are Erlang's recursive functions not just a goto?

Posted 2020-02-05 08:46

Just to get it straight in my head. Consider this example bit of Erlang code:

 test() ->
     receive
         {From, whatever} ->
             %% do something
             test();
         {From, somethingelse} ->
             %% do something else
             test()
     end.

Isn't the test() call just a goto?

I ask this because in C we learned that when you do a function call, the return address is always put on the stack. I can't imagine that is the case in Erlang here, since this would result in a stack overflow.

In BASIC we had two different ways of calling functions: GOTO and GOSUB. GOTO just steered the program flow somewhere else, and GOSUB remembered where you came from so you could return.

Given this way of thinking, I can understand Erlang's recursion more easily: if I just read test() as a goto, then there is no problem at all.

Hence my question: isn't Erlang just using a goto instead of remembering the return address on a stack?

EDIT:

Just to clarify my point:

I know gotos can be used in some languages to jump all over the place. But just suppose that instead of doing someFunction() you could also do goto someFunction(). In the first case the flow returns; in the second case the flow just continues in someFunction and never returns.

So we limit the normal GOTO behaviour by only being able to jump to function starting points.

If you see it like this, then the Erlang recursive function call looks like a goto.

(A goto, in my opinion, is a function call without the ability to return to where you came from, which is exactly what is happening in the Erlang example.)

9 answers
你好瞎i
#2 · 2020-02-05 09:03

It's a goto in the same way that if is a goto and while is a goto. It is implemented using (the moral equivalent of) goto, but it does not expose the full shoot-yourself-in-the-foot potential of goto directly to the programmer.

我想做一个坏孩纸
#3 · 2020-02-05 09:12

In this case it is possible to do tail-call optimization, since no further work needs to be done and no local variables are needed after the call. So the compiler will convert this into a loop.
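
To make that concrete, here is a rough sketch (my own example, not from the question) of the difference between a body-recursive function, which must keep a stack frame per call, and a tail-recursive one, which the compiler can run in constant stack space like a loop:

 %% body-recursive: the addition happens *after* the recursive call
 %% returns, so each call needs its own stack frame.
 count([]) -> 0;
 count([_ | T]) -> 1 + count(T).

 %% tail-recursive: the recursive call is the very last thing done,
 %% so the current frame can be reused -- effectively a loop.
 count_tail(L) -> count_tail(L, 0).

 count_tail([], Acc) -> Acc;
 count_tail([_ | T], Acc) -> count_tail(T, Acc + 1).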

We Are One
#4 · 2020-02-05 09:19

Here's a more general answer, which supersedes my earlier answer based on call stacks. Since the earlier answer has been accepted, I won't replace the text.

Prologue

Some architectures don't have things they call "functions" that are "called", but do have something analogous (messaging architectures may call them "methods" or "message handlers"; event-based architectures have "event handlers" or simply "handlers"). I'll be using the terms "code block" and "invocation" for the general case, though (strictly speaking) "code block" can include things that aren't quite functions. You can substitute the appropriately inflected form of "call" for "invocation" or "invoke", as I might in a few places. The features of an architecture that describe invocation are sometimes called "styles", as in "continuation passing style" (CPS), though this isn't an official term. To keep things from being too abstract, we'll examine the call-stack, continuation-passing, messaging (à la OOP) and event-handling invocation styles. I should specify the models I'm using for these styles, but I'm leaving them out in the interest of space.

Invocation Features

or, C Is For Continuation, Coordination and Context, That's Good Enough For Me

Hohpe identifies three nicely alliterative invocation features of the call-stack style: Continuation, Coordination, Context (all capitalized to distinguish them from other uses of the words).

  • Continuation decides where execution will continue when a code block finishes. The "Continuation" feature is related to "first-class continuations" (often simply called "continuations", including by me), in that continuations make the Continuation feature visible and manipulable at a programmatic level.
  • Coordination means code doesn't execute until the data it needs is ready. Within a single call stack, you get Coordination for free because the program counter won't return to a function until a called function finishes. Coordination becomes an issue in (e.g.) concurrent and event-driven programming, the former because a data producer may fall behind a data consumer and the latter because when a handler fires an event, the handler continues immediately without waiting for a response.
  • Context refers to the environment that is used to resolve names in a code block. It includes allocation and initialization of the local variables, parameters and return value(s). Parameter passing is also covered by the calling convention (keeping up the alliteration); for the general case, you could split Context into a feature that covers locals, one that covers parameters and another for return values. For CPS, return values are covered by parameter passing.
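
As a tiny illustration of that last point, here is a toy continuation-passing sketch in Erlang (my own example, not from Hohpe): instead of returning, each function delivers its "result" by calling the continuation it was passed.

 %% add_cps/3 never returns anything useful; it hands A + B to K.
 add_cps(A, B, K) -> K(A + B).

 %% Compute (1 + 2) + 3 and print it, with every "return value"
 %% arriving as a parameter of the next continuation.
 example() ->
     add_cps(1, 2, fun(Sum) ->
         add_cps(Sum, 3, fun(Total) -> io:format("~p~n", [Total]) end)
     end).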

The three features aren't necessarily independent; invocation style determines their interrelationships. For instance, Coordination is tied to Continuation under the call-stack style. Continuation and Context are connected in general, since return values are involved in Continuation.

Hohpe's list isn't necessarily exhaustive, but it will suffice to distinguish tail-calls from gotos. Warning: I might go off on tangents, such as exploring invocation space based on Hohpe's features, but I'll try to contain myself.

Invocation Feature Tasks

Each invocation feature involves tasks to be completed when invoking a code block. For Continuation, invoked code blocks are naturally related by a chain of invoking code. When a code block is invoked, the current invocation chain (or "call chain") is extended by placing a reference (an "invocation reference") to the invoking code at the end of the chain (this process is described more concretely below). Taking into account that invocation also involves binding names to code blocks and parameters, we see that even non-bondage-and-discipline languages can have the same fun.

Tail Calls

or, The Answer

or, The Rest Is Basically Unnecessary

Tail calling is all about optimizing Continuation, and it's a matter of recognizing when the main Continuation task (recording an invocation reference) can be skipped. The other feature tasks stand on their own. A "goto" represents optimizing away tasks for Continuation and Context. That's pretty much why a tail call isn't a simple "goto". What follows will flesh out what tail calls look like in various invocation styles.

Tail Calls In Specific Invocation Styles

Different styles arrange invocation chains in different structures, which I'll call a "tangle", for lack of a better word. Isn't it nice that we've gotten away from spaghetti code?

  • With a call-stack, there's only one invocation chain in the tangle; extending the chain means pushing the program counter. A tail call means no program counter push.
  • Under CPS, the tangle consists of the extant continuations, which form a reverse arborescence (a directed tree where every edge points towards a central node), where each path back to the center is an invocation chain (note: if the program entry point is passed a "null" continuation, the tangle can be a whole forest of reverse arborescences). One particular chain is the default, which is where an invocation reference is added during invocation. Tail calls won't add an invocation reference to the default invocation chain. Note that "invocation chain" here is basically synonymous with "continuation", in the sense of "first-class continuation".
  • Under message passing, the invocation chain is a chain of blocked methods, each waiting for a response from the method before it in the chain. A method that invokes another is a "client"; the invoked method is a "supplier" (I'm purposefully not using "service", though "supplier" isn't much better). A messaging tangle is a set of unconnected invocation chains. This tangle structure is rather like having multiple thread or process stacks. When a method merely echoes another method's response as its own, the method can have its client wait on its supplier rather than itself (see the Erlang-flavoured sketch after this list). Note that this gives a slightly more general optimization, one that involves optimizing Coordination as well as Continuation. If the final portion of a method doesn't depend on a response (and the response doesn't depend on the data processed in the final portion), the method can continue once it has passed its client's wait dependency on to its supplier. This is analogous to launching a new thread, where the final portion of the method becomes the thread's main function, followed by a call-stack style tail call.
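
Here is a rough Erlang-flavoured sketch of that "have the client wait on the supplier" idea (my own example, not from the original answer): the middle process simply forwards the client's reply address, so the reply skips the middleman, and the last call back into the loop keeps the stack from growing.

 middleman(Supplier) ->
     receive
         {From, Request} ->
             Supplier ! {From, Request},  %% forward the client's reply address
             middleman(Supplier)          %% last call: no extra stack frame
     end.

 supplier() ->
     receive
         {From, Request} ->
             From ! {reply, Request},     %% reply goes straight to the original client
             supplier()
     end.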

What About Event Handling Style?

With event handling, invocations don't have responses and handlers don't wait, so "invocation chains" (as used above) isn't a useful concept. Instead of a tangle, you have priority queues of events, which are owned by channels, and subscriptions, which are lists of listener-handler pairs. In some event driven architectures, channels are properties of listeners; every listener owns exactly one channel, so channels become synonymous with listeners. Invoking means firing an event on a channel, which invokes all subscribed listener-handlers; parameters are passed as properties of the event. Code that would depend on a response in another style becomes a separate handler under event handling, with an associated event. A tail call would be a handler that fires the event on another channel and does nothing else afterwards. Tail call optimization would involve re-subscribing listeners for the event from the second channel to the first, or possibly having the handler that fired the event on the first channel instead fire on the second channel (an optimization made by the programmer, not the compiler/interpreter). Here's what the former optimization looks like, starting with the un-optimized version.

  1. Listener Alice subscribes to event "inauguration" on BBC News, using handler "party"
  2. Alice fires event "election" on channel BBC News
  3. Bob is listening for "election" on BBC News, so Bob's "openPolls" handler is invoked
  4. Bob subscribes to event "inauguration" on channel CNN.
  5. Bob fires event "voting" on channel CNN
  6. Other events are fired & handled. Eventually, one of them ("win", for example) fires event "inauguration" on CNN.
  7. Bob's inauguration handler fires "inauguration" on BBC News
  8. Alice's inauguration handler is invoked.

And the optimized version:

  1. Listener Alice subscribes to event "inauguration" on BBC News
  2. Alice fires event "election" on channel BBC News
  3. Bob is listening for "election" on BBC News, so Bob's "openPolls" handler is invoked
  4. Bob subscribes anyone listening for "inauguration" on BBC News to the inauguration event on CNN*.
  5. Bob fires event "voting" on channel CNN
  6. Other events are fired & handled. Eventually, one of them fires event "inauguration" on CNN.
  7. Alice's inauguration handler is invoked for the inauguration event on CNN.

Note that tail calls are trickier (untenable?) under event handling because they have to take into account subscriptions. If Alice were later to unsubscribe from "inauguration" on BBC News, the subscription to "inauguration" on CNN would also need to be canceled. Additionally, the system must ensure it doesn't inappropriately invoke a handler multiple times for a listener. In the above optimized example, what if there's another handler for "inauguration" on CNN that fires "inauguration" on BBC News? Alice's "party" handler will be invoked twice, which may get her in trouble at work. One solution is to have *Bob unsubscribe all listeners from "inauguration" on BBC News in step 4, but then you introduce another bug wherein Alice will miss inauguration events that don't come via CNN. Maybe she wants to celebrate both the U.S. and British inaugurations. These problems arise because there are distinctions I'm not making in the model, possibly based on types of subscriptions. For instance, maybe there's a special kind of one-shot subscription (like System V signal handlers), or some handlers unsubscribe themselves, and tail-call optimization is only applied in these cases.

What's next?

You could go on to more fully specify invocation feature tasks. From there, you could figure out what optimizations are possible, and when they can be used. Perhaps other invocation features could be identified. You could also think of more examples of invocation styles. You could also explore the dependencies between invocation features. For instance, synchronous and asynchronous invocation involve explicitly coupling or uncoupling Continuation and Coordination. It never ends.

Get all that? I'm still trying to digest it myself.

References:

  1. Hohpe, Gregor; "Event-Driven Architecture"
  2. Sugalski, Dan; "CPS and tail calls--two great tastes that taste great together"
太酷不给撩
#5 · 2020-02-05 09:20

You are correct: the Erlang compiler will detect that it is a tail-recursive call, and instead of growing the stack, it reuses the current function's stack frame.

Furthermore, mutual (circular) tail recursion is also optimized, e.g.

test() -> ..., test2().
test2() -> ..., test3().
test3() -> ..., test().

will also be optimized.

The "unfortunate" side-effect of this is that when you are tracing function calls, you will not be able to see each invocation of a tail recursive function, but the entry and exit point.

Viruses.
#6 · 2020-02-05 09:21

I think the difference here is between a "real" goto and what can in some cases seem like a goto. In some special cases the compiler can detect that it is free to clean up the stack of the current function before calling another function. This happens when the call is the last call in a function. The difference from a goto is, of course, that, as in any other call, you can pass arguments to the new function.

As others have pointed out, this optimisation is not restricted to recursive calls but applies to all last calls. This is used in the "classic" way of programming FSMs, as in the sketch below.
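
A minimal sketch of that classic FSM style (my own example, not from the original answer): each state is a function, and a state change is simply a last call to the next state's function, so the process can loop between states forever in constant stack space.

 locked() ->
     receive
         {unlock, Code} when Code =:= 1234 -> unlocked();
         _ -> locked()
     end.

 unlocked() ->
     receive
         lock -> locked();
         _ -> unlocked()
     end.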

#7 · 2020-02-05 09:24

You've got two questions here.

First, no, you're not in any danger of overrunning the stack in this case because these calls to test() are both tail-recursive.

Second, no, function calls are not gotos, they're function calls. :) The thing that makes goto problematic is that it bypasses any structure in your code. You can jump out of statements, jump into statements, bypass assignments... all kinds of screwiness. Function calls don't have this problem because they have an obvious flow of control.
