
In "traditional" recursion, each recursive call pushes a new frame onto the function call stack. This means that your recursive solution will eventually run out of stack space when the number of recursive calls exceeds what the call stack can handle.

e.g., consider the following recursive solution for calculating the sum of all positive integers up to a given `n`:

  fn sum(n):
    if n == 0:
      return 0
    else:
      return n + sum(n - 1)    // A
When n = 1000, your function call stack will have to hold ~1000 frames (sum(1000), sum(999), sum(998), sum(997), and so on...) before it's able to compute the sum. This is because the return statement at line A above needs to keep track of the variable `n` from the calling function in order to compute the return value. If only there were some way to eliminate that variable `n`...
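You can watch this failure mode directly in Python (Python here is just an illustrative choice; the pseudocode above is language-agnostic, and `sum_rec` is my name for it):

    import sys

    # Naive recursive sum, as in the pseudocode above.
    def sum_rec(n):
        if n == 0:
            return 0
        # The `n + ...` is pending work: this frame must stay alive
        # until the recursive call returns.
        return n + sum_rec(n - 1)

    print(sum_rec(100))  # 5050: fine for small n

    # Beyond the interpreter's recursion limit (~1000 by default),
    # the stack of pending frames runs out.
    try:
        sum_rec(sys.getrecursionlimit() + 100)
    except RecursionError:
        print("blew the stack")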

That's where tail-recursion or tail-call optimization comes in:

  fn sum_tail(n, pre_sum):
    if n == 0:
      return pre_sum
    return sum_tail(n - 1, pre_sum + n)    // B
    
  fn sum(n):
    return sum_tail(n, 0)
In this solution, the return statement at line B does not depend on any variable from the calling function (it simply passes that value on to next function call), and so the calling function can immediately be popped off the stack, saving stack space and making your solution more memory-efficient.
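Transcribed to Python (again, just a sketch; note the caveat below that Python will not actually reuse the frames):

    def sum_tail(n, pre_sum=0):
        # All arithmetic (`pre_sum + n`) is finished before the call,
        # so the frame carries no pending computation: a true tail call.
        if n == 0:
            return pre_sum
        return sum_tail(n - 1, pre_sum + n)

    print(sum_tail(1000))  # 500500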

It's useful to know that whether any stack space is actually saved depends on whether your language of choice implements tail-call optimization. e.g., JavaScript added proper tail calls to the spec in ES6 [1], and Python does not support them [2].

[1] http://2ality.com/2015/06/tail-call-optimization.html [2] https://stackoverflow.com/questions/13591970/does-python-opt...



Thanks for the great explanation of tail recursion.

But I still don't see how tail recursion implements memoisation? One might even argue that tail recursion is the opposite of memoisation; while tail recursion saves memory because it eliminates the need to remember previous results of function calls, memoisation on the other hand uses extra memory to save results of earlier function calls.


The memoization occurs in the fact that we are "remembering" the previous value (pre_sum). It's a little different in that we aren't memoizing/caching all previous values, but we are still caching the last computed value, aka the "tail".


rishabhparikh is correct, but since the point is quite subtle it might help to be a bit more elaborate.

The crux of the misunderstanding is here: [tail recursion] eliminates the need to remember previous results of function calls.

Emphasis: results. The thing is: tail call recursion essentially means "finish all intermediate computations and pass the results of these to the next function call".

In the non-tail call version, the sum function has no results until you hit the deepest level of recursion; it is the equivalent of writing the sum out in full, which causes all that extra overhead:

    sum = n + (n-1) + (n-2) + ... + 0
You have to recurse until you reach n == 0, and only then does the whole sum "collapse".

The `pre_sum + n` bit in the second example is what fixes this. All intermediate calculations are finished and then "stored" by passing them as arguments to recursive calls. This gives the functional equivalent of a for loop:

    sum = 0
    for i in n..0:
      sum += i
This is why it can be considered memoization: each recursive call builds on the "stored" finished result of the previous call.
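Concretely, the loop form in Python might look like this (a sketch; `acc` is my name for the accumulator that plays the role of `pre_sum`):

    def sum_loop(n):
        acc = 0
        for i in range(n, 0, -1):  # n down to 1, matching the n..0 sketch
            acc += i  # each iteration builds on the stored result
        return acc

    print(sum_loop(1000))  # 500500

Each iteration of the loop corresponds exactly to one tail call: the accumulator is the "stored" finished result being handed forward.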


> This is why it can be considered memoization: each recursive call builds on the "stored" finished result of the previous call.

Ok, but isn't this then also the case for non-tail recursive calls? An ordinary recursive call builds just as well on the finished result of the previous call, and no result is really "stored" in either case.


No, if you look at vanderZwan's explanation for the non-tail recursive call, you'll see that it doesn't build on a finished result of the previous call.

> You have to recurse until you reach n == 0, and only then does the whole sum "collapse".


NB: Although the ES6 spec requires TCO, it turned out to present significant implementation challenges, so it's only implemented in WebKit (Safari), and is, IIRC, marked WONTFIX in V8 (Chrome, Node), SpiderMonkey (Firefox) and Chakra (Edge). There's a good chance it'll get removed from a later edition of the standard.



