rishabhparikh is correct, but since the point is quite subtle it might help to elaborate a bit.
The crux of the misunderstanding is here: [tail recursion] eliminates the need to remember previous results of function calls.
Emphasis: results. The thing is: tail call recursion essentially means "finish all intermediate computations and pass the results of these to the next function call".
In the non-tail call version, the sum function has no results until you hit the deepest level of recursion; it is the equivalent of writing the sum out in full, which causes all that extra overhead:
sum = n + (n-1) + (n-2) + ... + 0
You have to recurse until you reach n == 0, and only then does the whole sum "collapse".
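In Python, the non-tail version presumably looks something like this (the function name is mine; the key point is that the `+` happens *after* the recursive call returns, so every stack frame has to stay alive to remember its own `n`):

```python
def sum_naive(n):
    # Base case: nothing left to add.
    if n == 0:
        return 0
    # The addition is deferred until sum_naive(n - 1) returns,
    # so this frame must be kept around to hold on to n.
    return n + sum_naive(n - 1)

print(sum_naive(5))  # 15
```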
The `pre_sum + n` bit in the second example is what fixes this. All intermediate calculations are finished and then "stored" by passing them as arguments to recursive calls. This gives the functional equivalent of a for loop:
sum = 0
for i in n down to 0:
    sum += i
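The second example being discussed presumably has this shape (a sketch; note that Python itself does not eliminate tail calls, so this only illustrates the structure, not an actual optimization):

```python
def sum_tail(n, pre_sum=0):
    # pre_sum carries the already-finished result of all previous calls.
    if n == 0:
        return pre_sum
    # pre_sum + n is computed BEFORE recursing; the recursive call is
    # in tail position, so a TCO-capable compiler can discard this frame.
    return sum_tail(n - 1, pre_sum + n)

print(sum_tail(5))  # 15
```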
This is why it can be considered memoization: each recursive call builds on the "stored" finished result of the previous call.
> This is why it can be considered memoization: each recursive call builds on the "stored" finished result of the previous call.
Ok, but isn't this then also the case for non-tail recursive calls? An ordinary recursive call builds just as well on the finished result of the previous call, and no result is really "stored" in either case.
No, if you look at vanderZwan's explanation for the non-tail recursive call, you'll see that it doesn't build on a finished result of the previous call.
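To make that concrete, here is a variant of the tail-recursive sum (my own instrumentation, not from the original examples) that records the arguments of each call. Every call receives a *finished* partial result in `pre_sum`; in the non-tail version there is nothing comparable to record, because no partial result exists until the recursion bottoms out:

```python
def sum_tail_traced(n, pre_sum=0, trace=None):
    # Record (n, pre_sum) at every call: pre_sum is already
    # a finished result, not a pending computation.
    if trace is None:
        trace = []
    trace.append((n, pre_sum))
    if n == 0:
        return pre_sum, trace
    return sum_tail_traced(n - 1, pre_sum + n, trace)

result, trace = sum_tail_traced(3)
print(result)  # 6
print(trace)   # [(3, 0), (2, 3), (1, 5), (0, 6)]
```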