
> Recursion means defining something in terms of itself, so no, using a stack isn't recursion.

How can you differentiate between them? How do you account for these two things being isomorphic? Any algorithm that can be defined by calling itself can also be expressed by way of an explicit stack, just as any algorithm defined iteratively can be implemented via recursion (usually tail recursion). Using an explicit stack and using the call stack are formally equivalent.
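As a sketch of that equivalence (hypothetical names, not from the thread): the same list sum written as direct recursion and as tail recursion, where the accumulator plays the role of the loop variable:

```haskell
-- Direct recursion: the result is defined in terms of a smaller input.
sumRec :: [Int] -> Int
sumRec []       = 0
sumRec (x : xs) = x + sumRec xs

-- The same sum as tail recursion, i.e. a loop: the accumulator is
-- the loop variable and each tail call is the next iteration.
sumIter :: [Int] -> Int
sumIter = go 0
  where
    go acc []       = acc              -- loop exit
    go acc (x : xs) = go (acc + x) xs  -- loop body
```

Both compute the same function; a compiler can turn the second into a plain jump with no stack growth at all.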

> That's not recursion, that's organization, modularity and all sorts of other descriptions. Where did you get these ideas?

From the same source as you. Namely the thing you quoted below:

>> Recursion occurs when the definition of a concept or process depends on a simpler or previous version of itself

I'd say that a "simpler or previous version of itself" would also be smaller since you're trying to get down to the base case.

Like if you're doing stuff with a binary tree, you usually have to consider the left and right subtrees and whether they're empty, and if they're not, maybe do something like push a new entry onto a stack so that the left and right subtrees also get processed, with the empty subtree being the base case.
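A minimal sketch of that with plain recursion (hypothetical Tree type and names, not from the thread), where the empty subtree is the base case:

```haskell
-- Hypothetical binary tree: empty, or a node with two subtrees.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Summing a tree: Leaf is the base case, and each node depends on
-- the (smaller) left and right subtrees.
sumTree :: Num a => Tree a -> a
sumTree Leaf         = 0
sumTree (Node l x r) = sumTree l + x + sumTree r
```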

The structure is inductively defined, which is why recursion is also the natural way to approach it. This is of course in contrast to codata and corecursion, which deal with infinite structures.



> How can you differentiate between them?

One is a concept that means something is defined in terms of itself, which is why I linked you the definition. The other is a data structure where the first item in is the last item out. One is an abstract concept that isn't limited to computers, the other is an ordering.

Why do you think these two completely different things have anything to do with each other? You just keep saying they are the same for some reason. Repeating a claim is not evidence.

> How do you account for these two things being isomorphic?

They aren't.

> Any algorithm that can be defined by calling itself can also be expressed by way of an explicit stack, just as any algorithm defined iteratively can be implemented via recursion (usually tail recursion).

Being able to do something in a different way with a different tool doesn't make all ways and tools the same. I can hit a nail with a block of wood, but that doesn't make the wood a hammer. You can use it as a hammer, but if you ask someone what it is, they won't say it's a hammer.

> Using an explicit stack and using the call stack are formally equivalent.

My point above is that pragmatically they have a big difference: it is easier to debug a static array stack because you can see the whole thing, instead of having to walk through a call stack to figure out where the iteration went. It's also probably faster and simpler, but the point is the clarity and the ability to debug.

> The structure is inductively defined

The structure is defined by the data. There is nothing "inductive" about it.

> which is why recursion is also the natural way to approach it.

Again, what you are really seeing here is that the iteration of a tree matches the first in last out structure of a stack. Recursion just gives you a stack using the call stack, nothing more.

> This is of course in contrast to codata and corecursion, which deal with infinite structures.

This has nothing to do with what we are talking about.


> One is a concept that means that something is defined in terms of itself, which why I linked you the definition. The other is a data structure where the first item in is the last item out. One is an abstract concept that isn't limited to computer, the other is an ordering.

Well first of all, both are abstract concepts. In particular, in computer science stacks are considered to be an abstract data type with a push and a pop operation, which both act on the "top" element of the stack.[0] And I'm sure that you already knew this.

The fun part here is, of course, that stacks are also a type that can be defined via structural induction. I.e., you either have an empty stack, or a stack whose top is an item with another stack below it containing the rest of the elements. So something like this:

  data Stack a = Empty | NonEmpty a (Stack a)
Of course, another way to refer to this kind of inductively defined data type is to call it recursive. Now we're getting somewhere!
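For concreteness, a sketch of the two ADT operations on that type (operation names are mine; the declaration is repeated so the snippet stands alone):

```haskell
data Stack a = Empty | NonEmpty a (Stack a)

-- push: put an item on top of the stack.
push :: a -> Stack a -> Stack a
push = NonEmpty

-- pop: return the top item and the rest of the stack,
-- or Nothing when the stack is empty.
pop :: Stack a -> Maybe (a, Stack a)
pop Empty          = Nothing
pop (NonEmpty x s) = Just (x, s)
```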

There's of course nothing magical about call stacks versus explicitly instantiated stacks. They're both particular manifestations of the abstract data type. You can easily argue that there's a difference, since call stacks tend to get special support in hardware, but as per the original article, that was not always the case.

And this is not to mention that you can store "call frames" even with an explicit stack and then pop things off of the stack, in a loop, until it's empty. This is, for example, how usually-recursive algorithms such as depth-first search are implemented in a more iterative manner. And this couldn't be recursion because... it's not written as a function calling itself? Do I understand this right? Because to me that feels like quite an arbitrary way to define what it means to "define in terms of itself".
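A sketch of that shape (hypothetical Tree type and names): a preorder traversal where the "call frames" are just trees pushed onto an explicit stack, here a plain list, popped in a loop until it is empty:

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Explicit-stack preorder: pop a tree, visit its root, push its
-- subtrees (left on top so it is processed first), repeat until
-- the stack is empty. Nothing here calls itself on a subtree;
-- go just loops over the worklist.
preorder :: Tree a -> [a]
preorder t = go [t]
  where
    go []                  = []
    go (Leaf : rest)       = go rest
    go (Node l x r : rest) = x : go (l : r : rest)
```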

> They aren't.

Well, I'm not going to ask for a full formal proof, but I would appreciate for some expansion of this point.

> Being able to do something in a different way with a different tool doesn't make all ways and tools the same. I can hit a nail with a block of wood, that doesn't make the wood a hammer. You can use it as a hammer, but if you ask someone what it is they won't say it's a hammer.

Well, at least to me, it seems that the argument being put forward here is more that you can't hammer a nail with a block of wood because a block of wood is not a hammer, and that you need an actual hammer to be able to hammer stuff.

> My point above is that pragmatically they have a big difference, which is that it is easier to debug a static array stack because you can see the whole thing instead of having to walk through a call stack to figure out where the iteration went. It's also probably faster and simpler, but the point is the clarity and the ability to debug.

Oh for sure, it's easier to inspect a stack if it's backed by an array. Of course one could also argue that this should be solvable by improving debugging tools, and making them better at showing call frames, but that's neither here nor there.

> The structure is defined by the data. There is nothing "inductive" about it.

Incorrect.[1] Trees are explicitly defined as inductive/recursive data types. And this isn't just the case for binary trees: trees with arbitrary numbers of children are defined like this too, usually by invoking the concept of a "forest", which is a collection of trees.

> Again, what you are really seeing here is that the iteration of a tree matches the first in last out structure of a stack.

Well, no: you could also go through a tree in a breadth-first traversal, in which case you wouldn't want a stack but a queue, which is explicitly FIFO. But yes, due to a tree being a recursive data structure, you'd usually use recursion to iterate through it, and for that you want a stack of some sort.
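To illustrate the contrast (hypothetical Tree type and names): swapping the LIFO worklist for a FIFO one turns depth-first into breadth-first:

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Breadth-first traversal: the same worklist loop as a depth-first
-- one, but new work is appended at the back (FIFO queue) instead of
-- pushed on the front. (A real queue type would avoid the O(n)
-- append; a plain list keeps the sketch short.)
bfs :: Tree a -> [a]
bfs t = go [t]
  where
    go []                  = []
    go (Leaf : rest)       = go rest
    go (Node l x r : rest) = x : go (rest ++ [l, r])
```

On a tree whose preorder is 1, 2, 4, 3, this visits level by level instead: 1, 2, 3, 4.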

> Recursion just gives you a stack using the call stack, nothing more.

I don't disagree with this. If anything, it just serves my point: there's nothing special about using the call stack for this. Whether using the call stack or an explicit stack, you're still going through the tree in a way that takes advantage of its recursive nature of being defined as nodes whose children are other trees.

We could come up with an alternative name for this if calling this general idea "recursion" feels odd, since you don't necessarily need to implement it as a function calling itself, but that doesn't fundamentally change the actual thing being done.

> This has nothing to do with what we are talking about.

Heh, fair enough. I just thought it was interesting that both induction and corecursion build things up from a base case, while corecursion is based on coinduction, which goes the other way around, from larger objects to smaller ones, which is how recursion is usually thought of.

But yeah, not super relevant.

----

[0]: <https://en.wikipedia.org/wiki/Stack_(abstract_data_type)>

[1]: <https://www.cs.princeton.edu/courses/archive/fall21/cos326/l...>


> Well first of all, both are abstract concepts.

No they aren't. A stack is a specific type of data structure. Recursion isn't even specific to computers.

> in computer science stacks are considered to be an abstract data type

I think you meant data structure and they aren't both concepts.

> Of course, another way to refer to this kind of inductively defined data type is to call it recursive.

You are making a linked list. A node in a traditional linked list has data and a pointer to the next node, which is what you are making here. Just because you overcomplicate a stack by using a linked list doesn't mean a 'stack' and 'recursion' are the same thing.

Fundamentally this is functional programming silver-bullet syndrome. These things really have nothing to do with recursion; Haskell is just putting a square peg in a round hole by using recursion for iteration and linked lists.

It's all fun and games until you get past trivial examples. Then pretending complex iteration and data structures are best done with recursion isn't so fun anymore, and you want to control what is actually happening.

One example is this Rust quicksort being 40x faster than Haskell's:

https://github.com/LightAndLight/how-fast-does-it-quicksort



