Hacker News
Help this man decipher his cool creative code. (dropbox.com)
109 points by crisnoble on July 17, 2012 | hide | past | favorite | 36 comments


While it's cool, I don't think it's particularly mysterious. When playing around with graphical algorithms you step on these kinds of patterns all the time.


+1 to that. I remember computer graphics classes in college where we'd basically start off with a fixed algorithm and then play around with random variables and steps and end up with stuff that looked amazing (even though no one really knew why!) :P


I seem to remember JWZ saying that some screensavers were mistakes.

And the Fractint stone soup people included many "error" algorithms.


I feel like sharing the strange flood-fill algorithm I once stumbled upon: https://vimeo.com/699396


Gorgeous.


Yep, one does. That's what happened. :)

But this struck me as particularly weird for being so intricate while being based on such a simple variation of the basic pixel-by-pixel flood-fill formula.


A simple breakdown of the algorithm:

1. Choose an order for the four directions UP, DOWN, LEFT, RIGHT.

2. Choose a starting point, add it to the list, and draw it white.

3. Add all neighboring points to the list in the order chosen in step 1. Points that have already been added (even if since removed) or that would fall outside are omitted.

4. Let n be the number of points drawn white and m the number of points currently in the list. Remove the point at list index n % m.

5. Color said point white and go to step 3.
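The steps above can be sketched in JavaScript (this is my reading of the breakdown, not the author's actual source; the grid size and start point are arbitrary):

```javascript
// Sketch of the breakdown above: flood fill, but the point removed on
// each step is the one at list index n % m rather than the oldest one.
function weirdFill(width, height, startX, startY) {
  const dirs = [[0, -1], [0, 1], [-1, 0], [1, 0]]; // UP, DOWN, LEFT, RIGHT
  const seen = new Set();   // every point ever added, even if since removed
  const open = [];          // the working list
  const painted = [];       // points drawn white, in order
  const key = (x, y) => x + "," + y;

  const push = (x, y) => {
    if (x < 0 || y < 0 || x >= width || y >= height) return; // outside
    if (seen.has(key(x, y))) return;                         // already added
    seen.add(key(x, y));
    open.push([x, y]);
  };

  push(startX, startY);
  let n = 0; // number of points drawn white so far

  while (open.length > 0) {
    // Remove the point at index n % m and draw it white...
    const [x, y] = open.splice(n % open.length, 1)[0];
    painted.push([x, y]);
    n++;
    // ...then add its not-yet-seen neighbours in the chosen order.
    for (const [dx, dy] of dirs) push(x + dx, y + dy);
  }
  return painted;
}
```

Rendering `painted` incrementally onto a canvas should reproduce the effect; changing the order of `dirs` changes the pattern's orientation.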

In order to understand what is going on, let's first discount the conditions for a point not to be added to the list. Let's say such points were added anyway but ignored when popped. We would then iterate over every other point in the list, because on every step we both increment our list index AND remove one element. If the directions at indices 0 and 2 were non-opposite (e.g. left and top), we would flood fill in the left-top direction. If they were opposite, we would first fill a line. Anyway, we need not go deeper into what would happen after that; it was just to get a feel for it, because we actually do have those two conditions for not inserting a point into the list. They can probably be thought of as giving rise to a pseudo-random disturbance in our index picking.

If this approximation is accepted, the algorithm looks like this:

1. Choose an order for the four directions UP, DOWN, LEFT, RIGHT.

2. Choose a starting point, add it to the list, and draw it white.

3. Add all neighboring points to the list in the order chosen in step 1.

4. Let n be the number of points drawn white and m the number of points currently in the list. Remove the point at list index Math.floor(n * (1 + Math.random())) % m.

5. Color said point white and go to step 3.

From this we can get a feel for what is happening. The reason we proceed outwards is that the points we color are more and more recently added (i.e. further out). The reason for the gaps is that we effectively increase our index by 2 plus a small random integer.

The reason we start filling again from the beginning is not explained by the random model, but it can be understood from the fact that in the original algorithm, n starts increasing faster than m once we reach the borders, because there is a high chance that newly added points are illegal. Since n increases faster than m, n % m wraps around, and we start drawing points with low indices again.

I am not completely sure about these points: I have not tested them, but they seem a likely explanation.


OP (well, discoverer of the algorithm) here:

The thing is, there is no per-step randomness. The small integer (pickOffset) only adds some variation to the algorithm, and is initialized only when you hit Space. I'm pretty sure that if pickOffset were changed on each step(), the magic would be lost.

What I'm wondering is why this simple change to the picking algorithm (open.pop(nStep % open.length) instead of open.pop(0), which gives a simple radial outward fill) causes this beautiful, organic sort of pattern to emerge.
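For readers who haven't opened the source, the two picking rules can be sketched like this (`popAt` is a hypothetical helper standing in for whatever index-taking pop the code uses, since the built-in Array.prototype.pop takes no index):

```javascript
// Hypothetical helper mirroring a Python-style list.pop(i): removes
// and returns the element at index i, in place.
function popAt(open, i) {
  return open.splice(i, 1)[0];
}

// Plain breadth-first pick: always the oldest point -> radial outward fill.
function pickRadial(open, nStep) {
  return popAt(open, 0);
}

// The one-line change under discussion: the index advances with the
// step counter, wrapping around the shrinking list.
function pickWeird(open, nStep) {
  return popAt(open, nStep % open.length);
}
```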


I haven't thought this through very carefully and it may be twaddle, but:

With pop(0) you have a queue, you always take out the oldest thing in it, and so you get breadth-first search.

Instead you're doing pop(0) then pop(1) then pop(2), etc., which means you're taking out "every other" entry from the queue. (Until the popping overtakes the adding, which happens around about the time when a typical newly explored pixel generates fewer than 2 new neighbours to explore, by which time most of the interesting pattern generation has already happened.)
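The "every other entry" effect is easy to see in isolation with a toy queue (this is just the index arithmetic, not the fill itself):

```javascript
// Pop at an index that advances with the step count: the first pass
// takes every other element, and the leftovers are only reached once
// the index wraps around the shrinking queue.
const queue = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
const visited = [];
let step = 0;
while (queue.length > 0) {
  visited.push(queue.splice(step % queue.length, 1)[0]);
  step++;
}
// visited is now [0, 2, 4, 6, 8, 1, 7, 5, 3, 9]:
// the evens first, then the skipped odds in a scrambled order.
```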

Now, when any pixel is processed its (not-yet-visited) neighbours get pushed consecutively. That means that when the exploration process reaches them it'll only explore one or two of them. In the usual "early" case there are three such neighbours; half the time one will get explored, half the time two will. The other neighbours get left behind until the exploration wraps (i.e., "step" passes the end of the queue), which doesn't happen for ages.

Hence the branching tree structure you see: right at the start we explore only two of the starting pixel's four neighbours, and the other two get left behind; so now we have two growing "tips", and when we process each one we either grow it one step further or split it two ways. And we don't return to the other neighbours for ages, so the tree grows basically until it fills the available space before the gaps start getting filled in.

(There are plenty of details not explained by the above. More thought required.)


There is no per-step randomness, but there is per-step chaos, which looks like per-step randomness.


Looks like some sort of fractal generator, and looking at the source, it uses Fisher-Yates.

http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle

Also this link is referenced in the code:

http://sedition.com/perl/javascript-fy.html


Look at where Fisher-Yates is used: it is only invoked when you press a certain key, otherwise not at all.


Looks very similar to "Diffusion Limited Aggregation". There are two main versions - grow from centre, and grow from edge.

You have a sticky pixel, and send it on a random walk. When it hits another pixel it sticks, and you generate another sticky pixel.
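For comparison, the sticky-pixel process described above can be sketched like this (the grid size, step rule, and particle count are arbitrary illustration choices, not anything from the linked code; this is the grow-from-centre version):

```javascript
// Toy diffusion-limited aggregation: release a walker at a random cell,
// random-walk it until it is next to a stuck pixel, freeze it, repeat.
function dla(size, particles, seedX, seedY) {
  const stuck = new Set([seedX + "," + seedY]);
  const rand = (n) => Math.floor(Math.random() * n);
  const nearStuck = (x, y) =>
    [[1, 0], [-1, 0], [0, 1], [0, -1]]
      .some(([dx, dy]) => stuck.has((x + dx) + "," + (y + dy)));

  for (let p = 0; p < particles; p++) {
    let x = rand(size), y = rand(size); // new walker at a random cell
    while (!nearStuck(x, y)) {
      // random step (diagonals included), clamped to the grid
      x = Math.min(size - 1, Math.max(0, x + rand(3) - 1));
      y = Math.min(size - 1, Math.max(0, y + rand(3) - 1));
    }
    stuck.add(x + "," + y); // it touched the cluster: freeze it here
  }
  return stuck; // set of "x,y" keys forming the aggregate
}
```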


Seems like an example of a DLA (Diffusion Limited Aggregation) http://en.wikipedia.org/wiki/Diffusion-limited_aggregation ?


If you like this, you may also enjoy Langton's Ant.

http://en.wikipedia.org/wiki/Langtons_ant


During my undergrad I wrote some simulations of ballistic deposition that produced patterns that looked very similar to this.

edit: Here's a video someone else created of a similar looking situation http://www.youtube.com/watch?v=j-1uotbxrcc



Hi all, basically it has a lot to do with the open array expanding at a steady rate: as the open array expands, the indices that get pulled by the selectNode function are kept at the same spacing, letting the pattern grow at the same rate on all sides.


Looks like a variation of a rapidly exploring random tree: http://en.wikipedia.org/wiki/Rapidly-exploring_random_tree


From the author:

>This is a strange flood-fill algorithm I stumbled upon a while back and I have no idea why it does its thing like it does.

>...take a look, and if you know what's going on, let me know.


Some of the patterns that get generated are really cool. Would be glad to know how it works, so if someone does drop you a line on Twitter, please do update all of us.


Seems like an example of a Rapidly-Exploring Random Tree (RRT) algorithm, mostly used in motion planning. Granted this display seems like the RRT* variant.


It's probably not this, but a neat little page with a request to analyze the code reminds me of some other hiring puzzles.


Possibly the least smart comment on this thread, but it resembles the purple destroyer in the Falling Sand Games.


Similar output is produced by Conway's Game of Life if its thresholds are tweaked away from the defaults.


Would be helpful if the interface had UI controls. Most phones don't have keyboards attached.


Could you explain why this is strange? To me it just looks like a fractal algorithm.


Looks like he read Wolfram's book http://www.wolframscience.com/nksonline/toc.html


Thanks for the link! I just added that one to my must read later list. (not to be confused with my always growing read later list)


I suggest you buy the book; it is a masterpiece IMHO.


For a contrary opinion about the book see http://www.cscs.umich.edu/~crshalizi/reviews/wolfram/

(yes I've read the book and mostly agree with that review)


It is a great primer, particularly if you can look past the Wolfram "I invented everything, how awesome am I" language. Read up on some criticisms of it too, there are some things that are over-the-top, and a lot of "ink" (and bits..) has been spent on the subject.


Yes, I hate that "I invented everything, how awesome am I" attitude too, actually. I'm not here to defend it.

But still, he deserves some recognition. After all, he was the first to bring AI to the masses with Alpha, and he has helped millions of researchers' lives with Mathematica. And, 10 years after NKS, these are somewhat good results -> http://blog.stephenwolfram.com/2012/05/its-been-10-years-wha... ...

You can judge people from many different perspectives, but when it comes to "getting things done", you should step back and congratulate him. Again, IMHO.


But still, he deserves some recognition,

and

but when it comes to "getting things done", you should step back and congratulate him

My first words were "it's a great primer", followed with a suggestion for how to get the most from the work.

I'm not trying to downplay Wolfram's other work either, just pointing out that there are issues and grains of salt to take with this particular work.


Nope, I didn't. I just accidentally stumbled upon this algorithm some years ago and ported it recently to HTML5.


Yeah, this is a fantastic book if you look past all the hyperbole about reinventing science; it's chock full of fun experiments like this.



