
Unfortunately, these twins can't answer the question of neural data incompatibility, because, again, they grew up together.

My guess is that when we start inserting info directly into the brain, the most immediately successful approach will be to hijack existing inputs directly (vision, audio, etc.), and the most successful long-term approach will be to simply lob the data in, in some suitable encoding with some suitable high-quality feedback, and let the neurons do the hard work.



Using existing inputs makes sense, but what format do the neurons use? Would it be possible to convert a digital signal to whatever our brain uses, and how would you encode it?


Yes, as ay says, the "format" of neurons isn't that mysterious; it's actually one of the few things we do know. We also have a pretty decent comprehension of the encoding for audio and visual stimuli, at least as they come out of the sensory organs. It's not a perfect understanding, but it's not a complete mystery either. (Cochlear implants already interface with the nervous system fairly directly.)

(We can actually trace the visual input some way up the core processing path, too, which is interesting. I don't know what the state of the art is now, but a few years ago when I was learning about this in school, we already knew there are a lot of neurons dedicated to edge detection, orientation detection, and movement detection. It's no miracle that humans see better than computers; a lot of our brain is dedicated to doing that computation in parallel, long before anything rises to the conscious level.)

Picking an appropriate encoding for a new sense may not be trivial, but it is something we might be able to do today. What I wouldn't expect any time soon is a direct memory interface, or anything that interfaces more directly than a simulated sense or simulated limb. I can imagine a phantom limb interacting with a simulated computer desktop overlaid on the visual system without more than the expected leaps in technology; something that lets you simply remember Wikipedia, just as if you had memorized it, is much harder to even conceive of.


If it's impossible for me to remember Wikipedia, I wouldn't mind having a third "eye" with mobile internet browsing capability.


Probably by "cutting" the input path and trying to mimic it?

The mathematical model for a spiking neuron is reasonably well understood, as far as I know: http://izhikevich.org/human_brain_simulation/Blue_Brain.htm

I took the code he mentions and played with it. Its behaviour is fascinating. I hope to still be alive when we have enough computing power to simulate a single brain, and to be able to directly cross-connect it to mine.


Last spring I wrote an implementation of the Hindmarsh-Rose model in C++ with ZeroMQ and GSL, which you may find interesting: https://github.com/michaelmelanson/neuron-playground

The guy you linked to is using a simpler two-variable model with correspondingly lower fidelity. The goal of the Blue Brain Project, which this guy seems to think he has bested, is to model all of the features of a biological neuron and, more importantly, how they wire together.


Very interesting, thanks! I know about Blue Brain, but I never touched any of their code - it sounds like you have.

How fast is your code, performance-wise?

I think the point he is making is that his model is "good enough".

The code that I was running was able to emulate ~40000 neurons and their connections in realtime on a Lenovo T60 laptop. (The number might be off by a factor of 2-4; I remember playing with different sizes, but I don't remember the exact value.)

The interesting thing was the "waves" that emerged by themselves; at a certain network size they were self-sustaining even after I removed the initial "random noise" stimuli.

I did not figure out a decent way to attach inputs/outputs to this "soup". Since this capacity approaches the brain size of an ant, it might be fun to toy with a "virtual world" "inhabited" by connected computers running the simulation.

Could be a fun project, even if a little impractical. (Though I sadly lack the knowledge in the domain to make it happen).


Yes, I did an internship for the BBP last year. This is not based in any way on their work.

That code has problems simulating multiple neurons for some reason (possibly thread safety in GSL?). But it can simulate one neuron at 20x realtime while dumping traces to disk. With trace dumping disabled, it would be limited mainly by GSL's differential equation solver.

(Sorry, too tired to respond to the second half of your comment)


That's awesome. Where did you find this algorithm to implement? How do they know whether it's biologically accurate?


I pulled the equations from "Parameter estimation in Hindmarsh-Rose neurons" (Steur, 2006), equation 4.1.

The Hindmarsh-Rose, Hodgkin–Huxley, and other mathematical models are phenomenological models. They are meant to reproduce the observable characteristics, like the peak potential, bursting behaviour, refractory period, things like that. They can reproduce the traces of biological neurons with good accuracy (like the link in the GP), and the papers describing the model mention ion channels and whatnot, but they're very abstract. They're not made to reproduce the biology.


Perhaps it's easier than that, perhaps the brain adapts and learns to interpret the signal.



