Any article making such a claim needs to start by explaining what evidence it has that human brains are not computing devices that can be replicated or simulated.
Because absent evidence that there is something fundamental preventing us from one day copying the structure of a human brain and ending up with a working device, whatever claims they make are hand-waving.
There is quite a bit of evidence that your brain is analog, not digital. There is lots of evidence that it is both chemical and electrical in nature. And while it is capable of logic, it does not rely strictly on any specific form of logic.
In other words, the only working models we have for "intelligence" are nothing like the silicon binary switches we are attempting to use to replicate it.
Analog can be represented in digital: we can mathematically prove this. It's not at all reasonable to say that AGI cannot be achieved because it doesn't use a certain type of material.
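To make that concrete, the classic result here is the Nyquist-Shannon sampling theorem (standard signal processing, not something from the article): a bandlimited analog signal is fully determined by discrete samples and can be reconstructed from them exactly:

```latex
% Whittaker-Shannon interpolation: a signal x(t) containing no
% frequencies above B is exactly recoverable from samples taken
% at the Nyquist rate 2B.
x(t) = \sum_{n=-\infty}^{\infty} x\!\left(\tfrac{n}{2B}\right)
       \operatorname{sinc}\left(2Bt - n\right)
```

Whether the brain's signals are bandlimited in the relevant sense is, of course, a further assumption.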
So if the brain is an analog computer, then it seems reasonable to believe that we might be able to someday construct an equivalent analog computer (or a digital equivalent thereof).
You make the assumption that any computer we'd try to use to replicate it will inherently be digital and electronic. If we prove unable to replicate a brain this way, nothing is stopping us from using biochemical systems instead.
Analog computers exist, and we have used bio-mechanical systems for computation... Heck, the first "computers" were humans.
Getting hung up on the current preferred paradigm of computation as the only possible one is one of the biggest flaws of the article.
>Any article making such a claim needs to start by explaining what evidence it has that human brains are not computing devices that can be replicated or simulated.
If X and Y appear dissimilar, the burden of proof is on whoever would argue they are similar.
If one contends that a brain and a computer, and the functions of each, appear similar, then one is being disingenuous.
Surely the burden of proof is on someone claiming something is impossible? I skimmed the article but I didn't see any support for his argument beyond pointing to the limitations of existing technology, and asserting that these limits were insurmountable.
If I said I could fly by flapping my arms quickly, I would assume the burden of proof would be on me to prove it, as opposed to others having to prove that it's impossible.
The more analogous claim would be "it's impossible to flap two arm-like appendages quickly and achieve flight," and the burden of proof would indeed be on you for making that claim. (Of course, in that case it would be easy to disprove by pointing to birds, or, if birds did not exist, with computer simulations and toys.)
You can give an argument, though. It is conceivable that one could prove things like: a Turing machine (an abstract mathematical model, about which we absolutely can prove negatives; see for example Rice's theorem) can never achieve "human intelligence" (given a concrete definition of human intelligence). From there you can make the statement: any physically realized device that is faithfully modelled, in terms of computational power, by a Turing machine cannot have "human intelligence".
Sure, you can't prove that silicon devices behave like Turing machines, or even that they really exist, but for the sake of this discussion, what does that matter?
We can and absolutely have proven things to be impossible.
What would you consider to be an example of that? And how does that square with the Problem of Induction, which is the hole in our entire system of empirical knowledge generation?
You can’t make a Turing machine which can solve the halting problem or an algorithm which can determine whether any given mathematical statement is true.
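The halting-problem half of that is short enough to sketch in code. `halts` below is a hypothetical oracle, not a real function; the sketch shows why no such function can exist:

```python
# Suppose, hypothetically, halts(f, x) returned True exactly when f(x)
# eventually halts. Then we could write:
def paradox(f):
    if halts(f, f):        # if the oracle says f(f) halts...
        while True:        # ...loop forever instead,
            pass
    return                 # ...otherwise halt immediately.

# Now consider paradox(paradox):
#   - if halts(paradox, paradox) is True, paradox(paradox) loops forever;
#   - if it is False, paradox(paradox) halts.
# Either way the oracle gave the wrong answer, so no such halts() exists.
```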
You can prove that 2 + 2 ≠ 5. You could even say that, given the rules governing math, it is ‘impossible’ that 2 + 2 = 5. The domain, however, is synthetic and composed of a system of axioms and rules.
If I change the underlying axioms and rules, I could certainly prove that 2 + 2 = 5, just as I can prove that the sum of a triangle's interior angles exceeds 180 degrees, or that a number multiplied by itself can equal -1 (redefining what a straight line means for the former, and inventing an imaginary number system for the latter).
Proving what can and cannot follow from a given set of rules, however, is not what philosophers mean when they speak of impossibility in the real world.
Yes, to apply that to the real world, we have to make some assumptions: the universe isn't a giant trick, the sun will rise tomorrow, we don't live in the Matrix, etc. However, given the context of this discussion, those are fairly reasonable assumptions.
I'm not arguing that it can't. I was merely pointing out the proper 'burden of proof.' The article was criticized for failing to demonstrate that something can't be done... that's not fair. The burden would be on the proponent of the proposition that a machine can attain general AI. That's all.
Perhaps there could be general AI... I'm not saying it can't be done. I would point out, though, that IF it is to be done, it certainly won't be by copying a brain. Nobody even knows how the hell the brain works...
Maybe you are stuck on the notion of a computer as a silicon chip. Biological entities are just a special case of machine; ergo, it is already proven that a machine can attain general AI.
I contend that the brain is by definition a computer, so any claim that we cannot produce an artificial one implies that we'll forever be unable to replicate a process that is repeated over and over by nature through simple biochemical processes.
A duck is an entire organism with a known genetic code, suspected lineage in the tree of life and defined characteristics.
A brain is an organ within another known organism, at a different position in the tree of life, with different characteristics.
A computer is a device which takes in input via some means and uses that input, together with its prior internal state, to transform the elements that represent its internal state, producing output that depends on both state and input; it is arranged in such a fashion that an actor with sufficient knowledge of its functioning can manipulate the input in order to achieve a desired output.
A programmer is such an actor.
In such a context it appears that a brain is merely programmer and computer and AI is merely the achievement of a sufficiently complex and capable system as to represent the same thing in silicon or whatever medium you prefer.
Arguing that such is impossible seems to be merely a failure of imagination.
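To make the definition above concrete, here is a minimal sketch of a computer in exactly that sense (the parity task and transition table are arbitrary toy choices of mine, not anything from the thread):

```python
# A computer per the definition above: output is a function of prior
# internal state plus current input, and an actor who knows the
# transition table can choose inputs to get a desired output.
def step(state, bit):
    """Return (new_state, output) from current state and an input bit."""
    transitions = {
        ("even", 0): ("even", "even ones so far"),
        ("even", 1): ("odd", "odd ones so far"),
        ("odd", 0): ("odd", "odd ones so far"),
        ("odd", 1): ("even", "even ones so far"),
    }
    return transitions[(state, bit)]

state = "even"
for bit in [1, 0, 1, 1]:           # the "programmer" manipulates the input
    state, output = step(state, bit)
print(output)                      # -> "odd ones so far"
```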
You can imagine a 'thinking' machine all you want. It doesn't change the reality of the situation: no digital device comes close to thinking.
Some find this frustrating. So, they reduce the idea of a brain to a digital computer.
It's a confusion of the Model with that which is being Modelled.
A photo of you is a representation -- or model -- of you. But it would be silly to become so enamoured with the photo that you begin to think YOU are a representation of the photograph.
> You can imagine a 'thinking' machine all you want. It doesn't change the reality of the situation: no digital device comes close to thinking.
Doesn't it, though? Someone from the 1800s would probably absolutely think it does. We have computers that can identify what is in a photo, computers that can do complex math problems (pretty sure that, historically, the ability to do logic was considered first and foremost what made humans intelligent and not simple "animals"; it's only recently, with the rise of computers, that this has taken a back seat), computers that can translate between languages, etc. That's not the same as being human; there is no sense of self or independence of action (nor are we anywhere close to having that), but we've made amazing, almost inconceivable strides in only 100 years. So I think it's unfair to say computers don't think at all.
>Doesn't it, though? Someone from the 1800s would probably absolutely think it does.
Convolutional networks, and neural nets in general, are cool, but hardly magical. It's just glorified curve-fitting. A person from the 1800s would not be all that impressed with the idea. (What's impressive is the bread-and-butter of it... namely, the technology that enables millions of simple calculations per millisecond.)
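"Curve-fitting" can be taken almost literally here; a minimal sketch of the idea (the sine target, noise level, and polynomial degree are arbitrary choices for illustration):

```python
# Least-squares fit of a polynomial to noisy samples. A neural network
# swaps the polynomial for a far more flexible function family and fits
# vastly more parameters, but the principle is the same.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(50)   # noisy target

coeffs = np.polyfit(x, y, deg=5)    # fit a degree-5 polynomial
y_hat = np.polyval(coeffs, x)       # evaluate the fitted curve

print("mean squared error:", np.mean((y - y_hat) ** 2))
```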
I'm not redefining anything. I'm applying reasonable definitions of a "computer" to the brain.
Indeed, the term was originally used about people - our electronic computers today have the name because they were taking over functions carried out by human brains.
The difference is that a brain is provably capable of computation, while a brain is not provably capable of flapping its wings and migrating between Canada and Mexico every year (without the help of the meatsuit it's driving, at least).
The word "computer" was a profession before it was a machine.
Programmers effectively emulate a compiler in their minds when they are programming. I don't see why it's so hard to accept that the brain has many "computer" capabilities, even if it's implemented with different materials.
Appearance is vague to the point of being an argument without merit. If someone can SHOW something to be dissimilar in kind, not degree, then it falls upon the recipient of that argument to refute it. It is not enough to point and not explicate. One could point at our pre-civilized ancestors and note that hunter-gatherers can't fly to the moon or build a computer, and claim they never would. One could go further back yet, to the prior species that would someday become humans, point out their present limitations, and claim that the one could never produce the other. Both assertions would be obviously wrong, but only with the advantage of hindsight.
I do not accept that there is a difference between computation and thought without some meaningful definition of thought.
> I do not accept that there is a difference between computation and thought without some meaningful definition of thought.
If I know everything about the nature of X, yet I know very little about the nature of Y, it would be illogical to say "I will assume Y is like X until proven otherwise."
I think computation as a model of cognition is a perfectly reasonable hypothesis that will bear fruit, and at least it IS a hypothesis instead of hand-waving.
It is worse to imagine that a physical process that exists wholly in the physical world, and that happens in the world within reasonable parameters, cannot possibly be engineered to happen in a controlled fashion.
Such a proof would require a major shift in physics and math, and yet we are expected to accept it purely on intuition, without even a compelling theory of how thought does work.
OP should have said "counter-evidence". There is plenty of evidence that the brain is basically an information-processing organ. It is integrating sensory data that you can interrupt and hack (e.g. visual illusions, phantom limbs), and damaging certain parts reliably affects us in the same ways, like going blind or losing motor control.
It reduces to the following:
Given X, if I could create a Y such that Y=X, then X would equal Y!
That adds nothing.
It isn't a given that it is even possible to 'copy' a brain. Why would one think it is? Does not quantum mechanics preclude the possibility of copying something perfectly at the atomic scale? Indeed, even if you could perfectly copy the 'material' aspect, you would still need to copy the configuration of electrical charges that existed at the moment of copying. This too is, from what I gather from QM (the no-cloning theorem), inherently impossible.
Does this preclude general AI? No. But it does demonstrate the absurdity of arguments that begin, "if I could copy the brain atom by atom, then..." as such a thing is impossible.
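For reference, the no-cloning theorem the parent is gesturing at is a one-step linearity argument (standard QM; whether it actually blocks a functionally equivalent copy of a brain is a separate question):

```latex
% Assume a unitary U clones any qubit state:
%   U(|psi> (x) |0>) = |psi> (x) |psi>.
% Applying linearity to |psi> = a|0> + b|1> gives
U\bigl((\alpha\lvert 0\rangle+\beta\lvert 1\rangle)\otimes\lvert 0\rangle\bigr)
  = \alpha\lvert 00\rangle + \beta\lvert 11\rangle ,
% whereas cloning that same state directly would require
(\alpha\lvert 0\rangle+\beta\lvert 1\rangle)^{\otimes 2}
  = \alpha^{2}\lvert 00\rangle + \alpha\beta\lvert 01\rangle
  + \alpha\beta\lvert 10\rangle + \beta^{2}\lvert 11\rangle .
% The two agree only when ab = 0, so no universal cloner U exists.
```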
I think you missed the main point: replicated economically. Given an unlimited budget, I think simulating the human brain might be possible, if one could use a powerful enough computer for every neuron.
The article title makes an unqualified claim about realisation, and so it was what I responded to, but the article itself also makes the same strong claim:
> A closer look reveals that although development of artificial intelligence for specific purposes (ANI) has been impressive, we have not come much closer to developing artificial general intelligence (AGI). The article further argues that this is in principle impossible, and it revives Hubert Dreyfus’ argument that computers are not in the world.
This is not an argument about cost - the article argues that it is "in principle impossible".
Any argument about cost is, I think, also irrelevant: we know from the existence of the brain, and from how a brain is produced, that it is possible to produce one relatively cheaply via biological processes.
It seems highly unlikely that the cost of producing an artificial brain will not eventually approach the cost of growing one, because the "worst case" scenario is that we find ethical ways of growing brain matter via biological processes and hooking it up to computers; and it would seem unlikely that we will never find cheaper ways of doing so than growing full mammalian bodies, or any way of optimising the process.
How the brain grows and develops in humans is not very well understood. We mostly use animal models to understand the nervous system, not humans. As an example of why this is problematic: mouse embryos pretty much turn inside out at day 3, and mice are the only known mammal to do this; all others, including humans, don't. Why and how are unknown, as are the effects on long-term development. Basically, mice are better subjects than zebrafish, but not very good stand-ins for humans.
The limiting factor is not budget or time, but 'stomach'. How elastic are your ethics?
It's not very well understood, but the process is known to exist and work. To assume it can't even in principle be replicated would be an extraordinary claim. Yet the article suggests artificial intelligence can not exist even in principle.
I mean, from a biochem perspective, it's flabbergasting that intelligence exists at all. The brain is so noisy.
To me, it's not far-fetched to say that you can't make it happen again, though I think it's technically possible.
We're missing something big with intelligence. We understand neurons and synapses a little bit. We understand circuits of a few dozen neurons a little, though their complexity is already crazy big.
But the gap between a few dozen neurons and a normal brain is just mind-boggling right now. There are just so many questions that need to be studied ethically.
It may not be possible to answer them all in a reasonable time window.
We know it happens millions of times a year just for human brains. As such, the complexity of the brain itself is irrelevant; what is relevant is the complexity and reliability of the machinery that constructs it.
We know the volume of the machinery that constructs it, which allows us to compute an upper bound on the informational complexity of a system capable of constructing human brains.
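As a back-of-the-envelope version of that bound (my own rough arithmetic, assuming the genome is the dominant part of the construction spec; epigenetics and the gestational environment add more, but not unboundedly more):

```python
# The human genome is roughly 3.1 billion base pairs, each encoding
# 2 bits (A/C/G/T). That caps the "blueprint" at under a gigabyte,
# uncompressed, before any redundancy is accounted for.
base_pairs = 3.1e9
bits_per_base = 2
total_megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"~{total_megabytes:.0f} MB")   # ~775 MB
```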
To me it is totally unreasonable to suggest we will never be able to at the bare minimum mimic that process and grow whole brains.
To me most of the opposition to the idea of artificial intelligence seems to come down to people assuming we're bound to only try to do this with software on a digital computer. But if that proves fruitless, there's no reason to assume we won't try analog systems, or if all else fails try biochemical systems, all the way down to genetic manipulation and tricking cells into growing into brains for us to hook up to computers.
Counter-perspective: the process by which the brain produces intelligence might not be complex at all. Maybe it is just a simple algorithm which, when applied en masse, produces it. See, for example, how the neocortex is made up of a vast number of cortical columns that all share basically the same architecture.
A few generations of biologists have tried to reason it out. I think it's safe to say that whatever is going on is either very complex, or we just do not have the right tools to study it right now.
Something really new, like radios were to communication, or steel hulls to shipping, is needed in bio to get things moving apace. The tools we have are really great, but they are a bit slow it seems.
Also, the neocortex is somewhat stereotyped, but those long-range projections that come in and out at every layer are what make the brain so hard to understand. Everything is flying everywhere all at once.
Oh yes! Developmental biology is just bonkers tough.
Take all the hard parts of normal bio, and now add in a ton of mitosis, long-range movement of cells, hard-to-detect chemical gradients, cell-to-cell junctions, and pure random chance, all going bananas fast compared to 'normal' bio processes.
It is a crazy tough field to get work done in. No wonder no one uses mammals to do anything.
Ah, but this point has yet to be proved to even a small degree. Neural models of very simple organisms do tend to show that adequate connectome models appear to be enough to produce similar behaviors, but we know for a fact that there's a ton of biochemistry going on in the brain that we will not be able to replicate with just the connectome.
If I were a betting person, I would wager that the connectome is enough to get something like intelligence, but that without the biochemistry the entire system is unstable in some way. The chemistry that we see in the brain is far more complicated than what would be required to minimally sustain the cells. There's a reason all of this chemistry is going on, and unfortunately I think we're going to find that intelligence just cannot exist as we know it without the chemistry. If that's true, then we're talking hundreds of years before we have computers powerful enough to model the chemistry at the requisite level.
The author's main point is bunk if his only defense of that point is that humans are smart.
The "learning" problem is solved. We've developed numerous algorithms to take a neural network and have it learn a task. We've solved specific problems, including image recognition, and text synthesis. Companies have figured out how to make sensory equipment including cameras, microphones, touch sensors...
Just because no one has managed to put together all the pieces doesn't mean it will never happen. People for thousands of years did not think it was possible to build a flying machine, until someone did.
The Halting Problem definitely puts bounds on what an algorithm can know about another one. At the very least, this should back up the empirical evidence for why analytical approaches haven't gotten very far yet.
I don't know why you think the halting problem is relevant here. The halting problem equally applies to human computation, and it does not preclude our existence.