When I am at rest, looking around myself, taking in sensations, observing others around me, and aware of myself and my place in the world, I consider myself consciously aware. I don't see what any of that experience has to do with language.
Furthermore, I don't see any reason why that state of conscious awareness would need to have been significantly different in my pre-human ancestors. Would a Homo erectus have had a significantly different sense of experience? An Australopithecus? Our common ancestor with other apes? If so, why?
It seems to me the parts of the brain active in that state correspond closely to those in other mammals. Yes, when I'm solving maths problems, writing a HN post, playing a board game or solving a crossword puzzle, I'm using higher brain functions that other mammals don't have. Sure. But what has that got to do with conscious awareness? I have that when I'm doing things very much along the same lines as the activities of other mammals: running a race, climbing a tree, walking through woodland, stalking prey, experiencing love, loss or panic. I don't see why I would expect their experience of those things to be much different from mine.
There's no 'animal mode' state of awareness that we have when doing 'animal' things that's a vestige of our pre-conscious past. Conscious awareness isn't something that only kicks in when actively engaging higher brain functions. So surely that is evidence it's a foundational part of our mental heritage, not a modern addition?
It seems to me that arguing that a higher mammal's experience must be dramatically different from mine when doing common activities, to the point that it's impossible for us to even imagine what that experience is like, is the extraordinary claim.
Have you ever tried meditating? The first time you do, you probably think your mind isn't wandering; it actually takes a bit of practice to even notice quite how much internal dialogue you have. It's non-stop for most of us.
I think using the term 'dialogue' for that is very loaded. It presupposes that it's linguistic in nature by using a term that is itself linguistic. If you think of it as simply a stream of representations of experience and cognition, why can't language be a separate layer, constructed on top of it, that we have and animals don't?
Have you ever had an idea or thought, and found it initially impossible to find the right words and phrases to express it? I certainly have. Babies with no language can identify objects and people, develop emotional connections, learn skills and solve problems long before they can speak. Are they not conscious?
Yes, this experience of knowing before verbalising confirms it for me. I find it happens most when I'm solving a problem and I know there is a line of thinking I could go down but have not yet gone down: an intuition about some nuance of the problem, a cross-contextual connection. If I follow the feeling, it materialises into words that reference the problem I am solving, but until I do it remains an abstract notion. Sometimes it feels like memories of similar problems and sometimes it feels entirely abstract. Once you look at it, though, it disappears as it is verbalised, so it can only be seen in the corner of your mind's eye.
Yes, but part of the point, at least in the Buddhist tradition, is that those thoughts are just another phenomenon you're experiencing. Do a lot of the right kind of practice, and you can get it to stop, while still maintaining a high level of conscious awareness. E.g. see the book The Mind Illuminated.
Is it established that internal dialogue is facilitated by our language faculties, though? That's certainly not what I would guess from my subjective experience.
This seems like something that should be obvious: most people's internal monologue is conducted in their primary language, or in a pastiche of the languages they know well. Just think about your own thoughts - if they crystallize into something and you serialize it out, it's going to be in language form.
We have an awareness of our thoughts and a second-order cognition we can simply notice and use to train ourselves to behave differently. It's not immediately clear that other animals' second-order thinking is as well developed, and that can affect every part of your experience.
There are arguments to be made that we have parts of our brain, formed at a very early age, that may give us a reaction to unique symbols. There are arguments that the spread of epigenetics down family lines naturally grants a focus on particular traits or behaviours.
Your conscious awareness is heavily trained by your day-to-day life. You see what is important to you, even on your 'holiday' from high-functioning consciousness. When you are running a race or climbing a tree, the ability to pace yourself, or to spot particular handholds and climbing techniques in a tree, is your consciousness. Orangutans use different climbing techniques and live different lives around and inside trees than we do. For them it's not a pastime that speaks of an adventurous or wildlife-like experience, but something that could be as natural as walking, with the same connotations that walking on concrete has for the rest of us.
This whole exercise of grounding consciousness is a move that will inevitably come after our heightening of awareness in the last four years. This article is a very poor way to attempt to make that happen.
> ... I consider myself consciously aware. I don't see what any of that experience has to do with language.
How do you consider yourself to be anything if your mind does not use tokenised abstractions, ie some level of language?
Cogito ergo sum and all.
I've got to say I'm still pretty happy with the concept of the superego. That consciousness and ability for supervisory control seems to rely on the same handling of complex abstraction as language does.
Why must the abstractions be tokenized? I can notice myself feeling things, and surely that counts as awareness of the self. If you notice that you are full of anger or joy, you notice that _you_ exist.
Well you can even fall back to saying that all sensations are abstractions. After all when you feel wet there is no part of your brain that “gets wet”. But the constant mirroring of the external world by your mental abstractions is not something over which you have full control and you couldn’t stop it if you wanted to. Language has nothing to do with it. After all you don’t personally assemble the images you see from your rods and cones, nor did you ever get to choose what cinnamon smells like.
If a dog sees two other dogs, one a potential mate and another a potential rival, doesn't it have to mentally manipulate representations of them? Isn't that a form of tokenisation?
I suspect language is a communications layer built on top of a pre-existing mental framework of sensory experience, identification, memory, recognition, emotional responses, etc.
The alternative is to suggest that humans developed an entirely novel way of representing, interacting with and responding to the world from scratch that just happens to emulate many of the experiences we see expressed by animals - excitement, fear, desire, recognition, problem solving, etc, but shares nothing in common with them.
I find the stroke anecdote fairly unconvincing. The man clearly still had the capacity for language - he was miming the action of a tennis racquet. Sign language is still language. Just because something went haywire between his brain and his mouth doesn't mean he lacked language.
I also don't think you can easily dismiss the consciousness of animals by saying they lack language. Most animals (dogs, whales, birds, etc.) seem to engage in limited communication via audible signals.
As with most philosophical questions, though, it all hangs on the definition of "consciousness". It may not even be a concept worth defining.
It is certainly communication, but I think there's a difference between communication and language. The latter requires more structure: perhaps grammar, a finite fixed set of predetermined words, etc.
You don't really need consciousness to communicate, since it's just transfer of information between different entities - for example, an organism could secrete chemicals that are detected by neighboring members of the species, as I think is the case for some plants.
Structured communication might be different, as I think there has to be something that composes the particular "phrase".
What I understood from reading Noam Chomsky is that language was reduced to the ability to speak (collectively, not case by case for each human), and that in this we are a unique species.
Your second point is what came to mind when I read it, and I found it surprising that no one has pointed it out: dogs and cats can definitely express themselves to some degree of complexity, and we can say that they have language.
I mean, at 6 am, when my dog starts barking, he definitely means he's hungry, and we both know that: knowledge has been conveyed, and thus, it's a form of language.
Has anyone any insight on this? Have I misunderstood Chomsky?
See my other reply in this post for why I think he is wrong about universal grammar. I don't have any problem conceding that humans are the only species with advanced language skills ;)
Fwiw, it appears you're saying "[human] sign language is complex; simple use of physical signalling is not language". But your statement could be read as disparaging sign language as "not language", something I'd strongly reject.
I said that sign language has linguistic structure so it's language.
Proper language requires this structure; just making signs is not automatically language. The mere ability to signal using single atomic words, in some medium (signs, verbal, written, machine), is not a language.
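One toy way to see the structure requirement: atomic signalling is just a lookup table, while even a minimal grammar recognizes novel combinations compositionally. A sketch, with entirely invented vocabulary and a grammar far simpler than any real language:

```python
# Toy contrast between atomic signalling and structured language.
# All vocabulary here is invented for illustration.

SIGNALS = {"bark": "hungry", "whine": "anxious"}  # atomic: one sign, one fixed meaning

# A tiny grammar: Sentence -> Noun Verb | Noun Verb Noun
NOUNS = {"dog", "cat", "food"}
VERBS = {"wants", "sees"}

def is_sentence(words: list) -> bool:
    """Accept 'Noun Verb' or 'Noun Verb Noun' - composition, not lookup."""
    if len(words) == 2:
        return words[0] in NOUNS and words[1] in VERBS
    if len(words) == 3:
        return words[0] in NOUNS and words[1] in VERBS and words[2] in NOUNS
    return False

# Novel combinations are recognized without ever having been listed,
# which a signal table cannot do:
is_sentence(["dog", "wants", "food"])  # True
is_sentence(["wants", "dog"])          # False
```

The point of the sketch is only that structure lets finite rules cover unbounded novel utterances, whereas a signal inventory covers exactly what it enumerates.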
The whole piece is fluff. It focuses on one or two minor, anecdotal observations regarding aspects of ideas about consciousness. The fact alone that in split-brain patients consciousness is often not preserved in the left hemisphere undermines all speculation involving cytological similarities between species.
I feel like this is a good place to recommend the Radiolab episode Words [1], though it's not so much about consciousness as about language, the development of language, and communication without a shared language. I found it a very intriguing listening experience, hope others will enjoy it as well. Part is about the man without words mentioned elsewhere in the comments as well (a person who learned language as an adult).
I have only published two (unimportant) articles in the Philosophy of Mind about computationalism, so I'm certainly not an expert on this, but from what I gather from colleagues in philosophy who work in this area, the problem with "consciousness" is that everybody defines the notion differently. Maybe there is some convergence on some soft criteria, but these are loose and in the end there is no unified, agreed upon definition or way to measure consciousness.
IMHO, that makes a meaningful discussion difficult. Probably most ways of looking at consciousness don't depend on language - consciousness does not even imply self-consciousness, so it's not easy to see why it should. However, it shouldn't be hard to make a philosophical case for a dependence, too. In the end, it's questionable whether any of those theories are empirically testable.
That's my impression, though admittedly I'm not really working in that area.
As I see it, the problem is that everyone knows what consciousness is - in a sense, it's the only thing they can know, it's direct experience. But it's impossible to inspect the consciousness of someone else, so scientists have trouble with the word, for lack of verifiability.
So "scientists" (I mean those who advocate a scientific approach) tend to prefer the idea that consciousness is an "emergent phenomenon", rather than something fundamental; and that acts of will really follow an action, rather than causing it.
The article touches on Buddhist philosophy. Some Buddhists do indeed take the view that consciousness is something fundamental, that really exists.
I find it really hard to talk about this subject. I have a very strong conviction that consciousness is real and fundamental, because, well, it's all I've got, at the end of the day. But if consciousness is fundamental, then all that other stuff is secondary. There are schools of Buddhist philosophy that claim that all that secondary stuff is literally created by consciousness.
But my very consciousness constantly provides me with very convincing evidence that all that secondary stuff is in fact fundamental. So who's kidding who?
One reason it's good to play with this kind of stuff is that it makes you less certain of anything!
My impression is that the Buddhist view, at least from a zen perspective, is not so much telling you that all that secondary stuff is 'literally' created by consciousness, but rather that attachment to this idea is part of the problem.
Being too attached to the idea that there is an objective reality outside of yourself is just as problematic as being attached to the idea that there is nothing outside of your subjective consciousness/experience.
The truth, possibly impossible to find out, probably lies somewhere in between.
The zen approach, as I understand it, is to focus on your own awareness of these two extremes, and to keep yourself from reifying either one. Instead, focus on your direct experience, on 'wu-wei', and consider these two extremes interesting perspectives that can help you on your path to 'naturalness'.
Personally I find this philosophy maddeningly unclear and subjective, but it's been the most helpful in my life when it comes to 'happiness', or perhaps 'contentment'.
The idea that you can cleanly separate the self from the outside reality is probably the flawed bit. The in-between truth is that the self is intricately interlinked with the "outside" reality. Sensory deprivation tanks do really weird things to people's consciousness. The mind depends upon its inputs for proper functioning.
> her work with Ildefonso, who had grown up without learning sign language or any other form of communication.
It sounds like she's limiting her definition of language to "ability to communicate", which I think is wrong.
One example is how babies can communicate with sign long before they can talk.
When I learnt some BSL I was essentially in a situation like Ildefonso in your link. But I certainly had language; it just wouldn't let me communicate in that situation, so it essentially became a private language only for me. Now, Ildefonso may not have developed a sophisticated language, but once the penny dropped and he began to communicate in sign with his tutor, it seems like he already had a working language to match the signs to. That better explains, IMO, the rush to identify signs for the physical items around him, rather than waiting to learn as things came up.
Blindsight by Peter Watts is a science fiction novel with pretty interesting ideas on consciousness. In the book, consciousness is presented as an error during evolution, with no added survival value for a species. I find that idea quite disturbing and extremely interesting!
Consciousness is not so much a scientific problem as a political one. As far as science is concerned, "consciousness" is a word, and a vaguely defined one. This discussion is therefore more relevant to its implications for legal and political matters such as animal rights and abortion.
Of course the question of consciousness is a heavily political issue, but that doesn't put it outside the scope of science.
First, generally speaking, science is largely shaped through political decisions. Sure, thought experiments and the kind of manual experiments that can be conducted by a solitary genius in their basement can lead to breakthroughs, provided said genius is not limited by any external social pressure that prevents them from freely dedicating their own resources to their whims. But no single genius will build the Manhattan Project or an LHC in their kitchen in their spare time.
Medicine is generally recognized as a science. Some areas of medicine do focus on the topic of consciousness. Medicine is also a sensitive political matter.
In this case, the ability to experience positive and negative mental states, can be used in place of consciousness.
Especially the ability to experience suffering might offer reasonable basis for approaching ethical questions.
In terms of animal rights this is as simple as assuming that at least most vertebrates experience mental states, incl. suffering, from at least shortly after birth (which we can safely do).
In terms of abortion, this still leaves us with the additional question of weighing the contrary interests of two individuals. This is trivial when weighing the interests of a pregnant person against something like a 10-week fetus (obviously incapable of experiencing mental states), but maybe not sufficient for arguments about 30+ week fetuses, depending on personal values.
A 10-week fetus, i.e. 20 weeks of gestation, can hear [1], suck, and swallow. I would not say it's obvious they can't experience "mental states"; especially not as you appear to be arguing that almost all animalkind can experience those states.
Well, that wasn't my main point and I did mean 10 weeks of gestation (just as an example).
Still, physiological responses are not the best indicator of mental states - plants and primitive animals (from sponges to bivalves) have them too, despite lacking the neurological development necessary for (or evolutionary need for) "consciousness".
Your second paragraph there seems to refute your position. If you can't tell physiologically, then how are you telling that "most vertebrates experience mental states"? You seem to be arguing for an assumed position starting from that position (petitio principii).
And I agree that physiological responses aren't the best way to show mental states. I cry sometimes for no reason whatsoever, my eyes just leak; you can't infer fear just because a creature backs away from fire, either.
Well, I'm wondering if consciousness is even a real state that can be defined precisely, or is it simply a made up construct so we humans can feel better about ourselves. For example, if there was an AI sufficiently complex to pass a Turing test by emulating self-awareness, how can we determine where emulation ends and real self-awareness begins?
What if the AI is not even created by emulating neurological processes, as we do now with neural nets, but some new, fairly transparent mechanism of self-coding is invented, like a massive collection of if-then-else conditions covering everything the AI can experience, updated in real time? If we knew exactly how that AI operates, could we then state definitively that it is or isn't conscious?
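The "fully transparent AI" thought experiment can be caricatured in a few lines. Everything here is invented for illustration (no real system works this way); the point is only that total inspectability of the mechanism doesn't obviously settle the question:

```python
# Hypothetical sketch of a fully transparent, rule-based "self-aware" agent.
# The rules and responses are made up for this thought experiment.

RULES = {
    "are you conscious?": "Yes, I experience my own states.",
    "what do you feel?": "I feel curious about this conversation.",
    "what are you?": "I am a collection of explicit rules.",
}

def respond(query: str) -> str:
    """Look up a canned response; fall back to a stock answer."""
    return RULES.get(query.strip().lower(), "I have no rule for that yet.")

print(respond("Are you conscious?"))
```

Every state transition here is readable by a human, yet the question "is it conscious?" is no easier to answer than for an opaque neural net - which suggests the difficulty lies in the definition, not in our access to the mechanism.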
> Well, I'm wondering if consciousness is even a real state that can be defined precisely, or is it simply a made up construct so we humans can feel better about ourselves.
Statements like this always baffle me. People who straight up deny that there is something like subjective experience. I mean, I guess it's possible that there are non-conscious humans with no notion of what experiencing existence is like ("p-zombies") and their internal logic systems then will logically argue against the existence of consciousness.
There's obviously no way to "objectively prove" to one another that a human being or sufficiently complex AI is conscious, because that's the very definition of subjectiveness, but if only for moral and ethical reasons I think it's safer to assume all beings have a level of consciousness.
To my read and mind, there was no denial of that subjective experience, merely an attempt to frame a useful question around it.
All of science is based upon shareable ideas, many testable. They may have begun as vague notions bouncing inchoate through our consciousness (or subconsciousness, etc., yada, so forth) but at some point they became expressible, then, eventually, measurable.
If we want to study consciousness, whatever it/they is/are, then first we need to agree to useful operational definitions, things we can tease apart observationally, reason about, form hypotheses, then eventually theories.
If one accepts that all human experience is fundamentally based on biology (since our meat seems to be all we have, at least until some extra-corporeal notion such as the soul is itself capable of being measured, analyzed, and reasoned about), then, fundamentally, our individual subjective experiences are based on what is likely common chemistry and biology.
So back to the original comment: Is consciousness a real state? We identify it as such, but perhaps it is several, perhaps many, cooperative or even competitive microstates, if you will, all of which together give us the experience of consciousness, but which are inaccessible to us...
...just as the perception of individual hues and saturations within our eyes are inaccessible, since they reach "us" after considerable backend processing (literally, since visual processing is largely occipital).
If we are going to make progress on understanding consciousness, we first need to set it aside and ask "what is going on?" at various levels, then, eventually, draw a complete picture from those dots....
The problem with p-zombies is that they lie when you ask if they have consciousness. It's technically not a lie, because they actually believe they are not p-zombies.
Just imagine that humanity is composed of half normal people and half p-zombies. What experiment can you do to separate them?
What if all humans are normal people? What if all humans are liar p-zombies?
I think the Turing Test, done properly, would actually be a pretty good yardstick. People often forget how complex the conversation could be when talking about Turing Tests.
Daniel Dennett gives an 'example' Turing Test in one of his books to emphasise the point:
Judge: Did you hear about the Irishman who found a magic lamp? When he rubbed it a genie appeared and granted him three wishes. “I’ll have a pint of Guinness!” the Irishman replied and immediately it appeared. The Irishman eagerly set to sipping and then gulping, but the level of Guinness in the glass was always magically restored. After a while the genie became impatient. “Well, what about your second wish?” he asked. Replied the Irishman between gulps, “Oh well, I guess I’ll have another one of these.”
CHINESE ROOM: Very funny. No, I hadn’t heard it– but you know I find ethnic jokes in bad taste. I laughed in spite of myself, but really, I think you should find other topics for us to discuss.
J: Fair enough but I told you the joke because I want you to explain it to me.
CR: Boring! You should never explain jokes.
J: Nevertheless, this is my test question. Can you explain to me how and why the joke “works”?
CR: If you insist. You see, it depends on the assumption that the magically refilling glass will go on refilling forever, so the Irishman has all the stout he can ever drink. So he hardly has a reason for wanting a duplicate but he is so stupid (that’s the part I object to) or so besotted by the alcohol that he doesn’t recognize this, and so, unthinkingly endorsing his delight with his first wish come true, he asks for seconds. These background assumptions aren’t true, of course, but just part of the ambient lore of joke-telling, in which we suspend our disbelief in magic and so forth. By the way we could imagine a somewhat labored continuation in which the Irishman turned out to be “right” in his second wish after all, perhaps he’s planning to throw a big party and one glass won’t refill fast enough to satisfy all his thirsty guests (and it’s no use saving it up in advance– we all know how stale stout loses its taste). We tend not to think of such complications which is part of the explanation of why jokes work. Is that enough?
Dennett: “The fact is that any program that could actually hold up its end in the conversation depicted would have to be an extraordinary supple, sophisticated, and multilayered system, brimming with “world knowledge” and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, and much, much more…. Maybe the billions of actions of all those highly structured parts produce genuine understanding in the system after all.”
I wonder if in a few decades the use of AIs in computer games might start to become an ethical issue. As in, when the simulation of NPCs starts becoming _really_ good, won't it all begin to be a bit unsettling?
Exactly: in video games NPCs do panic when bad things happen, however it's coded. And what they feel could also be coded according to the purpose of the simulation.
He says: "I have no language-based thoughts at all."
And he creates that idea by converting it into external language using various thought processes. Surely he's wrong about his own introspection (as so often we are as people); how could he spell a word if he can't fix any linguistic thoughts?
[fwiw, I fit into the middle of his range, I think in language sometimes, picture often, feeling, smell, imagery, and occasionally in a non-linguistic way that doesn't relate to any other senses (a sort of weird feeling of connectedness)]
Using language to communicate and thinking in language are different uses of language.
English is not my native language and I don't think in English, but I can use it to communicate. It's slower and involves a more mechanical translation process from one representation to another.
We're talking about a phenomenon that involves an abrupt change in the state-of-being among a subset of unconscious entities shortly after their neighbor's alarm clock goes off at 5am; in others this state emerges much more slowly, often requiring several caffeinated beverages.
I love how this immediately stumbles upon difficulties: this is a different 'chair' than I was thinking of and the first definition just invokes a recursion into the definition of 'conscious'. I'm not sure whether that was the purpose of the comment, but it brilliantly succeeded.
a physical object manufactured to be sat on by humans.
That said, I don't think "chair" is scientifically defined, because nobody does science of chairs. Consciousness, on the other hand, appears to be the subject of both scientific and philosophical investigation, so a common definition is desirable. What usually happens is that the discussion derails around which definition to use.
Anyway yes I am familiar with the difficulty in defining consciousness, but you seemed to be making the point that agreeing the definition should be done before any other discussion, whereas clearly some fruitful discussion is often possible even though the definition is fluid. Indeed the article in the OP is essentially trying to work out one aspect of the definition. I gather he has a book where he puts forward a definition of sorts.
> some fruitful discussion is often possible even though the definition is fluid
I've seen too many discussions using those "fluid" definitions to be running away from them. Fluidity is usually employed as a smokescreen tactic to derail any productive argumentation by constantly shifting the goalposts.
Let me put it this way: I can define a chair, give the definition to someone, and they can recognize one; even a computer can recognize a chair. But I cannot define consciousness, and I cannot give anyone a definition by which to recognize it. It is not even definable.
That's because "define" is undefined. For most of science (including neuroscience) consciousness would have to be defined in quantifiable, preferably testable terms. Most people (including philosophers) use qualitative or subjective definitions of it.
Exactly, it's this simple. I also like another definition proposed here by another commenter: "the ability to experience positive and negative mental states".
I don't understand the confusion about the definition of consciousness. It's actually the one thing that is most obvious, because it's right there all the time. This is like trying to define the experience of seeing "blue" or any other qualia. It's impossible because it's subjective, but we can all agree that we each have an experience of seeing blue.
I'm surprised that this is coming from leading neuroscientist Christof Koch, arguing through anecdotes and facts well known by every writer who has explicated a position against this claim. The position he takes is really not one that attempts to substantiate the claim that consciousness doesn't depend on language scientifically, philosophically, mathematically, etc... Instead, this reads more like a vague reflection on some of the thoughts Koch has that give him pause about the question. His answer, that consciousness doesn't depend on language, appears here as little more than a belief he holds. Perhaps the article should have been titled "Why I Feel Like Consciousness Doesn't Depend on Language". That could indeed be the case, but the great why that makes it so will have to be more than a feeling you get about your dachshund.
I would add, as an aside, that the article does go well with the photo of Koch showing his slide deck to the Dalai Lama, as though to pay some kind of metaphysical penance for being a scientist. I feel like the encounter with the Dalai Lama has become a bit of a trope among the darlings of American academia. I remember seeing a lecture with Paul Ekman in which he seemed to brag about the number of hours he had logged with the Dalai Lama, not to be outdone in the breadth of his perspectives.
Animals communicate using not only sound but color and pattern and such. That may not be full language, but it is a reasonable approximation. The same goes for animal consciousness. It may not be full consciousness with awareness of history and planning for the future, but it is close enough to function more or less the same way.
The only examples given of consciousness functioning without language were people with major damage to their neurology. If anything, that is evidence that the link between consciousness and language is strong enough to be broken only by severe incapacitating injury.
He has a book that seems to take some sort of dualist or perhaps squishy-life-forms-only approach to consciousness
The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed (The MIT Press)
Christof Koch
Koch describes how the theory explains many facts about the neurology of consciousness and how it has been used to build a clinically useful consciousness meter. The theory predicts that many, and perhaps all, animals experience the sights and sounds of life; consciousness is much more widespread than conventionally assumed. Contrary to received wisdom, however, Koch argues that programmable computers will not have consciousness. Even a perfect software model of the brain is not conscious. Its simulation is fake consciousness. Consciousness is not a special type of computation―it is not a clever hack. Consciousness is about being.
I might have to read that to see what his take is exactly.
Why you can't program computers to "be" requires explanation. Maybe someone who has read the book can comment.
I'm sympathetic towards the "consciousness is about being" idea. I don't see why consciousness requires conscious thinking, much less language. When you sip coffee in the morning looking through the window, you are just experiencing while staying alert. All decisions and cognition are unconscious, yet the level of consciousness is not diminished.
When we are just "being" we are still doing active cognitive processing and reacting, but the process is not conscious, yet the conscious awareness of being is still there.
(concepts like being, awareness, consciousness, experiencing would require better definition in this context)
I'm deeply troubled by the idea of an "unconscious". This is an idea created a century ago by Sigmund Freud; all modern psychotherapy schools seem to have inherited it, many psychologists (I mean experimental scientists) seem fine with the idea, and for the general public it's commonplace that we have "unconscious thoughts".
But how can you have an unconscious cognition? A cognition is an elementary moment of knowledge; how can such a thing be unconscious?
And how do people who treat consciousness as an emergent phenomenon deal with this "unconscious"? Is it an emergent phenomenon that is waiting to emerge? Seriously, I don't know, and I'd like to know.
To me, "unconscious" encompasses all the processes in your brain that you are not aware of, which is most of them. You can become aware of the result of those processes as they emerge as a phenomenon of consciousness (such as becoming aware that you have made a decision, and possibly even being able to trace your own justification for the decision) but it is not really possible to account for everything.
Most of what we experience consciously is already heavily altered by processes that we have no control over. For example, the brain does a whole bunch of signal processing on the input of your eyes before you consciously see anything.
Cognitive scientists sometimes use the term "nonconscious" instead of "unconscious".
The concept of the unconscious is not controversial. We simply don't observe much of our own actions or body; we are not aware enough to make decisions. Most of our cognitive processing is non-conscious. Thoughts just pop into our consciousness seemingly from nowhere, yet are somehow relevant to the task. We can walk and drive around without paying attention while listening to the radio or a podcast. Learning new tasks requires more conscious processing; once we have learned something, it can happen without us paying any attention to the details.
I bought the book. Chapter 13 deals with 'Why Computers Can't Experience'. The author has layers and layers of theory and a very detailed discussion of the mechanics of a modern processor. But the author seems to have some fundamental inability to distinguish between layers of abstraction: even though the fundamentals of a computer circuit are very simple, such a circuit can be used to simulate a very rich model. The author sees nowhere in the circuit where experience could happen. But the 'experience' would be within the model, not within the circuit.
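To make that abstraction point concrete, here's a minimal sketch (my own illustration, not anything from the book): every function below is built solely from NAND, a primitive that "knows" nothing about arithmetic, yet addition emerges at the level of how the pieces are wired together. Looking for addition inside any one gate is like looking for experience inside any one transistor.

```python
# Primitive layer: a single gate. Nothing here "knows" about numbers.
def nand(a, b):
    return 0 if (a and b) else 1

# Gate layer: standard gates composed only from NAND.
def xor(a, b):
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

# Model layer: a full adder. "Addition" exists only in the wiring,
# not in any individual gate.
def full_adder(a, b, carry_in):
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry = or_(and_(a, b), and_(s1, carry_in))
    return total, carry

def add(x, y, bits=8):
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

`add(5, 7)` returns 12, yet no gate anywhere contains a concept of "twelve"; the property lives in the model the circuit implements.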
It's always funny how people try to make the distinction between "people" and everything else.
Other animals don't have feelings. Oh wait, other animals cannot think. Oh wait, other animals don't use language. Oh wait, other animals don't use 'true' language.
Same with computers. Computers can never defeat humans in chess. Go is too complex for computers, and so humans are better.
Sometimes we even have to change the definition a bit to still keep the distinction between 'us' and the rest of the animals and objects.
Very strange.
And if you think humans are the only animals that can be deliberately cruel, you should watch some chimp documentaries.
There was a paper on HN some years ago that, in my mind, settled the old question of whether humans share a universal grammar for processing language. The gist was that a visual image is parsed the same way as language is, with noun phrases and verb phrases combined into complex structures. It is convenient for linguists to claim that consciousness requires language. I suspect consciousness is a form of seeing and visualising oneself. Maybe a way of planning ahead by trying different solutions in the mind without doing them in the physical world.
The thing that higher animals and humans can do is imagine alternative futures, predict consequences, and choose to act accordingly. This would probably meet some definition of consciousness.
Classification, abstraction, language and logic are probably not common features except in very limited ways. So animals likely live in a world of continuous special cases.
Is that surprising? As if a newborn baby could not have consciousness because it doesn't know a language yet.
Makes me think of mindfulness. That practice of putting consciousness first, by trying to disallow thoughts while experiencing the present, proves it all to me. We think too much.
Good point! Language can express consciousness, which gets taken to mean that consciousness implies the ability to use language. As in, "we don't know what's inside, so the ability to puke sounds is enough for us to say there's a mind in there". For now, we can't separate one from the other... but that doesn't mean the separation doesn't exist.
It's not obvious to me that a baby has consciousness. I don't remember anything from before I was 2 or 3. Isn't a newborn almost an automaton? Or rather, how can you prove that it is not?
There are people with brain damage who lose short- or long-term memory. Are they conscious?
This whole debate is a very grey area, and one could argue from either position and never prove anything (which is one of the reasons I like to read and think about what makes us 'us').
It makes sense. You don't need language to get by in the ambient medium of the wild under natural selection; you need just a bit of sentience as the bare minimum, granted by neurons (cognitive thresholds are another topic). Language is a special emergence, not universal.
Sigh, it's a bit like reading obsolete scientific papers: "Abrahamic exceptionalism" - no shit, Sherlock?
(At least, the later given examples are still interesting...)
Language is a poor way to comprehend the cosmos. It is very limiting. Words are not enough to explain what happens when rain drops fall on my skin or the wet smell of sand intoxicates me or wind ruffles my hair...
I have seen cows chewing cud and they seem more in touch with the cosmos than 99.99 percent humans.
"The day science begins to study non-physical phenomena, it will make more progress in one decade than in all the previous centuries of its existence."
and
"If you want to find the secrets of the universe, think in terms of energy, frequency and vibration."
Is consciousness a binary state? There are many examples that suggest it is not: toddlers, the dream state, computer programs, communal endeavors (such as a scientific inquiry), not to mention animals. If consciousness is something that is a question of degree, it makes sense to think of the entire universe being conscious to some degree or another. I’m not a very spiritual person, but I find this train of thought hard to avoid.
_shrug_ I have no skin in the game, so to speak, but I can almost imagine a scenario where it might be simpler to accept that consciousness is a physical property of matter than to accept that consciousness doesn’t exist.
IIRC the enunciation of the famous Razor: you must not introduce extra entities in a gratuitous and unnecessary way. And this is exactly what I find strange: we don't know enough about how natural minds work yet, computers are still far from approximating the mind of the simplest organism, and yet there are people who say it's an impossible task. Come on! Observe first, start small, see where problems arise. What's the point in making up unknown physics for a problem that we don't have yet?
My opinion was formed after reading more like that and realizing that it's a lot of circular reasoning: cursory observation of reality, assigning labels and, when the labels don't fit reality, blaming reality instead of one's own ability to use labels.
Unfortunately I'm not knowledgeable on biology at all :P but I will at least say I think you'd need biological processes for life, not just electricity.