I think curve fitting is an important component of future AGI. But it definitely needs causal reasoning baked in, which leads to better models with less data [1,2].
My intuition is that there's a lot of important work to be done using logical representations of models and transforming them back and forth using well-understood semantic operators. Deep functions will be part of said models, but the whole model does not necessarily need to be deep. We can already see hints of the field going in this direction in deep generative models [3].
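To make the curve-fitting-vs-causal-model distinction concrete, here is a minimal sketch in plain Python (the variables and payoff numbers are invented for illustration) of a structural causal model with a confounder, where conditioning on X and intervening on X (Pearl's do-operator) give different answers:

```python
import random

random.seed(0)

def sample(do_x=None):
    """Draw one sample from a toy structural causal model.

    Z is a confounder influencing both X and Y; X also weakly influences Y.
    Passing do_x simulates an intervention: X is forced to that value,
    severing the Z -> X edge (the do-operator).
    """
    z = random.random() < 0.5                       # confounder
    x = z if do_x is None else do_x                 # X copies Z unless intervened on
    y = (0.8 if z else 0.2) + (0.1 if x else 0.0)   # Y driven mostly by Z
    return z, x, y

def mean_y(samples, cond=lambda z, x, y: True):
    ys = [y for z, x, y in samples if cond(z, x, y)]
    return sum(ys) / len(ys)

obs = [sample() for _ in range(100_000)]
intervened = [sample(do_x=True) for _ in range(100_000)]

# Observational: E[Y | X=1] is inflated because X=1 implies Z=1 here.
print(round(mean_y(obs, lambda z, x, y: x), 2))   # ≈ 0.9
# Interventional: E[Y | do(X=1)] averages over Z; X's true effect is small.
print(round(mean_y(intervened), 2))               # ≈ 0.6
```

A pure curve-fitter trained on the observational data would report the 0.9 figure; only a model that represents the graph structure can answer the interventional query, which is the sense in which causal assumptions buy better models from less data.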
Causal reasoning is one thing that is lacking. But what about creativity? What about drive and desire? What about belief and the will to fail on the road to success? What about collective intelligence and the need to peer up in efforts? What about emotional intelligence?
I personally do not believe in AGI, since I also do not believe psychology, sociology, or neurobiology are anywhere near understanding the holistic nature of our own intelligence. We are getting better at emulating human traits for specific tasks with ML, but we lack the specific knowledge of what an algorithm should mimic to become our equal in intellect.
>> But what about creativity? What about drive and desire? What about belief and the will to fail on the road to success? What about collective intelligence and the need to peer up in efforts? What about emotional intelligence?
All of this resulted from evolutionary processes. Any approximation of AI that deals with other agents will develop these traits and more in order to compete, collaborate, and survive.
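As a toy illustration of the claim that selection pressure alone can fix a social trait in a population, here is a replicator-dynamics sketch (the payoff values are invented for illustration, not taken from any model in the literature). Cooperators earn payoff b when they meet another cooperator; defectors earn a flat payoff d:

```python
def step(p, b=2.0, d=0.8):
    """One replicator-dynamics update for the cooperator fraction p.

    A cooperator's expected payoff scales with how many cooperators it
    meets (b * p); a defector earns a flat d. The trait's share grows in
    proportion to its fitness relative to the population mean.
    """
    f_coop = b * p                          # expected payoff of a cooperator
    f_def = d                               # expected payoff of a defector
    mean_f = p * f_coop + (1 - p) * f_def   # population mean fitness
    return p * f_coop / mean_f              # replicator equation

p = 0.5
for _ in range(50):
    p = step(p)
print(round(p, 3))   # → 1.0: cooperation goes to fixation from p = 0.5
```

Note the frequency dependence: with these payoffs the threshold is p = d/b = 0.4, so a population starting below it drives the trait extinct instead. That bistability is a (very compressed) version of the point above: whether traits like cooperation emerge depends on the competitive environment the agents are embedded in, not just on the agents themselves.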
> All of this resulted from evolutionary processes. Any approximation of AI that deals with other agents will develop these traits and more in order to compete, collaborate, and survive.
How can we assume that a simulated evolutionary process over a simple mathematical model, or over some arbitrarily sized multi-dimensional matrices, yields similar evolutionary results?
Just think of the ongoing debate about quantum entanglement effects in neural signaling. On an ontological level, we are still unable to formulate even a definition of our consciousness, or of things like creativity, that lasts longer than a few academic decades.
> Causal reasoning is one thing that is lacking. But what about creativity? What about drive and desire? What about belief and the will to fail on the road to success? What about collective intelligence and the need to peer up in efforts? What about emotional intelligence?
Hi, I work at one of the intersections of machine learning with certain schools of thought in neuroscience. The following is based entirely on my own understanding, but is at least based on an understanding.
Your list here really only has three problems in it: causal reasoning, theory of mind, and "emotional intelligence". Emotional intelligence works in the service of "drive and desire", considered broadly. Creativity likewise works for the emotions. To be creative, you need aesthetic criteria.
Most of that, we're still really working on putting into mathematical and computational terms.
Admittedly, that list is an arbitrary poke at areas of debate in your professional fields.
As a take on your interpretation of creativity: I would argue that the act of forming new and valuable propositions is not related to emotion or aesthetics per se.
Aesthetic theory observes only a very narrow subset of creative processes. And even there, our transition from modernism into the uncertainty of the post-modernist world defies any sound definition of "aesthetic criteria". Yet we perceive aesthetic human creativity all the time.
In a similar vein is the application of generative machine learning that spurs debate about computational aesthetics today. Nothing proves the inability of modern ML to form real creativity better than the imitative nature of adversarial networks spitting out (quite beautiful) permutations of the simplified data structures underlying the body of Bach's compositions.
Now we could start on the assumed role of complex neurotransmitters in the creative process of the brain and the trivial way reinforcement learning rewards artificial agents, but that would exceed the scope of this comment.
> Now we could start on the assumed role of complex neurotransmitters in the creative process of the brain and the trivial way reinforcement learning rewards artificial agents, but that would exceed the scope of this comment.
You can't really separate emotion and aesthetics from the neurotransmitters helping to implement them! They're considerably more complex than anyone usually gives credit for.
Likewise, to form a valuable proposition, you need a sense of value, which is rooted in the same neurological functionality that creates emotion and aesthetics.
Wow. I want to thank you for engaging on that point! The "Hume's guillotine" dichotomization between "cognitive" processing and "affective" processing tends to be the thing our lab receives the most pushback on.
[1] http://web.stanford.edu/class/psych209/Readings/LakeEtAlBBS....
[2] https://probmods.org/
[3] http://pyro.ai/examples/