The screen resolution issue seems to stem from my locking the book to a 16:9 aspect ratio; I'm still working on it. Thank you for sharing it as well!
Huh... I didn't know about this! But FYI, I only wrote the no-refund policy to prevent/discourage refund abuse, since the product is only available digitally. I have, however, refunded two people who requested a refund. But I completely understand your take!
I believe "the society of mind" contains a bunch of really good but unorganized ideas for building intelligent models, but was written in such a way that it remained virtually impossible to implement them into a working program. Minsky's last book called "The Emotion Machine" tries to reorganize these ideas into one giant architecture composed of at least five interconnected macrolevels of cognitive processes built from specialized agents. Having said that, "The Society of Mind" is one of the most difficult books I've read.
I believe that before claiming AGI is possible or impossible, one would need to define the operational features and properties of what an AGI system is or can do. The primary problem with modern-day ML research is that most of the people doing it, including the major labs, think that one or two primary algorithms are enough to simulate general intelligence. But thinking that an algorithm or two can solve the hundreds of operational requirements needed to fully emulate intelligent behavior is misguided. What are these requirements, you ask? Let's start with language. To substantially solve language understanding, you would need: physical world models; quantitative processing; long-term memory; working memory; theory of mind; a discrete situational simulator; plan understanding, detection, and generation; language grounding; functional and behavioral models of physical objects; temporal representations of events; affect and emotional processing; reflective understanding models; and many others.
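To make the "many specialized modules, not one or two algorithms" point concrete, here's a toy Python sketch. Every module name and the pipeline shape are hypothetical, purely to illustrate how a language-understanding system might have to route a single utterance through many distinct subsystems:

    # Toy sketch only: the module names below are hypothetical placeholders,
    # not a real architecture. The point is the shape: understanding routes
    # through many specialized subsystems, not one monolithic algorithm.
    from dataclasses import dataclass, field

    @dataclass
    class Subsystem:
        name: str

        def process(self, utterance: str, context: dict) -> dict:
            # A real module (world model, memory, theory of mind, ...) would
            # need its own representations and algorithms; this stub just
            # records that the module was consulted.
            context[self.name] = f"consulted for: {utterance!r}"
            return context

    @dataclass
    class LanguageUnderstander:
        subsystems: list = field(default_factory=lambda: [
            Subsystem("physical_world_model"),
            Subsystem("quantitative_processing"),
            Subsystem("long_term_memory"),
            Subsystem("working_memory"),
            Subsystem("theory_of_mind"),
            Subsystem("situational_simulator"),
            Subsystem("affect_processing"),
        ])

        def understand(self, utterance: str) -> dict:
            # Each subsystem reads and enriches a shared context.
            context: dict = {}
            for subsystem in self.subsystems:
                context = subsystem.process(utterance, context)
            return context

    if __name__ == "__main__":
        result = LanguageUnderstander().understand("Put the red block on the table.")
        for module, note in result.items():
            print(f"{module}: {note}")

Even this trivial dispatcher makes the claim visible: each stub stands in for a research program of its own, and no single learning rule obviously covers all of them.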
While I agree with Prof. Marcus's ideas on the limitations of deep learning, I still don't think neurosymbolic approaches are sufficient for a generalized AI agent. I think a unified cognitive architecture, grounded in neuroscience and psychological observations, might be necessary, similar to the work done during the 70s and early 80s.
Minsky's subsequent book, "The Emotion Machine," probably contains some of the most complete treatments of topics like intelligence, understanding, and consciousness.
As a former structural engineer, I completely agree with this sentiment. For every engineering project I was involved in, the automated components amounted to at most 2 to 5% of the total work.
I had it in August, just two days after getting vaccinated. Headaches, fever, and body aches continued for three days, after which I had complete loss of taste and smell for about a month. I don't know about you guys, but not being able to taste the food you eat was a big fucking deal for me. Lost a lot of weight that month. I wouldn't recommend getting any variant of this virus, especially if you live with an immunocompromised loved one. However, compared to something like bacterial pneumonia, which I've also had in the past, COVID was very, very mild.
With regard to fallacy #2: in fact, it's the other way around, i.e., "easy things are hard," also known as Moravec's paradox. The symbolic AI generation always assumed that really hard problems like language understanding could be solved entirely through simple symbolic manipulation techniques. At the same time, the current generation of AI researchers claims that neural networks and gradient descent are all you need to solve such problems. But both groups seem to forget that such problems are barely tractable using a single approach, and that they encompass a large variety of subproblems that may need wildly different methodologies to solve.