
To remind people: Yann LeCun worked on artificial neural networks (ANNs) during the period when they were actively shunned by most of the scientific community. You could barely publish a paper on ANNs.

Just to demonstrate: one of the most common textbooks of the period, "Artificial Intelligence: A Modern Approach, 2nd ed." by Russell and Norvig, at 1080 pages, dedicates less than one (1!) page to ANNs. I personally think Norvig is an idiot with regards to Artificial Intelligence, and his book (used in 1500 schools in 135 countries and regions) singlehandedly slowed down the progress of AI by a few years, until a new generation of students outgrew this archaic book.


> Artificial Intelligence: A Modern Approach, 2nd ed

Published in 2002. At that point, ANN research had reached a pretty hard plateau with very few tangible results. Faulting Russell and Norvig for not going into depth about ANNs is kind of like faulting Richard Feynman for not going into depth about quantum computers in the Feynman Lectures.

Also, a lot of the subsequent work and breakthroughs on ANNs has been done at Google under Norvig's leadership as Director of Research.


As opposed to the AI techniques taught in the AIMA book (KR and logic reasoning), which had plateaued in the 70s...?

Norvig had to be pretty clueless to decide that ANNs were such a dead end that they didn't deserve even a chapter in his book, when all around him there were biological living proofs that neural networks are probably a pretty good bet for AI...

(Note: I held the same opinion in the mid 90s when I reviewed his 1st edition, and I'm definitely not at Feynman's level. It's just common sense.)


> all around him there are biological living proofs that neural networks are probably a pretty good bet for AI...

100 years ago you would have been arguing that all around you are living proofs that ornithopters are probably a pretty good bet for artificial flight. You would have been wrong about that too.


We seem to be re-enacting the Symbolic vs Connectionist AI debate of the 80s, poorly.

All I'm saying is, Norvig should have been more humble and included a chapter or two about ANNs, with all the research accumulated thus far, instead of betting 100% on the symbolic approach. Let the next generation of students learn both approaches and decide for themselves. It's sad that a whole generation of students was taught AI from this archaic book.


> All I'm saying is, Norvig should have been more humble and included a chapter or two about ANNs

No, that is not all you're saying. You opened with this:

"I personally think Norvig is an idiot with regards to Artificial Intelligence,"

Not only did you lob an ad hominem at one of the most respected members of the community simply for making an editorial decision 18 years ago that you happen not to agree with today, you did it from a newly created anonymous HN account, and then you tried to deny it. Your conduct here has been thoroughly dishonorable. You should be ashamed of yourself.


I think Norvig is a smart person, I enjoy his books, papers and jupyter notebooks, but I always thought he was pretty clueless regarding AI, as history indeed demonstrated. That's not an ad hominem.

What was shameful was the mainstream scientific community's extreme shunning of ANNs in the '90s and '00s, to the point that publishing an ANN paper was considered career suicide. I believe Yann LeCun has said similar things in the past[0], reminiscing about the time when it took him several years(!) to get an ANN paper accepted for publication.

[0] http://yann.lecun.com/ex/pamphlets/publishing-models.html


Norvig is not an idiot, and one of the humblest and nicest people I've ever met. A lot of people were wrong about ANNs.


It is not fair to call Norvig an idiot. When I was studying AI in grad school, around 2002-2003, just about everyone thought that artificial neural networks were a dead end, compared to approaches like support vector machines. Sometimes the scientific consensus is wrong, and it takes a few heroic figures plugging away to prove it. That doesn't mean that everyone in the mainstream is an "idiot".


Marvin Minsky and Seymour Papert's 1969 book "Perceptrons"

https://en.wikipedia.org/wiki/Perceptrons_(book)

applied rigorous math (at a time when computer science itself was new) to prove that a certain kind of single-layer neural network can't solve certain problems: it can't even learn XOR. It is like proving that comparison sorting requires on the order of N log N comparisons to sort N items.

This dampened interest in neural networks for a long time but the "geometrical thinking in hyperdimensional space" is what the field is all about today.
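The XOR limitation is easy to see for yourself. Below is a sketch (not from the thread) of the classic perceptron learning rule: a single threshold unit converges on the linearly separable AND function, but by Minsky and Papert's argument it can never fit XOR, no matter how long you train.

```python
# Single-layer perceptron with the classic perceptron learning rule.
# AND is linearly separable, so training converges; XOR is not, so it never does.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Train weights [w1, w2, bias]; return misclassifications in the final epoch."""
    w = [0.0, 0.0, 0.0]
    errors = 0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            if pred != target:
                errors += 1
                # Perceptron update: nudge the separating line toward the mistake
                w[0] += lr * (target - pred) * x1
                w[1] += lr * (target - pred) * x2
                w[2] += lr * (target - pred)
    return errors

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND))  # 0 errors: converged
print(train_perceptron(XOR))  # errors remain: no line separates XOR's classes
```

The geometric intuition is exactly the "hyperdimensional" one: a single layer can only draw one hyperplane, and XOR's positive examples sit on opposite corners of the square.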


Interest in neural networks was renewed with Werbos's (1975) backpropagation algorithm. There was continued progress in ANNs all this time.

I think the aversion to ANNs during the 90s was more philosophical and aesthetic: the math of ANNs is indeed "ugly" compared to symbolic logic, Bayesian inference, SVMs (in the 00s), and many other traditional AI methods.

https://en.wikipedia.org/wiki/History_of_artificial_neural_n...
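To illustrate the point about backpropagation: once you add a hidden layer and train it with backprop, XOR becomes learnable. Here is a minimal plain-Python sketch (my own, not from the thread) of a 2-2-1 sigmoid network trained by stochastic gradient descent on squared error; the hidden layer carves the input space into regions no single threshold unit could.

```python
# Minimal backprop sketch: a 2-2-1 sigmoid network learning XOR.
# The single-layer perceptron cannot represent XOR; a hidden layer can.
import math
import random

random.seed(42)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Hidden layer: 2 units, each with 2 input weights + bias; output unit likewise.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]

def forward(x1, x2):
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in W1]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

def epoch(lr=0.5):
    """One pass over the data; returns summed squared error before each update."""
    loss = 0.0
    for (x1, x2), t in XOR:
        h, y = forward(x1, x2)
        loss += (y - t) ** 2
        # Backpropagate the error: output delta, then hidden deltas (chain rule)
        d_out = (y - t) * y * (1 - y)
        d_hid = [d_out * W2[i] * h[i] * (1 - h[i]) for i in range(2)]
        W2[0] -= lr * d_out * h[0]
        W2[1] -= lr * d_out * h[1]
        W2[2] -= lr * d_out
        for i in range(2):
            W1[i][0] -= lr * d_hid[i] * x1
            W1[i][1] -= lr * d_hid[i] * x2
            W1[i][2] -= lr * d_hid[i]
    return loss

initial = epoch()
final = initial
for _ in range(5000):
    final = epoch()
print(initial, final)  # loss drops as the hidden units learn useful features
```

This is the kind of quietly continuing progress the comment refers to: nothing here is exotic, just the chain rule applied layer by layer.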


For some context: the 2nd edition was published in 2002 (so presumably written in 2001-2002). The fourth edition, published in 2020, has quite a bit more on NNs.

