
Such an "AlphaZero approach" will only knock Gowers's GOFAI approach out of business if the AI can also "justify" its proofs, in the sense that Gowers explains in his blogpost. Do you think that will happen in 5 years?


Google's recently introduced 540-billion-parameter language model PaLM can already explain complex jokes and reason logically by explicitly describing its reasoning steps. The results are quite impressive. https://ai.googleblog.com/2022/04/pathways-language-model-pa...

To be sure, this came just two years after GPT-3. So 5 years for something like what Gowers wants, but based on a large language model, seems in fact quite realistic.
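The technique behind those reasoning demos is chain-of-thought prompting: you show the model a worked example that spells out its reasoning, and it imitates the style on new questions. A rough sketch of the prompt format (the exact prompts used for PaLM differ; this is the canonical example from the chain-of-thought prompting paper the PaLM post builds on):

    Q: Roger has 5 tennis balls. He buys 2 more cans of
       tennis balls. Each can has 3 tennis balls. How many
       tennis balls does he have now?
    A: Roger started with 5 balls. 2 cans of 3 tennis balls
       each is 6 balls. 5 + 6 = 11. The answer is 11.

    Q: <your new question>
    A: <the model continues, writing out its steps before
       giving the answer>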


I find myself doubting that this goal makes any sense.

One mathematician often cannot "justify" their reasoning process to another if their thinking is even slightly different. For example, what is intuitive to someone on one side of the algebra/analysis divide is often not intuitive to people on the other side.

It is therefore unlikely that a human-level computer program will think enough like humans that its reasoning can be justified to any human. And vice versa. For it to do so, it has to be better than the human, also have a model of how the human thinks, and then be able to break down reasoning it arrived at one way into something that makes sense to the human.

If a computer succeeded in this endeavor, it would almost certainly come up with new principles of doing mathematics that would then have to be taught to humans for them to appreciate what it is doing. Something like this has already happened with Go; see https://escholarship.org/uc/item/6q05n7pz for example. Even so, humans are still unable to learn to play the game at the same level as the superhuman program. And the same would be true in mathematics.


I've found that, for most substantial proofs I've been exposed to, I understood the author's justification of how they found the proof. That doesn't mean I would have been able to come up with that step on my own, but once it was done I understood why they took it. If you get sufficiently far into the esoteric weeds of either analysis or algebra, the justifications won't be understood outside that field. But they don't have to be.

Theorem provers that start from basic axioms are usually looking for things outside the esoteric weeds of any specific subfield, since it takes them too long to generate the body of theorems that establishes those fields.
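To give a sense of the level such systems operate at, here is a minimal Lean 4 sketch (my own toy example, not output from any particular prover): the kind of statement a prover working up from Peano-style axioms has to establish early on, long before it could touch anything a working analyst or algebraist cares about.

    -- 0 + n = n is not definitionally true in Lean, since
    -- Nat.add recurses on its second argument, so even this
    -- needs a proof by induction.
    theorem my_zero_add (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ n ih => rw [Nat.add_succ, ih]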


> And vice versa. For it to do so, it has to be better than the human, also have a model of how the human thinks, and then be able to break down reasoning it arrived at one way into something that makes sense to the human.

This is more or less what Gowers is after. And that's exactly why he wants GOFAI instead of ML: he wants to understand how "doing mathematics" works.



