My hunch is that building "justified" proofs that avoid computer-driven brute-force search ought to amount to building proofs in a logic where finding a proof is computationally easy. We know that such logics exist (e.g. the multi-modal logics proposed for use in the semantic web were chosen specifically for computational tractability; in practice, multi-modal logic can also be viewed as a subset of FOL). If so, the problem is roughly equivalent to reducing useful subsets of math to such logics.
(The problem of tractability also comes up wrt. type systems in computer languages, and of course "modality" in logic has been usefully applied to both human language/cognition and commonly used PL constructs such as monads.)
You can also convert your logic model into something that can be viewed as an image-completion problem, e.g. by translating categorical diagrams into an adjacency matrix and "completing" a partial one. (Hopefully, anyway; preliminary results are encouraging.)
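To make the encoding concrete, here's a minimal sketch of what I mean (toy example, not our actual pipeline): a diagram over objects A, B, C becomes a 0/1 adjacency matrix, and "completion" amounts to filling in entries forced by composition.

```python
# Hypothetical sketch: encode a tiny categorical diagram as an
# adjacency matrix, then "complete" the partial matrix.
objects = ["A", "B", "C"]
idx = {o: i for i, o in enumerate(objects)}
n = len(objects)

# Entry [i][j] = 1 means the diagram contains an arrow i -> j.
adj = [[0] * n for _ in range(n)]
adj[idx["A"]][idx["B"]] = 1  # f : A -> B
adj[idx["B"]][idx["C"]] = 1  # g : B -> C

# Composition says the composite A -> C (g . f) should be present.
# An image-completion model would be trained to predict this missing
# "pixel"; here we just fill it in with one explicit closure step.
completed = [row[:] for row in adj]
for i in range(n):
    for j in range(n):
        for k in range(n):
            if adj[i][k] and adj[k][j]:
                completed[i][j] = 1

print(completed[idx["A"]][idx["C"]])  # prints 1: the inferred composite
```

The point is just that the "missing arrow" becomes a missing matrix entry, which is exactly the shape of input an image-completion model expects.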
So far, we’ve started publishing about shape types encoded that way and are hoping to get to Zn group models by the end of the year. (Work is done, but I’m a slow writer.)
Very interesting work. Visual and diagrammatic reasoning are part of this problem, yes. It might turn out that some proofs generally thought of as hard are actually "visually" justified, in a way that could perhaps even increase our power to justify mathematical statements. The extended paper Gowers links in his blogpost discusses one such case in depth, namely the well-known problem of whether a chessboard with two opposite corners removed can be tiled by two-square dominoes.
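For anyone who hasn't seen it, the "visual" justification there is a coloring argument: each domino covers one black and one white square, so a tiling needs equal counts of each color, but opposite corners share a color. A few lines make the count explicit:

```python
# The mutilated-chessboard coloring argument, made concrete.
# Every 2x1 domino covers exactly one white and one black square,
# so any full tiling requires equal numbers of each color.
board = {(r, c) for r in range(8) for c in range(8)}
board -= {(0, 0), (7, 7)}  # remove two opposite corners (same color)

white = sum(1 for (r, c) in board if (r + c) % 2 == 0)
black = len(board) - white

print(white, black)  # prints 30 32: unequal, so no tiling exists
```

The single parity check replaces what would otherwise be a large case analysis over partial tilings.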
I've worked with logics where finding proofs is computationally easy, and even logics where expensive steps can be explicitly bounded and controlled. I'm always very skeptical about claims that assert "the same".
Generally speaking, some proofs will be easy in those specific logics and other proofs will be hard or impossible. The problem won't be equivalent to reducing useful subsets of math to such logics, however, since you will often want to prove a lemma in one logic and a theorem in another. The fact that you don't have a good vehicle for dealing with the union of statements from each distinct fragmented logic makes the entire exercise fall apart.
Instead, most theorem provers need to operate in a single logic so that they can easily build on previous results.
> Generally speaking, some proofs will be easy in those specific logics and other proofs will be hard or impossible ... you will often want to prove a lemma in one logic and a theorem in another. ... The fact that you don't have a good vehicle for dealing with the union of statements from each distinct fragmented logic makes the entire exercise fall apart.
Not sure why this would inherently be an issue, since one can use shallow embeddings to translate statements across logics, and even a "union" of statements might be easily expressed in a common logical framework. But perhaps I'm missing something about what you're claiming here.
In my experience, unions of logics that are computationally bounded and nice don't end up computationally bounded and nice. There are lots of cases where step A only permits things that are computationally nice and step B only permits things that are computationally nice, but being able to do both A and B permits things that are computationally bad for just about any definition of badness you want to use.
I'm claiming the common logical framework won't have all the nice properties that come from the careful restriction in each of the individual logics.
> I'm claiming the common logical framework won't have all the nice properties that come from the careful restriction in each of the individual logics.
Sure, but this kinda goes without saying. It nonetheless seems to be true that if you want to come up with "justified" proofs, you'll want to do that proving work in logics that are more restricted. You'll still be able to use statement A for a proof of B; what the restriction ultimately hinders is conflating elements of the proofs of A and B together, especially in a way that might be hard to "justify".
I'd argue that what you ultimately get is a handful of near-axiom As and Bs that you can prove in the smaller logics, and for any sufficiently interesting statement you end up in combined logics that lose all the nice properties you were hoping for. The resulting proofs won't be justifiable, because they lose the very properties that made proofs in the smaller non-union logics justifiable.
It's not a promising approach if the only statements of the prover you can justify are the simplest and most basic ones.
I'd argue that you ought to be able to avoid working in powerful logics; instead you'd end up using A (or an easily derived consequence of A) as an axiom in the proof of a non-trivial statement B, while still keeping to some restricted logic for each individual proof. This is quite close to how humans do math in practice.
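A trivial Lean sketch of what I mean by using A opaquely (the names and statements here are made up for illustration): B's proof invokes A as a black box, never inspecting how A was proved, so the "logic" used inside A's proof never leaks into B's.

```lean
-- Prove lemma A once, in whatever restricted setting suffices.
theorem lemma_A (n : Nat) : n + 0 = n := Nat.add_zero n

-- B uses A only through its statement, never through its proof.
theorem theorem_B (n : Nat) : (n + 0) + 0 = n := by
  rw [lemma_A, lemma_A]
```

The restriction I'm after is exactly this opacity: A's internals are unavailable when proving B.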
But avoiding the powerful logics is akin to working in a single logic as much as possible without merging any. So you've lost the benefit of multiple logics that you were originally claiming, and you're back in my "use one logic" case.
There are real difficulties here, and you're right to point them out. But I'd nonetheless posit that staying within a simple logic as much as possible, and only rarely resorting to "powerful" proof steps such as a switch to a different reasoning approach, is very different from what most current ATP systems do. (Though it's closer to how custom "tactics" might be used in ITP. Which intersects in interesting ways with the question of whether current ITP sketches are "intuitive" enough to humans.)
But when you want to combine A and B and start to work in a logic that is the union of their logics, you end up doing a proof in a logic that often loses all the nice properties that made the theorem proving reasonable. This works if combining A and B is the entire proof, but how do you handle searching for good next steps in the new logic that lacks those properties, if there are other pieces of the proof still to be done?