LLMs and Lean are orthogonal; neither subsumes the other.
They both can be useful or harmful, due to their respective strengths and trade-offs.
PAC/statistical learning is good at needle-in-a-haystack problems, assuming that the tail losses, simplicity bias, and corpus-representation issues are acceptable, that you understand it is fundamentally existential quantification, and that you control for automation bias, etc.
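For concreteness, here is one standard textbook form of the guarantee (finite hypothesis class H, realizable case; the symbols are the usual ones, not from this thread): with probability at least 1 − δ over an i.i.d. sample of size m, a consistent learner returns a hypothesis with error at most ε whenever

\[ m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right) \]

Note the shape of the claim: it asserts the existence of a good hypothesis with high probability; it is not a certificate about any particular prediction.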
Lean is a wonderful collection of concepts and heuristics, but due to Rice's theorem, Gödel's theorems, etc., it will not solve every problem in software development.
How Gödel’s second incompleteness theorem shows that you can prove anything, without that proof being meaningful, is a lens into this.
It is horses for courses. Remember that even in sub-Turing-complete total functional programming, proving a property of an arbitrary function is very hard, while constructing a function with that property is far more tractable; see the sketch below.
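A minimal Lean 4 sketch of that asymmetry (the names `double` and `double_eq_add` are illustrative, not from any library):

```lean
-- Constructing a total function: one line, no proof obligations.
def double (n : Nat) : Nat := 2 * n

-- Proving even a trivial property of it already requires tactic work;
-- for arbitrary functions and properties this gap widens fast.
theorem double_eq_add (n : Nat) : double n = n + n := by
  unfold double
  omega
```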
Even those proofs don’t demonstrate semantic correctness. History is riddled with examples of people using powerful tools to elegantly justify flawed beliefs. The 2009 crash and the Gaussian copula are an example.
Get all the value you can out of these tools, but use caution, especially in math, where superficially similar notions often have conflicting conventions, constraints, and assumptions.
Obviously, if your problem is ergodic with the Markov property, both will help, but neither automated theorem proving nor PAC learning will ever be a metatheory of the other, IMHO.
> How Gödel’s second incompleteness theorem shows that you can prove anything,
That is not at all what it says.
> They both can be useful or harmful,
If a proof is admitted into Lean, there is no doubt as to its truth. There is no way in which Lean can be construed as harmful.
> The 2009 crash and the Gaussian copula are an example.
There is nothing mathematical about the economics behind the 2009 crash. Such things are statistical measurements, which admit the possibility of failure, not mathematical conclusions that are demonstrably true.
Gödel's incompleteness theorems demonstrate that any sufficiently powerful, computably axiomatized formal system cannot be both consistent and syntactically complete.
Gödel's second theorem proved that a formula Con_κ expressing the consistency of κ is unprovable in κ if κ is consistent.
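In the usual notation (writing κ ⊢ φ for provability inside κ):

\[ \kappa \ \text{consistent} \;\Longrightarrow\; \kappa \nvdash \mathrm{Con}_\kappa \]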
If it is not consistent, ex falso quodlibet (the principle of explosion) applies: finding that contradiction allows any proposition, or the negation of that proposition, to be proven.
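The explosion principle is directly expressible in Lean 4 (a minimal sketch; `absurd` is the standard library's eliminator for a proposition together with its negation):

```lean
-- Principle of explosion: from P and ¬P, any Q whatsoever follows.
theorem explode (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```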
> They both can be useful or harmful
It is not Lean that is harmful; it is mistaking finding a proof for the same thing as truth. A proof that verifies a theorem does not have to explain why it holds, and mathematical assumptions that were really statistical ones are exactly why that failed.
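To make that concrete, here is a hypothetical Lean 4 sketch (all names invented for illustration) of a proof Lean happily accepts while the statement fails to capture the intent:

```lean
-- An obviously wrong "sort": it discards its input entirely.
def mySort (_ : List Nat) : List Nat := []

-- Lean verifies this proof, but the theorem only says the output
-- is *some* list, not a sorted permutation of the input.
-- The proof is valid; the specification is vacuous.
theorem mySort_spec (l : List Nat) : ∃ r, mySort l = r :=
  ⟨[], rfl⟩
```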
Probability theory is just as much a branch of mathematics as the λ-calculus. But we probably do differ in opinion on how "demonstrably true" much of mathematics is.
But here is a fairly accessible document related to the crash.