Intro

History of Math Provers

LISP as Extraction Target

Untyped lambda calculus was discovered as an inner language of the space at the origin (Curry, Church, 1932). This language was later manifested as LISP (McCarthy, 1958), built upon cons, nil, eq, atom, car, cdr, lambda, apply and id. These primitives were parts of inductive types, later known as inductive type constructors. Untyped lambda calculus is still used as an extraction target for many provers (Idris, F*) and also manifests in different domain languages (JavaScript, Erlang).

AUTOMATH

The first computer prover, AUTOMATH, was created under the lead of de Bruijn in 1967. The type theory beneath AUTOMATH is typed lambda calculus, and it was the first prover based on the Curry–Howard correspondence.

ML/LCF

A further distillation of the inner space language was the ML language, founded merely on algebraic datatypes and an algebra over higher terms rather than on categorical semantics. This was later addressed with categorical methods in CPL (Hagino, 1987) and Charity (Cockett, 1992). Milner, assisted by Morris and Newey, designed the Meta Language for the purpose of building LCF in the early 70s. LCF was the predecessor of a family of automated math provers: HOL88, HOL90, HOL98 and Isabelle/HOL, which is now built using Poly/ML.

Fully Automated Provers

During the 80s and 90s other automated math systems appeared: Mizar (Trybulec, 1989), PVS (Owre, Rushby, Shankar, 1995), ACL2 (Boyer, Kaufmann, Moore, 1996) and Otter (McCune, 1996).

MLTT

Contemporary provers, built upon the consistent Martin-Löf Type Theory (1972), like Agda, Coq, Lean, F* and Idris, are based on Barendregt's and Coquand's CoC with different flavors of infinite universe hierarchies and the Calculus of Inductive Constructions. Some of them are automated, and some are trying to be general purpose programming languages with proving facilities, like Idris, Coq (coq.io) and Agda (MAlonzo).

Cubical Type Theory

A further extension of MLTT is Homotopy Type Theory, needed for reasoning about abstract shapes and multidimensional path types. The cubical approach was chosen in favour of the globular and simplicial ones due to the computability of Voevodsky's univalence axiom.
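For reference, a standard formulation of the univalence axiom, whose computational behaviour is exactly what the cubical model provides (the statement below is the generic one, not a quote from any particular prover):

```latex
% Univalence: for types A, B in a universe U, the canonical map that
% turns a path between types into an equivalence is itself an equivalence.
\[
  \mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \to (A \simeq B)
  \quad\text{is an equivalence, hence}\quad
  (A =_{\mathcal{U}} B) \simeq (A \simeq B).
\]
```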

Infinity Language

From PTS to HTS. We want to have flexible, detachable layers on top of a PTS core: then Sigma types for proving; then well-founded trees or polynomial functors, also known as data and record; then higher path types, interval arithmetic, glue and comp for HIT. Each layer is driven by different math; the only thing they have in common is the method: category theory.
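A minimal sketch of this layering, written as a hypothetical Haskell-style AST; the names Core, Sigma, WTree, Cubical and Term are illustrative, not the actual implementation:

```haskell
type Name = String

-- Layer 0: the PTS core: variables, universes, Pi, lambda, application.
data Core t
  = Var Name
  | Universe Int
  | Pi  Name t t
  | Lam Name t t
  | App t t

-- Layer 1: Sigma types for proving.
data Sigma t = SigmaT Name t t | Pair t t | Fst t | Snd t

-- Layer 2: well-founded trees (W-types / polynomial functors),
-- the semantics behind data and record definitions.
data WTree t = W Name t t | Sup t t

-- Layer 3: cubical primitives for higher inductive types:
-- path types, interval endpoints, glue and composition.
data Cubical t = PathP t t t | I0 | I1 | Glue t [(t, t)] | Comp t t t

-- The full term language ties the knot over the enabled layers;
-- dropping a constructor below detaches the corresponding layer.
data Term
  = TCore    (Core    Term)
  | TSigma   (Sigma   Term)
  | TWTree   (WTree   Term)
  | TCubical (Cubical Term)
```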

Extensible Language Design. The encoding of inductive types is based on the categorical semantics of compilation to CoC. All other syntax constructions are inductive definitions plugged into the stream parser. The AST of the CoC language is itself defined in terms of inductive constructions and is therefore available to macros. The language of polynomial functors (data and record) and the core language of the process calculus (spawn, receive and send) are just a macro system over the CoC language, i.e. its syntax extensions.
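A hypothetical Haskell sketch of such a front end; every name here (CoC, Surface, Extension, elaborate) is chosen for illustration and does not refer to the actual implementation:

```haskell
type Name = String

-- The CoC core AST: the only language the kernel understands.
data CoC
  = Var Name
  | Star Int            -- universe
  | Pi  Name CoC CoC
  | Lam Name CoC CoC
  | App CoC CoC

-- A raw declaration as produced by the stream parser, before elaboration.
data Surface = Surface
  { keyword :: String   -- e.g. "data", "record", "spawn"
  , body    :: [String] -- unparsed token stream of the declaration
  }

-- A pluggable syntax extension (macro): it recognizes one keyword and
-- expands declarations under that keyword into plain CoC definitions.
data Extension = Extension
  { extKeyword :: String
  , expand     :: Surface -> Either String [(Name, CoC)]
  }

-- The elaborator dispatches to whichever extensions are plugged in;
-- data, record and the process-calculus primitives are just entries here.
elaborate :: [Extension] -> Surface -> Either String [(Name, CoC)]
elaborate exts decl =
  case [ e | e <- exts, extKeyword e == keyword decl ] of
    (e : _) -> expand e decl
    []      -> Left ("no syntax extension for: " ++ keyword decl)
```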

Changeable Encodings. In pure CoC we have only arrows, so all inductive type encodings are variations of the Church encoding. The most widespread nowadays is the Church-Boehm-Berarducci encoding, which is dedicated to inductive types. Other well known ones are the Scott (laziness), Parigot (laziness and constant-time iterators) and CPS (continuations) encodings.
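The differences between these encodings can be made concrete; the following is a small Haskell sketch (RankNTypes) of Church/Boehm-Berarducci, Scott and Parigot naturals, with all names chosen for illustration:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church / Boehm-Berarducci: a number is its own fold (iterator).
type ChurchNat = forall a. a -> (a -> a) -> a

czero :: ChurchNat
czero z _ = z

csucc :: ChurchNat -> ChurchNat
csucc n z s = s (n z s)

-- Scott: a number is its own pattern match; the predecessor is constant
-- time, but recursion has to be supplied externally.
newtype ScottNat = ScottNat (forall a. a -> (ScottNat -> a) -> a)

szero :: ScottNat
szero = ScottNat (\z _ -> z)

ssucc :: ScottNat -> ScottNat
ssucc n = ScottNat (\_ s -> s n)

-- Parigot: each number exposes both its predecessor and its fold,
-- combining a constant-time predecessor with iteration.
newtype ParigotNat = ParigotNat (forall a. a -> (ParigotNat -> a -> a) -> a)

pzero :: ParigotNat
pzero = ParigotNat (\z _ -> z)

psucc :: ParigotNat -> ParigotNat
psucc n@(ParigotNat fold) = ParigotNat (\z s -> s n (fold z s))

-- Conversions back to Int, to check that the encodings agree.
churchToInt :: ChurchNat -> Int
churchToInt n = n 0 (+ 1)

scottToInt :: ScottNat -> Int
scottToInt (ScottNat m) = m 0 (\p -> 1 + scottToInt p)

parigotToInt :: ParigotNat -> Int
parigotToInt (ParigotNat m) = m 0 (\_ acc -> 1 + acc)
```

In each case the eliminator is the number itself, which is why pure arrows are enough to represent the inductive type.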

Proved Categorical Semantics. A mathematical model of the encoding was built (using higher-order categorical logic) which calculates (co)limits in a category of (co)algebras constructed from a given set of (de)constructors. We name this encoding in honor of the Lambek lemma, which leads us to the equality of the (co)initial object and the (co)limit in the categories of (co)algebras. Such an encoding works with dependent types, and its consistency is proved in a Lean model.
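For reference, the standard statement of the Lambek lemma behind this identification:

```latex
% Lambek's lemma: the structure map of an initial F-algebra is an
% isomorphism (dually, the structure map of a terminal F-coalgebra is too).
\[
  \text{If } (\mu F,\ \mathit{in} : F(\mu F) \to \mu F)
  \text{ is an initial } F\text{-algebra, then } \mathit{in}
  \text{ is an isomorphism, so } F(\mu F) \cong \mu F.
\]
```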