(#016) How to embed languages in proof assistants - Nitin John Raj
Date & Time: 21-12-2020, 22:00 IST
Abstract
Informally speaking, embedding is the process of reinterpreting a language in a different domain. Interpreters can be seen as embeddings of a formal language into whatever language the interpreter is implemented in. Programs like word2vec and GloVe embed words of natural language into vector spaces. Embedding a programming language in a proof assistant allows us to reason about the programming language using the tools the proof assistant gives us. This lets us prove metatheoretical properties of the language, such as type safety and strong normalization. However, modern proof assistants like Coq and Agda are quite expressive, and there are many design choices we can make when embedding languages in them. In this talk, I'll attempt to broadly cover some of these design choices. Concretely, we will cover:
- Shallow and deep embeddings
- Intrinsic vs extrinsic embeddings, and a few choices in between
- The contributions of one or two interesting papers, if time permits
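To make the first two distinctions concrete, here is a minimal sketch in Haskell (all names are illustrative, not from the talk): a shallow embedding reuses host-language values and functions directly, a deep embedding builds an explicit syntax tree, and a GADT-indexed syntax tree gives a deep embedding that is intrinsically typed, so ill-typed object-language terms cannot even be constructed.

```haskell
{-# LANGUAGE GADTs #-}

-- Shallow embedding: object-language arithmetic is just host-language
-- values and functions; there is no syntax tree to inspect.
type ShallowExpr = Int

shAdd :: ShallowExpr -> ShallowExpr -> ShallowExpr
shAdd = (+)

-- Deep, extrinsic embedding: an untyped syntax tree. Ill-typed terms
-- like `Add (BoolLit True) (Lit 1)` are representable, so typing must
-- be imposed (and type safety proved) as a separate judgment.
data Expr
  = Lit Int
  | BoolLit Bool
  | Add Expr Expr
  | If Expr Expr Expr

-- Deep, intrinsic embedding: the GADT index `a` is the object-language
-- type, so only well-typed terms exist and evaluation is total --
-- type safety holds by construction.
data TExpr a where
  TLit  :: Int -> TExpr Int
  TBool :: Bool -> TExpr Bool
  TAdd  :: TExpr Int -> TExpr Int -> TExpr Int
  TIf   :: TExpr Bool -> TExpr a -> TExpr a -> TExpr a

teval :: TExpr a -> a
teval (TLit n)    = n
teval (TBool b)   = b
teval (TAdd x y)  = teval x + teval y
teval (TIf c t e) = if teval c then teval t else teval e

main :: IO ()
main = print (teval (TIf (TBool True) (TAdd (TLit 1) (TLit 2)) (TLit 0)))
-- prints 3
```

The trade-off sketched here is the one the talk examines: shallow embeddings inherit the host's semantics for free but make it hard to state metatheorems about syntax, while deep embeddings expose the syntax at the cost of re-proving properties the host already guarantees.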
References
- Outrageous but Meaningful Coincidences, by Conor McBride
- Shallow Embedding of Type Theory is Morally Correct, by Ambrus Kaposi, András Kovács, and Nicolai Kraus