Aaron Stump talks about type theory, computational logic, and related topics in Computer Science on his short commute.
I correct what I said in the last episode about the author of the intersection-types-based proof of the Finite Developments Theorem. I also describe an AI flopping when I asked it a question about this.
Krivine's book (Section 4.2) has a proof of the Finite Developments Theorem, based on intersection types. I discuss this proof in this episode.
I discuss the paper "A Direct Proof of the Finite Developments Theorem", by Roel de Vrijer. See also the write-up at my blog.
The Finite Developments Theorem in pure lambda calculus says that if you select a set of redexes in a lambda term and reduce only those redexes and their residuals (redexes that can be traced back to the originally selected set), then this process always terminates. In this episode, I discuss the theorem and why I got interested in it.
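For a concrete illustration (my example, not one from the episode): take Omega = (\x. x x) (\x. x x) and select its unique redex. Contracting it yields Omega again, but the new outermost redex is created by the contraction rather than being a residual of the selected one, so the development stops after one step. Unrestricted reduction of Omega, by contrast, runs forever; developments stay finite precisely because created redexes are off-limits.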
In this episode, I discuss the paper Nominal Techniques in Isabelle/HOL, by Christian Urban. This paper shows how to reason with terms modulo alpha-equivalence, using ideas from nominal logic. The basic idea is that instead of renamings, one works with permutations of names.
I discuss what is called the locally nameless representation of syntax with binders, following the first couple of sections of the very nicely written paper "The Locally Nameless Representation," by Charguéraud. I take issue with the paper's statement that "the theory of λ-calculus identifies terms that are α-equivalent," which is simply not true if one considers lambda calculus as defined by Church, where renaming is an explicit reduction step, on a par with beta-reduction. I also answer a listener's question about what "computational type theory" means. Feel free to email me any time at aaron.stump@bc.edu, or join the Telegram group for the podcast.
I continue the discussion of POPLmark Reloaded, discussing the solutions proposed to the benchmark problem. The solutions are in the Beluga, Coq (recently renamed Rocq), and Agda provers.
I discuss the paper POPLmark Reloaded: Mechanizing Proofs by Logical Relations, which proposes a benchmark problem for mechanizing Programming Language theory.
In this episode, I begin discussing the question and history of formalizing results in Programming Languages Theory using interactive theorem provers like Rocq (formerly Coq) and Agda.
In this episode, I describe the first proof of normalization for STLC, written by Alan Turing in the 1940s. See this short note for Turing's original proof and some historical comments.
In this episode, after a quick review of the preceding couple, I discuss the property of normalization for STLC, and talk a bit about proof methods. We will look at proofs in more detail in the coming episodes. Feel free to join the Telegram group for the podcast if you want to discuss anything (or just email me at aaron.stump@gmail.com).
Like addition and multiplication on Church-encoded numbers, exponentiation can be assigned a type in simply typed lambda calculus (STLC). But surprisingly, the type is non-uniform. If we abbreviate (A -> A) -> A -> A as Nat_A, then exponentiation, defined as \x. \y. y x, can be assigned the type Nat_A -> Nat_(A -> A) -> Nat_A. The second argument's type is at a strictly higher order than the first's. This has the fascinating consequence that we cannot define self-exponentiation, \x. exp x x. That term would reduce to \x. x x, which is provably not typable in STLC.
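To see the non-uniformity concretely, here is a sketch in Haskell (my illustration, not code from the episode), using a type synonym Nat a for the Church-numeral type:

  type Nat a = (a -> a) -> a -> a

  -- expo m n computes m^n; note the exponent must live at Nat (a -> a),
  -- one order higher than the base.
  expo :: Nat a -> Nat (a -> a) -> Nat a
  expo m n = n m

  -- selfExp m = expo m m   -- rejected: m cannot have both types
  --                           Nat a and Nat (a -> a) at once

The synonym expands so that Nat a = (a -> a) -> (a -> a), which is exactly why n m above has type Nat a; no fancy extensions are needed.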
It is maybe not so well known that arithmetic operations -- at least some of them -- can be implemented in simply typed lambda calculus (STLC). Church-encoded numbers can be given the simple type (A -> A) -> A -> A, for any simple type A. If we abbreviate that type as Nat_A, then addition and multiplication can both be typed in STLC, at type Nat_A -> Nat_A -> Nat_A. Interestingly, things change with exponentiation, which we will consider in the next episode.
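Here is a sketch of those typings in Haskell (my illustration; any language with simple types would do), abbreviating the Church-numeral type as a synonym:

  type Nat a = (a -> a) -> a -> a

  add :: Nat a -> Nat a -> Nat a
  add m n f x = m f (n f x)   -- apply f m times on top of n applications

  mul :: Nat a -> Nat a -> Nat a
  mul m n f = m (n f)         -- iterate the n-fold application m times

Both operations are uniform in the type A, in contrast to what happens with exponentiation.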
I review the typing rules and some basic examples for STLC. I also remind listeners of the Curry-Howard isomorphism for STLC.
In this episode, after a pretty long hiatus, I start a new chapter on simply typed lambda calculus. I present the typing rules and give some basic examples. Subsequent episodes will discuss various interesting nuances...
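For reference, here are the three typing rules in the standard presentation (ASCII notation mine):

  (var)  if x : A occurs in G, then G |- x : A
  (lam)  if G, x : A |- t : B, then G |- \x. t : A -> B
  (app)  if G |- t : A -> B and G |- u : A, then G |- t u : B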
This episode presents two somewhat more advanced examples in DCS: Harper's continuation-based regular-expression matcher, and Bird's quickmin, which finds the least natural number not in a given list of distinct natural numbers, in linear time. I explain these examples in detail and then discuss how they are implemented in DCS, which ensures that they terminate on all inputs.
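To give a flavor of the second example, here is Bird's divide-and-conquer solution in Haskell (my sketch, not DCS code; the point of the episode is that DCS checks termination of the real thing automatically):

  import Data.List (partition)

  -- Least natural number not occurring in a list of distinct naturals, in O(n).
  minfree :: [Int] -> Int
  minfree xs = minfrom 0 (length xs) xs

  minfrom :: Int -> Int -> [Int] -> Int
  minfrom a n xs
    | n == 0     = a                       -- nothing left: a itself is free
    | m == b - a = minfrom b (n - m) vs    -- [a, b) fully occupied; look above b
    | otherwise  = minfrom a m us          -- some number in [a, b) is missing
    where
      b        = a + 1 + n `div` 2
      (us, vs) = partition (< b) xs
      m        = length us

Each recursive call roughly halves the list, so the total work is linear.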
In this episode, I continue introducing DCS by comparing it to termination checkers in constructive type theories like Coq, Agda, and Lean. I warmly invite ITTC listeners to experiment with the tool themselves. The repo is here.
In this episode, I talk more about the DCS tool, and invite listeners to check it out and possibly contribute! The repo is here.
DCS is a new functional programming language I am designing and implementing with Stefan Monnier. DCS has a pure, terminating core, around which monads will be layered for possibly diverging, impure computation. In this episode, I talk about this basic design, and its rationale.
I answer a listener's question about the semantics of subtyping by discussing two different semantics: coercive subtyping and subsumptive subtyping. I found the terminology in this paper by Zhaohui Luo; see Section 4 of the paper for a comparison of the two kinds of subtyping. With coercive subtyping, subtyping axioms A <: B are witnessed by coercion functions from A to B, inserted by the elaborator where needed; with subsumptive subtyping, a term of type A literally also has type B.
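In rule form, the contrast is roughly this (my paraphrase, not the paper's exact notation):

  subsumptive:  if G |- a : A and A <: B, then G |- a : B
  coercive:     if G |- a : A and A <: B is witnessed by a coercion c : A -> B,
                then G |- c a : B

So subsumptive subtyping makes the very same term inhabit both types, while coercive subtyping inserts a function call to mediate.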
I continue the discussion of Mitchell's paper Type Inference with Simple Subtypes. Coming soon: a discussion of semantics of subtyping.
In this episode, I wax rhapsodic about the potential of subtyping to improve the practice of pure functional programming, in particular by allowing functional programmers to drop various irritating function calls that are needed just to make types work out. Examples are lifting functions with monad transformers, or even just the pure/return functions for applicative functors/monads.
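A small Haskell illustration of the kind of plumbing I have in mind (my example):

  import Control.Monad.Trans.Class (lift)
  import Control.Monad.Trans.State (StateT, evalStateT, get, put)

  -- Today, the IO action must be explicitly lifted into StateT Int IO:
  tick :: StateT Int IO ()
  tick = do
    n <- get
    lift (putStrLn ("count = " ++ show n))  -- 'lift' is pure plumbing
    put (n + 1)

  main :: IO ()
  main = evalStateT (tick >> tick) 0

With a subtyping IO a <: StateT Int IO a, the 'lift' could simply disappear.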
In this episode, I begin discussing a paper titled "Type Inference with Simple Subtypes," by John C. Mitchell. The paper presents algorithms for computing a type and a set of subtype constraints for any term of the pure lambda calculus. I mostly focus here on how subtype constraints allow typing any term (which seems surprising). You can join the Telegram group for discussion related to the podcast.
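For a taste of why constraints make every pure term typable (a sketch in the spirit of the paper, not its exact algorithm): consider \x. x x. Give x the type a. Using x as a function requires a <: b -> c, and using it as an argument requires a <: b, so \x. x x gets type a -> c under the constraint set {a <: b -> c, a <: b}. With equality constraints instead, a = b -> c and a = b would force b = b -> c, which fails the occurs check.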
In this episode, I discuss a few of the basics for what we expect from a subtyping relation on types: reflexivity, transitivity, and the variances for arrow types.
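Written out (these are the standard rules; the ASCII notation is mine):

  (refl)   A <: A
  (trans)  if A <: B and B <: C, then A <: C
  (arrow)  if A' <: A and B <: B', then A -> B <: A' -> B'

Note the arrow rule is contravariant in the domain and covariant in the codomain: a function of type A -> B can stand in at type A' -> B' because it accepts every A' (as A' <: A) and its results in B also count as B's supertype B'.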
We begin a discussion of subtyping in functional programming. In this episode, I talk about how subtyping is a neglected feature in implemented functional programming languages (for example, not found in Haskell), and how it could be very useful for writing lighter, more elegant code. I also talk about how subtyping could help realize a new vision for practical strong functional programming, where the language has a pure, terminating core language, then a monad for pure but possibly diverging computation, and finally a monad for impurity and divergence.
In this episode, I conclude my discussion of some (but hardly all!) points from Pujet and Tabareau's POPL 2022 paper, "Observational Equality: Now for Good". I talk a bit about the structure of the normalization proof in the paper, which uses induction-recursion. See this paper by Peter Dybjer for more about that feature. Also, feel free to join the new Telegram group for the podcast if you want to discuss episodes.
I continue discussing the Pujet and Tabareau paper, "Observational Equality: Now for Good", in particular discussing more about how the equality type simplifies based on its index (which is the type of the terms being equated by the equality type), and how proofs of equalities can be used to cast terms from one type to another. Also, in exciting news, I created a Telegram group that you can join if you want to discuss topics related to the podcast or particular podcast episodes. I will be monitoring the group. I believe you have to request to join, and then I approve (it might take me until later in the day to do that, just fyi). The invitation link is here.
In this episode, I introduce an important paper by Pujet and Tabareau, titled "Observational Equality: Now for Good", which develops earlier work of McBride, Swierstra, and Altenkirch (which I will cover in a later episode) on a new approach to making a type theory extensional. The idea is to have equality types reduce, within the theory, to statements of extensional equality for the type of the values being equated.
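Schematically (my ASCII paraphrase of the idea): at function type, the equality type computes pointwise, so f =_{A -> B} g reduces to forall x:A. f x =_B g x; and a proof of an equality A = B between types yields a cast function taking inhabitants of A to inhabitants of B.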
I pause the chapter on extensionality in type theory to talk about something very exciting that I just learned about (though the project was completed in Summer 2022): the so-called Liquid Tensor Experiment, formalizing in Lean a recent very difficult proof by the mathematician Peter Scholze. This is the first time in history, as far as I know, that a theorem was formalized in a theorem prover in order to resolve the doubts of the mathematician who proved it. An amazing achievement. This episode tells the story as I have understood it online. The result apparently sparked this recent workshop.
In this episode, I discuss the basic distinguishing rule of Extensional Martin-Löf Type Theory, namely equality reflection. This rule says that propositional equality implies definitional equality. Algorithmically, it would require the type checker to do arbitrary proof search during type checking, to see if two expressions are definitionally equal. This immediately gives us undecidability of type checking for the theory, at least as usually realized.
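Schematically, the rule reads: if G |- p : Id_A(a, b), then a and b are definitionally equal at type A. Note that the proof p is discarded in the conclusion, which is exactly why a type checker would be forced into open-ended proof search.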
This episode begins a new chapter on extensionality in type theory, where we seek to equate terms in different ways based on their types. The basic example is function extensionality, where we would like to equate functions from A to B if, given equal inputs at type A, they produce equal outputs at type B. With this definition, quicksort and mergesort are equal, even though their codes are not syntactically equivalent. The episode begins by reviewing the distinction between definitional and propositional equality. Also, I am still seeking your small donations ($5 or $10 would be awesome) to pay my podcast-hosting fees at Buzzsprout. To donate, click here, and then under "Gift details" select "Search for additional options" and then search for Computer Science. Select the Computer Science Development Fund, College of Liberal Arts and Sciences. Then add gift instructions saying that this is to support the Iowa Type Theory Commute podcast of Aaron Stump. Sorry it's that complicated.
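In symbols (notation mine), the function extensionality principle from this episode is: funext : (forall x:A. f x = g x) -> f = g, where = is propositional equality. Definitional equality, by contrast, is the equality the type checker decides on its own, by computation.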
In this episode, I talk about two papers from the 3rd International Workshop on Formal Methods for Blockchains, 2021. Also, I am continuing my request for your small donations ($5 or $10 would be awesome) to pay my podcast-hosting fees at Buzzsprout. To donate, click here, and then under "Gift details" select "Search for additional options" and then search for Computer Science. Select the Computer Science Development Fund, College of Liberal Arts and Sciences. Then add gift instructions saying that this is to support the Iowa Type Theory Commute podcast of Aaron Stump. Sorry it's that complicated.
In this episode, I discuss this paper, "Mi-Cho-Coq, a Framework for Certifying Tezos Smart Contracts", by Bernardo et al. The paper gives a nice and very clear introduction to the Michelson language, and a formalization of it in Coq. This is used to prove a correctness property about a Multisig contract. I also kindly solicit your small donations ($5 or $10 would be awesome) to pay my podcast-hosting fees at Buzzsprout. To donate, click here, and then under "Gift details" select "Search for additional options" and then search for Computer Science. Select the Computer Science Development Fund, College of Liberal Arts and Sciences. Then add gift instructions saying that this is to support the Iowa Type Theory Commute podcast of Aaron Stump. Sorry it's that complicated.
In this episode (proudly wearing my "I am not an expert" hat), I discuss efforts by Runtime Verification to verify the Dexter2 DeFi smart contract, using their K-Michelson tool, which provides an executable description of the operational semantics of the Michelson language used for smart contracts on the Tezos blockchain.
I start off a new chapter (seventeen!) of the podcast, to talk about formal methods for blockchain systems. In the next few episodes, we will look at some verification efforts related to the Tezos blockchain.
I discuss separation logic basics some more, as presented in the seminal paper by John C. Reynolds. An important idea is describing data structures using separating conjunction and recursive predicates.
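The canonical example, in the style of Reynolds' paper (ASCII notation mine), is the predicate for singly linked lists:

  list(x) := (x = nil and emp) or (exists v, y. x |-> v, y * list(y))

Here emp describes the empty heap, x |-> v, y says x points to a record holding value v and pointer y, and the separating conjunction * guarantees that the head cell and the rest of the list occupy disjoint parts of the heap, ruling out cycles through the head.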
I discuss separation logic, as presented in this seminal paper by the great John C. Reynolds. I did not go very far into the topic, so please expect a follow-up episode.
In this episode, I talk briefly about Rust, which uses compile-time analysis to ensure that code is memory-safe (and also free of data races in concurrent code) without using a garbage collector. Fantastic! The language draws on but richly develops ideas on ownership that originated in academic research.
I discuss the idea of statically typed region-based memory management, proposed by Tofte and Talpin. The idea is to allow programmers to declare explicitly the region from which to satisfy individual allocation requests. Regions are created in a statically scoped way, so that after execution leaves the body of the region-creation construct, the entire region may be deallocated. Type inference is used to make sure that no dangling pointers are dereferenced (after the associated region is deallocated). This journal paper about the idea is not easy reading, but has a lot of good explanations in and around all the technicalities. Even better, I found, is this paper about region-based memory management in Cyclone. There are lots of intuitive explanations of the ideas, and not so much gory technicality.
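Haskell programmers have met a kindred trick: the ST monad delimits mutable state with a type variable, much like a lexically scoped region whose contents cannot escape. A minimal sketch (my analogy for intuition, not a feature of the Tofte-Talpin system):

  import Control.Monad.ST
  import Data.STRef

  -- runST opens a "region" s; its rank-2 type keeps every STRef s from
  -- escaping, so the state can safely be reclaimed when the region closes.
  sumTo :: Int -> Int
  sumTo n = runST (do
    r <- newSTRef 0
    mapM_ (\i -> modifySTRef' r (+ i)) [1 .. n]
    readSTRef r)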
In this episode, I start a new chapter (we are up to Chapter 16, here), about verifying safe manual management of memory. I have personally gotten pretty interested in this topic, having seen through some simple experiments with Haskell how much time can go into garbage collection for seemingly simple benchmarks. I also talk about why verifying memory-usage properties of programs is challenging in proof assistants.
I laud the Metamath proof checker and its excellent book. I am also looking for suggestions on what to discuss next, as I am ready to wrap up this chapter on proof assistants.
In this episode I share my initial impressions -- very positive! -- of the Metamath system. Metamath allows one to develop theorems from axioms which you state. Typing or other syntactic requirements of axioms or theorems are also expressed axiomatically. The system exhibits an elegant, coherent vision for how such a tool should work, and was super easy to download and try out.
I discuss a book edited by Freek Wiedijk (pronounced "Frake Weedike"), which describes the solutions he received in response to a call for formalized proofs of the irrationality of the square root of 2. The book was published in 2006, and made an impression on me then. The provers we have discussed so far all have a solution in the book, except for Lean, which was created after that year. Happily, you can find a PDF of the book here, on Wiedijk's web site.
I talk about my positive experience trying out the tools for Lean, specifically the 'lean' executable and lean-mode in emacs.
In this episode, I talk about what I have learned so far about the Lean prover, especially from an excellent (if somewhat advanced) Master's thesis, "The Type Theory of Lean" by Mario Carneiro.
I talk about my attempts to use Isabelle as a newbie, and reflect a little on the complexity of both Isabelle and Coq.
The Isabelle theorem prover supports different logics, but its most developed logic seems to be Higher-Order Logic (HOL). In this episode, I talk about the logic and approach of Isabelle/HOL, as far as I have understood them.
In this episode I talk a bit about the Agda proof assistant.
I talk about a couple of good resources for learning Coq, the problem of too many ways to do things in type theory, and issues with trying to explain and document a very complex language.
I discuss Coq, a widely used proof assistant based on a constructive type theory. One episode definitely cannot do justice to the complexity of a tool like this -- but I take a first try at covering its features at a high level.