| Year | Status | Authors |
| ---- | -------- | -------------------------------------- |
| 2025 | Preprint | Corinthia Beatrix Aberlé (sole author) |
The following is a lightly edited transcript of a talk I gave at Carnegie Mellon in Fall 2025 on a work-in-progress framework for dependently-typed compositional program verification using the theory of polynomial functors, which I am currently writing up and developing more fully for publication. Watch this space!
---
* Large programs are often opaque; their **capabilities** need not be.
* To verify the behavior of complex programs, we want to be able to break them down into simpler components/specifications, verify those, and compose the results.
* In other words, we want a **compositional** approach to program verification.
* In this talk, I'll sketch a framework for compositional program verification based on **polynomial functors** in dependent type theory.
These slides are literate Agda and contain a complete formalization of the material presented.
```agda
{-# OPTIONS --without-K --guardedness #-}
module compositional-program-verification-slides where
open import Agda.Builtin.Unit
open import Agda.Builtin.Sigma
-- binary products
_×_ : Set → Set → Set
A × B = Σ A (λ _ → B)
infixr 5 _×_
-- binary coproducts
data _+_ (A : Set) (B : Set) : Set where
inl : A → A + B
inr : B → A + B
-- empty type
data ⊥ : Set where
```
# Polynomial Functors
Abstractly, a **polynomial functor** is an endofunctor $P$ on the category of types/sets of the form
$$
P(y) = \sum_{x : A} y^{B[x]}
$$
where $A$ is a type and $B : A \to \mathsf{Set}$ is a *family* of types indexed by $A$.
Concretely, we can represent such a polynomial functor in Agda as the pair consisting of $A$ and $B$:
```agda
Poly : Set₁
Poly = Σ Set (λ A → A → Set)
```
We can *realize* a polynomial `(A , B)` as its corresponding endofunctor on `Set` as follows:
```agda
_⊙_ : Poly → (Set → Set)
(A , B) ⊙ y = Σ A (λ x → B x → y)
```
Given a polynomial functor `P = (A , B)` and a functor `F : Set → Set`, the type of *morphisms* (natural transformations) from `P` to `F` is given by:
```agda
_⇒_ : Poly → (Set → Set) → Set
(A , B) ⇒ F = (x : A) → F (B x)
```
## Polynomial Functors as Interfaces
A morphism `(A , B) ⇒ ((C , D) ⊙_)` equivalently consists of:
* `f₀ : A → C`, and
* `f₁ : (x : A) → D (f₀ x) → B x`.
Intuitively, this corresponds to a program component that yields a function `g : (a : A) → B a`, given a function `h : (c : C) → D c`, by calling `h` on `f₀ a` and then supplying the result as input to `f₁ a`, i.e.
```text
g a = let d = h (f₀ a) in
      f₁ a d
```
The polynomials `(A , B)` and `(C , D)` in the preceding example can be thought of as **interfaces** for the functions `g` and `h`, while the morphism gives an implementation of `g` *depending on* an implementation of `h`.
This basic pattern reflects the fundamental idea of **assume-guarantee reasoning**—assuming an implementation of `h` with interface `(C , D)` (i.e. a function `h : (c : C) → D c`), we *guarantee* an implementation of `g` with interface `(A , B)`.
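To make this concrete, here is a small sketch, not part of the original development (the name `pair⇒` is mine), packaging such a pair of maps into a morphism of this form:
```agda
-- sketch: packaging a forwards map f₀ and a backwards map f₁
-- into a morphism (A , B) ⇒ ((C , D) ⊙_)
pair⇒ : {A C : Set} {B : A → Set} {D : C → Set}
  → (f₀ : A → C)
  → ((x : A) → D (f₀ x) → B x)
  → (A , B) ⇒ ((C , D) ⊙_)
pair⇒ f₀ f₁ x = (f₀ x , f₁ x)
```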
## Free monads
However, this notion of morphisms of polynomial functors only captures situations where the assumed function is called *exactly once*.
To express more general patterns of interaction, we can use the **free monad** on a polynomial functor:
```agda
data Free (p : Poly) (C : Set) : Set where
return : C → Free p C
bind : (x : p .fst) → (p .snd x → Free p C) → Free p C
-- monadic bind/Kleisli lift for Free monads
_>>=_ : {p : Poly} {A B : Set}
→ Free p A → (A → Free p B) → Free p B
(return a) >>= f = f a
(bind x k) >>= f = bind x (λ b → k b >>= f)
```
The type `Free p C` represents *computations* that `return` a value of type `C` by sequencing calls to an assumed function with interface `p`, successively `bind`ing the results of those calls.
Those familiar with constructs such as Haskell's `do` notation will recognize this pattern, and we can extend Agda's syntax to allow a compact way of writing such programs as follows:
```agda
syntax bind a (λ b → e) = call[ b ← a ] e
>>=-syntax = _>>=_
syntax >>=-syntax m (λ b → e) = do[ b ← m ] e
```
This brings us to our first key idea:
* An **interface** for a program module is a polynomial functor, and given two such interfaces `p` and `q`, an **implementation** of `p` depending on `q` is a morphism from `p` to the free monad on `q`, i.e. an element of `p ⇒ Free q`.
At first blush, this may seem like a rather restrictive notion of *interface*, since we have so far only considered interfaces for *individual functions*, when in practice program modules and their interfaces define *many* such functions.
However, we can get around this issue using the *sum* of polynomial functors.
## Sums of Polynomial Functors
Given a type `U` and a family of polynomial functors `P : U → Poly`, the **sum** over `U` of the family `P` is defined as
$$
\left(\sum_{u : U} P[u]\right)(y) = \sum_{u : U} \sum_{x : P[u].\mathsf{fst}} y^{P[u].\mathsf{snd}[x]}
$$
and this itself is a polynomial functor! We can represent this operation in Agda as follows:
```agda
sum : (U : Set) → (p : U → Poly) → Poly
sum U p .fst = Σ U (λ u → p u .fst)
sum U p .snd (u , x) = p u .snd x
```
Intuitively, if `U` is a type/set of *labels*, then a morphism from `sum U p` to `Free q` is the same as a family of morphisms
`f : (u : U) → p u ⇒ Free q`
i.e. for each label `u : U`, an implementation of the interface `p u` that depends on the interface `q`. This allows us to represent multiple functions in a single interface by using distinct labels for each function, and also to represent dependencies on multiple interfaces by using a sum in the codomain.
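Concretely, one direction of this equivalence is just uncurrying. Here is a minimal sketch, not part of the original development (the name `tabulate` is mine):
```agda
-- sketch: a family of implementations, one per label, assembles
-- into a single implementation of the summed interface
tabulate : {U : Set} {p : U → Poly} {F : Set → Set}
  → ((u : U) → p u ⇒ F)
  → sum U p ⇒ F
tabulate f (u , x) = f u x
```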
As special cases of this construction, we have the *binary sum* of two polynomial functors `p` and `q`, and the *zero polynomial* (nullary sum):
```agda
-- binary sum
_⊕_ : Poly → Poly → Poly
((A , B) ⊕ (C , D)) .fst = A + C
((A , B) ⊕ (C , D)) .snd (inl a) = B a
((A , B) ⊕ (C , D)) .snd (inr c) = D c
-- nullary sum
zeroPoly : Poly
zeroPoly .fst = ⊥
zeroPoly .snd ()
```
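As a quick illustration (a sketch of mine, not from the original talk), the coprojections into a binary sum can themselves be written as program modules: implementations of `p` (resp. `q`) that depend on `p ⊕ q` but only ever call their own summand:
```agda
-- sketch: coprojections into the binary sum, viewed as program modules
inlProg : {p q : Poly} → p ⇒ Free (p ⊕ q)
inlProg x = bind (inl x) return
inrProg : {p q : Poly} → q ⇒ Free (p ⊕ q)
inrProg x = bind (inr x) return
```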
## Example: Fold
Using these ideas, we can give our first example of a program module, a generic `fold` operation on lists:
```agda
module example1 where
-- simple list type
data List (A : Set) : Set where
nil : List A
cons : A → List A → List A
-- labels for fold dependencies
data foldlabels : Set where
base : foldlabels
step : foldlabels
```
---
```agda
-- interface for base case of a fold
Base : (A C : Set) → Poly
Base A C = (A , λ _ → C)
-- interface for recursive step of a fold
Step : (A B C : Set) → Poly
Step A B C = (A × B × C , λ _ → C)
-- interface for a fold
Fold : (A B C : Set) → Poly
Fold A B C = (A × List B , λ _ → C)
-- fold as a program module
fold : {A B C : Set}
→ Fold A B C
⇒ Free (sum foldlabels
(λ{ base → Base A C
; step → Step A B C}))
fold (a , nil) =
call[ c ← (base , a) ]
return c
fold (a , cons b bs) =
do[ c ← fold (a , bs) ]
call[ c' ← (step , (a , b , c)) ]
return c'
```
## Example: Append
Note that, rather than implementing `fold` as a higher-order function, we have implemented it as a program module that *depends on* the interfaces `Base` and `Step` for the base case and recursive step.
---
We can then define separate modules that implement these interfaces for specific functions, e.g. the following for `append`:
```agda
appendBase : {A : Set}
→ Base (List A) (List A) ⇒ Free zeroPoly
appendBase xs = return xs
appendStep : {A : Set}
→ Step (List A) A (List A) ⇒ Free zeroPoly
appendStep (_ , x , xs) = return (cons x xs)
```
## Example: Concat
Similarly, assuming an implementation of the interface for `append` (i.e. `Fold (List A) A (List A)`), we can implement base and recursive steps for `concat` as follows:
```agda
concatBase : {A : Set}
→ Base ⊤ (List A)
⇒ Free zeroPoly
concatBase _ = return nil
concatStep : {A : Set}
→ Step ⊤ (List A) (List A)
⇒ Free (Fold (List A) A (List A))
concatStep (_ , xs , ys) =
call[ zs ← (ys , xs) ]
return zs
```
It should be possible, then, for us to *compose* these modules to obtain new modules that implement `append` and `concat`, respectively.
# Composing Implementations
We have the following combinators for composing implementations of polynomial functor interfaces:
```agda
-- mapping a polynomial kleisli morphism over a free monad
Free⇒ : {p q : Poly} {E : Set}
→ Free p E → (p ⇒ Free q) → Free q E
Free⇒ (return e) f = return e
Free⇒ (bind a k) f = (f a) >>= (λ b → Free⇒ (k b) f)
-- Kleisli composition for the free-monad-monad
_∘_ : {p q r : Poly}
→ (p ⇒ Free q) → (q ⇒ Free r) → p ⇒ Free r
f ∘ g = λ a → Free⇒ (f a) g
-- U-ary composition
comp : (U : Set) {p : Poly} {q : U → Poly} {r : Poly}
→ (p ⇒ Free (sum U q))
→ ((u : U) → q u ⇒ Free r)
→ p ⇒ Free r
comp U {p = p} {q = q} {r = r} f g = f ∘ (λ (u , x) → g u x)
```
Hence we can indeed compose the modules defined above to obtain modules implementing `append` and `concat` as follows:
```agda
module example2 where
open example1
append : {A : Set}
→ Fold (List A) A (List A) ⇒ Free zeroPoly
append =
comp foldlabels fold
(λ{ base → appendBase
; step → appendStep})
concat : {A : Set}
→ Fold ⊤ (List A) (List A) ⇒ Free zeroPoly
concat =
comp foldlabels fold
(λ{ base → concatBase
; step → concatStep ∘ append})
```
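Since `zeroPoly` has no positions, a module of type `p ⇒ Free zeroPoly` has no remaining dependencies and is effectively a pure function. As a small sanity check, here is a sketch of mine (the module name `example2a` and its helpers are hypothetical, not part of the original development) extracting an ordinary append function from the composed module:
```agda
module example2a where
  open example1
  open example2
  -- a closed program (interface zeroPoly) makes no calls, so it is just a value
  run-closed : {C : Set} → Free zeroPoly C → C
  run-closed (return c) = c
  run-closed (bind () _)
  -- extract an ordinary function: appendFn xs ys computes xs followed by ys
  appendFn : {A : Set} → List A → List A → List A
  appendFn xs ys = run-closed (append (ys , xs))
```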
## Wiring Diagrams
More generally, given some modules we would like to "wire together", we can represent the dependencies between these modules using a *wiring diagram*.
```agda
module WD (Boxes : Set)
(Arity : Boxes → Set)
(Dom : (b : Boxes) → Arity b → Poly)
(Cod : Boxes → Poly)
(Inputs : Set) (inputs : Inputs → Poly) where
data Wiring : (output : Poly) → Set₁ where
wire : (i : Inputs) → Wiring (inputs i)
box : (b : Boxes) → ((a : Arity b) → Wiring (Dom b a))
→ Wiring (Cod b)
open WD public
```
Then, given a wiring diagram and an assignment of each `b : Boxes` to a polynomial morphism of the appropriate type, we can compose these to obtain a new morphism implementing the interface of the wiring diagram as a whole:
```agda
compose : {Boxes : Set} {Arity : Boxes → Set}
→ {Dom : (b : Boxes) → Arity b → Poly}
→ {Cod : Boxes → Poly} {Inputs : Set}
→ {inputs : Inputs → Poly} {output : Poly}
→ Wiring Boxes Arity Dom Cod Inputs inputs output
→ ((b : Boxes) → Cod b ⇒ Free (sum (Arity b) (Dom b)))
→ output ⇒ Free (sum Inputs inputs)
compose (wire i) f x = bind (i , x) return
compose (box b f) g = comp _ (g b) (λ a → compose (f a) g)
```
# Coalgebraic Operational Semantics
So far we have not said anything about actually *running* programs. For this, we make use of the concept of a *Mealy machine*.
In Agda, we can define the type of Mealy machines with interface `p` as the following coinductive type:
```agda
record Mealy (p : Poly) : Set where
coinductive
field
app : (x : p .fst) → p .snd x × Mealy p
open Mealy public
```
Intuitively, a Mealy machine with interface `(A , B)` is a stateful computation/automaton that, given an input `x : A`, produces an output of type `B x` and moves to a new state of the machine.
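For instance, here is a minimal sketch (not from the original talk) of the simplest such machine, which merely echoes each input back as its output:
```agda
-- sketch: a stateless Mealy machine that echoes its input
echo : {A : Set} → Mealy (A , λ _ → A)
echo .app a = (a , echo)
```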
The following operations then allow us to *run* programs, represented as polynomial morphisms, using Mealy machines:
```agda
-- run a mealy machine on a "program" of type Free p C
run-mealy : {p : Poly} {C : Set}
→ Free p C → Mealy p → C × Mealy p
run-mealy (return c) m = (c , m)
run-mealy (bind x f) m =
let (b , m') = m .app x in
run-mealy (f b) m'
-- use a polynomial morphism p ⇒ Free q to convert a Mealy
-- machine implementing q to a Mealy machine implementing p
prog→mealy : {p q : Poly}
→ (f : p ⇒ Free q) → Mealy q → Mealy p
prog→mealy {p = p} {q = q} f m .app a =
let (b , m') = run-mealy (f a) m in
(b , prog→mealy f m')
```
## Effects
The fact that Mealy machines represent *stateful* computations, yet can be used to run programs specified by morphisms of polynomial functors, illustrates the generality of our approach: the "functions" specified by a polynomial functor qua interface need not be *pure* functions.
This gives us another perspective on the meaning of a polynomial morphism `f : p ⇒ Free q`: it represents a program of interface `p` that uses *effects* encoded by the interface `q`. A Mealy machine implementing `q` then corresponds to Ahman and Bauer's notion of a *runner* for algebraic effects.
For instance, one such effect we can represent in this way is *state* itself. Specifically, given a type `S` of states, we can define the interface `State S` as follows:
```agda
-- type of inputs for the state effect
data StateI (S : Set) : Set where
get : StateI S
put : S → StateI S
State : Set → Poly
State S .fst = StateI S
State S .snd get = S
State S .snd (put s) = ⊤
```
We can then straightforwardly implement this effect with a Mealy machine as follows:
```agda
state-mealy : {S : Set} → S → Mealy (State S)
state-mealy s .app get = (s , state-mealy s)
state-mealy s .app (put s') = (_ , state-mealy s')
```
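For instance, here is a small sketch (not part of the original development; `modify` is a hypothetical helper) of a generic state-update program written against this interface, using the `call` syntax introduced earlier:
```agda
-- sketch: read the current state, then overwrite it with f applied to it
modify : {S : Set} → (S → S) → Free (State S) ⊤
modify f =
  call[ s ← get ]
  call[ _ ← put (f s) ]
  return tt
```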
## Sum of Mealy Machines
We can also combine Mealy machines implementing different interfaces using the sum of polynomials:
```agda
-- closure of Mealy machines under binary sum
_mealy⊕_ : {p q : Poly}
→ Mealy p → Mealy q → Mealy (p ⊕ q)
_mealy⊕_ m₁ m₂ .app (inl a) =
let (b , m') = m₁ .app a in (b , (m' mealy⊕ m₂))
_mealy⊕_ m₁ m₂ .app (inr c) =
let (d , m') = m₂ .app c in (d , (m₁ mealy⊕ m'))
-- closure of Mealy machines under nullary sum
mealyzero : Mealy zeroPoly
mealyzero .app ()
```
## Example: Fibonacci
As an example, we can implement a Mealy machine that outputs the Fibonacci numbers:
```agda
module example4 where
data Nat : Set where
zero : Nat
succ : Nat → Nat
plus : Nat → Nat → Nat
plus zero n = n
plus (succ m) n = succ (plus m n)
fib-update : (⊤ , λ _ → Nat) ⇒ Free (State (Nat × Nat))
fib-update _ =
call[ (x , y) ← get ]
call[ _ ← put (y , plus x y) ]
return x
fib : Mealy (⊤ , λ _ → Nat)
fib = prog→mealy fib-update (state-mealy (zero , succ zero))
```
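As a quick sanity check, here is a sketch of mine (the module `example4a` and the name `first-three` are hypothetical, not part of the original slides) stepping the machine three times to read off the first three Fibonacci numbers, namely 0, 1, 1:
```agda
module example4a where
  open example4
  -- stepping the fib machine three times yields 0, 1, 1
  first-three : Nat × Nat × Nat
  first-three =
    let p₁ = fib .app tt
        p₂ = p₁ .snd .app tt
        p₃ = p₂ .snd .app tt
    in (p₁ .fst , p₂ .fst , p₃ .fst)
```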
# Dependent Polynomials
We come now to the heart of the matter: representing *specifications* of program modules in a way that allows for *compositional verification*.
To this end, we introduce the notion of *dependent polynomials*. Given a polynomial `p = (A , B)`, a *dependent polynomial* over `p` consists of a family of types `C : A → Set` and a family of types
`D : (x : A) → C x → B x → Set`.
```agda
DepPoly : Poly → Set₁
DepPoly (A , B) = Σ (A → Set) (λ C → (x : A) → C x → B x → Set)
```
Intuitively, if `p = (A , B)` represents the interface of a dependent function `f : (x : A) → B x`, then a dependent polynomial over `p` represents a *precondition* `C` for the inputs to that function, and a *postcondition* `D` on the outputs of that function.
A dependent polynomial `r = (C , D)` over a polynomial `p = (A , B)` can be realized as an endofunctor on the category of *type families*, lying over the realization of `p`: it maps a family `F : E → Set` to a family over `p ⊙ E`, as follows:
```agda
⊙Dep : (p : Poly) → DepPoly p → (E : Set) → (E → Set) → p ⊙ E → Set
⊙Dep (A , B) (C , D) E F (a , f) =
Σ (C a) (λ c → (b : B a) → D a c b → F (f b))
```
Given a functor `F : Set → Set` and a *dependent functor* `G : (X : Set) → (X → Set) → F X → Set` over `F`, the type of morphisms from `r` to `G` *over* a polynomial morphism `f : p ⇒ F` is defined as follows:
```agda
⇒Dep : (p : Poly) → DepPoly p → {F : Set → Set}
→ ((X : Set) → (X → Set) → F X → Set)
→ (f : p ⇒ F) → Set
⇒Dep (A , B) (C , D) G f =
(a : A) (c : C a) → G (B a) (D a c) (f a)
```
## Dependent Free Monads
We can then define the *dependent free monad* on a dependent polynomial:
```agda
data FreeDep (p : Poly) (r : DepPoly p)
(E : Set) (F : E → Set) : Free p E → Set where
returnD : {e : E} → F e → FreeDep p r E F (return e)
bindD : {a : p .fst} (c : r .fst a)
→ {k : (p .snd a) → Free p E}
→ ((b : p .snd a) → r .snd a c b
→ FreeDep p r E F (k b))
→ FreeDep p r E F (bind a k)
-- monadic bind for FreeDep
>>=Dep : {p : Poly} {r : DepPoly p}
→ {E : Set} {F : E → Set} {G : Set} {H : G → Set}
→ (f : Free p E) → FreeDep p r E F f
→ {g : E → Free p G}
→ ((e : E) → F e → FreeDep p r G H (g e))
→ FreeDep p r G H (f >>= g)
>>=Dep (return e) (returnD f) gg = gg e f
>>=Dep (bind a k) (bindD c h) gg =
bindD c (λ b x → >>=Dep (k b) (h b x) gg)
```
This defines a type of *verifications* of computations `f : Free p E`: if the inputs passed by `f` to the interface `p` satisfy their preconditions, and the outputs of those calls are assumed to satisfy their postconditions, then the result returned by `f` satisfies its postcondition.
Hence given a program module represented by a polynomial morphism `f : p ⇒ Free q`, and dependent polynomials `r` and `s` over `p` and `q`, respectively, a dependent polynomial morphism `g` from `r` to `FreeDep q s` over `f` represents a *verification* that `f`'s implementation of `p` satisfies its specification `r`, assuming that the implementation of `q` satisfies its specification `s`.
By the same token, dependent polynomials are closed under sums:
```agda
-- sums of dependent polynomials
sumDep : (U : Set) (p : U → Poly)
→ (r : (u : U) → DepPoly (p u)) → DepPoly (sum U p)
sumDep U p r .fst (u , x) = r u .fst x
sumDep U p r .snd (u , x) y = r u .snd x y
-- binary coproduct of dependent polynomials
depPoly⊕ : {p q : Poly} (r : DepPoly p) (s : DepPoly q)
→ DepPoly (p ⊕ q)
depPoly⊕ (A , B) (C , D) .fst (inl a) = A a
depPoly⊕ (A , B) (C , D) .fst (inr c) = C c
depPoly⊕ (A , B) (C , D) .snd (inl a) y = B a y
depPoly⊕ (A , B) (C , D) .snd (inr c) y = D c y
-- nullary coproduct of dependent polynomials
depPolyzero : DepPoly zeroPoly
depPolyzero .fst ()
depPolyzero .snd ()
```
And likewise for compositions of dependent polynomial morphisms:
```agda
-- mapping a dependent polynomial morphism over
-- a free dependent monad
FreeDep⇒ : {p q : Poly} {r : DepPoly p} {s : DepPoly q}
→ {E : Set} {F : E → Set}
→ (f : Free p E) → FreeDep p r E F f
→ (g : p ⇒ Free q) → (⇒Dep p r (FreeDep q s) g)
→ FreeDep q s E F (Free⇒ f g)
FreeDep⇒ (return e) (returnD f) g gg = returnD f
FreeDep⇒ (bind a k) (bindD c h) g gg =
>>=Dep (g a) (gg a c) (λ e x → FreeDep⇒ (k e) (h e x) g gg)
-- Kleisli composition for the free dependent monad
∘Dep : {p : Poly} {q : Poly} {r : Poly}
→ {s : DepPoly p} {t : DepPoly q} {v : DepPoly r}
→ {f : p ⇒ Free q} {g : q ⇒ Free r}
→ (⇒Dep p s (FreeDep q t) f)
→ (⇒Dep q t (FreeDep r v) g)
→ ⇒Dep p s (FreeDep r v) (f ∘ g)
∘Dep ff gg a c = FreeDep⇒ _ (ff a c) _ gg
-- multi-ary Kleisli composition of
-- dependent polynomial morphisms
compDep : (U : Set) {p : Poly} {q : U → Poly} {r : Poly}
{s : DepPoly p} {t : (u : U) → DepPoly (q u)}
{v : DepPoly r} {f : p ⇒ Free (sum U q)}
→ {g : (u : U) → q u ⇒ Free r}
→ (⇒Dep p s (FreeDep (sum U q) (sumDep U q t)) f)
→ ((u : U) → ⇒Dep (q u) (t u) (FreeDep r v) (g u))
→ ⇒Dep p s (FreeDep r v) (comp U f g)
compDep U ff gg = ∘Dep ff (λ (u , x) c → gg u x c)
```
## Example: Verified Fold
As an example, we can define a family of verifications over the previously defined `fold` module, which captures the general pattern of proofs by *induction* over the structure of the input to the fold:
```agda
module example5 where
open example1
open example2
-- specification for the fold
FoldSpec : (A B C : Set) (D : A → Set)
→ (E : (a : A) → D a → List B → C → Set)
→ DepPoly (Fold A B C)
FoldSpec A B C D E .fst (a , _) = D a
FoldSpec A B C D E .snd (a , bs) d c = E a d bs c
-- specification for the base case
BaseSpec : (A B C : Set) (D : A → Set)
→ (E : (a : A) → D a → List B → C → Set)
→ DepPoly (Base A C)
BaseSpec A B C D E .fst a = D a
BaseSpec A B C D E .snd a d c = E a d nil c
-- specification for the recursive step
StepSpec : (A B C : Set) (D : A → Set)
→ (E : (a : A) → D a → List B → C → Set)
→ DepPoly (Step A B C)
StepSpec A B C D E .fst (a , _ , c) =
Σ (D a) (λ d → Σ (List B) (λ bs → E a d bs c))
StepSpec A B C D E .snd (a , b , _) (d , bs , _) c =
E a d (cons b bs) c
-- verification of the fold
foldInd : {A B C : Set} {D : A → Set}
→ {E : (a : A) → D a → List B → C → Set}
→ ⇒Dep _ (FoldSpec A B C D E)
(FreeDep _ (sumDep foldlabels _
(λ{ base → BaseSpec A B C D E
; step → StepSpec A B C D E})))
fold
foldInd (_ , nil) d =
bindD d (λ _ e → returnD e)
foldInd (a , cons b bs) d =
>>=Dep _ (foldInd (a , bs) d) (λ _ e →
bindD (d , (bs , e)) (λ _ e' →
returnD e'))
```
This allows us to reduce, e.g., the problem of verifying that our implementation of `append` really implements list append to the problem of verifying that the base case and recursive step of the fold satisfy their respective specifications. To do so, we first define a relational specification for the `append` operation:
```agda
-- relational specification for the append operation
data Append {A : Set} : List A → ⊤ → List A → List A → Set where
append-nil : {xs : List A} → Append xs _ nil xs
append-cons : {xs ys zs : List A} {x : A}
→ Append ys _ xs zs
→ Append ys _ (cons x xs) (cons x zs)
```
Then verification of the previously-defined components is straightforward:
```agda
-- verification of the base and step of append
appendBaseSpec : {A : Set}
→ ⇒Dep _ (BaseSpec (List A) A (List A) (λ _ → ⊤) Append)
(FreeDep _ depPolyzero) appendBase
appendBaseSpec _ _ = returnD append-nil
appendStepSpec : {A : Set}
→ ⇒Dep _ (StepSpec (List A) A (List A) (λ _ → ⊤) Append)
(FreeDep _ depPolyzero) appendStep
appendStepSpec (_ , _ , _) (_ , (_ , e)) = returnD (append-cons e)
-- verification of the append operation
appendSpec : {A : Set}
→ ⇒Dep _ (FoldSpec (List A) A (List A) (λ _ → ⊤) Append)
(FreeDep _ depPolyzero)
append
appendSpec =
compDep _ foldInd
(λ{ base → appendBaseSpec
; step → appendStepSpec})
```
## Dependent Mealy Machines
We can likewise define a notion of *dependent Mealy machine* for a dependent polynomial `r` over a polynomial `p`, as follows:
```agda
record DepMealy (p : Poly) (r : DepPoly p) (m : Mealy p) : Set where
coinductive
field
appD : (x : p .fst) (y : r .fst x)
→ r .snd x y (m .app x .fst)
× DepMealy p r (m .app x .snd)
open DepMealy public
```
A dependent Mealy machine as above over a Mealy machine `m` witnesses that, if all inputs to `m` satisfy their preconditions, then all outputs produced by `m` will satisfy their postconditions, and the state of `m` will evolve in such a way that this continues to hold.
## Example: State Invariants
A simple dependent Mealy machine witnesses that, if the initial state `s` of `state-mealy s` satisfies an invariant `T s`, then all subsequent states of the machine will also satisfy `T`, provided every state `put` into the machine satisfies `T` as well:
```agda
-- specification for state invariants
Invariant : (S : Set) (T : S → Set) → DepPoly (State S)
Invariant S T .fst get = ⊤
Invariant S T .fst (put s) = T s
Invariant S T .snd get _ s = T s
Invariant S T .snd (put s) t _ = ⊤
-- verification of a state invariant
invariant : {S : Set} {T : S → Set}
→ (s : S) → T s
→ DepMealy (State S) (Invariant S T) (state-mealy s)
invariant s t .appD get _ = (t , (invariant s t))
invariant s t .appD (put s') t' = (_ , invariant s' t')
```
## Compositionality for Dependent Mealy Machines
Dependent Mealy machines are also closed under the same operations defined previously for Mealy machines:
```agda
-- run a verified Mealy machine on a verified program
run-mealyD : {p : Poly} {r : DepPoly p} {E : Set} {F : E → Set}
→ {e : Free p E} {m : Mealy p}
→ FreeDep p r E F e → DepMealy p r m
→ (F (run-mealy e m .fst)
× DepMealy p r (run-mealy e m .snd))
run-mealyD (returnD f) mm = (f , mm)
run-mealyD (bindD c h) mm =
let (d , mm') = mm .appD _ c in
run-mealyD (h _ d) mm'
-- apply a verified program to a verified Mealy
-- machine to obtain a new verified Mealy machine
prog→mealyD : {p q : Poly} {r : DepPoly p} {s : DepPoly q}
→ {f : p ⇒ Free q} {m : Mealy q}
→ (⇒Dep p r (FreeDep q s) f) → DepMealy q s m
→ DepMealy p r (prog→mealy f m)
prog→mealyD ff mm .appD a c =
let (b , mm') = run-mealyD (ff a c) mm in
(b , (prog→mealyD ff mm'))
-- closure of DepMealy under binary coproducts
_depMealy⊕_ : {p q : Poly} {r : DepPoly p} {s : DepPoly q}
→ {m₁ : Mealy p} {m₂ : Mealy q}
→ DepMealy p r m₁ → DepMealy q s m₂
→ DepMealy (p ⊕ q) (depPoly⊕ r s) (m₁ mealy⊕ m₂)
_depMealy⊕_ m₁ m₂ .appD (inl a) c =
let (b , m') = m₁ .appD a c in
(b , (m' depMealy⊕ m₂))
_depMealy⊕_ m₁ m₂ .appD (inr c) d =
let (e , m') = m₂ .appD c d in
(e , (m₁ depMealy⊕ m'))
```
## Example: Verified Fibonacci
Using this, we can verify that the `fib` Mealy machine defined previously actually computes Fibonacci numbers.
```agda
module example6 where
open example4
-- relational specification of Fibonacci numbers
data IsFib : Nat → Nat → Set where
isFibZero : IsFib zero zero
isFibOne : IsFib (succ zero) (succ zero)
isFibRec : {x y z : Nat}
→ IsFib x y → IsFib (succ x) z
→ IsFib (succ (succ x)) (plus y z)
-- specification for fib
FibDep : DepPoly (⊤ , λ _ → Nat)
FibDep .fst _ = ⊤
FibDep .snd _ _ y = Σ Nat (λ x → IsFib x y)
-- invariant for state of fib machine
FibInvariant : (Nat × Nat) → Set
FibInvariant (y , z) =
Σ Nat (λ x → IsFib x y × IsFib (succ x) z)
-- verification of fib-update
fib-updateSpec :
⇒Dep _ FibDep
(FreeDep _ (Invariant _ FibInvariant))
fib-update
fib-updateSpec _ _ =
bindD _ (λ _ (x , fy , fz) →
bindD (succ x , fz , isFibRec fy fz) (λ _ _ →
returnD (_ , fy)))
-- verification of fib
fibSpec : DepMealy _ FibDep fib
fibSpec =
prog→mealyD fib-updateSpec
(invariant _ (zero , isFibZero , isFibOne))
```
# Categorical Structure
Zooming out and taking stock, we have the following:
Let $\mathbf{Int}$ be the category whose objects are polynomial functors `p, q` and whose morphisms are polynomial morphisms `f : p ⇒ Free q`. This is the category of *interfaces* for program modules. Moreover, this category has a *monoidal structure* given by the coproduct.
Let $\mathbf{Spec}$ be the category whose objects are pairs `(p , r)` where `p` is a polynomial functor and `r` is a dependent polynomial over `p`, and whose morphisms are pairs `(f , g)` where `f : p ⇒ Free q` is a polynomial morphism and `g : ⇒Dep p r (FreeDep q s) f` is a dependent polynomial morphism from `r` to `s` over `f`. This is the category of *specifications* for program modules. Like $\mathbf{Int}$, this category has a monoidal structure given by the coproduct, and moreover, the projection functor $\pi : \mathbf{Spec} \to \mathbf{Int}$ that forgets the dependent polynomial structure is a *monoidal functor* with respect to this structure. Our first main compositionality theorem, that we can compose verifications of program modules using wiring diagrams, essentially amounts to this fact.
Moreover, `Mealy` is a *presheaf* on $\mathbf{Int}$ that assigns to each polynomial functor `p` the set of Mealy machines with interface `p`; the functoriality of this assignment is given by the function `prog→mealy`. This presheaf is in fact a *lax monoidal functor* $\mathbf{Int}^{op} \to \mathbf{Set}$, where the lax structure is given by the `mealy⊕` and `mealyzero` operations. Similarly, `DepMealy` is a presheaf on $\mathbf{Spec}$ that assigns to each pair `(p , r)` the set of dependent Mealy machines with interface `p` and specification `r`, with functoriality given by the function `prog→mealyD`. The fact that `DepMealy p r` depends upon an element of `Mealy p` then corresponds to a *monoidal natural transformation* $\texttt{DepMealy} \Rightarrow \texttt{Mealy} \circ \pi$.
This accounts for essentially all of the structure we have used to obtain compositionality results for program modules and their specifications. We can therefore abstract from this situation to obtain a general setting in which such compositional verification is possible, namely, if we have:
* A monoidal category $\mathbf{Int}$, and a lax monoidal functor $\text{Sys} : \mathbf{Int}^{op} \to \mathbf{Set}$ that assigns to each interface `i` a set of *systems* with interface `i`.
* A monoidal category $\mathbf{Spec}$ with a monoidal functor $\pi : \mathbf{Spec} \to \mathbf{Int}$, and a lax monoidal functor $\text{Verif} : \mathbf{Spec}^{op} \to \mathbf{Set}$ that assigns to each specification `s` a set of *verified systems* with specification `s`.
* A monoidal natural transformation $\text{Verif} \Rightarrow \text{Sys} \circ \pi$ that assigns to each verified system its *underlying system*.
## Concurrency and Relational Verification
So far, the program modules we have considered have been *sequential*, in that they may only call one of their dependencies at a time. However, our operational semantics in terms of Mealy machines does not require this, and we can extend our framework to allow for *concurrent* program modules that may call multiple dependencies at once.
To this end, we make use of the *conjunctive sum* of types, which is defined as $A \vee B = A + B + (A \times B)$, representing the possibility of either an $A$, a $B$, or both at the same time. In Agda, we can define this as follows:
```agda
data _∨_ (A B : Set) : Set where
left : A → A ∨ B
right : B → A ∨ B
both : A → B → A ∨ B
```
Using this, we define the *parallel sum* of polynomials as follows:
```agda
_∣∣_ : Poly → Poly → Poly
_∣∣_ (A , B) (C , D) .fst = A ∨ C
_∣∣_ (A , B) (C , D) .snd (left a) = B a
_∣∣_ (A , B) (C , D) .snd (right c) = D c
_∣∣_ (A , B) (C , D) .snd (both a c) = B a × D c
```
Thinking of this as an interface to a pair of functions/program modules, it allows us to call either one or both of the functions/modules at once, and to obtain a corresponding result of the appropriate type.
The parallel sum of dependent polynomials is defined similarly:
```agda
_∣∣Dep_ : {p q : Poly} → DepPoly p → DepPoly q → DepPoly (p ∣∣ q)
_∣∣Dep_ (A , B) (C , D) .fst (left a) = A a
_∣∣Dep_ (A , B) (C , D) .fst (right c) = C c
_∣∣Dep_ (A , B) (C , D) .fst (both a c) = A a × C c
_∣∣Dep_ (A , B) (C , D) .snd (left a) y = B a y
_∣∣Dep_ (A , B) (C , D) .snd (right c) y = D c y
_∣∣Dep_ (A , B) (C , D) .snd (both a c) (y , z) (x , w) =
(B a y x) × (D c z w)
```
The parallel sum is in fact a (symmetric) monoidal product on the category of polynomials, with unit `zeroPoly`. We can exhibit this by defining the following operations that allow us to put program modules *in parallel* using the parallel sum:
```agda
-- injections into the parallel sum
leftProg : {p q : Poly} {E : Set} → Free p E → Free (p ∣∣ q) E
leftProg (return e) = return e
leftProg (bind a k) = bind (left a) (λ b → leftProg (k b))
rightProg : {p q : Poly} {F : Set} → Free q F → Free (p ∣∣ q) F
rightProg (return f) = return f
rightProg (bind c h) = bind (right c) (λ d → rightProg (h d))
-- parallel composition of programs
bothProg : {p q : Poly} {E F : Set}
→ Free p E → Free q F → Free (p ∣∣ q) (E × F)
bothProg (return e) (return f) = return (e , f)
bothProg (bind a k) (return f) =
bind (left a) (λ b → bothProg (k b) (return f))
bothProg (return e) (bind c h) =
bind (right c) (λ d → bothProg (return e) (h d))
bothProg (bind a k) (bind c h) =
bind (both a c) (λ (b , d) → bothProg (k b) (h d))
-- parallel composition of program modules
_∣∣Prog_ : {p q r s : Poly} → (p ⇒ Free r) → (q ⇒ Free s)
→ ((p ∣∣ q) ⇒ Free (r ∣∣ s))
_∣∣Prog_ f g (left a) = leftProg (f a)
_∣∣Prog_ f g (right c) = rightProg (g c)
_∣∣Prog_ f g (both a c) = bothProg (f a) (g c)
```
And similarly for dependent polynomial morphisms/verifications:
```agda
-- injections into the parallel sum of dependent polynomials
leftDep : {p q : Poly} {r : DepPoly p} {s : DepPoly q}
→ {X : Set} {Y : X → Set}
→ {f : Free p X}
→ FreeDep p r X Y f
→ FreeDep (p ∣∣ q) (r ∣∣Dep s) X Y (leftProg f)
leftDep (returnD f) = returnD f
leftDep (bindD c h) = bindD c (λ d e → leftDep (h d e))
rightDep : {p q : Poly} {r : DepPoly p} {s : DepPoly q}
→ {X : Set} {Y : X → Set}
→ {g : Free q X}
→ FreeDep q s X Y g
→ FreeDep (p ∣∣ q) (r ∣∣Dep s) X Y (rightProg g)
rightDep (returnD f) = returnD f
rightDep (bindD c h) = bindD c (λ d e → rightDep (h d e))
-- parallel composition of verifications
bothDep : {p q : Poly} {r : DepPoly p} {s : DepPoly q}
→ {X Y : Set} {Z : X → Set} {W : Y → Set}
→ {f : Free p X} {g : Free q Y}
→ FreeDep p r X Z f → FreeDep q s Y W g
→ FreeDep (p ∣∣ q) (r ∣∣Dep s) (X × Y)
(λ (x , y) → Z x × W y) (bothProg f g)
bothDep (returnD z) (returnD w) = returnD (z , w)
bothDep (bindD c h) (returnD w) =
bindD c (λ b x → bothDep (h b x) (returnD w))
bothDep (returnD z) (bindD d k) =
bindD d (λ b x → bothDep (returnD z) (k b x))
bothDep (bindD c h) (bindD d k) =
bindD (c , d) (λ (a , b) (x , y) → bothDep (h a x) (k b y))
-- parallel composition of dependent polynomial morphisms
_∣∣DepProg_ : {p q r s : Poly}
→ {p' : DepPoly p} {q' : DepPoly q}
→ {r' : DepPoly r} {s' : DepPoly s}
→ {f : p ⇒ Free r} {g : q ⇒ Free s}
→ ⇒Dep p p' (FreeDep r r') f → ⇒Dep q q' (FreeDep s s') g
→ ⇒Dep (p ∣∣ q) (p' ∣∣Dep q')
(FreeDep (r ∣∣ s) (r' ∣∣Dep s')) (f ∣∣Prog g)
_∣∣DepProg_ ff gg (left a) b = leftDep (ff a b)
_∣∣DepProg_ ff gg (right c) d = rightDep (gg c d)
_∣∣DepProg_ ff gg (both a c) (b , d) = bothDep (ff a b) (gg c d)
```
Hence, since our generalized account of compositional verification requires only a monoidal structure on the category of interfaces and on the category of specifications, the same framework applies to compositional verification of concurrent program modules: we simply use the parallel sum as the monoidal structure instead of the coproduct.
To complete the picture, we also show that Mealy machines and their verifications are closed under the parallel sum:
```agda
-- parallel sum of Mealy machines
_mealy||_ : {p q : Poly} → Mealy p → Mealy q → Mealy (p ∣∣ q)
_mealy||_ m₁ m₂ .app (left a) =
let (b , m') = m₁ .app a in
(b , m' mealy|| m₂)
_mealy||_ m₁ m₂ .app (right c) =
let (d , m') = m₂ .app c in
(d , m₁ mealy|| m')
_mealy||_ m₁ m₂ .app (both a c) =
let ((b , m₁') , (d , m₂')) = (m₁ .app a , m₂ .app c) in
((b , d) , m₁' mealy|| m₂')
-- parallel sum of dependent Mealy machines
_depMealy||_ : {p q : Poly} {r : DepPoly p} {s : DepPoly q}
→ {m₁ : Mealy p} {m₂ : Mealy q}
→ DepMealy p r m₁ → DepMealy q s m₂
→ DepMealy (p ∣∣ q) (r ∣∣Dep s) (m₁ mealy|| m₂)
_depMealy||_ m₁ m₂ .appD (left a) c =
let (b , m') = m₁ .appD a c in
(b , (m' depMealy|| m₂))
_depMealy||_ m₁ m₂ .appD (right c) d =
let (b , m') = m₂ .appD c d in
(b , (m₁ depMealy|| m'))
_depMealy||_ m₁ m₂ .appD (both a c) (b , d) =
let ((e , m₁') , (f , m₂')) = (m₁ .appD a b , m₂ .appD c d) in
((e , f) , (m₁' depMealy|| m₂'))
```
Hence we have another instance of our general framework for compositional verification, this time for concurrent program modules and their operational semantics via parallel composition of Mealy machines.

A slight caveat is that the notion of *wiring diagram* for the parallel sum differs from the one for the coproduct: the latter possesses a *codiagonal* `p ⊕ p ⇒ Free p`, while the former does not. Intuitively, this means that, when composing parallel programs/processes, a process cannot be "in two places at once," since otherwise we might have race conditions (and indeed, by design, concurrent programs structured in this way will be race-free). In the jargon of monoidal category theory, `⊕` is a *cocartesian* monoidal structure, while `∣∣` is only an (affine) *symmetric monoidal* structure. Hence our wiring diagrams for composition of program modules via the parallel sum must be *affine*: wires may be deleted, but not duplicated. Unlike (co)cartesian wiring diagrams, affine wiring diagrams do not have as clean a representation in Agda as an inductive type, so we do not present them here and leave their use somewhat informal. The general principles of compositional verification nonetheless still apply, and we can use the same techniques to verify concurrent program modules.
A novel possibility that arises from the use of the parallel sum is the ability to verify *relational* properties of program modules. Consider e.g. two polynomials `p q : Poly` and a dependent polynomial `r : DepPoly (p ∣∣ q)`. Such a dependent polynomial is equivalent to the data of a dependent polynomial `r₁ : DepPoly p`, a dependent polynomial `r₂ : DepPoly q`, and a dependent polynomial `r₃ : DepPoly (p ⊗ q)`, where `⊗` is the *tensor product* of polynomials, defined as follows:
```agda
_⊗_ : Poly → Poly → Poly
(A , B) ⊗ (C , D) = (A × C , λ (a , c) → B a × D c)
```
(Note that `p ∣∣ q = p ⊕ q ⊕ (p ⊗ q)`.)
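For instance, the relational (`⊗`) component of such a specification can be projected out as follows (a small sketch, not part of the original development; the name `relPart` is mine):
```agda
-- sketch: the ⊗-component of a specification over a parallel sum,
-- i.e. the relational part of its pre- and postconditions
relPart : {p q : Poly} → DepPoly (p ∣∣ q) → DepPoly (p ⊗ q)
relPart r .fst (a , c) = r .fst (both a c)
relPart r .snd (a , c) = r .snd (both a c)
```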
In particular, the precondition of the component `r₃` may be a *relation* between the inputs of `p` and `q`, and its postcondition may be a relation between the outputs of `p` and `q`. Then, given polynomials `p, q, r, s`, dependent polynomials `u : DepPoly (p ∣∣ q)` and `v : DepPoly (r ∣∣ s)` representing relational specifications for `p, q` and for `r, s`, respectively, and program modules `f : p ⇒ Free r` and `g : q ⇒ Free s`, a verification `h : ⇒Dep (p ∣∣ q) u (FreeDep (r ∣∣ s) v) (f ∣∣Prog g)` amounts to a proof that `f` and `g` jointly preserve the relations encoded by `u` and `v`. This can be used, e.g., to verify properties such as *monotonicity* of program modules with respect to orderings on their inputs and outputs, noninterference, etc., which could not previously be expressed using a simple (unary) precondition-postcondition specification. And because these verifications are just dependent polynomial morphisms, they can still be composed using the same techniques as before, allowing us to verify relational properties of complex program modules by verifying their components separately.
# Conclusion & Future Work
**Automation and implementation:** the framework presented here has been formalized/implemented in Agda, which is particularly well-suited to this kind of work due to its strong support for inductive and coinductive types and its highly extensible syntax.
However, the general principles behind this framework should be applicable in a wide variety of programming languages and proof assistants (Lean, Idris, Istari, etc.).
Moreover, the compositional nature of this framework makes it highly amenable to automation, and I am interested in exploring this further.
**Safety of complex systems:** wiring diagrams make information flow explicit. If all paths through the diagram to a potentially dangerous operation (e.g. network access, file I/O, etc.) are mediated by verified modules (that e.g. escape or sanitize untrusted inputs), then the entire system can be trusted to be safe (no "Robert DROP TABLES" incidents, etc.)
Moreover, in a world where digital systems are increasingly built out of opaque black-box components (API calls, deep learning models, etc.), techniques such as the above, which formally guarantee safety and reliability, can help make these systems auditable and trustworthy.
## Thank you!