Provably Safe Neural Network Controllers via Differential Dynamic Logic

We present the first approach that combines differential dynamic logic (dL) with neural network (NN) verification. By joining forces, we can exploit the efficiency of NN verification tools while retaining the rigor of dL. This yields infinite-time horizon safety guarantees for neural network control systems.
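
To give an idea of the shape of such a guarantee: in dL, infinite-time horizon safety of a sampled-data control loop is typically expressed by a formula of the following form, where the box modality $[\alpha]\varphi$ states that $\varphi$ holds after every run of $\alpha$ and the loop ${}^*$ captures the unbounded horizon (the names below are illustrative, not taken from the paper):

$$\psi_{\mathit{init}} \rightarrow [\,(\mathit{ctrl};\ \mathit{plant})^{*}\,]\ \psi_{\mathit{safe}}$$

Here $\mathit{ctrl}$ abstracts the controller's decisions and $\mathit{plant}$ is a differential equation modelling the physics; NN verification tools can then be used to check that the network conforms to the $\mathit{ctrl}$ abstraction.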

10. Dec 2024

Towards Combining the Cognitive Abilities of Large Language Models with the Rigor of Deductive Program Verification

We evaluate the potential of Large Language Models (specifically GPT-3.5 and GPT-4o) for the generation of code annotations in the Java Modeling Language (JML), using a prototypical integration of the Java verification tools KeY and JJBMC with OpenAI's API.
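
For readers unfamiliar with JML: the pipeline asks the LLM to propose contracts like the one below, which the verifiers then check. The method and its contract are a hypothetical illustration, not taken from the paper's benchmarks.

```java
// Hypothetical example: a method together with the kind of JML contract
// an LLM would be asked to generate and the verifiers would then check.
public class Abs {
    /*@ public normal_behavior
      @   requires x != Integer.MIN_VALUE;
      @   ensures  \result >= 0;
      @   ensures  \result == x || \result == -x;
      @*/
    public static int abs(int x) {
        // The precondition rules out Integer.MIN_VALUE, whose negation overflows.
        return x < 0 ? -x : x;
    }
}
```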

27. Oct 2024

An Information-Flow Perspective on Algorithmic Fairness

An exploration of the relationships between qualitative and quantitative information flow and various causal and non-causal fairness definitions, with applications to program analysis.
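
As one concrete instance of such a relationship (a standard observation, stated here for illustration rather than quoted from the paper): for a binary classifier $\hat{Y}$ and protected attribute $A$, the demographic-parity fairness condition holds exactly when no quantitative information flows from $A$ to $\hat{Y}$:

$$\forall a, a':\ \Pr[\hat{Y}=1 \mid A=a] \;=\; \Pr[\hat{Y}=1 \mid A=a'] \quad\Longleftrightarrow\quad I(A;\hat{Y}) = 0,$$

where $I$ denotes mutual information.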

20. Feb 2024

Quantifying Software Correctness by Combining Architecture Modeling and Formal Program Analysis

A formal approach for the quantitative assessment of service-oriented software that combines high-level software architecture modelling with deductive verification.
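
Purely to illustrate the kind of quantity such an assessment yields (this generic decomposition is given for exposition and is not the paper's exact model): given a usage profile assigning probability $p(s)$ to each service call $s \in \mathcal{S}$, and a per-service correctness probability obtained from program analysis, system-level correctness can be composed as

$$\Pr[\text{correct}] \;=\; \sum_{s \in \mathcal{S}} p(s) \cdot \Pr[\text{correct} \mid s].$$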

8. Jan 2024

Geometric Path Enumeration for Equivalence Verification of Neural Networks

We present and evaluate an approach for proving equivalence properties of neural networks and show that the verification of $\epsilon$-equivalence is coNP-complete.
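
The property in question can be stated as follows (the standard notion of $\epsilon$-equivalence; the choice of norm is left open here): two networks $f, g\colon X \to \mathbb{R}^m$ are $\epsilon$-equivalent on an input set $X$ iff

$$\forall x \in X:\quad \lVert f(x) - g(x) \rVert \le \epsilon.$$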

21. Dec 2021

Quantifying Software Reliability via Model-Counting

We present and evaluate a pipeline for quantifying the degree to which C programs adhere to their specifications.
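
Assuming a finite input space $I$ (e.g., fixed-width machine integers), one natural way to phrase the quantity being computed is the fraction of inputs on which program $P$ satisfies its specification $\varphi$, obtained via model counting over a logical encoding of $P$ and $\varphi$ (this formulation is illustrative):

$$\mathit{rel}(P) \;=\; \frac{\lvert\{\, i \in I : P(i) \models \varphi \,\}\rvert}{\lvert I \rvert}.$$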

23. Aug 2021