Events 2021

POSTPONED! Model Checking Under Weak Memory Concurrency

Viktor Vafeiadis, Max Planck Institute for Software Systems
03 Mar 2021, 12:15 pm - 1:15 pm
Saarbrücken, building Virtual, room Virtual
Joint Lecture Series
Model checking is an automatic approach for testing and verifying programs, and has proven to be very effective in a number of settings ranging from hardware designs to intricate low-level systems software. In this talk, I will present our recent research on applying model checking to weakly consistent concurrent programs. I will explain the key challenges involved in making model checking effective in this setting and how to address them.
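To make the challenge concrete, here is a small illustration (a sketch, not material from the talk): the classic store-buffering litmus test, and a naive model checker that enumerates every interleaving under sequential consistency. The outcome r1 = r2 = 0, which real weak memory models allow, never shows up — handling such extra behaviors soundly and efficiently is exactly what makes model checking under weak memory hard.

```python
from itertools import permutations

# Store-buffering (SB) litmus test, shared x and y initially 0:
#   Thread 1: x = 1; r1 = y        Thread 2: y = 1; r2 = x

def run(schedule):
    """Execute one interleaving of the SB test under sequential
    consistency and return the final (r1, r2)."""
    mem = {"x": 0, "y": 0}
    regs = {}
    ops = {
        "a": lambda: mem.__setitem__("x", 1),           # T1: x = 1
        "b": lambda: regs.__setitem__("r1", mem["y"]),  # T1: r1 = y
        "c": lambda: mem.__setitem__("y", 1),           # T2: y = 1
        "d": lambda: regs.__setitem__("r2", mem["x"]),  # T2: r2 = x
    }
    for step in schedule:
        ops[step]()
    return (regs["r1"], regs["r2"])

def interleavings():
    """All schedules that preserve each thread's program order
    (a before b, c before d)."""
    for p in permutations("abcd"):
        if p.index("a") < p.index("b") and p.index("c") < p.index("d"):
            yield p

outcomes = {run(s) for s in interleavings()}
print(sorted(outcomes))  # [(0, 1), (1, 0), (1, 1)] -- never (0, 0)
```

Under a weak memory model such as TSO or C11, (0, 0) is additionally observable, so the checker must explore executions that no interleaving of program steps can produce.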

---

Join Zoom Meeting https://zoom.us/j/91226662016?pwd=UmdLZ01WRW1YUnVmZ0NjQXVBSDQrdz09

Meeting ID: 912 2666 2016 Passcode: jls-20-21

Vellvm: Verifying LLVM IR Code

Steve Zdancewic, University of Pennsylvania
13 Jan 2021, 3:00 pm - 4:00 pm
Saarbrücken, building Virtual, room Virtual
SWS Distinguished Lecture Series
LLVM is an industrial-strength compiler that's used for everything from day-to-day iOS development (in Swift) to pie-in-the-sky academic research projects. This makes the LLVM framework a sweet spot for bug-finding and verification technologies--any improvements to it are amplified across its many applications.

This talk asks the question: what does LLVM code _mean_, and, how can we ensure that LLVM-based tools (compilers, optimizers, code instrumentation passes, etc.) do what they're supposed to -- especially for safety- or security-critical applications? The Verified LLVM project (Vellvm) is our attempt to provide an answer. Vellvm gives a semantics to LLVM IR programs in the Coq interactive theorem prover, which can be used for developing machine-checkable formal properties about LLVM IR programs and transformation passes.

Our approach to modeling LLVM IR semantics uses _interaction trees_, a data structure that is suitable for representing impure, possibly nonterminating programs in dependent type theory. Interaction trees support compositional and modular reasoning about program semantics but are also executable, and hence useful for implementation and testing. We'll see how interaction trees are used in Vellvm and, along the way, we'll get a taste of what LLVM code looks like including some of its trickier semantic aspects. We'll also see (at a high level) how modern interactive theorem provers--in this case, Coq--can be used to verify compiler transformations.

No experience with LLVM or formal verification technologies will be assumed.

---

Please contact the office for the Zoom details.

The Programmer, The Unknown Being: Program Comprehension Research in the Neuroage

Sven Apel, Fachrichtung Informatik - Saarbrücken
13 Jan 2021, 12:15 pm - 1:15 pm
Saarbrücken, building Virtual, room Virtual
Joint Lecture Series
The pivotal role of software in our modern world imposes strong requirements on quality, correctness, and reliability of software systems. The ability to understand program code plays a key role for programmers to fulfill these requirements. Despite significant progress, research on program comprehension has had a fundamental limitation: program comprehension is a cognitive process that cannot be directly observed, which leaves considerable room for (mis)interpretation, uncertainty, and confounding factors. Thus, central questions such as "What makes a good programmer?" and "How should we program?" are surprisingly difficult to answer based on the state of the art.

In this talk, I will report on recent attempts to lift research on program comprehension to a new level. The key idea is to leverage methods from cognitive neuroscience to obtain insights into the cognitive processes involved in program comprehension. Opening the "black box" of human cognition will lead to a breakthrough in understanding the why and how of program comprehension and to a completely new perspective and methodology of measuring program comprehension, with direct implications for programming methodology, language design, and education.