Upcoming events

Toward Practical and Scalable Systems Evaluation for Post-Moore Datacenters

Hejing Li Max Planck Institute for Software Systems
02 Feb 2026, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Proposal
Solid system evaluation under realistic workloads and environments is essential for datacenter network research. Modern datacenter systems are shaped by a wide range of factors - including hardware behavior, software components, and complex interactions between them - whose combined effects on end-to-end performance are often difficult to predict. However, evaluating such systems on physical testbeds is frequently infeasible due to scale, cost, limited experimental control, and the increasing reliance on specialized hardware that may be unavailable or still under development. As a result, researchers often turn to simulation. Existing simulators, however, typically focus on isolated components, such as network protocols, host architectures, or hardware RTL, making it challenging to conduct faithful end-to-end evaluations or to scale experiments to realistic datacenter sizes. The goal of this thesis is to provide researchers with practical and scalable tools and methodologies for faithful end-to-end evaluation of modern datacenter systems. To this end, the thesis is structured around three components. First, it introduces SimBricks, an end-to-end simulation framework that enables the modular composition of best-of-breed simulators, allowing unmodified hardware and software system implementations to be evaluated together within a single virtual testbed. Second, it presents SplitSim, a simulation framework designed to make large-scale end-to-end evaluation practical by supporting mixed-fidelity simulation, controlled decomposition, and efficient resource utilization. Finally, the thesis will include a set of case studies that apply SplitSim to the evaluation of large-scale networked systems, demonstrating a concrete evaluation workflow and distilling lessons on navigating trade-offs between fidelity, scalability, and simulation cost.
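A minimal sketch of the idea behind mixed-fidelity simulation (this is a hypothetical illustration, not the SimBricks or SplitSim API): components implement a common interface, so a detailed but slow model can be swapped for a cheap analytic one without touching the rest of the virtual testbed. All class names and cost constants below are invented for illustration.

```python
class NICModel:
    """Hypothetical common interface every NIC simulator implements."""
    def transmit(self, pkt_bytes: int) -> float:
        raise NotImplementedError  # returns modeled latency in ns

class DetailedNIC(NICModel):
    """Slow, detailed model: per-byte DMA cost plus a pipeline delay (made-up numbers)."""
    def transmit(self, pkt_bytes):
        dma = 50 + 0.8 * pkt_bytes   # hypothetical DMA cost model
        pipeline = 120               # hypothetical fixed pipeline latency
        return dma + pipeline

class FastNIC(NICModel):
    """Cheap analytic model: fixed latency plus serialization at 100 Gbps."""
    def transmit(self, pkt_bytes):
        return 170 + pkt_bytes * 8 / 100  # ns

def run(nic: NICModel, pkts):
    """The rest of the testbed only sees NICModel, so either fidelity plugs in."""
    return sum(nic.transmit(p) for p in pkts)
```

Because `run` depends only on the shared interface, an experiment can use `DetailedNIC` for the few hosts under study and `FastNIC` for the surrounding datacenter, trading fidelity for scalability per component.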

The Skolem Problem: a century-old enigma at the heart of computation

Joël Ouaknine Max Planck Institute for Software Systems
04 Feb 2026, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
It has been described as the most important problem whose decidability is still open: the Skolem Problem asks how to determine algorithmically whether a given integer linear recurrence sequence (such as the Fibonacci numbers) has a zero term. This deceptively simple question arises across a wide range of topics in computer science and mathematics, from program verification and automata theory to number theory and logic. This talk traces the history of the Skolem Problem: from the early 1930s to the current frontier of one of the most enduring open questions in computer science.
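To make the question concrete, here is a small sketch (not part of the talk) that generates the terms of an integer linear recurrence sequence and searches a finite prefix for a zero. Finding a zero proves one exists, but exhausting the bound proves nothing in general: whether a terminating decision procedure exists is exactly the open Skolem Problem.

```python
def find_zero(init, coeffs, bound):
    """Search the first `bound` terms of the linear recurrence sequence
    u_n = coeffs[0]*u_{n-1} + ... + coeffs[k-1]*u_{n-k}, started from `init`,
    for a zero term.  Returns the first zero's index, or None if the bounded
    prefix contains no zero (which decides nothing about the full sequence).
    """
    seq = list(init)
    while len(seq) < bound:
        seq.append(sum(c * seq[-1 - i] for i, c in enumerate(coeffs)))
    for n, u in enumerate(seq):
        if u == 0:
            return n
    return None

# The Fibonacci-style recurrence u_n = u_{n-1} + u_{n-2} from (1, -1)
# hits zero at index 2: 1, -1, 0, -1, -1, ...
print(find_zero([1, -1], [1, 1], 20))
# The Fibonacci numbers themselves (from 1, 1) have no zero term.
print(find_zero([1, 1], [1, 1], 50))
```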

Pushing the Boundary on Automated Modular Floating-Point Verification

Rosa Abbasi Max Planck Institute for Software Systems
27 Mar 2026, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
SWS Student Defense Talks - Thesis Defense
Floating-point numbers are widely used to represent real numbers in computer systems, in domains including embedded systems, machine learning, and scientific computing. Despite their widespread use, they pose difficulties. Floating-point operations typically suffer from roundoff errors, making computations over floating-point numbers inaccurate with respect to a real-valued specification. Moreover, the IEEE 754 floating-point standard, the foundation of floating-point arithmetic on today’s computers, presents additional challenges due to special values and the resulting unintuitive behaviors. This thesis makes three main contributions that address existing gaps in automated reasoning about floating-point arithmetic, making it easier for developers and researchers to understand, verify, and trust the floating-point computations in their programs. First, we introduce the first floating-point support in a deductive verifier for the Java programming language. Our support in the KeY verifier automatically handles floating-point arithmetic and transcendental functions. We achieve this with a combination of delegation to external SMT solvers on one hand, and rule-based reasoning within KeY on the other, exploiting the complementary strengths of both approaches. As a result, this approach can prove functional floating-point properties for realistic programs. Second, inspired by KeY’s treatment of method calls and the need for a scalable roundoff error analysis, we present the first modular optimization-based roundoff error analysis for non-recursive procedural floating-point programs. Our key idea is to achieve modularity while maintaining reasonable accuracy by automatically computing procedure summaries that are a function of the input parameters.
Technically, we extend an existing optimization-based roundoff error analysis and show how to effectively use first-order Taylor approximations to compute precise procedure summaries, and how to integrate those to obtain end-to-end roundoff error bounds. Third, our experience using SMT solvers to discharge KeY’s floating-point verification conditions revealed unexpected performance behavior, motivating a systematic study of floating-point reasoning in SMT solvers. We propose a metamorphic testing approach that uses semantics-preserving rewrite rules, focusing on floating-point special values, to uncover unexpected performance behavior in SMT solvers’ handling of floating-point formulas, such as an increase in solving time when the SMT queries are simplified. Using real-world test inputs, our approach can identify such performance bugs for every SMT solver tested.
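The two difficulties the abstract names, roundoff error relative to a real-valued specification and unintuitive IEEE 754 special values, can both be seen in a few lines of Python (an illustrative aside, not the thesis's tooling; binary64 doubles assumed).

```python
from fractions import Fraction

# Roundoff: the double nearest to 0.1 is not exactly 1/10, so repeated
# addition drifts from the real-valued specification (ten times 1/10 is 1).
fp_sum = 0.0
for _ in range(10):
    fp_sum += 0.1
error = abs(Fraction(fp_sum) - 1)  # exact rational gap from the spec
print(fp_sum == 1.0)               # False: the computed sum is not 1.0
print(float(error))                # small but nonzero roundoff error

# Special values: IEEE 754 NaN is not equal to itself, so x == x is not
# a tautology over floating-point terms, unlike over the reals.
nan = float("nan")
print(nan == nan)                  # False
```

Such gaps between floating-point and real-valued semantics are precisely what roundoff error analyses bound, and special-value behaviors like `nan == nan` being false are among the cases that make SMT-level floating-point reasoning hard.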