Events

Upcoming events

LaissezCloud: A Resource Exchange Platform for the Public Cloud

Tejas Harith, Max Planck Institute for Software Systems
16 Dec 2025, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Qualifying Exam
Resource and carbon efficiency in public clouds is poor and costs are high. This affects operators and tenants alike. We argue that the current rigid cloud pricing interface is to blame. Improving efficiency requires dynamic coordination between operator and tenants, but also among tenants. While comparatively easy for clusters where operator and applications belong to a single administrative domain, the cloud setting makes this challenging with mutually untrusted tenants and operators and the broad set of workloads and applications. We address this with LaissezCloud, a new cloud resource management platform that enables continuous resource re-negotiation and re-allocation. Rather than agreeing to a fixed price on resource allocation, LaissezCloud operators and tenants continuously re-negotiate resource prices during execution. Our key insight is that pricing provides a narrow waist that enables the cloud to align incentives between operator and tenants as well as among tenants. Tenants decide the price they are willing to pay based on the current value of a resource during execution. Operators in turn price in current demand as well as infrastructure concerns, such as current power availability, cooling capacity, or carbon intensity. We demonstrate that LaissezCloud scales to typical cloud infrastructure, that applications are easy to adapt to dynamic resource negotiation, and that LaissezCloud improves resource efficiency.
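The re-negotiation idea from the abstract can be sketched as a toy pricing round. Everything below is hypothetical (the function names, the pricing rule, and the tenant fields are illustrative assumptions, not LaissezCloud's actual interface): an operator quotes a price that rises with utilization and carbon intensity, and tenants who value the resource at or above that price keep their allocation.

```python
def operator_price(base, demand, capacity, carbon_intensity):
    # Hypothetical pricing rule: scale a base price by current
    # utilization and by the carbon intensity of the power supply.
    utilization = demand / capacity
    return base * (1 + utilization) * (1 + carbon_intensity)

def negotiate_round(tenants, base, capacity, carbon_intensity):
    """One illustrative re-negotiation round: tenants state the price
    they are currently willing to pay, the operator quotes a price
    reflecting demand and infrastructure concerns, and tenants at or
    above the quote retain their resources."""
    demand = sum(t["requested"] for t in tenants)
    price = operator_price(base, demand, capacity, carbon_intensity)
    allocated = [t["name"] for t in tenants if t["value"] >= price]
    return price, allocated
```

For example, at full utilization with elevated carbon intensity the quoted price rises, so a low-value batch tenant drops out while a latency-sensitive tenant pays to continue; run again under lighter load and the batch tenant gets the resources back.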

The Skolem Problem: a century-old enigma at the heart of computation

Joël Ouaknine, Max Planck Institute for Software Systems
04 Feb 2026, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
It has been described as the most important problem whose decidability is still open: the Skolem Problem asks how to determine algorithmically whether a given integer linear recurrence sequence (such as the Fibonacci numbers) has a zero term. This deceptively simple question arises across a wide range of topics in computer science and mathematics, from program verification and automata theory to number theory and logic. This talk traces the history of the Skolem Problem: from the early 1930s to the current frontier of one of the most enduring open questions in computer science.
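The computational content of the question is easy to make concrete. The sketch below (illustrative only, not material from the talk) generates terms of an integer linear recurrence and searches a finite prefix for a zero. Note that this is merely a semi-decision procedure: if no zero appears, no general bound is known after which one may safely stop searching and answer "no", which is precisely why the Skolem Problem remains open.

```python
def recurrence(coeffs, init, n_terms):
    """Generate terms of the integer linear recurrence
    u_n = coeffs[0]*u_{n-1} + ... + coeffs[k-1]*u_{n-k}."""
    u = list(init)
    for _ in range(n_terms - len(init)):
        u.append(sum(c * x for c, x in zip(coeffs, reversed(u[-len(coeffs):]))))
    return u

def has_zero_up_to(coeffs, init, bound):
    """Return the index of the first zero term within `bound` terms,
    or None if none appears.  Deciding whether None truly means
    'no zero ever' is the Skolem Problem."""
    for i, term in enumerate(recurrence(coeffs, init, bound)):
        if term == 0:
            return i
    return None
```

For instance, the sequence with u_n = 2*u_{n-1} - 2*u_{n-2} starting 1, 2 runs 1, 2, 2, 0, ... and a zero is found at index 3, whereas for the Fibonacci numbers started at 1, 1 the search simply exhausts its budget without certifying anything.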

Recent events

Collaborative Prediction via Tractable Agreement Protocols

Ira Globus-Harris, Cornell University
(hosted by Manuel Gomez Rodriguez)
09 Dec 2025, 10:30 am
Kaiserslautern building G26, room 111
SWS Colloquium
Designing effective collaboration between humans and AI systems is crucial for leveraging their complementary abilities in complex decision tasks. But how should agents possessing unique knowledge, such as a human expert and an AI model, interact to reach decisions better than either could alone? In this talk, I will introduce a collection of tools based in machine learning theory and algorithmic game theory that allow us to develop efficient "collaboration protocols": parties iteratively exchange only low-dimensional information (their current predictions or best-response actions) without needing to share their underlying features, and the agents' final predictions are provably competitive with an optimal predictor that has access to their joint features. Together, these results offer a new foundation for building systems that achieve the power of pooled knowledge through tractable interaction alone.
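The core idea, exchanging predictions rather than features, can be illustrated with a deliberately simple residual-exchange toy. This is an assumption-laden sketch, not the protocol from the talk or its guarantees: each party repeatedly fits the current residual using only its own private feature and broadcasts just its updated per-example predictions.

```python
def fit_line(xs, rs):
    """Ordinary least squares for a single feature: (slope, intercept)."""
    n = len(xs)
    mx, mr = sum(xs) / n, sum(rs) / n
    var = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (r - mr) for x, r in zip(xs, rs)) / var
    return slope, mr - slope * mx

def collaborate(x_a, x_b, y, rounds=3):
    """Toy protocol: in each round, each party fits the current residual
    on its private feature and broadcasts only the updated predictions
    (a vector of scalars); the raw features are never shared."""
    pred = [0.0] * len(y)
    for _ in range(rounds):
        for feats in (x_a, x_b):  # parties take turns
            resid = [t - p for t, p in zip(y, pred)]
            slope, icpt = fit_line(feats, resid)
            pred = [p + slope * f + icpt for p, f in zip(pred, feats)]
    return pred
```

When the target depends additively on the two private features, each party's correction removes the part of the error that its own feature explains, so the jointly produced predictions can match what a single predictor seeing both features would achieve, even though neither party ever reveals its data.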

Illuminating Generative AI: Mapping Knowledge in Large Language Models

Abhilasha Ravichander, Max Planck Institute for Software Systems
03 Dec 2025, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Millions of everyday users are interacting with technologies built with generative AI, such as voice assistants, search engines, and chatbots. While these AI-based systems are being increasingly integrated into modern life, they can also magnify risks, inequities, and dissatisfaction when providers deploy unreliable systems. A primary obstacle to building more reliable systems is the opacity of the underlying large language models: we lack a systematic understanding of how models work, where critical vulnerabilities may arise, why they arise, and how models must be redesigned to address them. In this talk, I will first describe my work in investigating large language models to illuminate when models acquire knowledge and capabilities. Then, I will describe my work on building methods that enable data transparency for large language models, allowing practitioners to make sense of the information available to models. Finally, I will describe work on understanding why large language models produce incorrect knowledge, and the implications for building the next generation of responsible AI systems.

Boosting — Empowering Citizens with Behavioral Science

Ralph Hertwig, Max Planck Institute for Human Development
(hosted by Krishna Gummadi)
26 Nov 2025, 12:15 pm - 1:15 pm
Kaiserslautern building G26, room 111
AICS Distinguished Speaker Colloquium
Behavioral public policy came to the fore with the introduction of nudging, which aims to steer behavior while maintaining freedom of choice. Responding to critiques of nudging (e.g., that it does not promote agency and relies on benevolent choice architects), other behavioral policy approaches focus on empowering citizens. Here we review boosting, a behavioral policy approach that aims to foster people's agency, self-control, and ability to make informed decisions. It is grounded in evidence from behavioral science showing that human decision making is not as notoriously flawed as the nudging approach assumes. We argue that addressing the challenges of our time, such as climate change, pandemics, and the threats to liberal democracies and human autonomy posed by digital technologies and choice architectures, calls for fostering capable and engaged citizens as a first line of response to complement slower, systemic approaches. Boosts can be delivered through different means, one being digital tools; the talk will give a few illustrative examples.

Archive