Events

Upcoming events

Foundations for Applied Cryptography: Post-quantum Authentication in Modern Applications

Michael Reichle, ETH Zurich
(hosted by Catalin Hritcu)
24 Feb 2026, 10:00 am - 11:00 am
Bochum building MPI-SP, room MB/1-84/90
CIS@MPG Colloquium
Modern cryptography must navigate the transition to post-quantum security while meeting the demands of real-world systems. Due to the tension between post-quantum security and performance, this transition entails complex challenges on several fronts. In this talk, I will discuss the intricacies of provable security and how to establish solid theory for cryptographic building blocks that underpin our digital society. First, I will briefly cover my past work on searchable encryption and zero-knowledge proofs. In the main part of the talk, I will highlight my research on tools for advanced authentication in a post-quantum world. In particular, I will present my recent works on threshold signatures and blind signatures and discuss how to achieve stronger security guarantees under weak assumptions. Finally, I will highlight some open problems that urgently need to be solved.
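
As background for the primitives named above, here is a minimal sketch of Chaum's classical RSA blind-signature flow: the user blinds a message, the signer signs it without learning its content, and the user unblinds the result into an ordinary signature. This is a pre-quantum construction with toy parameters, included only to illustrate what a blind signature does; it is not one of the post-quantum schemes the talk presents.

# Toy sketch of Chaum's RSA blind signature (classical, NOT post-quantum,
# and not the construction from the talk); parameters are tiny toy values.
import math
import secrets

p, q = 1009, 1013                 # toy primes; real keys use >= 2048-bit moduli
n = p * q
phi = (p - 1) * (q - 1)
e = 17                            # public exponent
d = pow(e, -1, phi)               # signer's private exponent

def blind(msg: int):
    """User picks a random blinding factor r and hides msg before sending it."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    return (msg * pow(r, e, n)) % n, r

def sign_blinded(blinded: int) -> int:
    """Signer signs the blinded value without learning the underlying message."""
    return pow(blinded, d, n)

def unblind(blinded_sig: int, r: int) -> int:
    """User removes the blinding factor, obtaining a plain RSA signature on msg."""
    return (blinded_sig * pow(r, -1, n)) % n

msg = 4242
blinded, r = blind(msg)
sig = unblind(sign_blinded(blinded), r)
print("signature verifies:", pow(sig, e, n) == msg)   # True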

Designing for Society: AI in Networks, Markets, and Platforms

Ana-Andreea Stoica, Max Planck Institute for Intelligent Systems, Tübingen
(hosted by Manuel Gomez Rodriguez)
26 Feb 2026, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
CIS@MPG Colloquium
AI systems increasingly reshape our networks, markets, and platforms. When deployed in social environments (online platforms, labor markets, and information ecosystems), AI interacts with complex human behavior, strategic incentives, and structural inequality. This talk focuses on foundational challenges and opportunities for AI systems: how to design and evaluate algorithmic interventions in complex social environments. I will present recent work on causal inference under competing treatments, which formalizes how competition for user attention and strategic behavior among experimenters distort experimental data and invalidate naïve estimates of algorithmic impact. By modeling experimentation as a strategic data acquisition problem, we show how evaluation itself becomes an optimization problem, and we derive mechanisms that recover meaningful estimates despite interference and competition. I connect this problem to deriving foundational properties of systems that enable reliable experimentation. Beyond this case study, the talk highlights broader implications for the design and evaluation of AI systems in networks, markets, and platforms. I argue that responsible deployment requires rethinking evaluation methodologies to account for incentives, feedback loops, and system-level effects, and I outline directions for how algorithmic and statistical tools can tackle these challenges.
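
To make the distortion described above concrete, here is a small self-contained simulation (my own toy model with made-up numbers, not the formalization from the talk): two experimenters randomize treatments over the same users, users have limited attention, and the naive difference-in-means estimate of one treatment's effect understates what that treatment would achieve if deployed alone.

# Toy simulation of how a competing treatment biases a naive A/B estimate.
import numpy as np

rng = np.random.default_rng(0)
n_users = 100_000

baseline = rng.normal(5.0, 1.0, n_users)     # engagement with no treatment
true_effect_alone = 2.0                      # effect of A given full user attention

treated_by_A = rng.random(n_users) < 0.5     # A's randomized assignment
treated_by_B = rng.random(n_users) < 0.5     # a competing experimenter's assignment

# When both A and B target the same user, they split that user's attention,
# so A only realizes half of its stand-alone effect.
effect_of_A = np.where(treated_by_A & treated_by_B, 0.5 * true_effect_alone,
                       np.where(treated_by_A, true_effect_alone, 0.0))
outcome = baseline + effect_of_A

naive_estimate = outcome[treated_by_A].mean() - outcome[~treated_by_A].mean()
print(f"true stand-alone effect of A: {true_effect_alone:.2f}")
print(f"naive A/B estimate under competition: {naive_estimate:.2f}")   # about 1.5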

Security: A Next Frontier in AI Coding

Jingxuan He, UC Berkeley
(hosted by Thorsten Holz)
02 Mar 2026, 10:00 am - 11:00 am
Bochum building MPI-SP, room MB/1-84/90
CIS@MPG Colloquium
AI is reshaping software development, yet this rapid adoption risks introducing a new generation of security debt. In this talk, I will present my research program aimed at transforming AI from a source of vulnerabilities to a security enabler. I will begin by introducing a systematic framework for quantifying AI-induced cybersecurity risks through two benchmarks: CyberGym, which evaluates AI agents’ offensive capabilities in vulnerability reproduction and discovery, and BaxBench, which measures LLMs’ propensity to introduce security flaws when generating code. Building on these findings, I will present a secure-by-design approach for AI-generated code. This includes security-centric fine-tuning that embeds secure coding practices directly into models, as well as a decoding-time constraining mechanism based on type systems to enforce safety guarantees. I will conclude by discussing my future research on building broader security and trust in AI-driven software ecosystems.
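
As a generic illustration of what decoding-time constraining means (a simplification of mine, not the type-system-based mechanism from the talk), the sketch below masks out candidate tokens that violate a safety predicate at every generation step; the toy_model and the no_eval predicate are hypothetical stand-ins.

# Sketch of decoding-time constraining: disallowed tokens are filtered
# out of the candidate set before each greedy decoding step.
from typing import Callable

def constrained_decode(step_scores: Callable[[str], dict[str, float]],
                       allowed: Callable[[str, str], bool],
                       max_steps: int = 20) -> str:
    """Greedy decoding that only extends the output with tokens the predicate allows."""
    output = ""
    for _ in range(max_steps):
        candidates = step_scores(output)                      # token -> score
        legal = {t: s for t, s in candidates.items() if allowed(output, t)}
        if not legal:
            break                                             # nothing safe to emit
        token = max(legal, key=legal.get)
        if token == "<eos>":
            break
        output += token
    return output

def toy_model(prefix: str) -> dict[str, float]:
    """Hypothetical model: prefers eval(...) but can also write print(...)."""
    if prefix == "":
        return {"eval(": 2.0, "print(": 1.0}
    if prefix.endswith("("):
        return {"'hi')": 1.0}
    return {"<eos>": 1.0}

def no_eval(prefix: str, token: str) -> bool:
    """Simple safety predicate standing in for a richer type-system check."""
    return "eval(" not in token

print(constrained_decode(toy_model, no_eval))   # print('hi') -- never eval(...)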

Recent events

Uncovering the Mechanics of Vision AI Model Failures: Textures and Beyond

Blaine Hoak, University of Wisconsin-Madison
(hosted by Christof Paar)
16 Feb 2026, 10:00 am - 11:00 am
Bochum building MPI-SP, room MB/1-84/90
CIS@MPG Colloquium
Artificial Intelligence (AI) models now serve as core components of a range of mature applications but remain vulnerable to a wide spectrum of attacks. Yet the research community has yet to develop a systematic understanding of model vulnerability. In this talk, I approach uncovering the mechanics of model failure from two complementary perspectives: the design of attack techniques and the features models exploit. First, I introduce The Space of Adversarial Strategies, a robustness evaluation framework constructed through a decomposition and reformulation of current attacks. With this, I isolate the components that drive attack success and provide insights for future defenses. Motivated by the widespread failures observed, I then turn to the feature space, where I uncover differences in visual processing between AI models and the human visual system that explain failures in AI systems. My work reveals that textures, or repeated patterns, are a core mechanism driving model generalization, yet are also a primary source of vulnerability. I present new methodologies to quantify a model’s bias toward texture, uncover learned associations between textures and objects, and identify textures in images. With this, I find that up to 90% of failures can be explained by mismatches in texture information, highlighting texture as an important, yet overlooked, influence on model robustness. I conclude by outlining future work for addressing trustworthiness issues in both classification and generative settings, with particular attention to (mis)alignment between biological and artificial intelligence.
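
For readers who want a concrete sense of how texture bias can be quantified, the sketch below follows the common cue-conflict recipe (images whose shape indicates one class and whose texture indicates another) and reports how often a model sides with the texture. The example predictions are hypothetical, and this is not necessarily the exact methodology from the talk.

# Sketch: quantify texture bias from model predictions on cue-conflict images.
from dataclasses import dataclass

@dataclass
class CueConflictExample:
    shape_label: str      # class suggested by the object's shape
    texture_label: str    # class suggested by the overlaid texture
    prediction: str       # model's predicted class

def texture_bias(examples: list[CueConflictExample]) -> float:
    """Fraction of shape-vs-texture decisions the model resolves in favor of texture."""
    texture_hits = sum(e.prediction == e.texture_label for e in examples)
    shape_hits = sum(e.prediction == e.shape_label for e in examples)
    decided = texture_hits + shape_hits        # ignore predictions matching neither cue
    return texture_hits / decided if decided else float("nan")

# Hypothetical predictions on three cue-conflict images.
examples = [
    CueConflictExample("cat", "elephant", "elephant"),
    CueConflictExample("car", "clock", "clock"),
    CueConflictExample("bottle", "dog", "bottle"),
]
print(f"texture bias: {texture_bias(examples):.2f}")   # 0.67 -> leans on texture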

Building Private, Secure and Transparent Digital Identity at Scale

Harjasleen Malvai, University of Illinois Urbana–Champaign
(hosted by Peter Schwabe)
12 Feb 2026, 10:00 am - 11:00 am
Bochum building MPI-SP
CIS@MPG Colloquium
Digital identity controls access to many everyday essentials, from getting paid to accessing banking and benefits, and sits in the critical path of modern security. Even end-to-end encrypted messaging depends on authentic cryptographic identity, i.e., a trustworthy way to learn the cryptographic keys needed to encrypt messages to the right person. In practice, centralized identity providers become both a single point of failure and a single point of control: they decide what assertions are supported, and their compromise or coercion enables targeted attacks that are hard to detect. ...
Digital identity controls access to many everyday essentials, from getting paid to accessing banking and benefits, and sits in the critical path of modern security. Even end-to-end encrypted messaging depends on authentic cryptographic identity, i.e., a trustworthy way to learn the cryptographic keys needed to encrypt messages to the right person. In practice, centralized identity providers become both a single point of failure and a single point of control: they decide what assertions are supported, and their compromise or coercion enables targeted attacks that are hard to detect. Yet, many strong proposals assume new ecosystems or significant user effort – assumptions that don’t hold for the systems and users we have today.

My thesis is that it is possible to make identity infrastructure more private, secure, and transparent at scale while designing for existing user and ecosystem constraints. I’ll present two case studies across the identity lifecycle.

First, I’ll discuss key transparency for end-to-end encrypted messaging: how to make centralized key directories auditable so that even a compromised server cannot quietly swap keys for targeted users. I’ll show how this line of work evolved from formal privacy and history guarantees (SEEMless) to a billion-user architecture (Parakeet) built for real operational constraints (such as distributed storage and long time horizons), which now underpins the key transparency deployments in WhatsApp and Facebook Messenger.
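
To give a feel for the core auditing idea (not the SEEMless or Parakeet constructions themselves, which add privacy-preserving proofs and efficient Merkle-tree structures), here is a bare-bones sketch of an append-only key directory whose running commitment is published each epoch; a server that retroactively swaps a user's key can no longer reproduce the commitments auditors already hold.

# Minimal sketch of an auditable (hash-committed, append-only) key directory.
import hashlib
import json

class KeyDirectory:
    def __init__(self):
        self.log = []                    # append-only list of (user, key) updates
        self.digest = b"\x00" * 32       # running commitment over the whole history

    def publish_key(self, user: str, public_key: str) -> str:
        entry = json.dumps({"user": user, "key": public_key}, sort_keys=True)
        self.log.append(entry)
        self.digest = hashlib.sha256(self.digest + entry.encode()).digest()
        return self.digest.hex()         # epoch commitment that clients/auditors gossip

directory = KeyDirectory()
c1 = directory.publish_key("alice", "pk_alice_v1")
c2 = directory.publish_key("bob", "pk_bob_v1")

# A server that quietly swaps Alice's key must rebuild the log, and the
# recomputed commitments no longer match what auditors already recorded.
evil = KeyDirectory()
evil.publish_key("alice", "pk_attacker")
print("tampering detected:", evil.publish_key("bob", "pk_bob_v1") != c2)   # True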

Second, I’ll briefly describe credential bootstrapping with accountability (CanDID): a path to establishing privacy-preserving credentials from existing web sources without assuming a fully mature verifiable-credential ecosystem, while supporting practical requirements like revocation, recovery, and compliance checks.

I’ll close by highlighting ongoing work and open problems motivated by these systems and sketch a research agenda for building auditable and privacy-preserving infrastructure at internet scale, for identity and beyond.

Beyond Static Alignment: Advancing Trustworthy and Socially Intelligent AI Assistants

Jieyu Zhao, University of Southern California
(hosted by Abhilasha Ravichander, Asia Biega)
11 Feb 2026, 3:00 pm - 4:00 pm
Virtual talk
CIS@MPG Colloquium
Large language models have transformed how we interact with technology, but most deployed systems remain reactive and rely on static, one-size-fits-all alignment, limiting trust in real-world, high-stakes settings. This talk explores a path toward personalized, trustworthy AI assistants that can reason, continually adapt, and align with user values while remaining safe and socially appropriate. I will introduce Computer-Using Agents that combine GUI operations and code generation to efficiently complete real-world tasks, and present CoAct-1, a multi-agent system that coordinates planning and execution. I will then discuss SEA, a black-box auditing algorithm for uncovering LLM knowledge deficiencies and probing failure modes such as hallucination under limited query budgets. Next, I will present WildFeedback, a framework that learns in-situ user preferences from natural, multi-turn interactions, enabling continual personalization beyond lab-style preference data. Finally, I will highlight ongoing work on proactive social intelligence and culturally grounded evaluation, spanning intention understanding, reasoning consistency, and value-aligned collaboration. Together, these advances move us closer to AI systems that don’t just respond, but adapt responsibly and assist people in ways that are reliable, equitable, and context-aware.
