Events

Upcoming events

Spatio-Temporal AI for Long-term Human-centric Robot Autonomy

Lukas Schmid, Massachusetts Institute of Technology (hosted by Bernt Schiele)
10 Feb 2025, 10:00 am - 11:00 am
Saarbrücken, building E1 5, room 029
CIS@MPG Colloquium
The ability to build an actionable representation of a robot's environment is crucial for autonomy and a prerequisite for a wide variety of applications, ranging from home, service, and consumer robots, to social, care, and medical robotics, to industrial, agricultural, and disaster-response applications. Notably, a large part of the promise of autonomous robots depends on long-term operation in domains shared with humans and other agents. These environments are typically highly complex, semantically rich, and highly dynamic, with agents frequently moving through and interacting with the scene. This talk presents an autonomy pipeline combining perception, prediction, and planning to address these challenges. We first present methods to detect and represent complex semantics, short-term motion, and long-term changes for real-time robot perception in a unified framework called Khronos. We then show how Dynamic Scene Graphs (DSGs) can represent semantic symbols in a task-driven fashion and facilitate reasoning about the scene, such as predicting likely future outcomes based on the data the robot has already collected. Lastly, we show how robots, as embodied agents, can leverage these actionable scene representations and predictions to complete tasks such as actively gathering data that helps them improve their world models, perception, and action capabilities fully autonomously over time. The presented methods are demonstrated on board fully autonomous aerial, legged, and wheeled robots, run in real time on mobile hardware, and are available as open-source software.
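As a rough illustration of the kind of representation a Dynamic Scene Graph provides, the sketch below models a scene as a layered graph of timestamped nodes (rooms, objects, agents), so that recent observations and long-unobserved elements can be told apart. The class and field names are hypothetical and are not taken from the Khronos or DSG implementations.

```python
# Minimal sketch of a layered dynamic scene graph (hypothetical API,
# not the actual Khronos / DSG implementation).
from dataclasses import dataclass, field


@dataclass
class SceneNode:
    node_id: str
    layer: str        # e.g. "room", "object", "agent"
    label: str        # semantic class, e.g. "chair"
    position: tuple   # (x, y, z) in the world frame
    last_seen: float  # timestamp of the latest observation


@dataclass
class DynamicSceneGraph:
    nodes: dict = field(default_factory=dict)  # node_id -> SceneNode
    edges: set = field(default_factory=set)    # (parent_id, child_id)

    def update(self, node: SceneNode) -> None:
        """Insert a new node or refresh an existing one with a newer observation."""
        existing = self.nodes.get(node.node_id)
        if existing is None or node.last_seen > existing.last_seen:
            self.nodes[node.node_id] = node

    def stale_nodes(self, now: float, horizon: float) -> list:
        """Nodes not observed within `horizon` seconds: candidates for long-term change detection."""
        return [n for n in self.nodes.values() if now - n.last_seen > horizon]


# Example: an object node attached to a room node.
graph = DynamicSceneGraph()
graph.update(SceneNode("room_1", "room", "kitchen", (0.0, 0.0, 0.0), last_seen=10.0))
graph.update(SceneNode("obj_3", "object", "mug", (1.2, 0.4, 0.9), last_seen=20.0))
graph.edges.add(("room_1", "obj_3"))
print(graph.stale_nodes(now=30.0, horizon=15.0))  # only the kitchen node is stale
```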

How Do We Evaluate and Mitigate AI Risks?

Maksym Andriushchenko, EPFL (hosted by Christof Paar)
11 Feb 2025, 10:00 am - 11:00 am
Bochum, building MPI-SP, room MB1SMMW106
CIS@MPG Colloquium
AI has made remarkable progress in recent years, enabling groundbreaking applications but also raising serious safety concerns. This talk will explore the robustness challenges in deep learning and large language models (LLMs), demonstrating how seemingly minor perturbations can lead to critical failures. I will present my research on evaluating and mitigating AI risks, including adversarial robustness, LLM jailbreak vulnerabilities, and the broader implications of AI safety. By developing rigorous benchmarks, novel evaluation methods, and foundational theoretical insights, my work aims to provide effective safeguards for AI deployment. Ultimately, I advocate for a systematic approach to AI risk mitigation that integrates technical solutions with real-world considerations to ensure the safe and responsible use of AI systems.
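As a loose illustration of what a jailbreak-robustness evaluation can look like in practice, the sketch below scores a model by the fraction of harmful prompts it refuses, with and without an adversarial suffix appended. The `query_model` callable and the keyword-based refusal heuristic are placeholders, not components of any benchmark presented in the talk.

```python
# Toy sketch of a jailbreak-robustness evaluation loop (illustrative only;
# query_model is a placeholder for whatever LLM API is under test).
from typing import Callable, List

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm sorry")


def is_refusal(response: str) -> bool:
    """Crude keyword heuristic; real evaluations use stronger judges."""
    return any(marker in response.lower() for marker in REFUSAL_MARKERS)


def refusal_rate(query_model: Callable[[str], str],
                 harmful_prompts: List[str],
                 adversarial_suffix: str = "") -> float:
    """Fraction of harmful prompts the model refuses, optionally with a jailbreak suffix appended."""
    refused = sum(
        is_refusal(query_model(prompt + adversarial_suffix))
        for prompt in harmful_prompts
    )
    return refused / len(harmful_prompts)


# Example usage with a stub model that refuses everything:
if __name__ == "__main__":
    stub = lambda prompt: "I'm sorry, I can't help with that."
    prompts = ["How do I do X?", "Explain how to do Y."]
    print(refusal_rate(stub, prompts))                       # 1.0
    print(refusal_rate(stub, prompts, " [jailbreak text]"))  # 1.0 for this stub
```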

Measuring and Improving Fairness and Resilience Across the Blockchain Ecosystem

Lioba Heimbach, ETH Zurich (hosted by Krishna Gummadi)
12 Feb 2025, 10:00 am - 11:00 am
Kaiserslautern, building G26, room 111
CIS@MPG Colloquium
The multi-layer blockchain architecture presents unique challenges, as issues in one layer can amplify or cause problems in another. These layers include the network layer, a peer-to-peer (P2P) network responsible for information dissemination; the consensus layer, where nodes reach agreement on the blockchain’s state; and the application layer, which hosts decentralized finance (DeFi) applications. My research investigates the interactions between these layers and the resulting challenges, with the goal of enhancing fairness and resilience in the blockchain ecosystem. In the first part of this talk, I will explore how financial value originating at the application layer can threaten the consensus layer, focusing on non-atomic arbitrage — arbitrage between on-chain and off-chain exchanges. I will demonstrate how this value, despite originating at the application layer, introduces centralizing forces and security vulnerabilities in the consensus layer. In the second part, I will show how these dynamics operate in the opposite direction. Specifically, I will highlight privacy flaws in Ethereum’s P2P network that threaten the consensus layer by enabling attacks targeting application layer value. As I will demonstrate, the P2P network leaks validator locations. This vulnerability allows malicious validators (i.e., consensus layer participants) to launch targeted attacks on validators handling blocks with significant application layer value and scoop the value from those blocks.
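To make the notion of non-atomic arbitrage concrete, the toy calculation below estimates the profit from buying an asset on an on-chain exchange and selling it on an off-chain exchange after their prices diverge. All prices and fees are made-up illustrative numbers, not data from the talk.

```python
# Toy non-atomic arbitrage calculation: buy on an on-chain DEX, sell on an
# off-chain CEX. Numbers are purely illustrative.

def non_atomic_arb_profit(dex_price: float, cex_price: float, amount: float,
                          dex_fee: float, cex_fee: float, gas_cost: float) -> float:
    """Profit (in quote currency) of buying `amount` on-chain and selling it off-chain."""
    cost_on_chain = amount * dex_price * (1 + dex_fee) + gas_cost
    revenue_off_chain = amount * cex_price * (1 - cex_fee)
    return revenue_off_chain - cost_on_chain


# Example: the off-chain price has moved ahead of the on-chain pool price.
profit = non_atomic_arb_profit(
    dex_price=1000.0,   # on-chain pool price per token
    cex_price=1010.0,   # off-chain exchange price per token
    amount=10.0,
    dex_fee=0.003,      # 0.3% pool fee
    cex_fee=0.001,      # 0.1% exchange fee
    gas_cost=5.0,
)
print(round(profit, 2))  # ~54.9: positive, so the trade is worth competing for block space
```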

Recent events

Leveraging Sociotechnical Security and Privacy to Address Online Abuse

Miranda Wei, University of Washington (hosted by Krishna Gummadi)
06 Feb 2025, 10:00 am - 11:00 am
Kaiserslautern, building G26, room 111
CIS@MPG Colloquium
The prevalence and severity of online abuse are on the rise, from toxic content on social media to image-based sexual abuse, as new technologies are weaponized by people who do harm. Further, this abuse disproportionately harms people already marginalized in society, creating unacceptable disparities in safety and reinforcing oppression. Working in the areas of both computer security and privacy (S&P) and human-computer interaction (HCI), I address online abuse as the next frontier of S&P challenges. In this talk, I discuss my approach to sociotechnical threat modeling that (1) characterizes emerging S&P threats in digital safety, with particular attention to the technical and societal factors at play, (2) evaluates the existing support for people experiencing online abuse from an ecosystem-level perspective, and (3) develops conceptual tools that bridge S&P and HCI towards societally informed S&P research. I conclude by outlining how sociotechnical security and privacy can work towards a world where all people using technology feel safe and connected.

Practical Privacy via New Systems and Abstractions

Kinan Dak Albab, Brown University (hosted by Peter Schwabe)
05 Feb 2025, 1:00 pm - 2:00 pm
Virtual talk
CIS@MPG Colloquium
Data privacy has become a focal point for public discourse. In response, data protection and privacy regulations have been enacted across the world, including the GDPR and CCPA, and companies make various promises to end users in their privacy policies. However, high-profile privacy violations remain commonplace, in part because complying with privacy regulations and policies is challenging for applications and developers. This talk demonstrates how we can help developers achieve privacy compliance by designing new privacy-conscious systems and abstractions. It focuses on my work on Sesame (SOSP '24), my system for end-to-end compliance with privacy policies in web applications. To provide practical guarantees, Sesame combines new static analysis for data leakage with advances in memory-safe languages and lightweight sandboxing, as well as standard industry practices like code review. My work in this area also includes K9db (OSDI '23), a privacy-compliant database that supports compliance-by-construction with GDPR-style subject access requests. By creating privacy abstractions at the systems level, we can offer applications privacy guarantees by design, simplifying compliance and improving end-user privacy.
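As a very rough sketch of what compliance-by-construction with subject access requests can mean at the data layer, the example below keys every stored record by its owning data subject, so that access and erasure requests reduce to simple lookups. This is a hypothetical illustration, not the actual design or API of K9db or Sesame.

```python
# Hypothetical illustration of organizing application data by data subject so
# that GDPR-style access and erasure requests are trivial to serve.
# This is not the actual K9db or Sesame design.
from collections import defaultdict


class SubjectKeyedStore:
    def __init__(self):
        # table name -> data subject id -> list of rows owned by that subject
        self._tables = defaultdict(lambda: defaultdict(list))

    def insert(self, table: str, subject_id: str, row: dict) -> None:
        self._tables[table][subject_id].append(row)

    def subject_access_request(self, subject_id: str) -> dict:
        """Return every row, in every table, owned by this data subject."""
        return {table: rows[subject_id] for table, rows in self._tables.items()
                if subject_id in rows}

    def forget_subject(self, subject_id: str) -> None:
        """Delete all data owned by this subject (right to erasure)."""
        for rows in self._tables.values():
            rows.pop(subject_id, None)


store = SubjectKeyedStore()
store.insert("orders", "alice", {"order_id": 1, "item": "book"})
store.insert("comments", "alice", {"text": "great!"})
print(store.subject_access_request("alice"))  # rows from both tables
store.forget_subject("alice")
print(store.subject_access_request("alice"))  # {}
```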

Pushing the Boundaries of Modern Application-Aware Computing Stacks

Christina Giannoula, University of Toronto (hosted by Peter Druschel)
03 Feb 2025, 10:00 am - 11:00 am
Saarbrücken, building E1 5, room 029
CIS@MPG Colloquium
Modern computing systems encounter significant challenges related to data movement in applications such as data analytics and machine learning. Within a compute node, the physical separation of the processor from main memory necessitates retrieving data through a narrow memory bus. In big-data applications running across multiple nodes, data must be exchanged via narrow network interconnects. This movement of data, both within and across compute nodes, causes significant performance and energy overheads in modern and emerging applications. Moreover, today’s general-purpose computing stacks overlook the particular data needs of individual applications, missing crucial opportunities for performance optimization. In this talk, I will present a cross-stack approach to designing application-aware computing stacks for cutting-edge applications, enabling new synergies between algorithms, systems software, and hardware. Specifically, I will demonstrate how integrating fine-grained application characteristics, such as input features and data access and synchronization patterns, across the layers of general-purpose computing stacks allows stack components to be tailored to an application’s specific data needs. This integration enables the stack components to work synergistically to reduce unnecessary or redundant data movement during application execution. I will present a few of my research contributions that propose hardware and software solutions for emerging applications, such as deep learning, capitalizing on the emerging processing-in-memory paradigm. Finally, I will conclude by outlining my future plans to design application-adaptive and sustainable computing stacks that significantly enhance performance and energy efficiency in cutting-edge applications.
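One way to see why data movement dominates, and why application awareness helps, is a back-of-the-envelope offload decision: if a kernel performs little computation per byte it touches, it is cheaper to process the data near memory than to drag it across the memory bus. The sketch below makes that decision from a kernel's operation and byte counts; all cost constants are illustrative assumptions, not measurements from the talk.

```python
# Back-of-the-envelope sketch: decide whether a kernel is a candidate for
# processing-in-memory (PIM) offload based on its arithmetic intensity.
# All cost constants are illustrative assumptions, not measured values.

ENERGY_PER_OP_PJ = 1.0           # assumed energy per arithmetic op on the host (pJ)
ENERGY_PER_BYTE_MOVED_PJ = 20.0  # assumed energy to move one byte over the memory bus (pJ)
PIM_OP_OVERHEAD = 2.0            # assume in-memory compute units are ~2x less efficient per op


def prefers_pim(total_ops: float, bytes_touched: float) -> bool:
    """True if processing the data in memory is estimated to cost less energy
    than moving it to the host processor and computing there."""
    host_cost = total_ops * ENERGY_PER_OP_PJ + bytes_touched * ENERGY_PER_BYTE_MOVED_PJ
    pim_cost = total_ops * ENERGY_PER_OP_PJ * PIM_OP_OVERHEAD
    return pim_cost < host_cost


# A streaming kernel (low arithmetic intensity): 1 op per 8 bytes touched.
print(prefers_pim(total_ops=1e9, bytes_touched=8e9))   # True: data movement dominates
# A compute-heavy kernel: 100 ops per byte touched.
print(prefers_pim(total_ops=1e11, bytes_touched=1e9))  # False: compute dominates
```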

Archive