Events

Upcoming events

Computing Graph Isomorphisms and Similarities

Daniel Neuen, MPI-INF - D1
02 Oct 2024, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Deciding if two graphs are isomorphic, i.e., whether they are structurally identical, is a fundamental computational problem with many important applications. Despite extensive efforts in the past decades, graph isomorphism (GI) has remained one of only a few natural problems in NP that are neither known to be NP-complete nor known to be contained in P. In 2016, Babai proved in a breakthrough result that GI can be solved in quasipolynomial time. In the first part of my talk, I will give an overview of recent developments in isomorphism testing. A closely related problem is that of determining the similarity between two graphs. Arguably, this problem is even more relevant for many applications, such as machine learning tasks on graph-structured data. Natural approaches, such as counting the edge modifications required to turn one graph into the other, are computationally intractable even on very restricted input graphs. As a result, various other methods have been proposed to measure similarity between graphs, such as graph motif counts, i.e., measuring similarity based on counting certain patterns appearing in the graphs. In the second part of the talk, we will consider different ways of measuring similarity and discuss relations to standard heuristics for isomorphism testing that have been discovered in recent years.
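For context, the standard heuristic for isomorphism testing alluded to above is color refinement (the 1-dimensional Weisfeiler-Leman algorithm). The following minimal Python sketch is illustrative only and not taken from the talk: it refines vertex colors by neighborhood structure until stable; differing color histograms certify non-isomorphism, while matching histograms are inconclusive.

```python
from collections import Counter

def color_refinement(adj):
    """Iteratively refine vertex colors until stable (1-WL).

    adj: dict mapping each vertex to a list of its neighbors.
    Returns the multiset (Counter) of stable colors.
    """
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    while True:
        # New color = old color plus the multiset of neighbor colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Canonically rename signatures to small integers.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:
            return Counter(colors.values())
        colors = new_colors

def maybe_isomorphic(adj1, adj2):
    """Necessary condition only: distinct histograms certify
    non-isomorphism; equal histograms are inconclusive."""
    return color_refinement(adj1) == color_refinement(adj2)

# Classic failure mode: a 6-cycle vs. two disjoint triangles.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(maybe_isomorphic(c6, two_triangles))  # True, yet not isomorphic
```

The final example shows the heuristic's known blind spot: both graphs are 2-regular on six vertices, so plain color refinement cannot tell them apart.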

Mechanistic interpretability of neural networks

Jonas Fischer, MPI-INF - D2
06 Nov 2024, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Modern machine learning (ML) has largely been driven by neural networks, delivering outstanding results not only in traditional ML applications but also permeating the classical sciences, solving open problems in physics, chemistry, and biology. This success came at a cost, as typical neural network architectures are inherently complex and their reasoning processes opaque. In domains where insights weigh more than predictions, such as the modelling of systems biology, or in high-stakes decision making, such as healthcare and finance, the reasoning process of a neural network needs to be transparent. Recently, mechanistic interpretability, which is concerned with understanding the *internal reasoning process* of a network, including what information particular neurons respond to and how these neurons are organized into larger circuits, has gained significant attention. In this talk, I will (1) gently introduce the topic of mechanistic interpretability in machine learning, (2) show how to discover mechanistic circuits within a neural network, (3) discuss the relevance of mechanistic interpretability in real-world applications, and (4) discuss what is still missing in the field.
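As a toy illustration of one basic operation in this space, namely asking what information a particular neuron responds to, the following PyTorch sketch records a hidden unit's activations with a forward hook and ranks inputs by how strongly they excite it. The model, data, and chosen unit are placeholders for the sake of the sketch, not material from the talk.

```python
import torch
import torch.nn as nn

# Toy two-layer network standing in for a model under study.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 2),
)
model.eval()

captured = {}

def hook(module, inputs, output):
    # Stash the post-ReLU activations of the hidden layer.
    captured["acts"] = output.detach()

handle = model[1].register_forward_hook(hook)  # hook the ReLU

neuron = 7                      # hypothetical unit to characterize
inputs = torch.randn(256, 16)   # stand-in dataset
with torch.no_grad():
    model(inputs)
handle.remove()

# Rank inputs by how strongly they activate the chosen unit.
acts = captured["acts"][:, neuron]
top = torch.topk(acts, k=5).indices
print("inputs most exciting neuron", neuron, ":", top.tolist())
```

Inspecting what the top-activating inputs have in common is the first, simplest step toward the circuit-level analyses the talk describes.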

Recent events

Scaling Up Relaxed Memory Verification with Separation Logics

Hoang-Hai Dang, Max Planck Institute for Software Systems
10 Sep 2024, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 005
SWS Student Defense Talks - Thesis Defense
Reasoning about concurrency in a realistic, non-toy language like C/C++ or Rust, which encompasses many interweaving complex features, is very hard. Yet realistic concurrency involves relaxed memory models, which are significantly harder to reason about than the simple, traditional concurrency model of sequential consistency. To scale verification up to realistic concurrency, we need a few ingredients: (1) strong but abstract reasoning principles, so that we can avoid the overly tedious details of the underlying concurrency model; (2) modular reasoning, so that we can compose smaller verification results into larger ones; (3) reasoning extensibility, so that we can derive new reasoning principles for both complex language features and algorithms without rebuilding our logic from scratch; and (4) machine-checked proofs, so that we do not miss potential unsoundness in our verifications. With these ingredients in hand, a logic designer can flexibly accommodate the intricacy of relaxed memory features and the ingenuity of programmers who exploit those features. In this dissertation, I present how to develop strong, abstract, modular, extensible, and machine-checked separation logics for realistic relaxed memory concurrency in the Iris framework, using multiple layers of abstractions. I report two main applications of such logics: (i) the verification of the Rust type system with a relaxed memory model, where relaxed memory effects are encapsulated behind the safe interface of libraries and thus are not visible to clients, and (ii) the compositional specification and verification of relaxed memory libraries, in which relaxed memory effects are exposed to clients.
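For context, ingredient (2), modular reasoning, is what separation logic is best known for, embodied in the frame rule: a proof about a command's memory footprint carries over unchanged to any larger, disjoint heap. The rule below is textbook separation logic, not a rule specific to this dissertation.

```latex
% The frame rule: a proof that C, run on footprint P, establishes Q
% extends to any heap containing a disjoint frame R, because C cannot
% touch R. Side condition: C modifies no variable free in R.
\[
  \frac{\{P\}\; C\; \{Q\}}
       {\{P \ast R\}\; C\; \{Q \ast R\}}
  \quad (\textsc{Frame})
\]
```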

Reaching New Shores with Modern Separation Logic

Simon Spies, Max Planck Institute for Software Systems
09 Sep 2024, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Proposal
The question of how to scalably verify large, stateful programs is one of the oldest—and still unsolved—questions of computer science. Over the last two decades, considerable progress has been made toward this goal with the advent of separation logic, a verification technique designed for modularly reasoning about stateful programs. While the original separation logic was geared only toward verifying imperative, pointer-manipulating programs, modern versions of separation logic have become an essential tool in the toolbox of the working semanticist for modelling programming languages and verifying programs.

With my thesis, I will present a line of work that focuses on the weak spots of modern separation logic. Concretely, in the context of the separation logic framework Iris, I will target two broad areas, *step-indexing* and *automation*. Step-indexing is a powerful technique for recursive definitions that is crucial for handling many of the more advanced features of modern languages. But if one does not closely follow the path laid out by its inventors, perfectly natural proof strategies turn into dead ends. Automation, on the other hand, is important for reducing the overhead of verification, which all too often prevents verification from scaling to larger code bases.

Regarding step-indexing, the thesis will present two projects, Transfinite Iris and Later Credits. Transfinite Iris shows how to generalize step-indexing—which traditionally applies only to safety properties—to proving liveness properties. Later Credits leverage separation logic to develop an amortized form of step-indexing that enables more flexible proof patterns. Regarding automation, the thesis will present Quiver and, potentially, Daenerys. Quiver introduces a new form of guided specification inference to reduce the specification overhead of separation logic verification. Daenerys brings heap-dependent assertions—logic-level assertions containing program-level expressions—to separation logic. Unlike traditional separation-logic assertions about programs, these heap-dependent assertions validate first-order reasoning principles, thus laying the groundwork for SMT-solver-based automation.
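For readers unfamiliar with step-indexing, the standard finite-step reading of the later modality, which Transfinite Iris generalizes beyond the natural numbers, goes roughly as follows; these are the standard definitions, not constructions specific to the thesis.

```latex
% A step-indexed proposition is a (downward-closed) predicate on the
% number of remaining computation steps; "later P" defers P by one step:
\[
  n \models \mathord{\triangleright} P
  \;\iff\;
  n = 0 \;\lor\; (n - 1) \models P
\]
% Loeb induction lets one assume the goal "one step later" while proving
% it, enabling well-founded recursion over the step index:
\[
  \frac{\mathord{\triangleright} P \vdash P}{{} \vdash P}
  \quad (\textsc{L\"ob})
\]
```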

Towards Better Foundations for Foundational Models: A Cognitivist Approach to Studying Large Language Models (LLMs)

Krishna Gummadi, Max Planck Institute for Software Systems
04 Sep 2024, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
I will begin by demoing and releasing an LLM-based assistant that allows scientists to convert their papers (with a simple drag and drop) into short podcasts for communicating their research to a general audience. While we built the tool, we can’t explain its unreasonable effectiveness, i.e., we don’t really understand why it works or when it might fail. So in the rest of the talk, I will present our investigations into some basic curiosity-driven questions about LLMs; specifically, how LLMs receive, process, organize, store, and retrieve information.

Our analysis is centered around engaging LLMs in two specific types of cognitive tasks: first, syntactically-rich (semantically-poor) tasks such as recognizing formal grammars, and next, semantically-rich (syntactically-poor) tasks such as answering factual knowledge questions about real-world entities. Using carefully designed experimental frameworks, we attempt to answer the following foundational questions: (a) how can we estimate what latent skills and knowledge a (pre-trained) LLM possesses? (b) (how) can we distinguish whether some LLM has learnt some training data by rote vs. by understanding? (c) what is the minimum amount of training data/costs needed for a (pre-trained) LLM to acquire a new skill or knowledge?  (d) when solving a task, is training over task examples better, worse, or similar to providing them as in-context demonstrations?
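As a hypothetical sketch of the first, syntactically-rich setting, one could compare a model's per-token loss on strings inside versus just outside a toy formal language such as a^n b^n, with demonstrations supplied in context. The model choice and probe design below are assumptions made for illustration, not the speaker's actual experimental framework.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any small causal LM works for the sketch; gpt2 is just a placeholder.
name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

def avg_neg_logprob(text):
    """Average per-token negative log-likelihood of `text` under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()  # cross-entropy; lower = more likely

# Toy probe for the language { a^n b^n }: prepend in-context
# demonstrations, then score an in-language string against a near-miss.
demos = " ".join("a" * n + "b" * n for n in range(1, 5)) + " "
inside = demos + "aaaaabbbbb"   # a^5 b^5, in the language
outside = demos + "aaaaabbbb"   # a^5 b^4, one token off
print("in-language :", avg_neg_logprob(inside))
print("off-language:", avg_neg_logprob(outside))
```

A consistently lower loss on in-language continuations than on near-misses would be weak evidence that the model picked up the pattern from the demonstrations; varying the number of demonstrations probes question (d).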

I will present some initial empirical results from experimenting with a number of large open-source language models and argue that our findings have important implications for the privacy of training data (including the potential for memorization), the reliability of generated outputs (including the potential for hallucinations), and the robustness of LLM-based applications (including our podcast assistant for science communication).
