Recent events

CANCELLED: Computer Science Competitions @ SIC

Julian Baldus, Marian Dietz, Simon Schwarz Fachrichtung Informatik - Saarbrücken
01 Apr 2020, 12:15 pm - 1:15 pm
Saarbrücken building E2 2, room Günther-Hotz-HS
Joint Lecture Series
-

*Remote Talk* Improve Operations of Data Center Networks with Physical-Layer Programmability

Yiting Xia Facebook
26 Mar 2020, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
Physical-layer programmability enables the network topology to be changed dynamically. In this talk, the speaker will make a case that cloud data center networks can be significantly easier to manage with physical-layer programmability. Three example network architectures will be shown as different use cases of this approach. ShareBackup enhances reliability through sharing backup switches efficiently network-wide, where a backup switch can be brought online instantaneously to recover from failures. Flat-tree solves the problem of choosing the right network topology for different cloud services by dynamically changing topological clustering characteristics of the network. OmniSwitch is a universal building block of data center networks that supports automatic device wiring and easy device maintenance. At the end of the talk, the speaker will briefly introduce an ongoing follow-up research that extends physical-layer programmability from data center networks to backbone networks.

*Remote Talk* Learning efficient representations for image and video understanding

Yannis Kalantidis Facebook AI
18 Mar 2020, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
Two important challenges in image and video understanding are designing more efficient deep Convolutional Neural Networks and learning models that are able to achieve higher-level understanding. In this talk, I will present some of my recent works towards tackling these challenges. Specifically, I will introduce the Octave Convolution [ICCV 2019], a plug-and-play replacement for the convolution operator that exploits the spatial redundancy of CNN activations and can be used without any adjustments to the network architecture. I will also present the Global Reasoning Networks [CVPR 2019], a new approach for reasoning over arbitrary sets of features of the input, by projecting them from a coordinate space into an interaction space where relational reasoning can be efficiently computed.  The two methods presented are complementary and achieve state-of-the-art performance on both image and video tasks. Aiming for higher-level understanding, I will also present our recent works on vision and language modeling, specifically our work on learning state-of-the-art image and video captioning models that are also able to better visually ground the generated sentences with [CVPR 2019] or without [arXiv 2019] explicit localization supervision. The talk will conclude with current research and a brief vision for the future.
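The core idea of the Octave Convolution can be sketched in a few lines: keep a low-frequency copy of the feature map at half spatial resolution and let the high- and low-frequency paths exchange information. The following NumPy toy (1×1 convolutions only, single image, no padding) illustrates the idea and is not the paper's implementation:

```python
import numpy as np

def pool2(x):
    # 2x2 average pooling over the spatial axes: (C, H, W) -> (C, H/2, W/2)
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up2(x):
    # nearest-neighbour upsampling: (C, H, W) -> (C, 2H, 2W)
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv1x1(x, w):
    # 1x1 convolution = per-pixel channel mixing; w has shape (C_out, C_in)
    return np.einsum('oc,chw->ohw', w, x)

def octave_conv(x_h, x_l, w_hh, w_lh, w_hl, w_ll):
    """One octave convolution step: each output path sums contributions
    from both input paths, with the low path held at half resolution."""
    y_h = conv1x1(x_h, w_hh) + up2(conv1x1(x_l, w_lh))
    y_l = conv1x1(pool2(x_h), w_hl) + conv1x1(x_l, w_ll)
    return y_h, y_l
```

Because the low-frequency tensor has a quarter of the pixels, the low-path multiplications cost proportionally less, which is where the compute savings come from.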

*Remote Talk* Fairness in machine learning

Niki Kilbertus University of Cambridge and MPI for Intelligent Systems, Tübingen
16 Mar 2020, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
Machine learning increasingly supports consequential decisions in health, lending, criminal justice, or employment that affect the wellbeing of individual members or entire groups of our society. Such applications raise concerns about fairness, privacy violations, and the long-term consequences of automated decisions in a social context. After a brief introduction to fairness in machine learning, I will highlight concrete settings with specific fairness or privacy ramifications and outline approaches to address them. I will conclude by embedding these examples into a broader context of socioalgorithmic systems and the complex interactions therein.
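For readers new to formal fairness criteria, one of the simplest is demographic parity: the gap in positive-decision rates across groups. A minimal sketch of how it is measured (illustrative background only, not the speaker's methods; data is made up):

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups.
    decisions: list of 0/1 outcomes; groups: parallel list of group labels."""
    pos = defaultdict(int)
    tot = defaultdict(int)
    for d, g in zip(decisions, groups):
        pos[g] += d
        tot[g] += 1
    rates = [pos[g] / tot[g] for g in tot]
    return max(rates) - min(rates)

# A classifier that approves 3/4 of group A but only 1/4 of group B:
gap = demographic_parity_gap([1, 1, 1, 0, 1, 0, 0, 0],
                             ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'])
```

A gap of 0 would mean both groups receive positive decisions at the same rate; larger gaps flag a potential disparity worth investigating.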

*Remote Talk* Democratizing Error-Efficient Computing

Radha Venkatagiri University of Illinois at Urbana-Champaign
12 Mar 2020, 1:00 pm - 2:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
We live in a world where errors in computation are becoming ubiquitous and come from a wide variety of sources, from unintentional soft errors in shrinking transistors to deliberate errors introduced by approximation or malicious attacks. Guaranteeing perfect functionality across a wide range of future systems will be prohibitively expensive. Error-efficient computing offers a promising solution by allowing the system to make controlled errors. Such systems can be considered error-efficient: they only prevent as many errors as they need to for an acceptable user experience. Allowing the system to make errors can lead to significant resource (time, energy, bandwidth, etc.) savings. Error-efficient computing can transform the way we design hardware and software to exploit new sources of compute efficiency; however, excessive programmer burden and a lack of principled design methodologies have thwarted its adoption. My research addresses these limitations through foundational contributions that enable the adoption of error-efficiency as a first-class design principle by a variety of users and application domains. In this talk, I will show how my work (1) enables an understanding of how errors affect program execution by providing a suite of automated and scalable error analysis tools, (2) demonstrates how such an understanding can be exploited to build customized error-efficiency solutions targeted at low-cost hardware resiliency and approximate computing, and (3) develops methodologies for principled integration of error-efficiency into the software design workflow. Finally, I will discuss future research avenues in error-efficient computing with multi-disciplinary implications in core disciplines (programming languages, software engineering, hardware design, systems) and emerging application areas (AI, VR, robotics, edge computing).
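A classic error-efficiency technique from approximate computing is loop perforation: deliberately skip iterations and accept a bounded output error in exchange for proportional savings. A toy sketch of the trade-off (illustrative only, not the speaker's tools):

```python
import random

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=2):
    """Loop perforation: process only every `skip`-th element, trading
    a small, controlled error for a proportional reduction in work."""
    sample = xs[::skip]
    return sum(sample) / len(sample)

random.seed(1)
data = [random.random() for _ in range(10_000)]
err = abs(mean_exact(data) - mean_perforated(data))
work_saved = 1 - 1 / 2  # half the iterations are skipped
```

For an aggregate statistic like a mean, the perforated result stays close to the exact one while doing half the work; deciding when such an error is acceptable is exactly the kind of question error-efficient design methodologies must answer.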

Search-based automated program repair and testing

Shin Hwei Tan Southern University of Science and Technology, Shenzhen, China
10 Mar 2020, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312
SWS Colloquium
Software testing remains the key technique for detection of functionality errors and vulnerabilities. Automated test generation typically involves searching a huge input domain. Similarly, many other software engineering activities such as software debugging and automated repair can be seen as large search problems. Despite advances in constraint solving and symbolic reasoning, major practical challenges in the use of automated testing and automated program repair, as opposed to manual testing and manual repair, include (a) the lack of specifications, leading to problems such as overfitting patches and test-suite bias, and (b) the learning curve in using these automated tools, leading to a lack of deployment. In this talk, I will present pragmatic solutions to address these challenges. The lack of specifications can be alleviated by implicitly exploiting lightweight specifications from comment-code inconsistencies, past program versions, or other program artifacts. The lack of deployment can be alleviated via systematic approaches to collaborative bug finding. Such a collaborative approach can contribute to automated program repair as well as testing.
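The "large search problem" view can be made concrete with the simplest possible search-based tester: sample random inputs and keep only those that cover a branch not seen before. A toy sketch (the program under test and its branch labels are hypothetical):

```python
import random

def program_under_test(x, y, trace):
    # toy program with three branch outcomes; `trace` records which were taken
    if x > 100:
        trace.add('x_big')
    else:
        trace.add('x_small')
    if x > 100 and y % 7 == 0:
        trace.add('rare')
    return x + y

def random_search(budget=5000, seed=0):
    """Random search for a small test suite: keep an input only if it
    covers at least one branch no earlier input covered."""
    rng = random.Random(seed)
    covered, suite = set(), []
    for _ in range(budget):
        x, y = rng.randint(-1000, 1000), rng.randint(-1000, 1000)
        trace = set()
        program_under_test(x, y, trace)
        if not trace <= covered:
            suite.append((x, y))
            covered |= trace
    return covered, suite
```

Even this naive search hits the rare compound branch eventually; the research challenge is doing so with far fewer executions, and knowing when the resulting suite can be trusted given that no specification is available.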

CANCELLED: Model checking under weak memory concurrency

Viktor Vafeiadis Max Planck Institute for Software Systems
04 Mar 2020, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Model checking is an automatic approach for testing and verifying programs, and has proven to be very effective in a number of settings ranging from hardware designs to intricate low-level systems software. In this talk, I will present our recent research on applying model checking to weakly consistent concurrent programs. I will explain the key challenges involved in making model checking effective in this setting and how to address them.
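A flavor of what such a model checker establishes: for the classic store-buffering litmus test, enumerating every sequentially consistent interleaving shows that the outcome r1 = r2 = 0 is unreachable, whereas weak memory models such as TSO permit it. A toy enumerator (illustrative only; real model checkers are vastly more sophisticated):

```python
# Store-buffering litmus test. Thread order must be preserved, but the
# two threads' events may interleave arbitrarily under sequential consistency.
T1 = [('store', 'x', 1), ('load', 'y', 'r1')]
T2 = [('store', 'y', 1), ('load', 'x', 'r2')]

def interleavings(a, b):
    # all merges of two event lists that preserve each list's internal order
    if not a:
        yield b
        return
    if not b:
        yield a
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def outcomes():
    results = set()
    for schedule in interleavings(T1, T2):
        mem, regs = {'x': 0, 'y': 0}, {}
        for op, loc, arg in schedule:
            if op == 'store':
                mem[loc] = arg
            else:
                regs[arg] = mem[loc]
        results.add((regs['r1'], regs['r2']))
    return results
```

The enumeration yields (0, 1), (1, 0), and (1, 1) but never (0, 0); observing (0, 0) on real x86 hardware is what distinguishes TSO from sequential consistency, and handling such models efficiently is the challenge the talk addresses.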

Learning by exploration in an unknown and changing environment

Qingyun Wu University of Virginia
27 Feb 2020, 2:00 pm - 3:00 pm
Saarbrücken building E1 5, room SB 029
simultaneous videocast to Kaiserslautern building G26, room KL 111 / Meeting ID: 6312
SWS Colloquium
Learning is a predominant theme for any intelligent system, humans or machines. Moving beyond the classical paradigm of learning from past experience, e.g., supervised learning from given labels, a learner needs to actively collect exploratory feedback to learn the unknowns. Considerable challenges arise in such a setting, including sample complexity, costly and even outdated feedback.

In this talk, I will introduce our themed efforts on developing solutions to efficiently explore the unknowns and dynamically adjust to changes through exploratory feedback. Specifically, I will first present our studies in leveraging special problem structures for efficient exploration. Then I will present our work on empowering the learner to detect and adaptively adjust to potential changes in the environment. Finally, I will highlight the impact our research has generated in high-value industry applications, including online learning to rank and interactive recommendation.
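A canonical example of learning from exploratory feedback is the UCB1 bandit algorithm, which pulls the arm maximizing its empirical mean plus an exploration bonus that shrinks as the arm is sampled. A self-contained sketch (illustrative background, not the speaker's algorithms):

```python
import math
import random

def ucb1(arm_means, horizon=20000, seed=0):
    """UCB1 on Bernoulli arms. `arm_means` are the true success
    probabilities, unknown to the learner; returns pull counts per arm."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:                      # pull each arm once to initialise
            arm = t - 1
        else:                           # empirical mean + exploration bonus
            arm = max(range(k), key=lambda i:
                      sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts
```

Over the horizon, pulls concentrate on the best arm while suboptimal arms receive only logarithmically many samples; the research discussed in the talk extends this setting to structured and non-stationary environments.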

Compactness in Cryptography

Giulio Malavolta UC Berkeley and CMU
25 Feb 2020, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
The communication complexity of secure protocols is a fundamental question of the theory of computation and has important repercussions in the development of real-life systems. As an example, the recent surge in popularity of cryptocurrencies has been enabled and accompanied by advancements in the construction of more compact cryptographic machinery. In this talk we discuss how to meet the boundaries of compactness in cryptography and how to exploit succinct communication to construct systems with new surprising properties. Specifically, we consider the problem of computing functions on encrypted data: we show how to construct a fully-homomorphic encryption scheme with message-to-ciphertext ratio (i.e., rate) of 1 - o(1), which is optimal. Along the way, we survey the implications of cryptographic compactness in different contexts, such as proof systems, scalable blockchains, and fair algorithms.

Software Testing as Species Discovery

Marcel Böhme Monash University
10 Feb 2020, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
A fundamental challenge of software testing is the statistically well-grounded extrapolation from program behaviors observed during testing. For instance, a security researcher who has run a fuzzer for a week currently has no means (i) to estimate the total number of feasible program branches, given that only a fraction has been covered so far, (ii) to estimate the additional time required to cover 10% more branches (or, conversely, the coverage achieved in one more day), or (iii) to assess the residual risk that a vulnerability exists when no vulnerability has been discovered. Failing to discover a vulnerability does not mean that none exists, even if the fuzzer was run for a week (or a year). Hence, testing provides no formal correctness guarantees.

In this talk, I establish an unexpected connection with the otherwise unrelated scientific field of ecology, and introduce a statistical framework that models Software Testing and Analysis as Discovery of Species (STADS). For instance, in order to study the species diversity of arthropods in a tropical rain forest, ecologists would first sample a large number of individuals from that forest, determine their species, and extrapolate from the properties observed in the sample to properties of the whole forest. Classical problems in ecology include estimating (i) the total number of species, (ii) the additional sampling effort required to discover 10% more species, and (iii) the probability of discovering a new species. The STADS framework draws from over three decades of research in ecological biostatistics to address the fundamental extrapolation challenge for automated test generation. Our preliminary empirical study demonstrates good estimator performance even for a fuzzer with adaptive sampling bias: AFL, a state-of-the-art vulnerability detection tool. The STADS framework provides statistical correctness guarantees with quantifiable accuracy.
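One of the best-known ecological estimators that transfers to this setting is Chao1, which extrapolates total species richness from how many species were observed exactly once or twice. A sketch of how it reads in fuzzing terms (illustrative; the STADS framework draws on a whole range of biostatistical estimators):

```python
from collections import Counter

def chao1(discoveries):
    """Chao1 lower-bound estimate of total species richness. Read for
    fuzzing: 'species' are program branches, 'individuals' are test
    inputs; `discoveries` lists the branch hit by each input."""
    freq = Counter(discoveries)                   # hit count per branch
    s_obs = len(freq)                             # branches seen so far
    f1 = sum(1 for c in freq.values() if c == 1)  # seen exactly once
    f2 = sum(1 for c in freq.values() if c == 2)  # seen exactly twice
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2          # bias-corrected variant
    return s_obs + f1 * f1 / (2 * f2)
```

Intuitively, many singletons mean the campaign is still discovering rare branches, so the estimate of the total lies well above what has been seen; when singletons vanish, the estimate converges to the observed count.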

Hybrid optimization techniques for multi-domain coupling in cyber-physical systems design

Debayan Roy TU Munich
07 Feb 2020, 3:00 pm - 4:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312
SWS Colloquium
In a cyber-physical system (CPS), a physical process is controlled by software running on a cyber platform, and there is a strong interaction between the physical dynamics, the control software, the sensors and actuators, and the cyber resources (i.e., computation, communication, and memory). These systems are common in domains such as automotive, avionics, healthcare, smart manufacturing, and smart grids, among others. The state of practice is to design CPSs using a disjoint set of tools handling different design domains. Such a design methodology has proved to be inefficient with respect to resource usage and performance. In this talk, I will discuss how models from different engineering disciplines can be integrated to design efficient cyber-physical systems. In particular, I will show two use cases. First, I will talk about a multi-resource platform consisting of high- and low-quality resources. Correspondingly, I will show that a cost-efficient switching control strategy can be designed by exploiting heterogeneous resources and effectively managing the interplay between control theory, scheduling, and formal verification. Second, I will talk about cyber-physical battery management systems (BMS) for high-power battery packs. I will specifically discuss the problem of cell balancing, which is an important task of a BMS. I will show how integrated modeling of the individual cells, battery architecture, control circuits, and cyber architecture can lead to energy- and time-efficient scheduling for active cell balancing.

Designing responsible information systems

Asia J. Biega Microsoft Research Montreal, Canada
07 Feb 2020, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
Information systems have the potential to enhance or limit opportunities when mediating user interactions. They also have the potential to violate privacy by accumulating observational data into detailed user profiles or by exposing people in sensitive contexts. This talk will cover measures and mechanisms we have proposed for mitigating various threats to user well-being in online information ecosystems. In particular, I am going to focus on two contributions in the areas of algorithmic fairness and privacy. The first contribution demonstrates how to operationalize the notion of equity in the context of search systems and how to design optimization models that achieve equity while accounting for human cognitive biases. The second ties our empirical work on profiling privacy and data collection to concepts in data protection laws. Finally, I will discuss the necessity for a holistic approach to responsible technology, from studying different types of harms, through development of different types of interventions, up to taking a step back and refusing technologies that cannot be fixed by technical tweaks.

Spectector: Principled Detection of Speculative Information Flows

Jan Reineke Fachrichtung Informatik - Saarbrücken
05 Feb 2020, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
The recent Spectre attacks exploit speculative execution and microarchitectural side channels, such as caches, to leak sensitive information. Since the underlying hardware vulnerabilities are here to stay in billions of deployed devices, software countermeasures have been developed. Rigorously reasoning about their effectiveness, however, requires a well-defined notion of security against speculative execution attacks, which has been missing until now. We put forward speculative non-interference, the first semantic notion of security against speculative execution attacks, and we develop Spectector, an algorithm based on symbolic execution to automatically prove speculative non-interference, or to detect violations. We implement Spectector in a tool, which we use to detect subtle leaks and optimization opportunities in the way major compilers place Spectre countermeasures.

veribetrfs: Verification as a Practical Engineering Tool

Jon Howell VMware Research, Bellevue, WA, USA
17 Jan 2020, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: VMR 6312
SWS Distinguished Lecture Series
Recent progress in systems verification has shown that verification techniques can scale to thousands of lines. It is time to ask whether verification can displace testing as an effective path to software correctness. The veribetrfs project is developing a verified high-performance storage system. A primary goal of the project is to reduce verification methodology to engineering practice. Veribetrfs is developed using the Iron★ methodology, a descendant of the Ironclad and IronFleet projects. So far, we have a key-value store with 100K IOPS performance and strong guarantees against data loss. This talk will give an overview of the methodology and describe how we have enhanced it in veribetrfs.

Information Consumption on Social Media: Efficiency, Divisiveness, and Trust

Mahmoudreza Babaei Max Planck Institute for Software Systems
17 Jan 2020, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 113 / Meeting ID: 6312
SWS Student Defense Talks - Thesis Proposal
Over the last decade, the advent of social media has profoundly changed the way people produce and consume information online. On these platforms, users themselves play a role in selecting the sources from which they consume information, overthrowing traditional journalistic gatekeeping. Moreover, advertisers can target users with news stories using users’ personal data.

This new model has many advantages: the propagation of news is faster, the number of news sources is large, and the topics covered are diverse. However, in this new model, users are often overloaded with redundant information, and they can get trapped in filter bubbles by consuming divisive and potentially false information. To tackle these concerns, in my thesis, I address the following questions:

(i) How efficient are users at selecting their information sources? We have defined three intuitive notions of users' efficiency in social media: link efficiency (the number of sources the user follows), in-flow efficiency (the amount of redundant information she acquires), and delay efficiency (the delay with which she receives the information). We use these three measures to assess how good users are at selecting whom to follow within the social media system in order to acquire information most efficiently.

(ii) How can we break the filter bubbles that users get trapped in? Users on social media sites such as Twitter often get trapped in filter bubbles by being exposed to radical, highly partisan, or divisive information. To prevent this, we propose an approach that injects diversity into users' information consumption by identifying non-divisive yet informative content. We propose a new method to identify less divisive information on controversial topics using features such as the publishers' political leaning.

(iii) How can we design an efficient framework for fact-checking? The proliferation of false information is a major problem in social media. To counter it, social media platforms typically rely on expert fact-checkers to detect false news. However, human fact-checkers can realistically cover only a tiny fraction of all stories, so it is important to automatically prioritize and select a small number of stories for humans to fact-check. However, the goals for prioritizing stories for fact-checking are unclear.

We identify three desired objectives to prioritize news for fact-checking. These objectives are based on the users’ perception of the truthfulness of stories. Our key finding is that these three objectives are incompatible in practice.
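The in-flow notion above can be illustrated with a toy computation: of all the story copies a user receives from the sources she follows, how many are non-redundant? (A simplified reading for illustration; the thesis's formal measures may differ.)

```python
def inflow_efficiency(followees, stories_by_source):
    """Fraction of received story copies that are non-redundant.
    Receiving the same story from several followees counts as
    redundant in-flow."""
    received = [s for src in followees
                for s in stories_by_source.get(src, [])]
    if not received:
        return 1.0
    return len(set(received)) / len(received)

# Following A, B, and C delivers story s2 three times:
feeds = {'A': ['s1', 's2'], 'B': ['s2', 's3'], 'C': ['s2']}
eff = inflow_efficiency(['A', 'B', 'C'], feeds)
```

Here only 3 of the 5 received copies are distinct stories, so the user's in-flow efficiency is 0.6; dropping the redundant source C would raise it without losing any information.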

Towards a Tight Understanding of the Complexity of Algorithmic Problems

Dániel Marx MPI-INF - D1
08 Jan 2020, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
-

Computer-Aided Programming Across Software Stack

Işıl Dillig University of Texas at Austin
16 Dec 2019, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312
SWS Distinguished Lecture Series
Program synthesis techniques aim to generate executable programs from some high-level expression of user intent, such as logical specifications, examples, or a naive reference implementation. This talk will survey different flavors of program synthesis and their applications across the entire software stack, ranging from computer end-users all the way to systems programmers. We will also illustrate how program synthesis is useful for addressing different concerns in software development, including functionality, correctness, performance, and security.
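The "examples" flavor of synthesis can be illustrated with a tiny enumerative synthesizer: enumerate arithmetic expressions over a variable, smallest first, and return the first one consistent with every input/output pair. A toy sketch, nothing like a production synthesizer:

```python
from itertools import product

def synthesize(examples, rounds=2):
    """Enumerate expressions over x built from {x, 1, 2} with + and *,
    returning the first one matching all (input, output) examples."""
    def ok(expr):
        return all(eval(expr, {'x': i}) == o for i, o in examples)
    exprs = ['x', '1', '2']
    for _ in range(rounds + 1):
        for e in exprs:
            if ok(e):
                return e
        # grow the candidate pool by combining current expressions
        exprs = [f'({a} {op} {b})'
                 for a, b in product(exprs, repeat=2)
                 for op in ('+', '*')]
    return None

# Learn f(x) = 2*x + 1 from three input/output examples:
prog = synthesize([(0, 1), (1, 3), (4, 9)])
```

Even this brute-force search finds an expression equivalent to 2*x + 1; real synthesizers tame the combinatorial explosion with pruning, deduction, and constraint solving, which is where the interesting research lies.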

WebAssembly: Mechanisation, Security, and Concurrency

Conrad Watt University of Cambridge
12 Dec 2019, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
WebAssembly is the first new language to be introduced to the Web ecosystem in over 20 years. Its official specification is given as a formal semantics, making the language a perfect target for further applications of formal methods. This talk highlights recent work which builds on this formal semantics, and discusses the ongoing development of WebAssembly's relaxed memory model, which is complicated by the language's inter-operation with JavaScript.

Understanding and Evolving the Rust Programming Language

Ralf Jung Max Planck Institute for Software Systems
10 Dec 2019, 10:00 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 8796
SWS Student Defense Talks - Thesis Proposal
Rust is a young and actively developed "systems programming language" that promises to overcome the seemingly fundamental tradeoff between high-level safety guarantees and low-level control over resource management. In this talk I will present my thesis work on understanding and evolving this groundbreaking new language.

The first part of the talk is about building formal foundations for the existing Rust language. Rust employs a strong, ownership-based type system to rule out common systems programming anomalies, but then extends the expressive power of this core type system through libraries that internally use unsafe features. Unfortunately, this "extensible" aspect of the Rust language makes Rust's safety claims rather non-obvious, and there is good reason to question whether they actually hold. To address this, we have developed RustBelt, the first formal (and machine-checked) safety proof for a language representing a realistic subset of Rust. Our proof supports the extensible nature of Rust's safety story in the sense that, for each new Rust library that uses unsafe features, we can say what verification condition it must satisfy in order for it to be deemed a safe and compositional extension to the language.

The second part is about evolving the semantics of Rust to better support compiler optimizations. In particular, Rust's reference types provide strong aliasing guarantees, which a clever optimizing compiler should be able to exploit. However, if no care is taken, unsafe code (which plays an essential role in the Rust ecosystem, as mentioned above) can subvert these guarantees. To address this, we have developed Stacked Borrows, a new operational semantics for memory accesses in Rust. On the one hand, Stacked Borrows defines an aliasing discipline and declares programs violating it to have undefined behavior, thus making it possible for us to formally verify the soundness of a number of useful type-based intraprocedural optimizations. On the other hand, we have also implemented the Stacked Borrows semantics in Miri, an interpreter for Rust, and run large parts of the Rust standard library test suite in Miri. In so doing, we have validated that our semantics is sufficiently permissive to account for real-world unsafe Rust code.

Towards Usability in Private Data Analytics

Reinhard Munz Max Planck Institute for Software Systems
06 Dec 2019, 2:00 pm - 3:00 pm
Kaiserslautern building G26, room 607
simultaneous videocast to Saarbrücken building E1 5, room 005 / Meeting ID: 6312
SWS Student Defense Talks - Thesis Defense
Private data analytics systems should ideally provide analysts with the required analytic accuracy while guaranteeing the specified privacy to the individuals whose data is analyzed. Devising a general system that works for a broad range of datasets and analytic scenarios has proven to be difficult.

Differentially private systems allow for proven formal privacy guarantees. To provide these guarantees, however, differentially private mechanisms often need to add large amounts of noise to statistical results, which impairs their usability. Thus, despite the advent of mechanisms with formal guarantees, industry still uses inferior ad-hoc mechanisms that provide better analytic accuracy.

In my thesis I follow two approaches to improve the usability of private data analytics systems in general and differentially private systems in particular. First, I revisit ad-hoc mechanisms and explore the possibilities of systems that do not provide Differential Privacy or only a weak version thereof. Based on an attack analysis, I devise a set of new protection mechanisms including Query Based Bookkeeping (QBB). In contrast to previous systems, QBB only requires the history of analysts’ queries in order to provide privacy protection, but does NOT require any knowledge about the protected individuals’ data.

In my second approach I use the insights gained with QBB to propose UniTraX, the first differentially private analytics system that allows analysts to analyze part of a protected dataset without affecting the other parts and without giving up on accuracy. I show UniTraX’s usability by way of multiple case studies on real-world datasets across different domains. In all cases I demonstrate that UniTraX allows more queries than previous differentially private data analytics systems at moderate runtime overheads.

Stronger Higher-order Automation

Sophie Tourret MPI-INF - RG 1
04 Dec 2019, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Automated reasoning in first-order logic (FOL) is becoming a mature research domain. It has led to the development of powerful tools such as superposition-based theorem provers and SMT solvers (Satisfiability Modulo Theory solvers), that have found and continue to find many applications in industry and research.

One such application is the automation of proofs in interactive theorem proving (ITP), where proof assistants are used to write computer-checked proofs of theorems, generally expressed in a variant of higher-order logic (HOL). This automation is realised via "hammers", which attempt to delegate the proof obligations to first-order automated provers. However, in the translation from HOL to FOL, hammers obfuscate the structure of terms and formulas through the application of a sequence of encodings, although it is this very structure that the provers exploit to find proofs efficiently.

This situation is less than ideal. Until a few years ago, the ITP and ATP communities worked in parallel, mostly ignoring each other, but nowadays there is a trend pushing to bring the two communities closer. The work that I will present in this talk is part of this trend. It includes ongoing research that is either improving higher-order automation with respect to ITP applications or using ITP as a vehicle for ATP research.

Prusti – Deductive Verification for Rust

Alex Summers ETH Zurich
03 Dec 2019, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312
SWS Colloquium
Producing reliable systems software is a major challenge, plagued by the ubiquitous problems of shared mutable state, pointer aliasing, dynamic memory management, and subtle concurrency issues such as race conditions; even expert programmers struggle to tame the wide variety of reasons why their programs may not behave as they intended. Formal verification offers potential solutions to many of these problems, but typically at a very high price: the mathematical techniques employed are highly-complex, and difficult for even expert researchers to understand and apply.

The relatively new Rust programming language is designed to help with the former problem: a powerful ownership type system requires programmers to specify and restrict their discipline for referencing heap locations, providing in return the strong guarantee (almost: see the talk, and RustBelt!) that code type-checked by this system will be free from dangling pointers, unexpected aliasing, race conditions and the like. While this rules out a number of common errors, the question of whether a program behaves as intended remains.

In this talk, I’ll give an overview of the Prusti project, which leverages Rust’s type system and compiler analyses for formal verification. By combining the rich information available about a type-checked Rust program with separate user-specification of intended behaviour, Prusti enables a user to verify functional correctness of their code without interacting with a complex program logic; in particular, specifications and all interactions with our implemented tool are at the level of abstraction of Rust expressions.