Recent events

Post-quantum Challenges in Secure Computation

Nico Döttling
CISPA
Joint Lecture Series
05 Dec 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
In the mid-1990s, cryptography went into a foundational crisis when efficient quantum algorithms were discovered that could break almost all public key encryption schemes known at the time. Since then, an enormous research effort has been invested into basing public key cryptography, and secure computation in general, on problems which are conjectured to be hard even for quantum computers. While this research program has been resoundingly successful, even leading the way to cryptographic milestones such as fully homomorphic encryption, there are still important cryptographic primitives for which no post-quantum secure protocols are known. Until very recently, one such primitive was 2-message oblivious transfer, a fundamental primitive in the field of secure two- and multi-party computation. I will discuss a novel construction of this primitive from the Learning With Errors (LWE) assumption, a lattice-based problem which is known to be as hard as worst-case lattice problems and conjectured to be post-quantum secure. The security of our construction relies on a fundamental Fourier-analytic property of lattices, namely the transference principle: either a lattice or its dual must have short vectors.
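The transference principle mentioned above is usually stated quantitatively; the following is one common formulation (Banaszczyk's transference bound), sketched here for orientation rather than as the exact statement used in the construction:

```latex
% \lambda_1(L): length of a shortest nonzero vector of the lattice L;
% L^*: the dual lattice; n: the dimension.
% Banaszczyk's transference bound:
\[
  1 \le \lambda_1(L) \cdot \lambda_n(L^*) \le n .
\]
% Since \lambda_1(L^*) \le \lambda_n(L^*), this yields
\[
  \lambda_1(L) \cdot \lambda_1(L^*) \le n ,
\]
% so if L has no nonzero vector of length below \sqrt{n}, then its dual
% L^* must contain one of length at most \sqrt{n}: either a lattice or
% its dual has short vectors.
```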

Privacy-Compliant Mobile Computing

Paarijaat Aditya
Max Planck Institute for Software Systems
SWS Student Defense Talks - Thesis Defense
03 Dec 2018, 4:30 pm - 5:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
Sophisticated mobile computing, sensing, and recording devices like smartphones, smartwatches, and wearable cameras are carried by their users virtually around the clock, blurring the distinction between the online and offline worlds. While these devices enable transformative new applications and services, they also introduce entirely new threats to users' privacy, because they can capture a complete record of the user's location, online and offline activities, and social encounters, including an audiovisual record. Such a record of users' personal information is highly sensitive and is subject to numerous privacy risks. In this thesis, we have investigated and built systems to mitigate two such privacy risks: (1) privacy risks due to ubiquitous digital capture, where bystanders may inadvertently be captured in photos and videos recorded by other nearby users, and (2) privacy risks to users' personal information introduced by a popular class of apps called 'mobile social apps'. We present two systems, called I-Pic and EnCore, built to mitigate these two privacy risks.

Both systems aim to put the users back in control of what personal information is being collected and shared, while still enabling innovative new applications. We built working prototypes of both systems and evaluated them through actual user deployments. Overall we demonstrate that it is possible to achieve privacy-compliant digital capture and it is possible to build privacy-compliant mobile social apps, while preserving their intended functionality and ease-of-use. Furthermore, we also explore how the two solutions can be merged into a powerful combination, one which could enable novel workflows for specifying privacy preferences in image capture that do not currently exist.

Survey Equivalence: An Information-theoretic Measure of Classifier Accuracy When the Ground Truth is Subjective

Paul Resnick
University of Michigan, School of Information
SWS Distinguished Lecture Series
27 Nov 2018, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 111
Many classification tasks have no objective ground truth. Examples include: which content or explanation is "better" according to some community? is this comment toxic? what is the political leaning of this news article? The traditional modeling approach assumes each item has an objective true state that is perceived by humans with some random error. It fails to account for the fact that people have greater agreement on some items than others. I will describe an alternative model where the true state is a distribution over labels that raters from a specified population would assign to an item. This leads to information gain (mutual information) as a theoretically justified and computationally tractable measure of a classifier's quality, and an intuitive interpretation of information gain in terms of the sample size for a survey that would yield the same expected error rate.
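As a rough illustration of the proposed measure (not the speaker's implementation; the data below is made up), the empirical mutual information between a classifier's outputs and labels drawn from a rater population can be computed directly from co-occurrence counts:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information I(C; Y) in bits between a
    classifier's outputs C and rater labels Y, given (c, y) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    pc = Counter(c for c, _ in pairs)       # marginal counts of C
    py = Counter(y for _, y in pairs)       # marginal counts of Y
    mi = 0.0
    for (c, y), cnt in joint.items():
        p_cy = cnt / n
        # p(c,y) / (p(c) * p(y)) = cnt * n / (pc[c] * py[y])
        mi += p_cy * math.log2(cnt * n / (pc[c] * py[y]))
    return mi

# Hypothetical toxicity judgments: each item is labeled by one rater
# drawn from the population; the classifier's output is paired with it.
pairs = [("toxic", "toxic")] * 40 + [("ok", "ok")] * 40 + \
        [("toxic", "ok")] * 10 + [("ok", "toxic")] * 10
print(round(mutual_information(pairs), 3))  # → 0.278
```

A higher information gain means the classifier's output tells us more about what a randomly drawn rater would say, which the talk relates to the size of a survey achieving the same expected error.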

The Reachability Problem for Vector Addition Systems is Not Elementary

Wojciech Czerwinski
University of Warsaw
SWS Colloquium
22 Nov 2018, 4:00 pm - 5:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
I will present a recent non-elementary lower bound on the complexity of the reachability problem for Vector Addition Systems, and I plan to show the main insights of the proof. In particular, I will present a surprising equation on fractions, which is the core of the new source of hardness found in VASes.

More Realistic Scheduling Models and Analyses for Advanced Real-Time Embedded Systems

Georg von der Brueggen
TU Dortmund
SWS Colloquium
22 Nov 2018, 2:30 pm - 3:30 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
In real-time embedded systems, for each task the compliance to timing constraints has to be guaranteed in addition to the functional correctness. The first part of the talk considers the theoretical comparison of scheduling algorithms and schedulability tests by evaluating speedup factors for non-preemptive scheduling, which leads to a discussion about general problems of resource augmentation bounds. In addition, it is explained how utilization bounds can be parametrized, resulting in better bounds for specific scenarios, i.e., when analyzing non-preemptive Rate-Monotonic scheduling as well as task sets inspired by automotive applications.

In the second part, a setting similar to mixed-criticality systems is considered and the criticism on previous work in this area is detailed. Hence, a new system model that allows a better applicability to realistic scenarios, namely Systems with Dynamic Real-Time Guarantees, is explained. This model is extended to a multiprocessor scenario, considering CPU overheating as a possible cause for mixed-criticality behaviour. Finally, a way to determine the deadline-miss probability for such systems is described that drastically reduces the runtime of such calculations.

The third part discusses tasks with self-suspension behaviour, explains a fixed-relative-deadline strategy for segmented self-suspension tasks with one suspension interval, and details how this approach can be exploited in a resource-oriented partitioned scheduling. Furthermore, it is explained how the gap between the dynamic and the segmented self-suspension model can be bridged by hybrid models.

Verified Secure Routing

Peter Müller
ETH Zurich
SWS Distinguished Lecture Series
19 Nov 2018, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SCION is a new Internet architecture that addresses many of the security vulnerabilities of today's Internet. Its clean-slate design provides, among other properties, route control, failure isolation, and multi-path communication. The verifiedSCION project is an effort to formally verify the correctness and security of SCION. It aims to provide strong guarantees for the entire architecture, from the protocol design to its concrete implementation. The project uses stepwise refinement to prove that the protocol withstands increasingly strong attackers. The refinement proofs assume that all network components such as routers satisfy their specifications. This property is then verified separately using deductive program verification in separation logic. This talk will give an overview of the verifiedSCION project and explain, in particular, how we verify code-level properties such as memory safety, I/O behavior, and information flow security.

Feedback Control for Predictable Cloud Computing

Dr. Martina Maggio
Lund University
SWS Colloquium
14 Nov 2018, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Cloud computing gives the illusion of infinite computational capacity and allows for on-demand resource provisioning. As a result, over the last few years, the cloud computing model has experienced widespread industrial adoption, and companies like Netflix have offloaded their entire infrastructure to the cloud. However, since even the largest datacenter is of finite size, cloud infrastructures have experienced overload due to overbooking or transient failures. In essence, this is an excellent opportunity for the design of control solutions that tackle the problem of mitigating overload peaks using feedback from the computing infrastructure. Exploiting control-theoretical principles and taking advantage of the knowledge and analysis capabilities of control tools, it is possible to provide formal guarantees on the predictability of the cloud platform. This talk introduces recent research advances on feedback control in the cloud computing domain, and discusses control solutions and future research for both cloud application development and infrastructure management. In particular, it covers application brownout, control-based load-balancing, and autoscaling.
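The feedback idea can be sketched with a toy proportional controller that resizes a service's replica pool toward a utilization setpoint (an illustrative sketch only, not the speaker's controllers; all names and numbers here are invented):

```python
def autoscale_step(replicas, measured_util, target_util=0.6, gain=1.0,
                   min_replicas=1, max_replicas=100):
    """One proportional control step: adjust the replica count so that
    measured utilization is driven toward the target setpoint."""
    # error > 0 means the service is hotter than the setpoint
    error = measured_util - target_util
    # scale the correction by current capacity; with gain = 1 this
    # reduces to replicas * measured / target (similar in spirit to
    # Kubernetes' Horizontal Pod Autoscaler rule)
    correction = gain * error / target_util * replicas
    desired = replicas + correction
    return max(min_replicas, min(max_replicas, round(desired)))

print(autoscale_step(10, 0.9))  # prints 15: overloaded → scale out
print(autoscale_step(10, 0.3))  # prints 5: underloaded → scale in
```

Closing the loop each control period with measured (not booked) utilization is what allows formal statements about how the system reacts to overload peaks.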

Learning from the People: From Normative to Descriptive Solutions to Problems in Security, Privacy & Machine Learning

Elissa Redmiles
University of Maryland
SWS Colloquium
13 Nov 2018, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
A variety of experts -- computer scientists, policy makers, judges -- constantly make decisions about best practices for computational systems. They decide which features are fair to use in a machine learning classifier predicting whether someone will commit a crime, and which security behaviors to recommend and require from end-users. Yet, the best decision is not always clear. Studies have shown that experts often disagree with each other and, perhaps more importantly, with the people for whom they are making these decisions: the users.

This raises a question: Is it possible to learn best practices directly from the users? The field of moral philosophy suggests yes, through the process of descriptive decision-making, in which we observe people's preferences and then infer best practice rather than using experts' normative (prescriptive) determinations to define best practice. In this talk, I will explore the benefits and challenges of applying such a descriptive approach to making computationally relevant decisions regarding: (i) selecting security prompts for an online system; (ii) determining which features to include in a classifier for jail sentencing; (iii) defining standards for ethical virtual reality content.

Fairness for Sequential Decision Making Algorithms

Hoda Heidari
ETH Zurich
SWS Colloquium
12 Nov 2018, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
Fairness considerations in settings where decisions are made by supervised learning algorithms (e.g. criminal risk assessment) have received considerable attention recently. As the fairness literature continues to expand mostly around this canonical learning task, it is important to recognize that many real-world applications of ML fall outside the category of supervised, one-shot learning. In this presentation, I will talk about two scenarios in which algorithmic decisions are made in a sequential manner and over time. I will argue that in such settings, being fair---at a minimum---requires decisions to be "consistent" across individuals who arrive at different time steps; that is, similar individuals must be treated similarly. I will then talk about how such consistency constraints affect learning.

In the first part of the talk, I will introduce a generic sequential decision making framework, in which at each time step the learning algorithm receives data corresponding to a new individual (e.g. a new job application) and must make an irrevocable decision about him/her (e.g. whether to hire the applicant) based on observations it has made so far. I propose a general framework for post-processing predictions made by a black-box learning model, so that the resulting sequence of outcomes is guaranteed to be consistent. I show, both theoretically and via simulations, that imposing consistency constraints will not significantly slow down learning.

In the second part of the talk, I will focus on fairness considerations in a particular type of market---namely, combinatorial prediction markets---where traders can submit limit orders on various security bundles, and the market maker is tasked with executing these orders in a fair manner. The main challenge in running such a market is that executing one order can potentially change the price of every other order in the book. I define the notion of a "fair trading path", which at a high level guarantees that similar orders are executed similarly: no order executes at a price above its limit, and every order executes when its market price falls below its limit price. I present a market algorithm that respects these fairness conditions, and evaluate it using real combinatorial predictions made during the 2008 U.S. Presidential election.

I will conclude by comparing my work with previous papers on fairness for online learning, and by presenting a list of directions for future work.

Generating Software Tests

Andreas Zeller
Fachrichtung Informatik - Saarbrücken/CISPA
Joint Lecture Series
07 Nov 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Software has bugs. What can we do to find as many of these as possible? In this talk, I show how to systematically test software by generating such tests automatically, starting with simple random "fuzzing" generators and then proceeding to more effective grammar-based and coverage-guided approaches. Being fully automatic and easy to deploy, such fuzzers run at little cost, yet are very effective in finding bugs: our own LangFuzz grammar-based test generator for JavaScript runs around the clock for the Firefox, Chrome, and Edge web browsers and so far has found more than 2,600 confirmed bugs. Our latest test generator prototypes are even able to automatically learn the input language of a given program, which makes it possible to generate highly effective tests for arbitrary programs without any particular setup.
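The simple random fuzzing starting point can be sketched in a few lines (a minimal illustration in the spirit of the talk, with a made-up buggy target; not the speaker's tools):

```python
import random

def fuzz(max_len=100, chars=(32, 126)):
    """Generate a random string of printable ASCII characters."""
    n = random.randrange(max_len + 1)
    return "".join(chr(random.randrange(chars[0], chars[1] + 1))
                   for _ in range(n))

def run_fuzzer(target, trials=1000, seed=0):
    """Feed random inputs to `target`; collect inputs that raise."""
    random.seed(seed)
    failures = []
    for _ in range(trials):
        s = fuzz()
        try:
            target(s)
        except Exception as e:
            failures.append((s, e))
    return failures

# Toy target with a planted bug: it crashes on inputs containing '%'.
def parse(s):
    if "%" in s:
        raise ValueError("bad escape")

print(len(run_fuzzer(parse)) > 0)  # the fuzzer finds the bug
```

Grammar-based and coverage-guided fuzzers replace the blind `fuzz` generator with one that produces syntactically valid inputs or favors inputs that reach new code.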

Compiling Dynamical Systems for Efficient Simulation on Reconfigurable Analog Computers

Sara Achour
MIT
SWS Colloquium
22 Oct 2018, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Programmable analog devices are a powerful new computing substrate that are especially appropriate for performing computationally intensive simulations of dynamical systems. Until recently, the de-facto standard for programming these devices required hardware specialists to manually program the analog device to model the dynamical system of interest. In this talk, I will present Arco, a compiler that automatically configures analog devices to perform dynamical system simulation, and Jaunt, a compilation technique that scales dynamical system parameters to change the speed of the simulation and render the resulting simulation physically realizable given the operating constraints of the analog hardware platform. These techniques capture the domain knowledge required to fully exploit the capabilities of reconfigurable analog devices, eliminating a key obstacle to the widespread adoption of these devices.

Justified representation in multiwinner voting: axioms and algorithms

Edith Elkind
University of Oxford
SWS Distinguished Lecture Series
19 Oct 2018, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Suppose that a group of voters wants to select k ≥ 1 alternatives from a given set, and each voter indicates which of the alternatives are acceptable to her: the alternatives could be conference submissions, applicants for a scholarship or locations for a fast food chain. In this setting it is natural to require that the winning set represents the voters fairly, in the sense that large groups of voters with similar preferences have at least some of their approved alternatives in the winning set. We describe several ways to formalize this idea, and show how to use it to classify voting rules. For one of our axioms, the only known voting rule that satisfies it is not polynomial-time computable, and it was conjectured that no voting rule that satisfies this axiom can be polynomial-time computable; however, we will show that this is not the case.
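The basic member of this axiom family, justified representation (JR), can be made concrete with a small checker (a sketch on a made-up profile; the stronger axioms discussed in the talk refine this condition):

```python
def satisfies_jr(approvals, committee, candidates):
    """Check justified representation (JR): no group of at least n/k
    voters who all approve some common candidate may be left entirely
    unrepresented by the size-k committee."""
    n, k = len(approvals), len(committee)
    W = set(committee)
    # voters with no approved candidate in the committee
    unrepresented = [A for A in approvals if not (set(A) & W)]
    for c in candidates:
        cohesive = sum(1 for A in unrepresented if c in A)
        if cohesive * k >= n:   # a blocking group of size >= n/k exists
            return False
    return True

# Hypothetical instance: 4 voters, committee size k = 2, so n/k = 2.
approvals = [{"a"}, {"a"}, {"b"}, {"c"}]
print(satisfies_jr(approvals, ["a", "b"], {"a", "b", "c"}))  # True
print(satisfies_jr(approvals, ["b", "c"], {"a", "b", "c"}))  # False:
# the two 'a' voters form a group of size n/k that jointly approves
# 'a' yet has no approved alternative in the committee
```

Checking JR for a given committee is polynomial, as above; the computational question in the talk concerns finding committees satisfying the stronger axioms.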

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

Dana Drachsler Cohen
ETH Zurich
SWS Colloquium
15 Oct 2018, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
In this talk, I will present AI2, a sound and scalable analyzer for deep neural networks. Based on overapproximation, AI2 can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks). The key insight behind AI2 is to phrase reasoning about safety and robustness of neural networks in terms of classic abstract interpretation, making it possible to leverage decades of advances in that area. To this end, I will introduce abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling layers. This makes it possible to handle real-world neural networks, which are often built out of these types of layers. I will also empirically demonstrate that (i) AI2 is precise enough to prove useful specifications (e.g., robustness), (ii) AI2 can be used to certify the effectiveness of state-of-the-art defenses for neural networks, and (iii) AI2 is significantly faster than existing analyzers based on symbolic analysis, which often take hours to verify simple fully connected networks.
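To give a flavor of such abstract transformers, here is a sketch using the simplest possible abstract domain, intervals (boxes), rather than the more precise zonotope-style domains AI2 actually employs:

```python
def relu_interval(lo, hi):
    """Interval abstract transformer for ReLU: the image of [lo, hi]
    under max(0, x) is exactly [max(0, lo), max(0, hi)]."""
    return max(0.0, lo), max(0.0, hi)

def affine_interval(bounds, weights, bias):
    """Sound interval transformer for an affine neuron
    y = sum_j w_j * x_j + b, given per-input bounds [(lo_j, hi_j)]."""
    lo = hi = bias
    for (l, h), w in zip(bounds, weights):
        if w >= 0:
            lo += w * l
            hi += w * h
        else:
            lo += w * h   # a negative weight swaps which end is minimal
            hi += w * l
    return lo, hi

# One neuron over two inputs, each perturbed within [0, 1]:
pre = affine_interval([(0.0, 1.0), (0.0, 1.0)], [1.0, -2.0], 0.5)
print(pre)                  # (-1.5, 1.5): bounds before the activation
print(relu_interval(*pre))  # (0.0, 1.5): bounds after ReLU
```

Propagating such sound over-approximations layer by layer, and checking that every point in the output region gets the same label, yields a robustness proof; richer domains tighten the bounds at higher cost.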

Complexity in Question Answering

Rishiraj Saha Roy
MPI-INF - D5
Joint Lecture Series
10 Oct 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Answering natural-language questions is a remarkable ability of search engines. However, brittleness in the state-of-the-art is exposed when handling complexity in question formulation. Such complexity has two distinct dimensions: (i) diversity of expressions which convey the same information need, and, (ii) complexity in the information need itself. We propose initial solutions to these challenges: (i) syntactic differences in question formulation can be tackled with a continuous-learning framework that extends template-based answering with semantic similarity and user feedback; (ii) complexity in information needs can be addressed by stitching pieces of evidence from multiple documents to build a noisy graph, within which answers can be detected using optimal interconnections. The talk will discuss results for these proposals, and conclude with promising open directions in question answering.

Improving the energy efficiency of virtualized datacenters

Vlad Nitu
Toulouse University
SWS Colloquium
21 Sep 2018, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
Energy consumption is an important concern for cloud datacenters. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020, US datacenters alone will spend about $13 billion on energy bills. Generally, servers are manufactured in such a way that they achieve high energy efficiency at high utilizations. Thus, to keep the cost per computation low, all datacenter servers should push their utilization as high as possible. In order to fight the historically low utilization, cloud computing adopted server virtualization. This technology enables a cloud provider to pack (consolidate) the entire set of virtual machines (VMs) onto a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with a set of long-term unused resources (called 'holes'). Our first contribution is a cloud management system that dynamically splits/fuses VMs so that they can better fill the holes. However, datacenter resource fragmentation has a more fundamental problem: over time, cloud applications demand more and more memory, while physical servers provide more and more CPU. In today's datacenters, the two resources are strongly coupled, since they are bound to a physical server. Our second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server. The underutilization observed on physical servers also holds for virtual machines: it has been shown that VMs consume only a small fraction of their allocated resources because cloud customers are not able to correctly estimate the resource amount necessary for their applications. Our third contribution is a system that estimates the memory consumption (i.e. the working set size) of a VM with low overhead and high accuracy. Thereby, we can consolidate the VMs based on their working set size (not the booked memory).
However, the drawback of this approach is the risk of memory starvation. If one or multiple VMs have a sharp increase in memory demand, the physical server may run out of memory. This event is undesirable because the cloud platform is then unable to provide the client with the memory they paid for. Our fourth contribution is a system that allows a VM to use remote memory provided by a different server in the rack. Thereby, in the case of peak memory demand, our system allows the VM to allocate memory on a remote physical server.
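The consolidation step that packs VMs by working set size rather than booked memory can be illustrated with a standard first-fit-decreasing packing (a sketch with invented VM names and sizes, not the thesis's actual placement algorithm):

```python
def consolidate(vm_wss, server_capacity):
    """First-fit-decreasing packing of VMs onto servers, sized by each
    VM's estimated working set size (WSS) instead of its booked RAM."""
    servers = []  # list of [free_capacity, [vm names]]
    for name, wss in sorted(vm_wss.items(), key=lambda kv: -kv[1]):
        for s in servers:
            if s[0] >= wss:          # first server with enough room
                s[0] -= wss
                s[1].append(name)
                break
        else:                        # no existing server fits: open one
            servers.append([server_capacity - wss, [name]])
    return [vms for _, vms in servers]

# If each VM booked 16 GB, these four VMs would need four servers, but
# their measured working sets (in GB, hypothetical) fit on two:
wss = {"vm1": 9, "vm2": 7, "vm3": 6, "vm4": 5}
print(consolidate(wss, 16))  # → [['vm1', 'vm2'], ['vm3', 'vm4']]
```

Packing by WSS is exactly what creates the starvation risk described above, which is why the fourth contribution adds remote memory as a safety valve.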

Gradually Typed Symbolic Expressions: an Approach for Developing Embedded Domain-Specific Modeling Languages

David Broman
KTH Royal Institute of Technology
SWS Colloquium
13 Sep 2018, 2:30 pm - 3:30 pm
Kaiserslautern building G26, room 113
simultaneous videocast to Saarbrücken building E1 5, room 029
Embedding a domain-specific language (DSL) in a general purpose host language is an efficient way to develop a new DSL. Various kinds of languages and paradigms can be used as host languages, including object-oriented, functional, statically typed, and dynamically typed variants, all having their pros and cons. For deep embedding, statically typed languages enable early checking and potentially good DSL error messages, instead of reporting runtime errors. Dynamically typed languages, on the other hand, enable flexible transformations, thus avoiding extensive boilerplate code. In this talk, I will discuss the concept of gradually typed symbolic expressions that mix static and dynamic typing for symbolic data. The key idea is to combine the strengths of dynamic and static typing in the context of deep embedding of DSLs. Moreover, I will briefly outline an ongoing research effort of developing a new framework for developing heterogeneous domain-specific languages.

Timed C: An Extension to the C Programming Language for Real-Time Systems

Saranya Natarajan
KTH Royal Institute of Technology
SWS Colloquium
12 Sep 2018, 3:30 pm - 4:30 pm
Kaiserslautern building G26, room 113
simultaneous videocast to Saarbrücken building E1 5, room 422
The design and implementation of real-time systems require that both the logical and the temporal behaviour are correct. There exist several specialized languages and tools that use the notion of logical time, as well as industrial-strength languages such as Ada and RTSJ that incorporate direct handling of real time. Although these languages and tools have been shown to be good alternatives for safety-critical systems, most commodity real-time and embedded systems today are implemented in the standard C programming language. Such systems typically target proprietary bare-metal platforms, standard POSIX-compliant platforms, or open-source operating systems. It is, however, error-prone to develop large, reliable, and portable systems based on these APIs. In this talk, I will present an extension to the C programming language, called Timed C, with a minimal set of language primitives, and show how a retargetable source-to-source compiler can be used to compile and execute simple, expressive, and portable programs.

PROSA: A Foundation for Verified Schedulability Analysis

Felipe Augusto Queiroz de Cerqueira
Max Planck Institute for Software Systems
SWS Student Defense Talks - Thesis Proposal
10 Sep 2018, 4:00 pm - 5:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 422
Real-time systems comprise the class of systems that are subject to physical timing constraints, i.e., in order to be considered sound, they must be checked for both functional and temporal correctness. Regarding the latter, apart from choosing appropriate scheduling policies to be applied at runtime, system designers must confirm that the temporal requirements of the system are met by using methods known as schedulability analyses.

As opposed to the more measurement-based, reactive resource provisioning in general-purpose systems, schedulability analyses rely on statically deriving properties that characterize the temporal worst-case behavior of the system, and thus heavily rely on mathematical foundations.

In recent years, however, the trust in those foundations has been challenged, since a considerable number of published analyses were found to be flawed due to incorrect pen-and-paper proofs. Instead of attributing those issues solely to human error, we argue that there is a fundamental problem to be addressed in real-time scheduling theory: system models and analyses have become increasingly complex and difficult to verify, placing a heavy burden on authors and reviewers. Since many real-time systems target safety-critical applications, it is crucial for such systems to be grounded on more robust mathematical foundations.

To address this lack of trust in state-of-the-art schedulability analyses, I will present Prosa, a long-term project for the formalization and verification of real-time schedulability analysis, which aims to provide the theoretical support for the next generation of certified real-time systems. To ensure full correctness of the proofs, Prosa is written in and mechanically verified with Coq, a powerful and mature proof assistant with support for higher-order logic and computations.

HAMS: Harnessing AutoMobiles for Safety

Venkat Padmanabhan
Microsoft Research India
SWS Distinguished Lecture Series
24 Aug 2018, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
Road safety is a major public health issue, with road accidents accounting for an estimated 1.25 million fatalities, and many more injuries, the world over each year. The problem is particularly acute in India, with nearly a quarter of a million fatalities every year, i.e., 20% of the world's total. Besides the heavy human cost, road accidents also impose a significant economic cost. The crux of the problem is that the major factors impacting safety -- vehicles, roads, and people -- have virtually no ongoing monitoring today. In the HAMS project at Microsoft Research India, we employ a dashboard-mounted smartphone, and the array of sensors it includes, as a virtual harness, with a view to monitoring drivers and their driving. We focus primarily on the camera sensors of the smartphone: both the front camera, which faces the driver, and the back camera, which looks out to the front of the vehicle. We address the challenges arising from our choice of low-cost generic sensors instead of more expensive specialized sensors, the need for efficient processing on a smartphone, and the demands of robust operation in uncontrolled environments. HAMS has been piloted as part of a driver training program, with promising early results.

Not Your Typical Objects: Made from Raw Materials Augmented with Sensing and Computation

Phillip Stanley-Marbell
University of Cambridge
SWS Colloquium
21 Aug 2018, 2:00 pm - 3:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Materials and manufacturing processes can be made to adapt to the way in which their end-products are used, by imbuing raw materials with sensing and computation elements. By making the algorithms that run on these computing elements aware of the physics of the objects in which they are embedded, computation-and-sensing augmented materials could change the way we think about the border between inanimate objects and computing systems. One way to exploit knowledge about the physical world in the programs running on (or in) these objects is to exploit the fact that these programs can often tolerate noise and other deviations from correctness in their input data. This talk will highlight research that builds on these observations.

Computing with Simple Dynamics and Biological Applications

Emanuele Natale
MPI-INF - D1
Joint Lecture Series
01 Aug 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
In this talk, I will present my work on simple distributed protocols for fundamental coordination problems such as plurality and valid consensus, rumor spreading, and distributed clustering in stochastic models of interaction. I will discuss applications in the research area of Biological Distributed Algorithms, which aims to understand, through the algorithmic lens, the collective behavior of biological systems such as social insects. This setting has recently motivated the study of some interesting new problems, such as the investigation of systems where communication is affected by noise in a way which cannot be handled by error correction codes. More recently, as a fellow of the Simons Institute for the Theory of Computing, I have been working at the interface between Theoretical Computer Science and Computational Neuroscience. I will provide a brief overview of some research topics in this direction.
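One of the simple dynamics studied in this area is the 3-majority rule for plurality consensus; the following toy simulation (an illustrative sketch, not the speaker's exact model) shows a 60/40 split of opinions typically collapsing to the initial plurality:

```python
import random

def three_majority(opinions, rounds=50, seed=1):
    """Simulate 3-majority dynamics: in each round, every agent samples
    three agents uniformly at random (with replacement) and adopts the
    majority opinion among them (the first sample if all three differ)."""
    random.seed(seed)
    ops = list(opinions)
    for _ in range(rounds):
        nxt = []
        for _ in ops:
            a, b, c = random.choices(ops, k=3)
            if a == b or a == c:
                nxt.append(a)
            elif b == c:
                nxt.append(b)
            else:
                nxt.append(a)   # tie-break: keep the first sample
        ops = nxt
    return ops

# 100 agents, opinion 'X' starts with a 60/40 plurality.
result = three_majority(["X"] * 60 + ["Y"] * 40)
print(set(result))  # typically converges to the plurality opinion
```

Dynamics this simple converge quickly with high probability, which is part of what makes them plausible models for the coordination behavior of biological systems.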

Practical Program Analysis

Maria Christakis
Max Planck Institute for Software Systems
Joint Lecture Series
04 Jul 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Sound static analysis over-approximates the set of all possible executions in a given program in order to prove the absence of errors in the program. Due to this over-approximation, sound static analysis may generate spurious warnings about executions that are not actually erroneous or not even possible in the program. To become more practical, many static analyzers give up soundness by design. This means that they do not check certain properties or that they check them under certain unsound assumptions, such as the absence of arithmetic overflow. At the other end of the soundness spectrum, we have automatic test-case generation, which typically under-approximates the set of possible program executions. The goal of test-case generation is not to prove the absence of errors in the program but, rather, their existence. In this talk, I will present an overview of my research on combining these program analysis techniques to improve their overall automation, performance, and accuracy on real programs.

On the expressive power of user-defined effects: Effect handlers, monadic reflection, delimited continuations

Sam Lindley
University of Edinburgh
SWS Colloquium
29 Jun 2018, 2:00 pm - 3:30 pm
Saarbrücken building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
We compare the expressive power of three programming abstractions for user-defined computational effects: Plotkin and Pretnar's effect handlers, Filinski's monadic reflection, and delimited control. This comparison allows a precise discussion of the relative expressiveness of each programming abstraction. It also demonstrates the sensitivity of the relative expressiveness of user-defined effects to seemingly orthogonal language features.

We present each notion as an extension of a simply-typed core lambda-calculus with an effect type system. Using Felleisen's notion of a macro translation, we show that these abstractions can macro-express each other, provided we disregard types. Alas, not all of the translations are type-preserving; moreover, no alternative type-preserving macro translations exist. We show that if we add suitable notions of polymorphism to the core calculus and its extensions, then all of the translations can be adapted to preserve typing.
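
As a rough illustration of handling a user-defined effect (here, mutable state), Python generators can stand in for a delimited computation, with a driver loop playing the role of the handler. This is only a sketch in a different language, not one of the calculi compared in the talk:

```python
def handle_state(computation, init):
    """A tiny 'state effect' handler: the generator performs ('get',) and
    ('put', v) operations by yielding them; this driver interprets them,
    acting as an effect handler over a delimited computation."""
    state = init
    gen = computation()
    try:
        request = next(gen)
        while True:
            if request[0] == 'get':
                request = gen.send(state)      # resume with current state
            elif request[0] == 'put':
                state = request[1]
                request = gen.send(None)       # resume with unit
    except StopIteration as stop:
        return stop.value, state               # final result and state

def counter():
    x = yield ('get',)        # perform Get
    yield ('put', x + 1)      # perform Put
    y = yield ('get',)
    return y * 10

print(handle_state(counter, 41))   # (420, 42)
```

The `yield` plays the role of performing an operation, and resuming the generator corresponds to invoking the captured continuation exactly once.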

(based on joint work with Yannick Forster, Ohad Kammar, and Matija Pretnar)

Program Invariants

Joël Ouaknine
Max Planck Institute for Software Systems
SWS Colloquium
29 Jun 2018, 11:30 am - 12:30 pm
Saarbrücken building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Automated invariant generation is a fundamental challenge in program analysis and verification, going back many decades, and remains a topic of active research. In this talk I'll present a select overview and survey of work on this problem, and discuss unexpected connections to other fields including algebraic geometry, group theory, and quantum computing. (No previous knowledge of these fields will be assumed.) This is joint work with Ehud Hrushovski, Amaury Pouly, and James Worrell.

Designing a System for Heterogeneous Compute Units

Nils Asmussen
TU Dresden
SWS Colloquium
29 Jun 2018, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
The ongoing trend to more heterogeneity forces us to rethink the design of systems. In this talk, I will present a new system design that considers heterogeneous compute units (general-purpose cores with different instruction sets, DSPs, FPGAs, fixed-function accelerators, etc.) from the beginning instead of as an afterthought. The goal is to treat all compute units (CUs) as first-class citizens, enabling 1) isolation and secure communication between all types of CUs, 2) a direct interaction of all CUs to remove the conventional CPU from the critical path, and 3) access to OS services such as file systems and network stacks for all CUs. To study this system design, I am using a hardware/software co-design based on two key ideas: 1) introduce a new hardware component next to each CU that the OS uses as the CUs' common interface and 2) let the OS kernel control applications remotely from a different CU. In my talk, I will show how this approach allows us to support arbitrary CUs as first-class citizens, ranging from fixed-function accelerators to complex general-purpose cores.

Boosting human capabilities on perceptual categorization tasks

Michael Mozer
University of Colorado, Boulder
SWS Distinguished Lecture Series
26 Jun 2018, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
We are developing methods to improve human learning and performance on challenging perceptual categorization tasks, e.g., bird species identification and diagnostic dermatology. Our approach involves inferring psychological embeddings -- internal representations that individuals use to reason about a domain. Using predictive cognitive models that operate on an embedding, we perform surrogate-based optimization to determine efficient and effective means of training domain novices, as well as amplifying an individual's capabilities at any stage of training. Our cognitive models leverage psychological theories of similarity judgement and generalization, contextual and sequential effects in choice, and attention shifts among embedding dimensions. Rather than searching over all possible training policies, we focus our search on policy spaces motivated by the training literature, including manipulation of exemplar difficulty and the sequencing of category labels. We show that our models predict human behavior not only in the aggregate but at the level of individual learners and individual exemplars, and preliminary experiments show the benefits of surrogate-based optimization on learning and performance.

This work was performed in close collaboration with Brett Roads at University College London.

Heap-based reasoning about asynchronous programs

Johannes Kloos
Max Planck Institute for Software Systems
SWS Student Defense Talks - Thesis Defense
14 Jun 2018, 5:00 pm - 6:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Asynchronous concurrency is a widespread way of writing programs that deal with many short tasks. It is the programming model behind event-driven concurrency, as exemplified by GUI applications (where tasks correspond to event handlers), JavaScript-based web applications, and the implementation of web browsers, as well as server-side software and operating systems.

This model is widely used because it provides the performance benefits of concurrency together with easier programming than multi-threading. While there is ample work on how to implement asynchronous programs, and significant work on testing and model checking, little research has been done on handling asynchronous programs that involve heap manipulation, nor on how to automatically optimize code for asynchronous concurrency.

This thesis addresses the question of how we can reason about asynchronous programs while considering the heap, and how to use this to optimize programs. The work is organized along the main questions:

(i) How can we reason about asynchronous programs, without ignoring the heap? (ii) How can we use such reasoning techniques to optimize programs involving asynchronous behavior? (iii) How can we transfer these reasoning and optimization techniques to other settings?

The unifying idea behind all the results in the thesis is the use of an appropriate model encompassing global state and a promise-based model of asynchronous concurrency. For the first question, we start from refinement type systems for sequential programs and extend them to perform precise resource-based reasoning in terms of heap contents, known outstanding tasks, and promises. This extended type system is known as Asynchronous Liquid Separation Types, or ALST for short. We implement ALST for OCaml programs using the Lwt library.
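
ALST itself targets OCaml with Lwt; as a loose illustration in a different language, the following Python asyncio sketch (all names hypothetical) shows the kind of interplay between heap contents and outstanding promises that such a type system must track:

```python
import asyncio

async def producer(cell, ready):
    # Mutate the shared heap cell, then resolve the promise so waiters
    # may safely read it.
    cell['value'] = 42
    ready.set_result(None)

async def consumer(cell, ready):
    # Awaiting the promise is what makes reading cell['value'] safe:
    # before `ready` resolves, the heap cell is not yet initialized.
    await ready
    return cell['value']

async def main():
    cell = {}                                            # shared heap cell
    ready = asyncio.get_running_loop().create_future()   # a promise
    result, _ = await asyncio.gather(consumer(cell, ready),
                                     producer(cell, ready))
    return result

print(asyncio.run(main()))  # 42
```

A type system in the spirit of ALST could associate ownership of the initialized cell with the resolved promise, ruling out programs that read the cell without first awaiting `ready`.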

For the second question, we consider a family of possible program optimizations, described by a set of rewriting rules, the DWFM rules. The rewriting rules are type-driven: We only guarantee soundness for programs that are well-typed under ALST. We give a soundness proof based on a semantic interpretation of ALST that allows us to show behavior inclusion of pairs of programs.

For the third question, we address an optimization problem from industrial practice: normally, JavaScript files referenced in an HTML file are loaded synchronously, i.e., when a script tag is encountered, the browser must suspend parsing, load and execute the script, and only afterwards continue parsing the HTML. In practice, however, there are numerous JavaScript files for which asynchronous loading would be perfectly sound. First, we sketch a hypothetical optimization using the DWFM rules and a static analysis.

To actually implement the analysis, we modify the approach to use a dynamic analysis. This analysis, known as JSDefer, enables us to analyze real-world web pages and to provide experimental evidence for the effectiveness of this transformation.

Learning-Based Hardware/Software Power and Performance Prediction

Andreas Gerstlauer
University of Texas at Austin
SWS Colloquium
11 Jun 2018, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 105
simultaneous videocast to Kaiserslautern building G26, room 111
Next to performance, early power and energy estimation is a key challenge in the design of computer systems. Traditional simulation-based methods are often too slow while existing analytical models are often not sufficiently accurate. In this talk, I will present our work on bridging this gap by providing fast yet accurate alternatives for power and performance modeling of software and hardware. In the past, we have pioneered so-called source-level and host-compiled simulation techniques that are based on back-annotation of source code with semi-analytically estimated target metrics. More recently, we have studied alternative approaches in which we employ advanced machine learning techniques to synthesize analytical proxy models that can extract latent correlations and accurately predict time-varying power and performance of an application running on a target platform purely from data obtained while executing the application natively on a completely different host machine. We have developed such learning-based approaches for both hardware and software. On the hardware side, learning-based models for white-box and black-box hardware accelerators reach simulation speeds of 1 Mcycles/s at 97% accuracy. On the software side, depending on the granularity at which prediction is performed, cross-platform prediction can achieve more than 95% accuracy at more than 3 GIPS of equivalent simulation throughput.
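
As a loose illustration of the proxy-model idea (with invented numbers and a deliberately trivial model; the models in the talk capture far richer, time-varying behavior), one can fit a regression from host-side counters to target power:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical training data: instructions retired per interval on the
# *host* machine vs. measured power (W) of the same intervals on the
# *target* platform (numbers invented for illustration).
host_instr = [1.0e6, 2.0e6, 3.0e6, 4.0e6]
target_power = [1.5, 2.0, 2.5, 3.0]

a, b = fit_linear(host_instr, target_power)
# Predict target power for a new interval never run on the target:
print(round(a * 5.0e6 + b, 3))  # 3.5
```

The real models replace this single feature and linear form with many counters and nonlinear learners, but the workflow is the same: train offline against target measurements, then predict from cheap host-side execution.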

Compositional Compiler Correctness Via Parametric Simulations

Georg Neis
Max Planck Institute for Software Systems
SWS Student Defense Talks - Thesis Defense
07 Jun 2018, 2:00 pm - 3:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
Compiler verification is essential for the construction of fully verified software, but most prior work (such as CompCert) has focused on verifying whole-program compilers. To support separate compilation and to enable linking of results from different verified compilers, it is important to develop a compositional notion of compiler correctness that is modular (preserved under linking), transitive (supports multi-pass compilation), and flexible (applicable to compilers that use different intermediate languages or employ non-standard program transformations). In this thesis, we formalize such a notion of correctness based on parametric simulations, thus developing a novel approach to compositional compiler verification.