Events 2018

Algorithms and the Brain

Emanuele Natale
MPI-INF - D1
Joint Lecture Series
01 Aug 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Around the middle of the 20th century, pioneering theoreticians of brain science and computer science such as von Neumann, Turing, McCulloch, Pitts, and Barlow were beginning to leverage their interest in the other field to gain a better understanding of their own. Both fields have since rapidly grown in prestige as sciences, becoming more sophisticated as their bodies of knowledge expanded. In doing so, however, they have also grown further apart. In this talk I will outline some recent efforts from both research communities to recognize the potential of a unified research effort to answer the question: "What are the algorithms that run our brain?"

Practical Program Analysis

Maria Christakis
Max Planck Institute for Software Systems
Joint Lecture Series
04 Jul 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Sound static analysis over-approximates the set of all possible executions of a given program in order to prove the absence of errors in the program. Due to this over-approximation, a sound static analysis may generate spurious warnings about executions that are not erroneous, or that are not even possible in the program. To become more practical, many static analyzers give up soundness by design. This means that they do not check certain properties, or that they check them under certain unsound assumptions, such as the absence of arithmetic overflow. At the other end of the soundness spectrum, we have automatic test-case generation, which typically under-approximates the set of possible program executions. The goal of test-case generation is not to prove the absence of errors in the program but, rather, their existence. In this talk, I will present an overview of my research on combining these program analysis techniques to improve their overall automation, performance, and accuracy on real programs.
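The contrast between the two ends of the soundness spectrum can be sketched with a toy example. Below, a trivial interval-style analysis over-approximates a program's input range (and may warn spuriously), while random test-case generation under-approximates its executions (and can only demonstrate an error, never its absence). All names and the program itself are made up for illustration; real analyzers are vastly more sophisticated.

```python
import random

def program(x):
    # Toy program under analysis: crashes when x == 7.
    return 100 // (x - 7)

def interval_analysis(lo, hi):
    # Sound over-approximation: track only an interval for x.
    # If the divisor interval [lo - 7, hi - 7] contains 0, warn,
    # possibly spuriously (the warning is about *all* represented runs).
    if lo - 7 <= 0 <= hi - 7:
        return "warning: possible division by zero"
    return "proved safe"

def random_testing(lo, hi, trials=1000):
    # Under-approximation: sample concrete executions. It can prove the
    # *existence* of an error, but never its absence.
    for _ in range(trials):
        x = random.randint(lo, hi)
        try:
            program(x)
        except ZeroDivisionError:
            return f"error found for input x={x}"
    return "no error found (but none is proved absent)"

print(interval_analysis(0, 100))   # warns, since x may be 7
print(interval_analysis(8, 100))   # precise here: proved safe
print(random_testing(8, 100))      # no error exists in this range
```

The combination the talk is about aims at the best of both: use testing to confirm or prune the warnings that the sound analysis emits.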

The Internet: A Complex System at its Limits

Anja Feldmann
MPI-INF - INET
Joint Lecture Series
06 Jun 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
tbd

Capturing and Learning Digital Humans

Gerard Pons-Moll
MPI-INF - D2
Joint Lecture Series
02 May 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
tbd

Power to the People. Verified.

Holger Hermanns
Fachrichtung Informatik - Saarbrücken
Joint Lecture Series
11 Apr 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Twenty years ago we were able to repair cars at home. Nowadays customer services repair coffee machines by installing software updates. Soon you will no longer be able to repair your bike.

Embedded software innovations boost our society; they help us tremendously in our daily life. But we do not understand what the software embedded therein actually does, regardless of how well educated or smart we are. Proprietary embedded software has become an opaque layer between functionality and user. That layer is thick enough to possibly induce malicious or unintended behaviour, as happened massively with our diesel cars. From the outside, there is no understanding of how decisions are made inside and across these software-driven black boxes. The outcomes are often not designed to be accessible, verified, or evaluated by humans, limiting our ability to identify if, when, where, and why the software produced harm and, worse still, to redress this harm. Proprietary embedded software locks us out of the products we own.

The presentation of Holger Hermanns will sketch the main cornerstones of a research agenda which targets the foundations of open and hence customisable embedded software. A minor customisation might well have a strong unexpected impact, for instance on the longevity of an embedded battery or the safety of the battery charging process. Means to detect, quantify, and prevent such implications are needed; these are delivered by quantitative verification technology for system-level correctness, safety, dependability, and performability. Hermanns will link foundational achievements to concrete results in the area of power management for electric mobility and for satellite operation. Electric power is intricate for software to handle and safety-critical, yet vital for mobile devices and their longevity. Since ever more tools, gadgets, and vehicles run on batteries and use power harvesting, electric power management is itself a pivot of the future.

Incorporating Positional and Contextual Information into a Neural IR Model

Andrew Yates
MPI-INF - D5
Joint Lecture Series
07 Mar 2018, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Ad-hoc information retrieval models consider query-document interactions to produce a document relevance score for a given query and document. Traditionally, such interactions have been modeled using handcrafted statistics that generally count term occurrences within a document and across a collection. Recently, neural models have demonstrated that they provide the instruments necessary to consider query-document interactions directly, without the need for engineering such statistics.

In this talk, I will describe how positional term information can be represented and incorporated into a neural IR model. The resulting model, called PACRR, performs significantly better on standard TREC benchmarks than previous neural approaches. This improvement can be attributed to the fact that PACRR can learn to match both ordered and unordered sequences of query terms in addition to the single term matches considered by prior work. Using PACRR's approach to modeling query-document interactions as a foundation, I will describe how several well-known IR problems can be addressed by incorporating contextual information into the model; the resulting Co-PACRR model significantly outperforms the original PACRR model. Finally, I will provide a brief look inside the model to illustrate the interpretability of the learned weights and to investigate how match signals are combined by the model to produce a query-document relevance score.
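The core idea of matching ordered sequences of query terms can be sketched in a few lines. The real PACRR model applies convolutional filters and pooling over a query-document matrix of embedding similarities; the toy version below substitutes exact term match for embedding similarity and a diagonal-average over sliding windows for the CNN, purely to convey what an "n-gram match signal" is. Function names and data are invented for illustration.

```python
def sim_matrix(query, doc):
    # Query x document term-similarity matrix. PACRR uses embedding
    # cosine similarities; here exact match (1.0/0.0) for illustration.
    return [[1.0 if q == d else 0.0 for d in doc] for q in query]

def ngram_match_signals(query, doc, n):
    # For each starting query position, find the strongest match of n
    # consecutive query terms against n consecutive document terms:
    # a crude stand-in for PACRR's n x n convolutions plus max-pooling.
    M = sim_matrix(query, doc)
    signals = []
    for qi in range(len(query) - n + 1):
        best = 0.0
        for di in range(len(doc) - n + 1):
            # Score a window by the average along its diagonal,
            # i.e., the ordered co-occurrence of n consecutive terms.
            diag = sum(M[qi + k][di + k] for k in range(n)) / n
            best = max(best, diag)
        signals.append(best)
    return signals

query = ["neural", "information", "retrieval"]
doc = "a neural model for information retrieval tasks".split()

print(ngram_match_signals(query, doc, 1))  # every single term matches
print(ngram_match_signals(query, doc, 2))  # "information retrieval" matches as a bigram
```

Signals like these, over several n-gram sizes, are what the model's final layers combine into a relevance score.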

Storage mechanisms and finite-state abstractions for software verification

Georg Zetzsche
IRIF
SWS Colloquium
01 Mar 2018, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
A popular approach to automatic program verification is to come up with an abstraction that reflects pertinent properties of the program. This abstraction is drawn from a class of formal models that is amenable to analysis. In the plethora of existing formal models, the aspects of a program that can be represented faithfully are typically determined by the infinite dimension of the model's state space: its storage mechanism. A central theme of my recent research is to obtain general insights into how the structure of the storage mechanism affects the analysis of a formal model. In the first part of my talk, I will survey results on an overarching framework of storage mechanisms developed in my doctoral work. It encompasses a range of infinite-state models and permits meaningful characterizations of when a particular method of analysis is applicable. Another current focus of my work concerns finite-state abstractions of infinite-state models. On the one hand, these can be over- or under-approximations that are more efficient to analyze than infinite-state systems. On the other hand, they can serve as easy-to-check correctness certificates that are produced instead of yes-or-no answers to a verification task. Thus, the second part of my talk will be concerned with results on computing downward closures and related finite-state abstractions.
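A downward closure contains all scattered subwords of a language's words; a classical result (Haines' theorem) says this closure is regular for every language, which is what makes it attractive as a finite-state abstraction. For a finite language it can be enumerated directly, as in the sketch below; the research challenge addressed in the talk is computing an automaton for it when the language comes from an infinite-state model.

```python
from itertools import combinations

def downward_closure(language):
    # All scattered subwords of all words in a finite language.
    # E.g., "ac" is a subword of "abc" (drop the "b").
    closure = set()
    for word in language:
        for r in range(len(word) + 1):
            for positions in combinations(range(len(word)), r):
                closure.add("".join(word[i] for i in positions))
    return closure

print(sorted(downward_closure({"ab"})))    # ['', 'a', 'ab', 'b']
print("ac" in downward_closure({"abc"}))   # True: subsequence of "abc"
print("ca" in downward_closure({"abc"}))   # False: order must be preserved
```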

Fighting Large-scale Internet Abuse

Kevin Borgolte
University of California
SWS Colloquium
26 Feb 2018, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
The widespread access to the Internet and the ubiquity of web-based services make it easy to communicate and interact globally. Unfortunately, the software implementing the functionality of these services is often vulnerable to attacks. In turn, an attacker can exploit them to compromise and abuse the services for nefarious purposes. In my research, I aim to better understand, detect, and prevent these attacks.

In this talk, we first look at detecting website defacements, which can inflict significant harm on a website's owner or operator through the loss of sales, the loss in reputation, or because of legal ramifications. Then, we dive into how to automatically identify malware distribution campaigns, which has become a major challenge in today's Internet. Next, we look at how to mitigate the dangers of domain takeover attacks, which give attackers the same capabilities to spread misinformation or malware as vulnerabilities do, but without the actual need for a vulnerability in the affected service. Last, I will conclude by sketching interesting future directions on how to better understand, detect, and prevent Internet abuse.

High Performance Data Center TCP Packet Processing

Antoine Kaufmann
University of Washington
SWS Colloquium
19 Feb 2018, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
TCP is widely used for client-server communication in modern data centers. But TCP packet handling is notoriously CPU intensive, accounting for an increasing fraction of data center processing time. Techniques such as TCP segment offload, kernel bypass, and RDMA are of limited benefit for the typical small, frequent RPCs. These techniques can also compromise protocol agility, resource isolation, and overall system reliability, and they complicate multi-tenancy.

I propose a unique refactoring of TCP functionality that splits processing between a streamlined fast path for common operations, and an out-of-band slow path. Protocol processing executes in the kernel on dedicated cores that enforce correctness and resource isolation. Applications asynchronously communicate with the kernel through event queues, improving parallelism and cache utilization. I show that my approach can increase RPC throughput by up to 4.1x compared to Linux. The fast-path can be offloaded to a programmable NIC to further improve performance and minimize CPU time for network processing. With hardware offload, data packets are delivered directly from application to application, while the NIC and kernel cooperate to enforce correctness and resource isolation. I show that hardware offload can increase per-core packet throughput by 10.7x compared to the Linux kernel TCP implementation.
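The control-flow shape of such a fast/slow split can be sketched abstractly. In the proposal, protocol processing runs on dedicated kernel cores (or a programmable NIC); the toy dispatcher below only illustrates the dividing line: in-order data on an established connection is the common case and stays on the short path, while connection setup, teardown, and out-of-order segments are punted to a slow path. Packet fields and function names are invented for illustration.

```python
connections = {}  # (src, dst) -> expected next sequence number

def slow_path(pkt):
    # Rare, complex cases: handled out of band.
    if pkt["flags"] == "SYN":
        connections[pkt["conn"]] = pkt["seq"] + 1
        return "handshake"
    if pkt["flags"] == "FIN":
        connections.pop(pkt["conn"], None)
        return "teardown"
    return "reorder/retransmit"

def fast_path(pkt):
    # Common case: in-order data on an established connection is
    # delivered directly to the application's event queue.
    expected = connections.get(pkt["conn"])
    if expected is not None and pkt["flags"] == "DATA" and pkt["seq"] == expected:
        connections[pkt["conn"]] = expected + len(pkt["payload"])
        return "delivered"
    return slow_path(pkt)   # anything unusual leaves the fast path

print(fast_path({"conn": ("a", "b"), "flags": "SYN", "seq": 100, "payload": ""}))
print(fast_path({"conn": ("a", "b"), "flags": "DATA", "seq": 101, "payload": "hi"}))
print(fast_path({"conn": ("a", "b"), "flags": "DATA", "seq": 999, "payload": "x"}))
```

Keeping the common-case branch this small is what makes it amenable to dedicated cores and, ultimately, NIC offload.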

Towards Latency Guarantees in Datacenters

Keon Jang
Google
SWS Colloquium
15 Feb 2018, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
A large portion of computation is now carried out in datacenters. A single datacenter hosts hundreds or thousands of applications which share common compute and network resources. Isolating each application's performance, i.e., ensuring that its performance is predictable regardless of the behavior of other applications, is essential to developing and deploying datacenter applications, since otherwise developers need to account for co-located applications, which increases development and deployment complexity. Despite its importance, current approaches to performance isolation are incomplete and focus mostly on isolating computational resources. In this talk I present two schemes for isolating network performance. The first, Silo, takes a resource-allocation-based approach and implements mechanisms for guaranteeing an application's network latency and throughput. The second, Express Pass, takes a congestion-control-based approach and fairly partitions network resources across applications. Both approaches require no hardware (ASIC) changes and can be deployed in today's datacenters. This shows that full application performance isolation is achievable today.
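One classical building block of the resource-allocation approach is per-application rate enforcement, so that a noisy neighbour cannot consume another application's bandwidth share. Silo's actual mechanisms (VM placement plus fine-grained packet pacing) are more elaborate; the token bucket below is only a generic illustration of enforcing a bandwidth guarantee, with made-up parameters.

```python
class TokenBucket:
    # Per-application limiter: tokens accrue at the guaranteed rate,
    # up to a burst allowance; each packet spends tokens equal to its size.
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, now, packet_bytes):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # within the application's guaranteed share
        return False      # exceeds the share: queue or pace instead

bucket = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
print(bucket.allow(0.0, 1500))   # initial burst is allowed
print(bucket.allow(0.1, 1500))   # only ~100 bytes refilled: rejected
print(bucket.allow(1.6, 1500))   # 1.5 s later: refilled, allowed again
```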

Liquid Haskell: Usable Language-Based Program Verification

Niki Vazou
University of Maryland
SWS Colloquium
12 Feb 2018, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
Formal verification has been gaining the attention and resources of both the academic and the industrial world, since it prevents critical software bugs that cost money, energy, time, and even lives. Yet, software development and formal verification are decoupled, requiring verification experts to prove properties of a template implementation, instead of the actual one, ported into verification-specific languages. My goal is to bridge formal verification and software development for the programming language Haskell. Haskell is a unique programming language in that it is a general-purpose, functional language used for industrial development, while simultaneously standing at the leading edge of research and teaching, welcoming new, experimental, yet useful features.

In this talk I present Liquid Haskell, a refinement type checker in which formal specifications are expressed as a combination of Haskell's types and expressions and are automatically checked against real Haskell code. This natural integration of specifications into the language, combined with automatic checking, established Liquid Haskell as a usable verifier, enthusiastically accepted by both industrial and academic Haskell users. Recently, I turned Liquid Haskell into a theorem prover, in which arbitrary theorems about Haskell functions can be proved within the language. As a consequence, Liquid Haskell can be used in Haskell courses to teach the principles of mechanized theorem proving.

Turning a general purpose language into a theorem prover opens up new research questions - e.g., can theorems be used for runtime optimizations of existing real-world applications? - that I plan to explore in the future.
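To give a flavour of what a refinement type expresses: in Liquid Haskell one can annotate a division function so that its divisor has type `{d:Int | d /= 0}`, and an SMT solver checks every call site statically. The sketch below has no static checker; it merely mimics the shape of such a refinement as a runtime contract in Python, as an analogy rather than a faithful rendering of the tool.

```python
# Liquid Haskell annotation (checked at compile time by an SMT solver):
#   {-@ divide :: Int -> {d:Int | d /= 0} -> Int @-}
# The decorator below is only a runtime analogue of that refinement.

def refine(predicate, description):
    def wrap(f):
        def checked(*args):
            assert predicate(*args), f"refinement violated: {description}"
            return f(*args)
        return checked
    return wrap

@refine(lambda n, d: d != 0, "d /= 0")
def divide(n, d):
    return n // d

print(divide(10, 2))   # 5
# divide(10, 0) fails the contract at runtime here; Liquid Haskell
# instead rejects such a call before the program ever runs.
```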

Scalable positioning of commodity mobile devices using audio signals

Viktor Erdélyi
Max Planck Institute for Software Systems
SWS Student Defense Talks - Thesis Proposal
09 Feb 2018, 4:00 pm - 4:45 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
This thesis explores the problem of computing a position map for co-located mobile devices without local infrastructure. The capability to obtain a position map can, for instance, aid spontaneous and private communication among users based on their relative position (e.g., in a given section of a room, at the same table, or in a given seat) at events like meetings, talks, or conferences. It can also facilitate interaction between speaker and audience in a lecture hall, and enable the distribution of materials and the collection of feedback based on users' location. In this thesis, we first present Sonoloc, a mobile app and system that, by relying on acoustic signals, allows a set of co-located commodity smart devices to determine their relative positions without local infrastructure. Sonoloc can position any number of devices within acoustic range with a constant number of chirps emitted by a self-organized subset of devices. Our experimental evaluation shows that the system can locate up to hundreds of devices with an accuracy of tens of centimeters using no more than 15 acoustic signals emitted by dynamically selected devices, in realistic rooms and despite substantial background noise. Then, we also discuss two planned extensions to Sonoloc: (1) one that enables users to determine the correct orientation of the position map while also allowing per-user choices, and (2) one that enables the creation of a combined relative position map in meetings that span multiple locations over videoconference.
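The geometric core of distance-based positioning can be shown in isolation. Sonoloc itself recovers *relative* positions from inter-device acoustic measurements without anchors, which is considerably harder; the sketch below shows only the basic step of locating one point in 2D from three reference positions and measured distances (e.g., time-of-flight times the speed of sound), using invented coordinates.

```python
import math

def trilaterate(anchors, dists):
    # Subtracting the first circle equation from the other two
    # linearises the problem into a 2x2 system, solved by Cramer's rule.
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = x2**2 + y2**2 - x1**2 - y1**2 - d2**2 + d1**2
    b2 = x3**2 + y3**2 - x1**2 - y1**2 - d3**2 + d1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
target = (1.0, 1.0)
dists = [math.dist(target, a) for a in anchors]  # simulated measurements
print(trilaterate(anchors, dists))               # recovers (1.0, 1.0)
```

With noisy real measurements one would use many devices and a least-squares fit rather than an exact solve.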

Caribou -- Intelligent Distributed Storage for the Datacenter

Zsolt Istvan
ETH Zurich
SWS Colloquium
08 Feb 2018, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
In the era of Big Data, datacenter and cloud architectures decouple compute and storage resources from each other for better scalability. While this design choice enables elastic scale-out, it also causes unnecessary data movement. One solution is to push parts of the computation down to storage, where data can be filtered more efficiently. Systems that do this are already in use and rely either on regular server machines as storage nodes or on network-attached storage devices. Even though the former support complex computation and rich functionality, since plenty of conventional cores are available to run the offloaded computation, this solution is quite inefficient because of the over-provisioning of computing capacity and the bandwidth mismatches between storage, CPU, and network. Networked storage devices, on the other hand, are better balanced in terms of bandwidth, but at the price of offering very limited options for offloading data processing.

With Caribou, we explore an alternative design that offers rich offloading functionality while matching the available line-rate processing performance of either storage or network. It also does this in a much more efficient package (size, energy consumption) than regular servers. Our FPGA-based prototype system has been designed such that the internal data management logic can saturate the network for most operation mixes, without being over-provisioned. As a result, it can extract and process data from storage at multi-GB/s rate before sending it to the computing nodes, while at the same time offering features such as replication for fault-tolerance.
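The payoff of in-storage filtering is easy to quantify in the abstract: only matching records cross the network. Caribou does this in FPGA hardware at line rate; the sketch below merely counts the bytes that would traverse the network under each strategy, on an invented dataset.

```python
# 10,000 synthetic records of the form b"user<i>,<value>".
records = [f"user{i},{i % 50}".encode() for i in range(10_000)]

def fetch_then_filter(pred):
    # Conventional design: ship everything, filter at the compute node.
    moved = sum(len(r) for r in records)
    return [r for r in records if pred(r)], moved

def pushdown_filter(pred):
    # In-storage design: filter locally, ship only the matches.
    hits = [r for r in records if pred(r)]
    return hits, sum(len(r) for r in hits)

pred = lambda r: r.endswith(b",42")
hits1, moved1 = fetch_then_filter(pred)
hits2, moved2 = pushdown_filter(pred)
assert hits1 == hits2   # same answer, very different network cost
print(f"naive: {moved1} bytes moved, pushdown: {moved2} bytes moved")
```

The gap between the two byte counts is exactly the traffic a Caribou-style node spares the network, before even considering replication and other services it offers.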

Caribou has been released as open source. Its modular design and extensible processing pipeline make it a convenient platform for exploring domain-specific processing inside storage nodes.

Relational Cost Analysis

Ezgi Çiçek
Max Planck Institute for Software Systems
SWS Student Defense Talks - Thesis Defense
22 Jan 2018, 4:00 pm - 5:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
Programming languages research has made great progress towards statically estimating the execution cost of a program. However, when one is interested in how the execution costs of two programs compare to each other (i.e., relational cost analysis), the use of unary techniques does not work well in many cases. In order to support a relational cost analysis, we must ultimately support reasoning about not only the executions of a single program, but also the executions of two programs, taking into account their similarities. This dissertation makes several contributions to the understanding and development of such a relational cost analysis. It shows how: 1) Refinement types and effect systems can express functional and relational quantitative properties of pairs of programs, including the difference in execution costs; 2) Relational cost analysis can be adapted to reason about dynamic stability, a measure of the update times of incremental programs as their inputs change; and 3) A sound and complete bidirectional type system can be developed (and implemented) for relational cost analysis.
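The quantity at stake, the difference in execution costs between two programs, can be made concrete by dynamic measurement, even though the dissertation's contribution is to bound it *statically* with refinement types and effect systems. The sketch below instruments two searches with a comparison counter and reports their cost difference on one input; the counting scheme is invented for illustration.

```python
def linear_search(xs, target):
    cost = 0
    for x in xs:
        cost += 1                 # one comparison per element visited
        if x == target:
            break
    return cost

def binary_search(xs, target):
    cost, lo, hi = 0, 0, len(xs) - 1
    while lo <= hi:
        cost += 1                 # one comparison per halving step
        mid = (lo + hi) // 2
        if xs[mid] == target:
            break
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return cost

xs = list(range(1024))
diff = linear_search(xs, 1000) - binary_search(xs, 1000)
print(f"cost difference on this input: {diff}")
```

A relational cost analysis would instead derive, once and for all inputs, a bound such as "the linear search costs at most n - log2(n) more steps than the binary search", without running either program.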