Upcoming events

Constrained Counting and Sampling: Bridging the Gap between Theory and Practice

Kuldeep Meel
Rice University
SWS Colloquium
07 Mar 2017, 10:00 am - 12:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
Constrained counting and sampling are two fundamental problems in Computer Science with numerous applications, including network reliability, privacy, probabilistic reasoning, and constrained-random verification. In constrained counting, the task is to compute the total weight, subject to a given weighting function, of the set of solutions of the given constraints. In constrained sampling, the task is to sample randomly, subject to a given weighting function, from the set of solutions to a set of given constraints. In this talk, I will introduce a novel algorithmic framework for constrained sampling and counting that combines the classical algorithmic technique of universal hashing with the dramatic progress made in Boolean reasoning over the past two decades. This has allowed us to obtain breakthrough results in constrained sampling and counting, providing a new algorithmic toolbox in machine learning, probabilistic reasoning, privacy, and design verification. I will demonstrate the utility of the above techniques on various real applications, including probabilistic inference, design verification, and our ongoing collaboration in estimating the reliability of critical infrastructure networks during natural disasters.
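
To give a flavor of the hashing-based approach, here is a minimal Python sketch (an illustration only: real tools delegate the per-cell counts to a SAT solver and add statistical guarantees, rather than enumerating solutions explicitly). Random XOR constraints partition the solution space into roughly equal cells; counting one cell and scaling up estimates the total count.

```python
# Toy sketch of hashing-based model counting: partition the solution space with
# random XOR (parity) constraints, count one cell exactly, and scale up.
import random

def random_xor(n):
    """A random parity constraint h(x) = (a . x + b) mod 2 over n Boolean vars."""
    a = [random.randint(0, 1) for _ in range(n)]
    b = random.randint(0, 1)
    return lambda x: (sum(ai * xi for ai, xi in zip(a, x)) + b) % 2 == 0

def estimate_count(solutions, n, m, repeats=20):
    """Estimate |solutions| using m random XOR constraints, medianed over repeats."""
    estimates = []
    for _ in range(repeats):
        xors = [random_xor(n) for _ in range(m)]
        cell = [s for s in solutions if all(h(s) for h in xors)]
        estimates.append(len(cell) * (2 ** m))
    estimates.sort()
    return estimates[len(estimates) // 2]

if __name__ == "__main__":
    n = 12
    # toy "constraint": all assignments with an even number of 1s (2^11 solutions)
    sols = [tuple((v >> i) & 1 for i in range(n))
            for v in range(2 ** n) if bin(v).count("1") % 2 == 0]
    print("exact:", len(sols), "estimate:", estimate_count(sols, n, m=4))
```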

Randomized Algorithms Meets Formal Verification

Justin Hsu
University of Pennsylvania
SWS Colloquium
08 Mar 2017, 10:00 am - 12:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
Algorithms and formal verification are two classical areas of computer science. The two fields apply rigorous mathematical proof to seemingly disparate ends: on the one hand, analyzing the computational efficiency of algorithms; on the other, designing techniques to mechanically show that programs are correct.

In this talk, I will present a surprising confluence of ideas from these two areas. First, I will show how coupling proofs, used to analyze random walks and Markov chains, correspond to proofs in the program logic pRHL (probabilistic Relational Hoare Logic). This connection enables formal verification of novel probabilistic properties and provides a structured understanding of proofs by coupling. Then, I will show how an approximate version of pRHL, called apRHL, points to a new approximate version of couplings closely related to differential privacy. The corresponding proof technique, proof by approximate coupling, enables cleaner proofs of differential privacy, both for humans and for formal verification. Finally, I will share some directions towards a possible "Theory AB" blending ideas from both worlds.
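
As an informal illustration of the proof-by-coupling idea (not of pRHL itself), the Python sketch below couples two biased coins through a single shared uniform draw. Under this coupling, domination holds on every sample, which yields the inequality between the two distributions without computing either probability; pRHL mechanizes relational arguments of exactly this shape.

```python
# Coupling two coins with biases p <= q via one shared uniform draw:
# on every sample X <= Y, hence Pr[X = 1] <= Pr[Y = 1].
import random

def coupled_flips(p, q, trials=100_000):
    assert p <= q
    dominated = True
    for _ in range(trials):
        u = random.random()        # one shared source of randomness
        x = 1 if u < p else 0      # coin with bias p
        y = 1 if u < q else 0      # coin with bias q
        dominated = dominated and (x <= y)
    return dominated

print(coupled_flips(0.3, 0.7))     # True: the coupling witnesses domination
```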

Privacy as a Service

Raymond Cheng
University of Washington
SWS Colloquium
13 Mar 2017, 10:30 am - 1:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
Current cloud services are vulnerable to hacks and surveillance programs that undermine user privacy and personal liberties. In this talk, I present how we can build practical systems for protecting user privacy from powerful, persistent threats. I will discuss Talek, a private publish-subscribe protocol. With Talek, application developers can send and synchronize data through the cloud without revealing any information about data contents or communication patterns of application users. The protocol is designed with provable security guarantees and practical performance: 3-4 orders of magnitude better throughput than other systems with comparable security goals. I will also discuss Radiatus, a security-focused web framework to protect web apps against external intrusions, and uProxy, an Internet censorship circumvention tool in deployment today.
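
Talek's actual protocol is considerably more involved, but the sketch below of the classic two-server XOR-based private information retrieval trick illustrates, assuming two non-colluding replicas, how a read can be served without either server learning which item was requested. The database contents and sizes here are made up for illustration.

```python
# Two-server XOR-based PIR: each server sees a uniformly random query vector,
# yet the XOR of the two answers recovers exactly the requested record.
import secrets
from functools import reduce

DB = [b"msg-0", b"msg-1", b"msg-2", b"msg-3"]          # replicated on both servers

def server_answer(db, query_bits):
    """XOR together every record whose bit in the query vector is set."""
    picked = [rec for rec, bit in zip(db, query_bits) if bit]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  picked, b"\x00" * 5)

def private_read(index):
    q1 = [secrets.randbelow(2) for _ in DB]             # random query for server 1
    q2 = list(q1)
    q2[index] ^= 1                                       # flip target bit for server 2
    a1 = server_answer(DB, q1)
    a2 = server_answer(DB, q2)
    return bytes(x ^ y for x, y in zip(a1, a2))          # XOR of answers = DB[index]

print(private_read(2))                                   # b'msg-2'
```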

Relational Cost Analysis

Ezgi Cicek
Max Planck Institute for Software Systems
SWS Student Defense Talks - Thesis Proposal
15 Mar 2017, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
Research in the programming languages community has made great progress on statically estimating the execution cost of a program. The execution cost could be the number of execution steps or any other abstract metric of resource usage. Static execution cost analysis techniques based on program analysis, type systems, and abstract interpretation are well-studied. However, if we are interested in establishing bounds on the execution cost of a program relative to another program's cost, naively combining worst- and best-case execution costs of the two programs does not work well in many cases. The reason is that such unary analysis techniques analyze the two programs in isolation, ignoring the similarities between the programs and their inputs.

In this thesis proposal, I propose the first relational cost analysis framework that establishes the relative cost of one program with respect to another, possibly similar, program by taking advantage of the similarities in their inputs and their code. Among other things, such a relational cost analysis can be used to prove statements like: 1) of two similar algorithms for a problem, one is faster than the other, without having to establish the exact complexity of either algorithm; 2) the execution cost of a program is independent of a secret input, so nothing can be inferred about the secret input by observing the cost of executing the program; and 3) the cost of a program varies very little with input changes, which can aid resource allocation and scheduling. I show that in cases where the two programs and their inputs are closely related, relational cost analysis is superior to non-relational analysis, both in terms of precision and simplicity. Specifically, I show that a refinement type and effect system provides a precise and compact mechanism for proving the relative cost of two programs. The idea behind relational cost analysis can be extended to other nontrivial domains. In particular, I show that relational cost analysis can be adapted to prove upper bounds on the update costs of incremental programs. Finally, I briefly present a bidirectional technique to typecheck relational cost bounds.
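
The toy Python sketch below illustrates claim 2 by counting execution steps of two string comparisons. The early-exit version's cost depends on the secret, so an observer can learn the length of a matching prefix; the constant-time version's cost difference across secrets is zero. A relational cost analysis aims to prove such statements statically, not by testing as done here; the functions and step metric are made up for illustration.

```python
def early_exit_eq(secret, guess):
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps          # cost leaks where the mismatch occurs
    return secret == guess, steps

def constant_time_eq(secret, guess):
    steps = 0
    diff = len(secret) ^ len(guess)
    for s, g in zip(secret, guess):
        steps += 1
        diff |= ord(s) ^ ord(g)          # always scan the whole input
    return diff == 0, steps

for secret in ["aaaa", "abaa", "abca"]:
    print(early_exit_eq(secret, "abcd")[1], constant_time_eq(secret, "abcd")[1])
# early-exit costs (2, 3, 4) depend on the secret; constant-time costs are always 4
```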

A New Approach to Network Functions

Aurojit Panda
University of California, Berkeley
SWS Colloquium
17 Mar 2017, 10:00 am - 12:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Modern networks do far more than just deliver packets: they provide network functions, including firewalls, caches, and WAN optimizers, that are crucial for scaling networks, ensuring security, and enabling new applications. Network functions were traditionally implemented using dedicated hardware middleboxes, but in recent years they are increasingly being deployed as VMs on commodity servers. While many herald this move towards network function virtualization (NFV) as a great step forward, I argue that accepted virtualization techniques are ill-suited to network functions. In this talk, I describe NetBricks, a new approach to building and running virtualized network functions that speeds development and increases performance. I end the talk by discussing the implications of being able to easily create and insert new network functions.
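
For readers unfamiliar with the term, the toy Python sketch below shows what a chain of network functions does at the most basic level. It is not NetBricks' programming model (NetBricks is built in Rust); the packet representation and functions are invented purely for illustration.

```python
# Each network function maps a packet to a (possibly modified) packet,
# or to None if the packet is dropped; a chain applies them in order.

def firewall(pkt):
    return None if pkt["dst_port"] in {23, 445} else pkt    # drop blocked ports

def nat(pkt):
    return dict(pkt, src_ip="203.0.113.7")                   # rewrite source address

def run_chain(pkt, chain):
    for nf in chain:
        pkt = nf(pkt)
        if pkt is None:
            return None
    return pkt

print(run_chain({"src_ip": "10.0.0.5", "dst_port": 443}, [firewall, nat]))
print(run_chain({"src_ip": "10.0.0.5", "dst_port": 23},  [firewall, nat]))  # dropped
```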

Digital Knowledge: From Facts to Rules and Back

Daria Stepanova
MPI-INF - D5
Joint Lecture Series
03 May 2017, 12:15 pm - 3:15 pm
Saarbrücken building E1 5, room 002
Advances in Information Extraction have enabled the automatic construction of Knowledge Graphs (KGs). These are huge collections of facts in the subject-predicate-object form, extracted primarily from curated knowledge-sharing sources such as Wikipedia. KGs store information about people, places, organizations, etc., and they are used for a wide variety of tasks, such as semantic search and question answering, to mention a few.

However, since KGs are automatically constructed, they are often incomplete and inaccurate. In this talk, I will investigate how techniques from Data Mining and Knowledge Representation and Reasoning could be used to address these important issues. More specifically, I will present our approach to learning so-called nonmonotonic rules (rules with exceptions) from KGs and show how these can be applied for deducing missing facts and for performing advanced reasoning tasks on top of KGs.
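
As a toy illustration, the Python snippet below applies one hand-written rule with an exception ("people live where they work, unless they are remote workers") to a small KG of subject-predicate-object triples to deduce a missing fact. The facts, rule, and exception are invented; the talk's approach learns such rules from the KG automatically rather than hand-coding them.

```python
kg = {
    ("alice", "worksFor", "acme"),
    ("bob",   "worksFor", "acme"),
    ("acme",  "locatedIn", "berlin"),
    ("bob",   "isRemoteWorker", "true"),
}

def derive_lives_in(kg):
    derived = set()
    for (person, rel1, org) in kg:
        if rel1 != "worksFor":
            continue
        for (subj, rel2, city) in kg:
            if subj == org and rel2 == "locatedIn":
                # exception: the rule is defeated for remote workers
                if (person, "isRemoteWorker", "true") not in kg:
                    derived.add((person, "livesIn", city))
    return derived

print(derive_lives_in(kg))   # {('alice', 'livesIn', 'berlin')} -- bob is an exception
```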

Towards an Approximating Compiler for Numerical Computations

Eva Darulova
Max Planck Institute for Software Systems
Joint Lecture Series
07 Jun 2017, 12:15 pm - 3:15 pm
Saarbrücken building E1 5, room 002
Computing resources are fundamentally limited, and sometimes an exact solution may not even exist. Thus, when implementing real-world systems, approximations are inevitable, as are the errors introduced by them. The magnitude of errors is problem-dependent, but higher accuracy generally comes at a cost in terms of memory, energy, or runtime, effectively creating an accuracy-efficiency tradeoff. To take advantage of this tradeoff, we need to ensure that the computed results are sufficiently accurate; otherwise, we risk disastrously incorrect results or system failures. Unfortunately, the current way of programming with approximations is mostly manual and, consequently, costly, error-prone, and often produces suboptimal results.

In this talk, I will present our vision and efforts so far towards an approximating compiler for numerical computations. Such a compiler would take as input exact high-level code with an accuracy specification and automatically synthesize an approximated implementation that is as efficient as possible but verifiably computes sufficiently accurate results.
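
To make the accuracy-efficiency tradeoff concrete, here is a toy Python sketch that picks the cheapest floating-point precision whose observed error meets a given accuracy specification. A real approximating compiler would verify the error bound for all inputs rather than sample a few, so this is only an illustration; the kernel and specifications are made up.

```python
import numpy as np

def expr(x):
    return x * x * x - 2.0 * x + 0.5           # some numerical kernel

def max_sampled_error(dtype, xs):
    approx = expr(xs.astype(dtype)).astype(np.float64)
    exact = expr(xs.astype(np.float64))
    return float(np.max(np.abs(approx - exact)))

def cheapest_precision(accuracy_spec):
    xs = np.linspace(-1.0, 1.0, 1001)
    for dtype in (np.float16, np.float32, np.float64):   # cheapest first
        if max_sampled_error(dtype, xs) <= accuracy_spec:
            return dtype
    raise ValueError("no available precision meets the spec")

print(cheapest_precision(1e-2))   # a loose spec: half precision is likely enough
print(cheapest_precision(1e-9))   # a tight spec forces double precision here
```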

Your Photos Expose Your Social Circles - Social Relation Recognition from 5 Social Domains

Qianru Sun
MPI-INF - D4
Joint Lecture Series
05 Jul 2017, 12:15 pm - 3:15 pm
Saarbrücken building E1 5, room 002
Social relations are the foundation of human daily life. Developing techniques to analyze such relations in visual data, such as photos, bears great potential for building machines that better understand people at a social level. Additionally, through a better understanding of such hidden information in exposed photos, we would like to inform people about potential privacy risks. Social domain-based theory from social psychology is a great starting point for systematically approaching social relation recognition. The theory covers all aspects of social relations and, equally, is concrete and predictive about the visual attributes and behaviors defining the relations in each domain. Our work provides the first photo dataset built on this holistic conceptualization of social life, composed of a hierarchical label space of social domains and social relations, and contributes the first models to recognize such domains and relations, finding superior performance for attribute-based features. Beyond the encouraging performance, we report findings of interpretable features that are in accordance with predictions from the social psychology literature. Our work mainly contributes to interleaving visual recognition and social psychology theory, which has the potential to complement theoretical work in the area with empirical and data-driven models of social life.