Recent events

Variational Bayes In Private Settings

Mijung Park
Amsterdam Machine Learning Lab
SWS Colloquium
03 Mar 2017, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Bayesian methods are frequently used for analysing privacy-sensitive datasets, including medical records, emails, and educational data, and there is a growing need for practical Bayesian inference algorithms that protect the privacy of individuals' data. To this end, we provide a general framework for privacy-preserving variational Bayes (VB) for a large class of probabilistic models, called the conjugate exponential (CE) family. Our primary observation is that, when models are in the CE family, we can privatise the variational posterior distributions simply by perturbing the expected sufficient statistics of the complete-data likelihood. For widely used non-CE models with binomial likelihoods (e.g. logistic regression), we exploit the Polya-Gamma data augmentation scheme to bring such models into the CE family, such that inferences in the modified model resemble the original (private) variational Bayes algorithm as closely as possible. The iterative nature of variational Bayes presents a further challenge for privacy preservation, as each iteration increases the amount of noise needed. We overcome this challenge by combining: (1) a relaxed notion of differential privacy, called concentrated differential privacy, which provides a tight bound on the privacy cost of multiple VB iterations and thus significantly decreases the amount of additive noise; and (2) the privacy amplification effect of subsampling mini-batches from large-scale data in stochastic learning. We empirically demonstrate the effectiveness of our method in CE and non-CE models, including latent Dirichlet allocation (LDA), Bayesian logistic regression, and Sigmoid Belief Networks (SBNs), evaluated on real-world datasets.
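As a rough sketch of the core mechanism described above (the clipping bound, noise scale, and function names are illustrative assumptions, not the talk's implementation), privatising the expected sufficient statistics of a CE-family model might look as follows:

    import numpy as np

    def privatise_suff_stats(suff_stats, clip_bound, noise_scale, rng):
        """Perturb aggregated expected sufficient statistics with Gaussian noise.

        suff_stats: per-example expected sufficient statistics, shape (n, d).
        clip_bound: L2 bound per example, fixing the query's sensitivity.
        noise_scale: noise multiplier (larger means more privacy, more noise).
        """
        # Clip each example's contribution so the L2 sensitivity is clip_bound.
        norms = np.linalg.norm(suff_stats, axis=1, keepdims=True)
        clipped = suff_stats * np.minimum(1.0, clip_bound / np.maximum(norms, 1e-12))
        total = clipped.sum(axis=0)
        # Gaussian mechanism: additive noise calibrated to the sensitivity.
        return total + rng.normal(0.0, noise_scale * clip_bound, size=total.shape)

    rng = np.random.default_rng(0)
    stats = rng.normal(size=(1000, 5))  # toy expected sufficient statistics
    noisy = privatise_suff_stats(stats, clip_bound=1.0, noise_scale=2.0, rng=rng)

The variational update then proceeds from the noisy statistics, which confines the privacy cost to this single perturbation per iteration.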

The Diffix Framework: Noise Revisited, Again

Paul Francis
Max Planck Institute for Software Systems
Joint Lecture Series
01 Mar 2017, 12:15 pm - 2:15 pm
Saarbrücken building E1 5, room 002
A longstanding open problem in Computer Science is how to get high-quality statistics through direct queries to databases containing information about individuals, without revealing information specific to those individuals. It has long been recognized that the key to making this work is to add noise to query answers. The problem has been how to do this without either adding a great deal of noise to answers or limiting the number of answers an analyst can obtain. This talk presents Diffix, a new framework for anonymous database query. Diffix adds noise in such a way that repeated answers produce the same noise: it cannot be averaged away. This "fixed noise" mechanism, however, creates new opportunities for attacks. Diffix pro-actively tests potential alternate queries to discover and prevent these attacks. In this talk, we describe the Diffix framework and present a system design that provides basic query logic and statistical operations. We will give a brief demo of a more advanced Diffix system that operates as a commercial product.
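The "fixed noise" idea can be illustrated with a toy sketch (an illustration of the principle only, not the actual Diffix design): derive the noise deterministically from a keyed hash of the query, so that repeating the same query yields the same answer and the noise cannot be averaged away.

    import hashlib
    import random

    def fixed_noise_answer(true_count, query_text, secret_salt, sd=2.0):
        # Seed a PRNG from a keyed hash of the query, so the same query
        # always produces the same noise value.
        digest = hashlib.sha256((secret_salt + query_text).encode()).digest()
        noise = random.Random(int.from_bytes(digest[:8], "big")).gauss(0.0, sd)
        return true_count + noise

    # Repeated queries receive identical noise.
    a1 = fixed_noise_answer(120, "SELECT count(*) FROM patients", "salt")
    a2 = fixed_noise_answer(120, "SELECT count(*) FROM patients", "salt")
    assert a1 == a2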

Learning With and From People

Adish Singla
ETH Zürich
SWS Colloquium
28 Feb 2017, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
People are becoming an integral part of computational systems, fueled primarily by recent technological advancements as well as deep-seated economic and societal changes. Consequently, there is a pressing need to design new data science and machine learning frameworks that can tackle challenges arising from human participation (e.g. questions about incentives and users' privacy) and can leverage people's capabilities (e.g. the ability to learn).

In this talk, I will share my research efforts at the confluence of people and computing to address real-world problems. Specifically, I will focus on collaborative consumption systems (e.g. shared mobility systems and sharing-economy marketplaces like Airbnb) and showcase the need to actively engage users, who would otherwise act primarily in their own interest, in shaping demand. The main idea of engaging users is to incentivize them to switch to alternate choices that would improve the system's effectiveness. To offer optimized incentives, I will present novel multi-armed bandit algorithms and online learning methods in structured spaces for learning users' costs for switching between different pairs of available choices. Furthermore, to tackle the challenges of data sparsity and to speed up learning, I will introduce hemimetrics as a structural constraint over users' preferences. I will show experimental results of applying the proposed algorithms to two real-world applications: incentivizing users to explore unreviewed hosts on services like Airbnb, and tackling the imbalance problem in bike-sharing systems. In collaboration with an ETH Zurich spinoff and a public transport operator in the city of Mainz, Germany, we deployed these algorithms via a smartphone app among users of a bike-sharing system. I will share the findings from this deployment.
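For readers unfamiliar with the bandit setting, a generic epsilon-greedy sketch (a standard textbook baseline, not the structured algorithms of the talk) shows how acceptance rates for different incentive offers can be learned from accept/reject feedback:

    import random

    def epsilon_greedy(accept_prob, n_rounds=10000, eps=0.1, seed=0):
        """Learn which offer users accept most often; accept_prob holds the
        true (unknown) acceptance probabilities, used only to simulate users."""
        rng = random.Random(seed)
        counts = [0] * len(accept_prob)
        values = [0.0] * len(accept_prob)  # empirical acceptance rates
        for _ in range(n_rounds):
            if rng.random() < eps:
                arm = rng.randrange(len(accept_prob))  # explore
            else:  # exploit the empirically best offer so far
                arm = max(range(len(accept_prob)), key=lambda a: values[a])
            reward = 1.0 if rng.random() < accept_prob[arm] else 0.0
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]
        return values

    print(epsilon_greedy([0.10, 0.30, 0.25]))  # offer 1 should rank highest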

A New Verified Compiler Backend for CakeML

Magnus Myreen
Chalmers University of Technology, Göteborg
SWS Colloquium
22 Feb 2017, 10:30 am - 1:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
The CakeML project has recently produced a verified compiler which we believe to be the most realistic verified compiler for a functional programming language to date. In this talk, I'll give an overview of the CakeML project with a focus on the new compiler, in particular how the compiler is structured, how the intermediate languages are designed, and how the proofs are carried out. The talk will stay at a fairly high level, but I am happy to dive into details for any of the parts that I know well.

The CakeML project is currently a collaboration between six sites across three continents. The new compiler is due to: Anthony Fox (Cambridge, UK), Ramana Kumar (Data61, Sydney, Australia), Magnus Myreen (Chalmers, Sweden), Michael Norrish (Data61, Canberra, Australia), Scott Owens (Kent, UK), and Yong Kiam Tan (CMU, USA).

Safe, Real-Time Software Reference Architectures for Cyber-Physical Systems

Renato Mancuso
University of Illinois at Urbana-Champaign
SWS Colloquium
21 Feb 2017, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
There has been an uptrend in the demand and need for complex Cyber-Physical Systems (CPS), such as self-driving cars, unmanned aerial vehicles (UAVs), and smart manufacturing systems for Industry 4.0. CPS often need to accurately sense the surrounding environment using high-bandwidth acoustic, imaging, and other types of sensors, and to take coordinated decisions and issue time-critical actuation commands. Hence, temporal predictability in sensing, communication, computation, and actuation is a fundamental attribute. Additionally, CPS must operate safely even in spite of software and hardware misbehavior, to avoid catastrophic failures. To satisfy the increasing demand for performance, modern computing platforms have substantially increased in complexity; for instance, multi-core systems are now mainstream, and partially re-programmable systems-on-chip (SoCs) have just entered production. Unfortunately, extensive and unregulated sharing of hardware resources directly undermines the ability to guarantee strong temporal determinism on modern computing platforms. Novel software architectures are needed to restore the temporal correctness of complex CPS when using these platforms. My research vision is to design and implement software architectures that can serve as a reference for the development of high-performance CPS, and that embody two main requirements: temporal predictability and robustness. In this talk, I will address the following questions concerning modern multi-core systems: Why can application timing be highly unpredictable? What techniques can be used to enforce safe temporal behaviors on multi-core platforms? I will also illustrate possible approaches for time-aware fault tolerance to maximize CPS functional safety. Finally, I will review the challenges faced by the embedded industry when trying to adopt emerging computing platforms, and I will highlight some novel directions that can be followed to accomplish my research vision.

Computational fair division and mechanism design

Simina Branzei
Hebrew University of Jerusalem
SWS Colloquium
20 Feb 2017, 10:00 am - 12:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
The emergence of online platforms has brought about a fundamental shift in economic thinking: the design of economic systems is now a problem that computer science can tackle. For the first time, we are able to move from the study of economic systems as natural systems to carefully designing and executing them on a computer. Prominent examples of digital market mechanisms include auctions for ads (run by companies such as Google) and for electromagnetic spectrum (used by the US government). I will discuss several recent developments in fair division and mechanism design. I will start with a dictatorship theorem for fair division (cake cutting), showing that requiring truthfulness gives rise to a dictator. Afterwards, I will discuss the theme of simplicity and complexity in mechanism design and, more generally, the interplay between economics, computation, and learning.

Adventures in Systems Reliability: Replication and Replay

Ali Mashtizadeh
Stanford University
SWS Colloquium
17 Feb 2017, 10:30 am - 1:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
The past decade has seen a rapid acceleration in the development of new and transformative applications in many areas, including transportation, medicine, finance, and communication. Most of these applications are made possible by the increasing diversity and scale of hardware and software systems.

While this brings unprecedented opportunity, it also increases the probability of failures and the difficulty of diagnosing them. Increased scale and transience have also made management increasingly challenging. Devices can come and go for a variety of reasons, including mobility, failure and recovery, and scaling capacity to meet demand.

In this talk, I will present several systems that I built to address the resulting challenges to reliability, management, and security.

Ori is a reliable distributed file system for devices at the network edge. Ori automates many of the tasks of storage reliability and recovery through replication, taking advantage of fast LANs and low-cost local storage in edge networks.

Castor is a record/replay system for multi-core applications with predictable and consistently low overheads. This makes it practical to leave record/replay on in production systems, to reproduce difficult bugs when they occur, and to support recovery from hardware failures through fault tolerance.

Cryptographic CFI (CCFI) is a dynamic approach to control-flow integrity. Unlike previous CFI systems that rely purely on static analysis, CCFI can classify pointers based on dynamic and runtime characteristics. This limits attacks to only actively used code paths, resulting in a substantially smaller attack surface.
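A loose sketch of the cryptographic-CFI idea (the key handling, pointer classes, and helper names are assumptions for illustration; the real CCFI operates on machine code): bind each code pointer to its runtime context with a MAC, and verify the tag before any indirect call.

    import hashlib
    import hmac

    KEY = b"per-process secret key"

    def tag_pointer(ptr, location):
        # Bind the pointer value to where it is stored (its runtime class).
        msg = ptr.to_bytes(8, "little") + location.to_bytes(8, "little")
        return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

    def checked_call(ptr, tag, location, table):
        # A forged or relocated pointer fails MAC verification.
        if not hmac.compare_digest(tag, tag_pointer(ptr, location)):
            raise RuntimeError("control-flow integrity violation")
        return table[ptr]()  # dispatch through the verified pointer

    table = {0x1000: lambda: "ok"}
    t = tag_pointer(0x1000, 0xDEAD0000)
    print(checked_call(0x1000, t, 0xDEAD0000, table))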

Type-Driven Program Synthesis

Nadia Polikarpova
MIT CSAIL, Cambridge USA
SWS Colloquium
15 Feb 2017, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Modern programming languages safeguard developers from many typical errors, yet more subtle errors, such as violations of security policies, still plague software. Program synthesis has the potential to eliminate such errors by generating executable code from concise and intuitive high-level specifications. Traditionally, program synthesis failed to scale to specifications that encode complex behavioral properties of software: these properties are notoriously hard to check even for a given program, so it's not surprising that finding the right program within a large space of candidates has been considered very challenging. My work tackles this challenge through the design of synthesis-friendly program verification mechanisms, which are able to check a large set of candidate programs against a complex specification at once, thereby efficiently pruning the search space.

Based on this principle, I developed Synquid, a program synthesizer that accepts specifications in the form of expressive types and uses a specialized type checker as its underlying verification mechanism. Synquid is the first synthesizer powerful enough to automatically discover provably correct implementations of complex data structure manipulations, such as insertion into red-black trees and AVL trees, and normal-form transformations on propositional formulas. Each of these programs is synthesized in under a minute. Going beyond textbook algorithms, I created a language called Lifty, which uses type-driven synthesis to automatically rewrite programs that violate information flow policies. In our case study, Lifty was able to enforce all required policies in a prototype conference management system.

On the Security and Scalability of Proof of Work Blockchains

Arthur Gervais
ETH Zurich
SWS Colloquium
08 Feb 2017, 10:00 am - 12:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111
The security properties of blockchain technology allow for the shifting of trust assumptions, e.g. to remove trusted third parties; however, they create new challenges for security and scalability, which have not yet been fully understood and which we investigate in this talk. The blockchain's security, for example, affects the ability of participants to exchange monetary value or to participate in network communication and the consensus process. Our first contribution provides a quantitative framework to objectively compare the security and performance characteristics of Proof-of-Work-based blockchains under adversaries with optimal strategies. Our work allows us to increase Bitcoin's transaction throughput by a factor of ten, given only one parameter change and without deteriorating the security of the underlying blockchain. In our second contribution, we highlight previously unconsidered impacts of the PoW blockchain's scalability on its security, and propose design modifications that are now implemented in the primary Bitcoin client. Because blockchain technology is still in its infancy, we conclude the talk with an outline of future work towards an open, scalable, privacy-preserving, and decentralized blockchain.

Guiding program analyzers toward unsafe executions

Dr. Maria Christakis
University of Kent
SWS Colloquium
06 Feb 2017, 10:00 am - 11:00 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
Most static program analysis techniques do not fully verify all possible executions of a program. They leave executions unverified when they do not check certain properties, fail to verify properties, or check properties under certain unsound assumptions, such as the absence of arithmetic overflow. In the first part of the talk, I will present a technique to complement partial verification results by automatic test case generation. In contrast to existing work, our technique supports the common case that the verification results are based on unsound assumptions. We annotate programs to reflect which executions have been verified, and under which assumptions. These annotations are then used to guide dynamic symbolic execution toward unverified program executions, leading to smaller and more effective test suites.

In the second part of the talk, I will describe a new program simplification technique called program trimming. Program trimming is a pre-processing step that removes execution paths while retaining equi-safety (that is, the generated program has a bug if and only if the original program has a bug). Since many program analyzers are sensitive to the number of execution paths, program trimming has the potential to improve their effectiveness. I will show that program trimming has a considerable positive impact on the effectiveness of two program analysis techniques: abstract interpretation and dynamic symbolic execution.
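Equi-safety is easiest to see on a contrived example (mine, not the talk's): the trimmed program drops a path that the pre-processing step has proven safe, so it has a bug if and only if the original does, while a path-sensitive analyzer now explores half as many paths.

    def original(x):
        if x >= 0:
            y = x + 1
            assert y > 0  # provably safe here: x >= 0 implies y >= 1
        else:
            y = x + 1
            assert y > 0  # buggy: fails for every integer x < 0

    def trimmed(x):
        if x >= 0:
            return        # safe path pruned; nothing left to check here
        y = x + 1
        assert y > 0      # the only remaining (and only buggy) path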

Inverse Rendering

Shida Beigpour
MPI-INF - D4
Joint Lecture Series
01 Feb 2017, 12:15 pm - 2:15 pm
Saarbrücken building E1 5, room 002
Photographs, as well as video frames, are two-dimensional projections of three-dimensional real-world scenes. Photorealistic Computer Graphic Imagery (CGI) is usually generated by "rendering" a sophisticated three-dimensional model of a realistic scene onto a virtual camera sensor. Such a process is not invertible due to the high amount of data loss. Still, both physically captured photographs and synthesized photos (CGI) contain a considerable amount of information about the characteristics of the scene itself. Even naïve human viewers are able to infer important information about the scene, e.g. shape, material, illumination, distance, and actions, solely from a single image. This is indeed much more difficult for computer systems. Inverse rendering is one of the hot topics in computer vision, in which the goal is to estimate and model the three-dimensional scene and its illumination automatically, given only one or a sparse set of images of the scene. This is by nature a highly under-constrained problem, which makes it very complex to solve. Yet the advancements in computation and imaging technologies (e.g. depth sensors and light-field cameras) open new horizons in this field. Inverse rendering makes many interesting applications possible, including creating novel views of the scene/objects, re-lighting, detecting and altering object materials in photographs, and augmented reality. This talk provides a brief overview of some of the state-of-the-art inverse rendering techniques and datasets, with a strong focus on addressing the complexities of real-world scenarios.

Time for Text Mining and Information Retrieval

Jannik Strötgen
MPI-INF - D5
Joint Lecture Series
11 Jan 2017, 12:15 pm - 2:15 pm
Saarbrücken building E1 5, room 002
Temporal information is an important aspect of any information space and has characteristics that make it highly eligible to be exploited in diverse text mining and information retrieval scenarios. For quite a long time, only the metadata of text documents (e.g. publication dates) has been considered, but with recent improvements in natural language processing (NLP), temporal expressions occurring in the content of documents (e.g. "January 2016", "next week") can be extracted and interpreted efficiently and with high quality. This is particularly valuable, as in many types of documents temporal expressions not only occur frequently but also play a crucial role, e.g. to anchor events in time.

In this talk, after explaining the basics and challenges of the NLP task of "temporal tagging", I present our approach to enriching documents with temporal information. Then I showcase several application scenarios in which we exploit automatically extracted temporal annotations for search and exploration tasks. These range from temporal information retrieval for news articles, via Wikipedia-based event extraction, up to the exploration of fictitious happenings in literary texts in the context of digital humanities research.
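As a minimal illustration of what a temporal tagger does (the patterns below are toy ones of my own; real taggers cover far more expression types and resolve them in context), explicit expressions can be matched and normalized to calendar values:

    import re

    MONTHS = {"january": 1, "february": 2, "march": 3, "april": 4,
              "may": 5, "june": 6, "july": 7, "august": 8, "september": 9,
              "october": 10, "november": 11, "december": 12}
    PATTERN = re.compile(r"\b(" + "|".join(MONTHS) + r")\s+(\d{4})\b", re.I)

    def tag_temporal(text):
        """Return (expression, normalized ISO value) pairs found in text."""
        tags = []
        for m in PATTERN.finditer(text):
            month, year = m.group(1).lower(), int(m.group(2))
            tags.append((m.group(0), f"{year:04d}-{MONTHS[month]:02d}"))
        return tags

    print(tag_temporal("The law passed in January 2016, not next week."))
    # [('January 2016', '2016-01')]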

Proving Performance Properties of Higher-order Functions with Memoization

Ravi Madhavan
EPFL
SWS Colloquium
20 Dec 2016, 11:30 am - 1:30 pm
Kaiserslautern building G26, room 112
simultaneous videocast to Saarbrücken building E1 5, room 029
Static verification of performance properties of programs is an important problem that has attracted a great deal of research. However, most existing tools infer best-effort upper bounds and hope that they match users' expectations. In this talk, I will present a system for specifying and verifying bounds on resources, such as the number of evaluation steps and heap-allocated objects, for functional Scala programs that may rely on lazy evaluation and memoization. In our system, users can specify the desired resource bound as a template with numerical holes in the contracts of functions, e.g. as "steps <= ? * size(list) + ?", along with other functional properties necessary for establishing the bounds. The system automatically infers values for the holes that will make the templates hold for all executions of the functions. For example, the property that a function converting a propositional formula f into negation-normal form (NNF) takes time linear in the size of f can be expressed in the post-condition of the function using the predicate "steps <= ? * size(f) + ?", where size is a user-defined function counting the number of nodes in the syntax tree of the formula. Using our tool, we have verified asymptotically precise bounds for several algorithms and data structures that rely on complex sharing of higher-order functions and memoization. Our benchmarks include balanced search trees like red-black trees and AVL trees, Okasaki's constant-time queues, deques, lazy data structures based on numerical representations such as lazy binomial heaps, cyclic streams, and dynamic programming algorithms. Some of the benchmarks have posed serious challenges to automatic as well as manual reasoning. The system is part of the Leon verifier and can be tried online at "http://leondev.epfl.ch" ("Resource bounds" section).

References: (a) Symbolic Resource Bound Inference for Functional Programs. Ravichandhran Madhavan and Viktor Kuncak. Computer Aided Verification (CAV) 2014. (b) Contract-based Resource Verification for Higher-order Functions with Memoization. Ravichandhran Madhavan, Sumith Kulal, and Viktor Kuncak. To appear in POPL 2017.
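The template idea above can be mimicked in plain Python (a rough analogue of my own; the actual system checks Scala contracts statically, for all executions, rather than by testing): instrument the NNF conversion with a step counter and check a candidate instantiation of "steps <= ? * size(f) + ?".

    # Formulas: ("var", name), ("not", f), ("and", f, g), ("or", f, g)

    def size(f):
        return 1 + sum(size(g) for g in f[1:] if isinstance(g, tuple))

    steps = 0

    def nnf(f, neg=False):
        """Negation-normal form, counting one step per node visited."""
        global steps
        steps += 1
        kind = f[0]
        if kind == "var":
            return ("not", f) if neg else f
        if kind == "not":
            return nnf(f[1], not neg)
        op = {"and": "or", "or": "and"}[kind] if neg else kind  # De Morgan
        return (op, nnf(f[1], neg), nnf(f[2], neg))

    f = ("not", ("and", ("var", "p"), ("or", ("var", "q"), ("not", ("var", "r")))))
    nnf(f)
    a, b = 1, 1                      # inferred values for the two holes
    assert steps <= a * size(f) + b  # the template holds on this run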

Network Inference: Graph Reconstruction and Verification

Hang Zhou
MPI-INF - D1
Joint Lecture Series
07 Dec 2016, 12:15 pm - 2:15 pm
Saarbrücken building E1 5, room 002
How efficiently can we find an unknown graph using shortest-path queries between its vertices? This is a natural theoretical question from the standpoint of recovering hidden information. The question is related to discovering the topology of Internet networks, which is a crucial step for building accurate network models and designing efficient algorithms for Internet applications.

In this talk, I will introduce the problems of graph reconstruction and verification via oracles. I will investigate randomized algorithms based on a Voronoi cell decomposition. I will also analyze greedy algorithms and prove that they are near-optimal.

The talk is based on joint work with Claire Mathieu and Sampath Kannan.
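For intuition, the naive baseline (not the talk's algorithm) already shows why query counting is the right measure: on an unweighted graph, one distance query per vertex pair recovers every edge, so reconstruction is trivially possible with O(n^2) queries, and the interesting question is how far below that one can go.

    from itertools import combinations

    def reconstruct_naive(vertices, dist):
        """Recover an unweighted graph from a shortest-path-distance oracle,
        one query per pair: an edge exists exactly where the distance is 1."""
        return {(u, v) for u, v in combinations(vertices, 2) if dist(u, v) == 1}

    # Toy oracle for the path graph 0-1-2-3.
    print(sorted(reconstruct_naive(range(4), lambda u, v: abs(u - v))))
    # [(0, 1), (1, 2), (2, 3)]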

Sustaining the Energy Transition: A Role for Computer Science and Complex Networks

Marco Aiello
Rijksuniversiteit Groningen
SWS Colloquium
03 Nov 2016, 10:30 am - 12:30 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
The energy sector is in the midst of an exciting transition. Driven by new generation technologies and by infrastructure digitalization, the traditional way of transmitting, distributing, and using energy is transforming a centralized, hierarchical system into a multi-directional, open infrastructure. While the vision of Intelligent Energy Networks is appealing and desirable, especially from a sustainability perspective, a number of major challenges remain to be tackled. The loss of centralized control, the intermittent nature of renewable energy sources, and the scale of future digital energy systems are novel situations for power system infrastructures and consumers that pose reliability and availability threats.

In this talk, I show examples of how Computer Science techniques have, and will continue to have, an important role in future energy systems. I focus on electricity as the energy vector, and on techniques from Service-Oriented Computing and AI Planning. I also present Complex Network theory as a design tool for energy distribution systems. To make things concrete, I will review almost ten years of personal research, including making office buildings energy-efficient, making homes smarter, and futuristic models for the evolution of power distribution grids to accommodate multi-directional energy flows with distributed generation and local control.

A Promising Semantics for Relaxed-Memory Concurrency

Viktor Vafeiadis
Max Planck Institute for Software Systems
Joint Lecture Series
02 Nov 2016, 12:15 pm - 2:15 pm
Saarbrücken building E1 5, room 002
Despite decades of research in shared-memory concurrency and in the semantics of programming languages, there is still no adequate answer to the following fundamental question: What is the right semantics for concurrent shared-memory programs written in higher-level languages? The difficulty in answering this question is that any solution has to balance the conflicting desiderata of programmers, compilers, and hardware. Previous attempts either allow spurious "out-of-thin-air" program behaviours or incur a non-negligible implementation cost. Nevertheless, we show that the question can be answered elegantly with the novel notion of promises: a thread may promise to execute a write in the future, thus enabling other threads to read from that write out of order. This idea alone suffices to explain most of the relaxed consistency behaviours observed as a result of compiler and hardware optimisations, while ruling out "out-of-thin-air" behaviours.

Heap-based reasoning about asynchronous concurrency

Johannes Kloos
Max Planck Institute for Software Systems
SWS Student Defense Talks - Thesis Proposal
27 Oct 2016, 6:00 pm - 8:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 105
Asynchronous concurrency is a widespread way of writing programs that deal with many short tasks. It is the programming model behind event-driven concurrency, as exemplified by GUI applications, where the tasks correspond to event handlers, web applications based around JavaScript, and the implementation of web browsers, but also server-side software and operating systems. It provides a middle ground between sequential programming and multi-threading, giving the benefits of concurrency while being easier to program than multi-threading. While there are languages and libraries that make the construction of asynchronous programs relatively simple, there is little support for asynchronous program analysis. Existing work is mostly focused on model checking or on performing specific targeted analyses. The model checking techniques in particular have turned out to be inefficient, and they completely ignore the heap. In this thesis proposal, I address the question of how we can reason about asynchronous programs and how I want to use this to optimize them. I will address three main questions:

1. How can we reason about asynchronous programs without ignoring the heap?
2. How can we use such reasoning techniques to optimize programs involving asynchronous behavior?
3. How can we scale reasoning and optimization to apply to real-world software?

In the proposal, I will describe answers to all three questions. The unifying idea behind all of these results is the use of an appropriate model of global state (usually the heap) and a promise-based model of asynchronous concurrency.

To address the first question, I will describe prior work (ALST, ECOOP 2015) in which we extended the OCaml type system to reason about asynchronous concurrency. To address the second and third questions, I will describe an ongoing project (JSDefer) on optimizing web pages by loading JavaScript asynchronously, and a planned project on an automated optimization step for OCaml programs that safely changes synchronous I/O operations to asynchronous ones.
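For readers unfamiliar with the model, a small asyncio sketch (illustrative only; the thesis targets OCaml and JavaScript programs) shows the promise-based style: starting a task immediately yields a future, and other code may await such futures out of order.

    import asyncio

    async def fetch(name, delay):
        await asyncio.sleep(delay)   # stands in for an asynchronous I/O operation
        return f"{name}: done"

    async def main():
        # Tasks run concurrently; completion order is independent of start order.
        slow = asyncio.create_task(fetch("slow", 0.2))
        fast = asyncio.create_task(fetch("fast", 0.1))
        print(await fast)  # resolves first
        print(await slow)

    asyncio.run(main())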

Polynomial Identity Testing

Markus Bläser
Fachrichtung Informatik - Saarbrücken
Joint Lecture Series
05 Oct 2016, 12:15 pm - 3:15 pm
Saarbrücken building E1 5, room 002
Does randomness help to speed up computation? This is a fundamental question in computer science. Inspired by early successes like randomized primality tests, a significant number of researchers believed for a long time that randomness indeed helps to speed up computations. In 1997, Impagliazzo and Wigderson proved that the existence of certain hard-to-compute functions implies that randomness can only give a speedup by a polynomial factor. Shortly after this, Agrawal, Kayal, and Saxena were able to give a deterministic polynomial-time primality test.

We still do not know whether, for every problem having a randomized polynomial-time algorithm, there is also a deterministic polynomial-time one. In this talk, we will look at the flagship problem that has a randomized polynomial-time algorithm but for which we do not know whether a deterministic one exists: the so-called polynomial identity testing problem. We are given an arithmetic circuit computing some polynomial, and we want to decide whether this polynomial is identically zero. While there is an easy randomized algorithm, we do not know whether a deterministic polynomial-time algorithm exists, and it can be shown that the existence of such an algorithm implies lower bounds on circuit size, a partial converse to the result by Impagliazzo and Wigderson. In this talk, we will introduce the polynomial identity testing problem and its variants, and give an overview of deterministic algorithms for some special cases.
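The easy randomized algorithm is, in essence, the Schwartz-Zippel test: a nonzero polynomial of degree d vanishes at a point drawn uniformly from a set of size S with probability at most d/S. A small sketch (the circuit encoding and parameters are my own; testing p = q covers testing p = 0 as a special case):

    import random

    def eval_circuit(gates, inputs):
        """Evaluate an arithmetic circuit given as a gate list. Each gate is
        ("in", i), ("add", a, b) or ("mul", a, b), where a and b index
        earlier gates; the last gate is the output."""
        vals = []
        for g in gates:
            if g[0] == "in":
                vals.append(inputs[g[1]])
            elif g[0] == "add":
                vals.append(vals[g[1]] + vals[g[2]])
            else:
                vals.append(vals[g[1]] * vals[g[2]])
        return vals[-1]

    def probably_equal(gates1, gates2, n_vars, degree_bound, trials=20):
        # Schwartz-Zippel: differing polynomials are caught in each trial
        # with probability at least 1 - degree_bound/S.
        S = 100 * degree_bound
        for _ in range(trials):
            point = [random.randrange(S) for _ in range(n_vars)]
            if eval_circuit(gates1, point) != eval_circuit(gates2, point):
                return False  # certainly different polynomials
        return True           # identical with high probability

    lhs = [("in", 0), ("in", 1), ("add", 0, 1), ("mul", 2, 2)]   # (x+y)^2
    rhs = [("in", 0), ("in", 1), ("mul", 0, 0), ("mul", 0, 1),   # x^2 + 2xy + y^2
           ("add", 3, 3), ("mul", 1, 1), ("add", 2, 4), ("add", 6, 5)]
    print(probably_equal(lhs, rhs, n_vars=2, degree_bound=2))    # True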

Multi-Authority ABE: Constructions and Applications

Beverly Li
Hunan University
SWS Colloquium
23 Sep 2016, 11:00 am - 2:00 pm
Saarbrücken building E1 5, room 029
Attribute-based Encryption (ABE) is a form of asymmetric cryptography that allows encryption over labels named "attributes". In an ABE scheme, an "authority" generates public parameters and secrets, and assigns attributes (and associated secrets) to users. Data can be encrypted using formulas over attributes; users can decrypt if they have attribute secrets that satisfy the encryption formula.

In this talk, I will discuss an extension to ABE that allows encryption over attributes provided by multiple authorities. Such a scheme enables secure data sharing between otherwise distrusting organizations. I will discuss example scenarios where multi-authority ABE is useful, and describe a new construction of a multi-authority ABE scheme named DMA.

In DMA, a data owner is a first-class principal: users in the system get attributes in cooperation with the data owner and various authorities. Compared to previous work, DMA does not require a global identity for users, nor does it require the multiple authorities to trust a single central authority. DMA is also immune to collusion attacks mounted by users and authorities.

Useful but ugly games

Ruediger Ehlers
University of Bremen
SWS Colloquium
13 Sep 2016, 10:30 am - 1:30 pm
Kaiserslautern building G26, room 112
simultaneous videocast to Saarbrücken building E1 5, room 029
The controller synthesis problem for CPS is often reduced to solving omega-regular games, with an optional optimization criterion. The criteria commonly used in the literature on omega-regular games are, however, frequently unsuitable for obtaining high-quality CPS controllers, as they are unable to capture many, if not most, real-world optimization objectives. We survey a few such cases and show that these problems can be overcome with more sophisticated optimization criteria. The synthesis problem for them gives rise to ugly games, i.e. games that have complicated definitions but relatively simple solutions.

Towards Privacy-Compliant Mobile Computing

Paarijaat Aditya
Max Planck Institute for Software Systems
SWS Student Defense Talks - Thesis Proposal
06 Sep 2016, 5:30 pm - 7:30 pm
Saarbrücken building E1 5, room 005
simultaneous videocast to Kaiserslautern building G26, room 112
Sophisticated mobile computing, sensing, and recording devices like smartphones, life-logging cameras, and Google Glass are carried by their users virtually around the clock. While these devices enable transformative new applications and services, they also introduce entirely new threats to users' privacy, because they can capture a complete record of the user's location, online and offline activities, and social encounters, including an audiovisual record. Such a record of users' personal information is highly sensitive and subject to numerous privacy risks. In this thesis, we address two specific contexts in which this problem arises: 1) risks to users' personal information introduced by a popular class of apps called mobile social apps; 2) privacy risks due to ubiquitous digital capture, where bystanders may inadvertently (and/or against their wishes) be captured in photos and videos recorded by other nearby users. Both solutions aim to put the user back in control of what personal information is collected and shared, while enabling innovative new applications.