Events 2013

Verifying Probabilistic Programs

Stefan Kiefer University of Oxford
16 Dec 2013, 11:00 am - 12:00 pm
Kaiserslautern building G26, room 113
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
I am going to talk about two approaches to the verification of probabilistic systems: (1) equivalence analysis; and (2) termination analysis.

Deciding equivalence of probabilistic automata is a key problem for establishing various behavioural and anonymity properties of probabilistic systems. In particular, it is at the heart of the tool APEX, an analyser for probabilistic programs. APEX is based on game semantics and analyses a broad range of anonymity and termination properties of randomised protocols and other open programs.

Proving programs terminating is a fundamental computer science challenge. Recent research has produced powerful tools that can check a wide range of programs for termination. The analogue for probabilistic programs, namely termination with probability one ("almost-sure termination"), is an equally important property for randomised algorithms and probabilistic protocols. We have developed a novel algorithm for proving almost-sure termination of probabilistic programs. Our algorithm exploits the power of state-of-the-art model checkers and termination provers for nonprobabilistic programs.
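As a toy illustration of almost-sure termination (my own example, not one from the talk): the following program flips a biased coin until it sees heads. No finite bound on its running time exists, yet it terminates with probability one.

```python
import random

def geometric_trials(p=0.5):
    """Flip a coin with success probability p until it comes up heads.

    For any p > 0 there is no upper bound on the number of iterations,
    but the probability of looping forever is 0: the program terminates
    almost surely, even though termination is not guaranteed.
    """
    flips = 0
    while True:
        flips += 1
        if random.random() < p:   # "heads" with probability p
            return flips

if __name__ == "__main__":
    print(geometric_trials())     # some finite number, with probability 1
```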

Inferring Invisible Internet Traffic

Mark Crovella Boston University
06 Dec 2013, 10:15 am - 11:15 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
The Internet is at once an immense engineering artifact, a pervasive social force, and a fascinating object of study. Unfortunately many natural questions about the Internet cannot be answered by direct measurement, requiring us to turn instead to the tools of statistical inference. As a detailed example I'll describe a current project in traffic measurement. We are asking the question: using traffic measurements taken at one location in the Internet, can we estimate how much traffic is flowing in a different part of the Internet? Surprisingly, the answer is yes. I'll explain why this is possible (with a connection to problems like the Netflix Prize), how it can be done, and how this result could be used to give a network operator an edge over its competitors.
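The connection to problems like the Netflix Prize comes from low-rank structure: traffic matrices, like rating matrices, can often be approximated by a few latent factors, so unmeasured entries can be estimated from measured ones. The sketch below is a generic rank-1 imputation on synthetic data, my own illustration of that general idea rather than the method used in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic rank-1 "traffic matrix": flow i->j proportional to how much
# node i sends overall times how much node j receives overall.
send = rng.uniform(1, 10, size=8)
recv = rng.uniform(1, 10, size=8)
truth = np.outer(send, recv)

# Pretend we only measured about 60% of the entries.
observed = rng.random(truth.shape) < 0.6
measured = np.where(observed, truth, np.nan)

# Iterative rank-1 imputation: fill the holes, project onto the best
# rank-1 approximation via SVD, restore the measured entries, repeat.
estimate = np.where(observed, measured, np.nanmean(measured))
for _ in range(50):
    u, s, vt = np.linalg.svd(estimate, full_matrices=False)
    low_rank = s[0] * np.outer(u[:, 0], vt[0])
    estimate = np.where(observed, measured, low_rank)

err = np.abs(estimate - truth)[~observed].mean() / truth[~observed].mean()
print(f"mean relative error on unmeasured entries: {err:.3f}")
```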

Computer-Aided Cryptographic Analysis and Design

Gilles Barthe IMDEA Madrid
05 Dec 2013, 11:00 am - 12:00 pm
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Distinguished Lecture Series
EasyCrypt is a tool-supported framework for the machine-checked construction and verification of security proofs of cryptographic systems; it has been used to verify emblematic examples of public-key encryption schemes, digital signature schemes, hash function designs, and block cipher modes of operation. The lecture will motivate the role of computer-aided proofs in the broader context of provable security, explore the connections between provable security and programming language methods, and speculate on potential applications of computer tools in the security analysis and design of cryptographic constructions.

Efficient Algorithms in Massive Graphs

He Sun Cluster of Excellence - Multimodal Computing and Interaction - MMCI
04 Dec 2013, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series of MPI-INF, MPI-SWS and MMCI
tba

Improving Trust in Cloud, Enterprise, and Mobile Computing Platforms

Nuno Miguel Carvalho Santos Max Planck Institute for Software Systems
27 Nov 2013, 6:00 pm - 7:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Defense
Trust plays a fundamental role in the adoption of technology by society. Potential consumers tend to avoid a particular technology whenever they feel suspicious about its ability to cope with their security demands. Such a loss of trust could occur in important computing platforms, namely cloud, enterprise, and mobile platforms. In this thesis, we aim to improve trust in these platforms by (i) enhancing their security mechanisms, and (ii) giving their users guarantees that these mechanisms are in place. To realize both these goals, we propose several novel systems. For cloud platforms, we present Excalibur, a system that enables building trusted cloud services. Such services give cloud customers the ability to process data privately in the cloud, and to attest that the respective data protection mechanisms are deployed. Attestation is made possible by the use of trusted computing hardware placed on the cloud nodes. For enterprise platforms, we propose an OS security model, the broker security model, aimed at providing information security against a negligent or malicious system administrator while letting him retain most of the flexibility to manage the OS. We demonstrate the effectiveness of this model by building BrokULOS, a proof-of-concept instantiation of this model for Linux. For mobile platforms, we present the Trusted Language Runtime (TLR), a software system for hosting mobile apps with stringent security needs (e.g., e-wallet). The TLR leverages ARM TrustZone technology to protect mobile apps from OS security breaches.

Interactive Typed Tactic Programming in Coq

Beta Ziliani Max Planck Institute for Software Systems
26 Nov 2013, 4:00 pm - 5:00 pm
Saarbrücken building E1 5, room 029
SWS Student Defense Talks - Thesis Proposal
Effective support for custom proof automation is essential for large-scale interactive proof development. However, existing languages for automation via tactics either (a) provide no way to specify the behavior of tactics within the base logic of the accompanying theorem prover, or (b) rely on advanced type-theoretic machinery that is not easily integrated into established theorem provers. My thesis is that typechecked tactic programming in Coq is possible in two different programming styles: (1) logic programming (by exploiting Coq’s existing overloading mechanism) and (2) functional programming (by a new monadic language, Mtac). The combination of both styles leads to a novel way of creating extensible tactics.

An experimentation platform for the Internet's edge

Fabián E. Bustamante Northwestern University
21 Nov 2013, 11:00 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
Internet measurement and experimentation platforms, such as PlanetLab, have become essential for network studies and distributed systems evaluation. Despite their many benefits and strengths, a by now well-known problem with existing platforms is their inability to capture the geographic and network diversity of the wider, commercial Internet. Lack of diversity and poor visibility into the network hamper progress in a number of important research areas, from network troubleshooting to broadband characterization and Internet topology mapping, and complicate our attempts to generalize from test-bed evaluations of networked systems. The issue has served as motivation for several efforts to build new experimentation platforms and expand existing ones. However, capturing the edge of the network remains an elusive goal. I argue that at its root, the problem is one of incentives. Today's platforms build on the assumption that the goals of experimenters and those hosting the platform are the same. As much of the Internet's growth occurs in residential broadband and mobile networks, this no longer holds. In this talk, I will present Dasu, a measurement experimentation platform for the Internet's edge that explicitly aligns the objectives of the experimenters with those of the users hosting the platform. Dasu is designed to support both network measurement experimentation and broadband characterization. Dasu has been publicly available since mid-2010 and has been adopted by over 95,000 users across 150 countries. I will then illustrate the value of Dasu's unique perspective with some of the ongoing projects and collaborations already taking advantage of it.

From bounded affine types to automatic timing analysis

Dan R. Ghica University of Birmingham
20 Nov 2013, 2:00 pm - 3:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
Bounded linear types have proved to be useful for automated resource analysis and control in functional programming languages. In this paper we introduce an affine bounded linear typing discipline on a general notion of resource which can be modeled in a semiring. For this type system we provide both a general type-inference procedure, parameterized by the decision procedure of the semiring equational theory, and a (coherent) categorical semantics. This is a very useful type-theoretic and denotational framework for many applications to resource-sensitive compilation, and it represents a generalization of several existing type systems. As a non-trivial instance, motivated by our ongoing work on hardware compilation, we present a complex new application to calculating and controlling timing of execution in a (recursion-free) higher-order functional programming language with local store.

Dan R. Ghica, Alex Smith

Language and Social Dynamics in Online Communities

Cristian Danescu-Niculescu-Mizil Max Planck Institute for Software Systems
06 Nov 2013, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series of MPI-INF, MPI-SWS and MMCI
More and more of life is now manifested online, and many of the digital traces that are left by human activity are in natural-language format. In this talk I will show how exploiting these resources under a computational framework can bring a new understanding of online social dynamics; I will be discussing two of my efforts in this direction.

The first project explores the relation between users and their community, as revealed by patterns of linguistic change. I will show that users follow a determined life-cycle with respect to their susceptibility to adopt new community norms, and how this insight can be harnessed to predict how long a user will stay active in the community.

The second project proposes a computational framework for identifying and characterizing politeness, a central force shaping our communication behavior. I will show how this framework can be used to study the social aspects of politeness, revealing new interactions with social status and community membership.
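As a rough picture of what a computational treatment of politeness can look like, here is a generic bag-of-words classifier on a handful of made-up requests. This is my own minimal sketch, not the framework from the talk, which relies on linguistically motivated cues such as hedges, deference, and indirection.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: requests labelled polite (1) or impolite (0).
requests = [
    "Could you please take a look when you get a chance?",
    "Would you mind reviewing this small patch? Thanks!",
    "Sorry to bother you, but could you clarify this point?",
    "Fix this now.",
    "Why haven't you answered my question yet?",
    "This is wrong, redo it.",
]
labels = [1, 1, 1, 0, 0, 0]

# Bag-of-words + logistic regression: unigrams such as "please", "could",
# and "sorry" end up carrying most of the signal.
model = make_pipeline(CountVectorizer(lowercase=True), LogisticRegression())
model.fit(requests, labels)

print(model.predict(["Could you please rerun the tests?",
                     "Fix this immediately."]))   # e.g. [1 0]
```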

This talk includes joint work with Dan Jurafsky, Jure Leskovec, Christopher Potts, Moritz Sudhof and Robert West.

Cluster Management at Google

John Wilkes Google
26 Sep 2013, 4:00 pm - 5:00 pm
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Distinguished Lecture Series
Cluster management is the term that Google uses to describe how we control the computing infrastructure in our datacenters that supports almost all of our external services. It includes allocating resources to different applications on our fleet of computers, looking after software installations and hardware, monitoring, and many other things. My goal is to present an overview of some of these systems, introduce Omega, the new cluster-manager tool we are building, and present some of the challenges that we're facing along the way. Many of these challenges represent research opportunities, so I'll spend the majority of the time discussing those.

Separation logic, object-orientation and refinement

Stephan van Staden ETH Zurich
19 Sep 2013, 1:30 pm - 3:00 pm
Kaiserslautern building G26, room 113
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
-

On the Achievability of Simulation-based Security for Functional Encryption

Vincenzo Iovino Universita di Salerno
04 Sep 2013, 3:00 pm - 4:00 pm
Saarbrücken building E1 4, room 024
SWS Colloquium
Let F:K×M→Σ be a functionality, where K is the key space, M is the message space, and Σ is the output space. Then a functional encryption (FE) scheme for F is a special encryption scheme in which, for every key k∈K, the owner of the master secret key Msk associated with the public key Pk can generate a special key or "token" Tok that allows the computation of F(k,m) from a ciphertext of m computed under public key Pk. In other words, whereas in traditional encryption schemes decryption is an all-or-nothing affair, in FE it is possible to finely control the amount of information that is revealed by a ciphertext. Unlike traditional encryption, for FE indistinguishability-based security is not equivalent to simulation-based security. This work attempts to clarify to what extent simulation-based security (SIM-security) is achievable for functional encryption and its relation to the weaker indistinguishability-based security (IND-security). Our main result is a compiler that transforms any FE scheme for the general circuit functionality (which we denote by circuit-FE) meeting IND-security into a circuit-FE scheme meeting SIM-security, where: (i) in the random oracle model, the resulting scheme is secure for an unbounded number of encryption and key queries, which is the strongest security level one can ask for; and (ii) in the standard model, the resulting scheme is secure for a bounded number of encryption and non-adaptive key queries, but an unbounded number of adaptive key queries. This matches known impossibility results and improves upon Gorbunov et al. [CRYPTO'12] (which is only secure for non-adaptive key queries). Our compiler is inspired by the celebrated Fiat-Lapidot-Shamir paradigm [FOCS'90] for obtaining zero-knowledge proof systems from witness-indistinguishable proof systems. We also give a tailored construction of SIM-secure hidden vector encryption (HVE) in composite-order bilinear groups. Finally, we revisit the known negative results for SIM-secure FE, extending them to natural weakenings of the security definition and thus providing essentially a full picture of the achievability of FE. We conclude with open problems and future challenges in the area.
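To make the shape of the FE interface concrete, here is a deliberately toy sketch of the Setup/KeyGen/Enc/token algorithms for an invented "substring" functionality. The names and the "ciphertext" are placeholders of my own; a real scheme would computationally hide m, and the point here is only that a token reveals F(k,m) rather than all of m.

```python
import os

def F(k: bytes, m: bytes) -> int:
    """The functionality F : K x M -> Sigma. Toy choice: does m contain k?"""
    return int(k in m)

def setup():
    """Returns (Pk, Msk). Both are placeholders in this sketch."""
    return b"public-params", os.urandom(32)

def encrypt(pk: bytes, m: bytes) -> dict:
    """Placeholder ciphertext: a real FE scheme would hide m; here we only
    illustrate the interface, not any security property."""
    return {"ct": m}

def keygen(msk: bytes, k: bytes):
    """The owner of Msk issues a token Tok for the key k in K."""
    def token(ciphertext: dict) -> int:
        return F(k, ciphertext["ct"])      # the token yields F(k, m) only
    return token

pk, msk = setup()
ct = encrypt(pk, b"attack at dawn")
tok = keygen(msk, b"dawn")
print(tok(ct))   # 1 -- decryption is fine-grained, not all-or-nothing
```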

Modular Verification of Finite Blocking

Peter Müller ETH Zürich
29 Aug 2013, 1:30 pm - 2:30 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Finite blocking is an important correctness property of multi-threaded programs. It requires that each blocking operation such as acquiring a lock or joining a thread executes eventually. In addition to deadlock freedom, finite blocking in the presence of non-terminating threads requires one to prove that no thread holds a lock indefinitely or attempts to join a non-terminating thread. In this talk, I will present a verification technique for finite blocking of possibly non-terminating programs. The key idea is to track explicitly whether a thread has an obligation to perform an operation that releases another thread from being blocked, for instance, an obligation to release a lock or to terminate. Each obligation is associated with a lifetime to ensure that it is fulfilled within finitely many steps. Our technique guarantees finite blocking for programs with a finite number of threads and fair scheduling. We have implemented our technique in the automatic program verifier Chalice.

Type Refinements for Compiler Correctness

Robert Harper Carnegie Mellon University
22 Aug 2013, 1:00 pm - 2:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
Type refinements, introduced by Freeman and Pfenning and explored by Davies and Dunfield, unify the ontological and epistemic views of typing. Types tell us what programming language constructs exist, whereas refinements express properties of the values of a type.  Here we show that refinements are very useful in compiler correctness proofs, wherein it often arises that two expressions that are inequivalent in general are equivalent in the particular context in which they occur.  Refinements serve to restrict the contexts sufficiently so that the desired equivalence holds.  For example, an expression might be replaced by a more efficient one, even though it is not generally equivalent to the original, but is interchangeable in any context satisfying a specified refinement of the type of those expressions.

We study here in detail a particular problem of compiler correctness, namely the correctness of compiling polymorphism (generics) to dynamic types by treating values of variable type as values of a universal dynamic type. Although this technique is widely used (for example, to compile Java generics), no proof of its correctness has been given to date. Surprisingly, standard arguments based on logical relations do not suffice, precisely because it is necessary to record deeper invariants about the compiled code than is expressible in its types alone. We show that refinements provide an elegant solution to this problem by capturing the required invariants so that a critical invertibility property that is false in general can be proved to hold in the contexts that arise in the translated code. This proof not only establishes the correctness of this compilation method, but also exemplifies the importance of refinements for compiler correctness proofs more generally.

From Replication to Flexible and Portable Determinism for Java

Joerg Domaschka Institute for Information Resource Management, Ulm University
05 Aug 2013, 11:30 am - 1:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 113
SWS Colloquium
Determinism of applications is required in the domains of testing, debugging, and fault-tolerant replication. The literature has presented various ways to render application execution deterministic in either a transparent or a non-transparent manner. Existing approaches operate at the layers of hardware, operating system, runtime environment, or even inside the application logic. Other systems operate in between two layers: hypervisors between hardware and operating system, and supervisors between operating systems and applications. In this talk, I focus on the combination of replication and determinism: applying replication techniques and replication protocols to business applications requires that the logic of the business application behave deterministically, while other parts of the system, such as the replication logic, need not. This talk gives a brief introduction to the Java-based Virtual Nodes Replication Framework (VNF). VNF allows a transparent replication of business logic. It enables a late configuration of multiple aspects of a replica group, including the particular replication protocol. Deciding late which replication protocol to apply, however, is only possible when the application logic satisfies the constraints imposed by the replication protocol. Many replication protocols can only be applied when it is ensured that the application logic behaves deterministically. The Deterministic Java suite (Dj suite) is capable of rendering Java applications fully deterministic. In contrast to other approaches, it is pure Java and hence portable across JVMs, operating systems, and hardware platforms. Furthermore, it can be selectively applied to only parts of a Java program, which makes it suitable for replication scenarios. In addition, the Dj suite comprises a deterministic version of the Java runtime libraries, which makes it the only comprehensive, Java-based approach in the literature. The capability for flexible configuration ensures adaptability to many different use cases. In this talk, I give an overview of mechanisms and techniques for making Java applications deterministic. In addition, I sketch how to build a deterministic Java-only runtime system from an existing open-source Java implementation. The talk concludes with evaluation results, a discussion of future work, and an outlook on the integration of Dj with VNF as well as the possibility of virtualised Java platforms.

Programming with algebraic effects and handlers in Eff

Andrej Bauer University of Ljubljana
02 Aug 2013, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
Eff is a programming language in which computational effects and handlers are first-class, so we can define new computational effects and handle existing ones in flexible ways. This allows us to express naturally not only the traditional computational effects, but also control operators, such as delimited continuations, I/O redirection, transactions, backtracking and other search strategies, and cooperative multi-threading.

In the talk I shall first introduce Eff from a practical point of view through examples, and then focus on a more precise treatment of a core Eff. I shall also present an effect system for Eff.
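Eff's first-class effects and handlers have no direct counterpart in mainstream languages, but the basic idea, computations that issue effect operations and handlers that interpret them, can be mimicked with Python generators. The sketch below is my own illustration of that idea, not Eff syntax: a handler interprets "get"/"put" requests against a piece of local state.

```python
def run_state(comp, init):
    """Interpret a computation that yields ('get', _) and ('put', v)
    requests, handling them against local state -- roughly the role an
    effect handler plays in Eff."""
    gen = comp()
    state, reply = init, None
    while True:
        try:
            op, arg = gen.send(reply)          # resume with the handler's answer
        except StopIteration as done:
            return done.value, state
        if op == "get":
            reply = state
        elif op == "put":
            state, reply = arg, None
        else:
            raise ValueError(f"unhandled effect: {op}")

def counter():
    n = yield ("get", None)     # effect operation: read the state
    yield ("put", n + 1)        # effect operation: write the state
    return (yield ("get", None))

print(run_state(counter, 41))   # (42, 42)
```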

Thoth: Controlling and tracking personal records in distributed multi-party systems

Eslam Elnikety Max Planck Institute for Software Systems
31 Jul 2013, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 422
SWS Student Defense Talks - Qualifying Exam
Personal data such as medical, tax, financial, employment or customer records are routinely stored and processed in distributed systems operated by (multiple) third parties. Ensuring compliance with data retention and privacy laws and policies, and reliably accounting for all data use is a challenging technical problem in these systems. In this paper, we show how trusted storage, a recently proposed technology that packages a trusted interpreter for a simple declarative policy language with a storage enclosure, can be used to enforce rich policies for data replication, dissemination, and mandatory access logging. The guarantees provided by Thoth rely on a small trusted computing base and do not constrain providers' flexibility in replicating and migrating data as required for availability and to meet other operational needs.

Traffic Correlation on Tor by Realistic Adversaries

Aaron Johnson US Naval Research Laboratory
29 Jul 2013, 1:00 pm - 2:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 113
SWS Colloquium
We present the first analysis of the popular Tor anonymity network that indicates the security of typical users against reasonably realistic adversaries in the Tor network or in the underlying Internet. Our results show that Tor users are far more susceptible to compromise than indicated by prior work. Specific contributions include (1) a model of various typical kinds of users, (2) an adversary model that includes Tor network relays, autonomous systems (ASes), Internet exchange points (IXPs), and groups of IXPs drawn from empirical study, (3) metrics that indicate how secure users are over a period of time, (4) the most accurate topological model to date of ASes and IXPs as they relate to Tor usage and network configuration, (5) a novel realistic Tor path simulator (TorPS), and (6) analyses of security making use of all the above. To show that our approach is useful to explore alternatives and not just Tor as currently deployed, we also analyze a published alternative path selection algorithm, Congestion-Aware Tor. We create an empirical model of Tor congestion, identify novel attack vectors, and show that it too is more vulnerable than previously indicated.
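A back-of-the-envelope version of the kind of question such metrics answer: if an adversary controls some fraction of guard and exit capacity, how likely is a user who builds many circuits to be correlated at least once? The Monte Carlo sketch below is a drastically simplified stand-in of my own for TorPS (uniform, independent relay choice, no path constraints or guard persistence), intended only to illustrate the metric.

```python
import random

def prob_compromised_at_least_once(n_circuits, guard_frac, exit_frac, trials=20000):
    """Estimate the probability that, over n_circuits circuits, at least one
    circuit has both its guard and its exit controlled by the adversary
    (the classic end-to-end correlation condition)."""
    hits = 0
    for _ in range(trials):
        for _ in range(n_circuits):
            if random.random() < guard_frac and random.random() < exit_frac:
                hits += 1
                break
    return hits / trials

# e.g. an adversary with 10% of guard and 20% of exit capacity, against a
# user who builds 100 circuits over some period of time
print(prob_compromised_at_least_once(100, 0.10, 0.20))
```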

Complex networks approach to modeling online social systems

Przemyslaw Grabowicz Institute for Cross-Disciplinary Physics and Complex Systems, Palma de Mallorca
04 Jul 2013, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
An increasing number of today's social interactions occur using online social media as communication channels. Some online social networks have become extremely popular in the last decade. They differ among themselves in the character of the service they provide to online users. For instance, Facebook can be seen mainly as a platform for keeping in touch with close friends and relatives, Twitter is used to propagate and receive news, LinkedIn facilitates the maintenance of professional contacts, Flickr gathers amateurs and professionals of photography, etc. Albeit different, all these online platforms share an ingredient that pervades all their applications. There exists an underlying social network that allows their users to keep in touch with each other and helps to engage them in common activities or interactions leading to a better fulfillment of the service's purposes. This is the reason why these platforms share a good number of functionalities, e.g., broadcasted status updates, personal communication channels, easy one-step information sharing, groups created and maintained by the users, organized user-generated content, etc. As a result, online social networks are an interesting field to study social behavior that seems to be generic among the different online services. Since at the bottom of these services lies a network of declared relations and the basic interactions in these platforms tend to be pairwise, a natural methodology for studying these systems is provided by network science. In this presentation I describe some of the results of my studies about community structure, interaction dynamics and browsing patterns in online social networks. I present them in an interdisciplinary context of network science, sociology and computer science.

The presentation is divided into three main parts, here are the links to our publications related to each of the sections:

Part I: Interaction patterns in the context of social groups http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0029358

Part II: Social and topical groups http://dl.acm.org/citation.cfm?id=2433475

Part III: User browsing patterns and photo recommendation https://docs.google.com/file/d/0B0_e6k3kQKEubEZfZnQwckNjQXM/edit

Privad: Practical Non-tracking Advertising System

Alexey Reznichenko Max Planck Institute for Software Systems
28 Jun 2013, 11:30 am - 12:30 pm
Kaiserslautern building G26, room 112
SWS Student Defense Talks - Thesis Proposal
Online tracking of users in support of behavioral advertising is widespread. Several researchers have proposed non-tracking online advertising systems that go well beyond the requirements of the Do-Not-Track initiative launched by the US Federal Trade Commission (FTC). The primary goal of these systems is to allow for behaviorally targeted advertising without revealing user behavior (clickstreams) or user profiles to the ad network. Although these designs purport to be practical solutions, none of them adequately consider a number of important practical aspects. One such aspect is the role of the ad auctions, which today are central to the operation of online advertising systems. Moreover, the systems lack the ability to gather rich statistical data about their operation. They have not been deployed at scale or adequately evaluated in real-life settings. This proposal addresses the challenge of running auctions that leverage user profile information while keeping the user profile private. It also presents our ongoing efforts to build and evaluate both a practical non-tracking advertising system as well as a differentially private data collection system. We describe an experimental Privad prototype equipped with the PDDP querying subsystem and propose a set of experiments designed to gain first-hand experience in running a "private-by-design" system.
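For the differentially private data collection part, the basic building block is answering aggregate queries with calibrated noise. Below is a minimal sketch of a noisy count using the standard Laplace mechanism; it is not the actual PDDP protocol, which additionally splits the noise addition across clients so that no single party sees exact counts.

```python
import random

def dp_count(values, predicate, epsilon):
    """Count how many values satisfy `predicate`, then add Laplace(1/epsilon)
    noise. One user changes the true count by at most 1 (sensitivity 1), so
    the released value satisfies epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    # Laplace(0, 1/epsilon) noise as the difference of two exponentials.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

clicks_per_user = [3, 0, 1, 7, 2, 0, 5, 1]          # toy client reports
print(dp_count(clicks_per_user, lambda c: c >= 1, epsilon=0.5))
```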

Naiad: a system for iterative, incremental, and interactive distributed dataflow

Frank McSherry Microsoft Research Silicon Valley
24 Jun 2013, 2:00 pm - 3:30 pm
Kaiserslautern building G26, room 113
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Distinguished Lecture Series
In this talk I'll describe the Naiad system, based on a new model for low-latency incremental and iterative dataflow. Naiad is designed to provide three properties we do not think yet exist in a single system: the expressive power of loops, concurrent vertex execution, and fine-grained edge completion. Removing any one of these requirements yields an existing class of solutions (respectively: streaming systems like StreamInsight, iterative incremental systems like Nephele, and callback systems like Percolator), but all three together appear to require a new system design. We will describe Naiad's structured cyclic dataflow model and protocol for tracking and coordinating outstanding work, more closely resembling memory fences than traditional distributed systems barriers. We give several examples of how Naiad can be used to efficiently implement many of the currently popular "big data" programming patterns, as well as several new ones, and experimental results indicating that Naiad's relative performance ranges from "as good as" to "much better than" existing systems.

This is joint work with Derek G. Murray, Rebecca Isaacs, Michael Isard, Paul Barham, and Martin Abadi.

Dealing with Resource Allocations and Performance Interference in Virtualized Environments

Nedeljko Vasic EPFL
03 Jun 2013, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
Cloud computing in general, and Infrastructure-as-a-Service (IaaS) in particular, are becoming ever more popular. However, effective resource management of virtualized resources is a challenging task. Moreover, performance interference (and the resulting unpredictability in the delivered performance) across virtual machines co-located on the same physical machine threatens to make cloud computing inadequate for performance-sensitive customers and more expensive than necessary for all customers.

In this talk, I will describe two frameworks - DejaVu and DeepDive - for dealing with the resource management and performance interference issues in virtualized environments. The key idea behind DejaVu is to cache and reuse the results of previous resource allocation decisions at runtime. By doing so, it speeds up adaptation to workload changes by 18X relative to the state-of-the-art. DeepDive transparently diagnoses and manages performance interference in the cloud by leveraging easily-obtainable low level metrics to discern when interference is occurring and what resource is causing it. DeepDive also mitigates interference using a low-overhead approach to identifying a VM placement that alleviates interference.
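The caching idea behind DejaVu can be pictured as memoizing an expensive allocation search on a coarse signature of the current workload, so that similar workloads reuse earlier decisions. The sketch below is my own schematic reading of that idea (invented names and quantization), not DejaVu's actual workload clustering.

```python
def expensive_allocation_search(signature):
    """Stand-in for profiling/benchmarking a workload to pick resources."""
    cpus = max(1, signature[0])            # pretend this took minutes to find
    memory_gb = 2 * max(1, signature[1])
    return {"cpus": cpus, "memory_gb": memory_gb}

class AllocationCache:
    """Reuse past resource-allocation decisions for similar workloads."""
    def __init__(self, bucket=100):
        self.bucket = bucket               # requests/s per bucket when quantizing
        self.decisions = {}

    def signature(self, req_rate, working_set_gb):
        # Quantize the workload so that "similar enough" workloads collide.
        return (round(req_rate / self.bucket), round(working_set_gb))

    def allocate(self, req_rate, working_set_gb):
        sig = self.signature(req_rate, working_set_gb)
        if sig not in self.decisions:      # slow path: run the search once
            self.decisions[sig] = expensive_allocation_search(sig)
        return self.decisions[sig]         # fast path: reuse the decision

cache = AllocationCache()
print(cache.allocate(480, 3.7))   # triggers the search
print(cache.allocate(510, 4.1))   # similar workload: reuses the decision
```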


Logical Abstractions of Systems

Thomas Wies NYU
16 May 2013, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
Verification technology has proved useful for increasing software reliability and productivity. Its great asset lies in concepts and algorithms for the mechanical construction of formally precise, conservative abstractions of behavioral aspects of systems. Its applicability extends to other problems in Computer Science. A recent example is our use of abstraction algorithms for automated debugging of programs. In my talk, I will present techniques for computing and analyzing abstractions of systems whose behavior depends on complex data and control, such as heap-allocated data structures and distributed message passing concurrency. Our techniques are based on decision procedures for specific logical theories. The resulting logical abstractions give rise to a new generation of verification tools.

Compositional Inter-Language Reasoning

Georg Neis Max Planck Institute for Software Systems
15 May 2013, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 005
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Student Defense Talks - Thesis Proposal
The correctness of a compiler is naturally phrased as a problem of program equivalence: any source program should be equivalent to the result of compiling it.  We want this notion of equivalence to be compositional such that (1) we can verify a compiler in a modular fashion, and (2) we can link modules separately compiled by one or several verified compilers without losing the correctness guarantee. Unfortunately, existing methods do not support both. With the help of "parametric bisimulations", which we introduced last year, we hope to change this.

Proof-relevant logical relations

Martin Hofmann Ludwig-Maximilians-Universitaet Muenchen
13 May 2013, 2:00 pm - 3:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
We introduce a novel variant of logical relations that maps types not merely to partial equivalence relations on values, as is commonly done, but rather to a proof-relevant generalisation thereof, namely setoids.

A setoid is like a category all of whose morphisms are isomorphisms (a groupoid) with the exception that no equations between these morphisms are required to hold.

The objects of a setoid establish that values inhabit semantic types, whilst its morphisms are understood as evidence for semantic equivalence.

The transition to proof-relevance solves two well-known problems caused by the use of existential quantification over future worlds in traditional Kripke logical relations: failure of admissibility, and spurious functional dependencies.

We illustrate the novel format with two applications: a direct-style validation of Pitts and Stark's equivalences for "new" and a denotational semantics for a region-based effect system that supports type abstraction in the sense that only externally visible effects need to be tracked; non-observable internal modifications, such as the reorganisation of a search tree or lazy initialisation, can count as "pure" or "read only". This "fictional purity" allows clients of a module soundly to validate more effect-based program equivalences than would be possible with traditional effect systems.

This is joint work with Nick Benton and Vivek Nigam.

Gender Swapping and User Behaviors in Online Social Games

Meeyoung Cha KAIST
10 May 2013, 2:30 pm - 4:00 pm
Saarbrücken building E1 5, room 005
simultaneous videocast to Kaiserslautern building G26, room 112
SWS Colloquium
Modern Massively Multiplayer Online Role-Playing Games (MMORPGs) provide lifelike virtual environments in which players can conduct a variety of activities including combat, trade, and chat with other players. While the game world and the available actions therein are inspired by their offline counterparts, the games’ popularity and dedicated fan base are a testament to the allure of the novel social interactions afforded by granting players an alternative life as new characters and personae. In this paper we investigate the phenomenon of "gender swapping," which refers to players choosing avatars of genders opposite to their natural ones. We report the behavioral patterns observed in players of Fairyland Online, a globally serviced MMORPG, during social interactions when playing as in-game avatars of their own real gender or gender-swapped, and discuss the effect of gender role and self-image in virtual social situations and the potential of our study for improving MMORPG qualities and detection of fake online identities.

(To appear at WWW2013; joint work with Jing-Kai Lou, Kunwoo Park, Juyong Park, Chin-Laung Lei, and Kuan-Ta Chen)

Understanding and Improving the Efficiency of Failure Resilience for Big Data Frameworks

Florin Dinu Rice University
23 Apr 2013, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 206
SWS Colloquium
Big data processing frameworks (MapReduce, Hadoop, Dryad) are hugely popular today. A strong selling point is their ability to provide failure resilience guarantees. They can run computations to completion despite occasional failures in the system. However, an overlooked point has been the efficiency of the failure resilience provided. The vision of this work is that big data frameworks should not only finish computations under failures but minimize the impact of the failures on the computation time.

The first part of the talk presents the first in-depth analysis of the efficiency of the failure resilience provided by the popular Hadoop framework at the level of a single job. The results show that compute node failures can lead to variable and unpredictable job running times. The causes behind these results are detailed in the talk. The second part of the talk focuses on providing failure resilience at the level of multi-job computations. It presents the design, implementation and evaluation of RCMP, a MapReduce system based on the fundamental insight that using replication as the main failure resilience strategy oftentimes leads to significant and unnecessary increases in computation running time. In contrast, RCMP is designed to use job re-computation as a first-order failure resilience strategy. RCMP enables re-computations that perform the minimum amount of work and also maximizes the efficiency of the re-computation work that still needs to be performed.

Mining Requirements from an Industrial-scale Control System

Jyotirmoy Deshmukh Toyota Engineering
22 Apr 2013, 2:00 pm - 3:00 pm
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Industrial-scale control systems are often developed in the model-based design paradigm. This typically involves capturing a plant model that describes the dynamical characteristics of the physical processes within the system, and a controller model, which is a block-diagram-based representation of the software used to regulate the plant behavior. In practice, plant models and controller models are highly complex as they can contain highly nonlinear hybrid dynamics, look-up tables storing pre-computed values, several levels of design-hierarchy, design-blocks that operate at different frequencies, and so on. Moreover, the biggest challenge is that system requirements are often imprecise, non-modular, evolving, or even simply unknown. As a result, formal techniques have been unable to digest the scale and format of industrial-scale control systems. On the other hand, the Simulink modeling language -- a popular language for describing such models -- is widely used as a high-fidelity simulation tool in the industry, and is routinely used by control designers to experimentally validate their controller designs. This raises the question: "What can we accomplish if all we have is a very complex Simulink model of a control system?" In this talk, we give an example of a simulation-guided formal technique that can help characterize temporal properties of the system, or guide the search for design behaviors that do not conform to "good behavior". Specifically, we present a way to algorithmically mine temporal assertions from a Simulink model. The input to our algorithm is a requirement template expressed in Parametric Signal Temporal Logic -- a formalism to express temporal formulas in which concrete signal or time values are replaced by parameters. Our algorithm is an instance of counterexample-guided inductive synthesis: an intermediate candidate requirement is synthesized from simulation traces of the system, which is refined using counterexamples to the candidate obtained with the help of a falsification tool. The algorithm terminates when no counterexample is found. Mining has many usage scenarios: mined requirements can be used to validate future modifications of the model, they can be used to enhance understanding of legacy models, and can also guide the process of bug-finding through simulations.
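The counterexample-guided loop can be pictured on a very small instance: mine the tightest bound c in the template "globally, x(t) <= c" for a black-box system by alternating a synthesis step over collected traces with a falsification step that searches for a violating trace. This is a schematic reading of my own, not the tool's actual algorithm, which handles richer PSTL templates and uses an optimization-based falsifier.

```python
import random

def simulate(system, seed, length=200):
    """Draw one simulation trace from the black-box system."""
    rng = random.Random(seed)
    return [system(rng) for _ in range(length)]

def mine_upper_bound(system, rounds=50):
    """Mine c for the template 'globally, x(t) <= c' by a CEGIS-style loop:
    synthesize the tightest c consistent with the traces seen so far, then
    try to falsify it with fresh simulations (counterexamples)."""
    traces = [simulate(system, seed=0)]
    candidate = max(max(t) for t in traces)                  # synthesis step
    for round_ in range(1, rounds + 1):
        cex = None
        for seed in range(round_ * 10, round_ * 10 + 10):    # falsification step
            trace = simulate(system, seed)
            if max(trace) > candidate:
                cex = trace
                break
        if cex is None:
            return candidate                                 # no counterexample found
        traces.append(cex)
        candidate = max(max(t) for t in traces)              # refine with the cex
    return candidate

# A toy "system": a noisy signal that occasionally spikes towards 1.0
noisy = lambda rng: rng.random() ** 4
print(mine_upper_bound(noisy))
```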


Liveness-Based Pointer Analysis

Uday Khedkar IIT Bombay
19 Apr 2013, 3:00 pm - 4:00 pm
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Precise flow- and context-sensitive pointer analysis (FCPA) is generally considered prohibitively expensive for large programs; most tools relax one or both of the requirements for scalability. We argue that precise FCPA has been judged over-harshly: the vast majority of points-to pairs calculated by existing algorithms are never used by any client analysis or transformation because they involve dead variables. We therefore formulate FCPA in terms of a joint points-to and liveness analysis which we call L-FCPA.

Our analysis computes points-to information only for live pointers and its propagation is sparse (restricted to live ranges of respective pointers). Further, our analysis uses strong liveness, effectively including dead code elimination. It calculates must-points-to information from may-points-to information afterwards instead of using a mutual fixed-point, and uses value-based termination of call strings during interprocedural analysis (which reduces the number of call strings significantly).

We implemented a naive L-FCPA in GCC-4.6.0 using linked lists. Evaluation on SPEC2006 showed significant increase in the precision of points-to pairs compared to GCC's analysis. Interestingly, our naive implementation turned out to be faster than GCC's analysis for all programs under 30kLoC. Further, L-FCPA showed that fewer than 4% of basic blocks had more than 8 points-to pairs. We conclude that the usable points-to information and the required context information is small and sparse and argue that approximations (e.g. weakening flow or context sensitivity) are not only undesirable but also unnecessary for performance.
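The core idea can be shown on a straight-line toy program: compute pointer liveness backwards first, then record points-to facts only for pointers that are live at the defining point. This sketch is my own simplification (no control flow, no strong liveness, no call strings), just to show why dead pointers contribute no points-to pairs.

```python
# A tiny straight-line program over pointer variables:
#   p = &a; q = &b; p = q; use *p
program = [
    ("addr", "p", "a"),    # p = &a
    ("addr", "q", "b"),    # q = &b
    ("copy", "p", "q"),    # p = q
    ("use",  "p", None),   # *p is read
]

# Backward liveness: which pointers may still be read after each statement?
live_after = [set() for _ in program]
live = set()
for i in range(len(program) - 1, -1, -1):
    op, dst, src = program[i]
    live_after[i] = set(live)
    if op == "use":
        live.add(dst)
    else:                         # "addr" and "copy" define dst
        live.discard(dst)
        if op == "copy":
            live.add(src)

# Forward points-to, but only report facts for pointers that are live after
# the defining statement -- dead definitions produce no pairs.
points_to, reported = {}, set()
for i, (op, dst, src) in enumerate(program):
    if op == "addr":
        points_to[dst] = {src}
    elif op == "copy":
        points_to[dst] = set(points_to.get(src, set()))
    if dst in live_after[i]:
        reported |= {(dst, tgt) for tgt in points_to.get(dst, set())}

print(reported)   # contains ('q', 'b') and ('p', 'b'); 'p -> a' is never reported
```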
Read more

Synthesis and Control of Infinite-State Systems with Partial Observability

Rayna Dimitrova UdS
18 Apr 2013, 11:00 am - 12:00 pm
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Synthesis methods automatically construct a system, or an individual component within a system, such that the result satisfies a given specification. The synthesis procedure must take into account the component's interface and deliver implementations that comply with its limitations. For example, what a component can observe about its environment may be restricted by imprecise sensors or inaccessible communication channels. In addition, sufficiently precise models of a component's environment are typically infinite-state, for example due to modeling real time or unbounded communication buffers. ...
Synthesis methods automatically construct a system, or an individual component within a system, such that the result satisfies a given specification. The synthesis procedure must take into account the component's interface and deliver implementations that comply with its limitations. For example, what a component can observe about its environment may be restricted by imprecise sensors or inaccessible communication channels. In addition, sufficiently precise models of a component's environment are typically infinite-state, for example due to modeling real time or unbounded communication buffers. In this talk I will present synthesis methods for infinite-state systems with partial observability. First, I will describe a technique for automatic generation of observation predicates (clock constraints) for timed control with safety requirements. Finding the right observations is the main challenge in timed control with partial observability. Our approach follows the Counterexample-Guided Abstraction Refinement scheme, i.e., it uses counterexamples to guide the search. It employs novel refinement techniques based on interpolation and model generation. Our approach yields encouraging results, demonstrating better performance than brute-force enumeration of observation sets, in particular for systems where a fine granularity of the observations is necessary. Second, I will outline a synthesis algorithm for Lossy Channel Systems (LCSs) with partial observability and safety specifications. The algorithm uses an extension of the symbolic representation common for backward analysis of LCSs. Its termination relies on the fact that LCSs are better-quasi ordered systems.
Read more

Securing information release: systems, models, and programming languages

Aslan Askarov Harvard University
08 Apr 2013, 10:30 am - 11:30 am
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
Computer systems sometimes need to release some confidential information. However, they must also prevent inadvertent release of information that should remain confidential. These requirements significantly complicate reasoning about system security, and are not addressed by conventional security mechanisms. To provide assurance for such systems we need to develop principled approaches for specifying and enforcing secure information release. In this talk, I will describe how this can be achieved using systems and programming languages techniques.

The first part of the talk will focus on controlling inadvertent leaks in complex systems. ...
Computer systems sometimes need to release some confidential information. However, they must also prevent inadvertent release of information that should remain confidential. These requirements significantly complicate reasoning about system security, and are not addressed by conventional security mechanisms. To provide assurance for such systems we need to develop principled approaches for specifying and enforcing secure information release. In this talk, I will describe how this can be achieved using systems and programming languages techniques.

The first part of the talk will focus on controlling inadvertent leaks in complex systems. I will discuss the leaks that happen when an adversary can measure the time at which a system performs an observable action, also known as timing channels. I will explain how timing channels present a serious threat in computer security, and introduce predictive mitigation---a general technique for mitigating timing channels that works by predicting timing from past behavior and public information. Rather than eliminating timing channels entirely, predictive mitigation bounds the amount of information that an adversary can learn via timing channels with a trade-off in system performance. Under reasonable assumptions, the bounds are logarithmic in the running time of the system.
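
A minimal sketch of the predictive idea, assuming a simple doubling scheme: results are released only at pre-committed times, and a misprediction starts a new epoch with doubled slack, so the observable schedule can change only a logarithmic number of times as the system runs. This illustrates the mechanism only and is not the exact algorithm from the talk; all names and constants are made up.

```python
import time

class PredictiveMitigator:
    """Toy predictive mitigation: release results at predicted times only.
    A misprediction (work finishes after its predicted time) doubles the
    per-request slack, so the set of observable schedules grows slowly."""
    def __init__(self, initial_slack=0.01):
        self.slack = initial_slack          # current per-request prediction (seconds)

    def run(self, work):
        start = time.monotonic()
        predicted = start + self.slack      # commit to a release time up front
        result = work()                     # secret-dependent computation
        finish = time.monotonic()
        if finish > predicted:              # misprediction: new epoch, doubled slack
            self.slack *= 2
            predicted = finish + self.slack
        time.sleep(max(0.0, predicted - time.monotonic()))   # pad to the predicted time
        return result

m = PredictiveMitigator()
print(m.run(lambda: sum(range(10000))))     # observer sees the predicted time, not the actual one
```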

The second part of the talk will present insights into the formalization of practical security specifications for the intentional release of confidential information. I will introduce a programming language-based framework that provides a formal vocabulary for expressing such specifications. Example specifications include what information may be released, when a release may happen, and whether an adversary has any control over a release. These specifications are soundly enforceable using a variety of static and dynamic program analyses.
Read more

Verifying shared-variable concurrent programs

Daniel Kroening Oxford
04 Apr 2013, 11:00 am - 12:00 pm
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
I will first outline a series of research results on verifying shared-variable concurrent programs, including techniques for predicate abstraction for such programs, and how to check the resulting concurrent Boolean programs. I will then elaborate on two recent results on supporting weak memory consistency (ESOP and CAV 2013, respectively).

Diagnosing and Repairing Internet Performance Problems

David Choffnes University of Washington
21 Mar 2013, 10:30 am - 11:30 am
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
We increasingly depend on Internet connectivity and performance for services ranging from telephony and video streaming to home monitoring and remote health care. However, hardware failures, misconfigurations, and software bugs frequently cause outages and other performance problems that disrupt these services. Existing tools provide network operators with only limited visibility into these problems and few options to address them. The result is that debugging Internet problems is often a slow, manual process. In this talk, I discuss how we can improve Internet reliability by enabling better tools for detecting, ...
We increasingly depend on Internet connectivity and performance for services ranging from telephony and video streaming to home monitoring and remote health care. However, hardware failures, misconfigurations, and software bugs frequently cause outages and other performance problems that disrupt these services. Existing tools provide network operators with only limited visibility into these problems and few options to address them. The result is that debugging Internet problems is often a slow, manual process. In this talk, I discuss how we can improve Internet reliability by enabling better tools for detecting, isolating, and repairing network problems as they occur. First, I discuss a system for crowdsourcing network monitoring to end systems. By leveraging the network view from applications running on a large number of hosts, we can efficiently detect network problems that impact end-to-end performance. Second, I describe a system for isolating the network responsible for a problem: we develop new tools that allow an ISP to identify the root cause even when portions of the Internet are unreachable. Third, I present an approach for automatically repairing isolated network problems. This allows an ISP to use existing routing protocols in novel ways to cause other networks to avoid problems, thus restoring normal connectivity.
Read more

Holistic System Design for Deterministic Replay

Dongyoon Lee University of Michigan
12 Mar 2013, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
With the advent of multiprocessor systems, it is now the role of the programmers to explicitly expose parallelism and take advantage of parallel computing resources. However, parallel programming is inherently complex as programmers have to reason about all possible thread interleavings. A deterministic replay system that records and reproduces the execution of parallel programs can serve as a foundation for building many useful tools (e.g., time-travel debugger, fault tolerance system, etc.) by overcoming the inherent non-determinism in multiprocessor systems. ...
With the advent of multiprocessor systems, it is now the role of programmers to explicitly expose parallelism and take advantage of parallel computing resources. However, parallel programming is inherently complex as programmers have to reason about all possible thread interleavings. A deterministic replay system that records and reproduces the execution of parallel programs can serve as a foundation for building many useful tools (e.g., time-travel debugger, fault tolerance system, etc.) by overcoming the inherent non-determinism in multiprocessor systems. While it is well known how to replay uniprocessor systems, it is much harder to provide deterministic replay of shared memory multithreaded programs on multiprocessors because shared memory accesses add a high-frequency source of non-determinism. I introduce a new insight into deterministic replay: for many uses of replay it is sufficient to guarantee only the same output and final state between the recorded and replayed executions, and thus it is possible to support replay without logging precise shared-memory dependencies. I call this relaxed but sufficient replay guarantee "external determinism" and leverage this observation to build efficient multiprocessor replay systems. In this talk, I will introduce two replay systems: Respec and Chimera. Respec enables software-only deterministic replay at low overhead with operating system support. Chimera leverages static data-race analysis to build an efficient software-only replay solution.
Read more

Incremental, Inductive Coverability

Johannes Kloos Max Planck Institute for Software Systems
11 Mar 2013, 3:00 pm - 4:00 pm
Kaiserslautern building G26, room 318
SWS Student Defense Talks - Qualifying Exam
We give an incremental, inductive (IC3) procedure to check coverability of well-structured transition systems. Our procedure generalizes the IC3 procedure for safety verification that has been successfully applied in finite-state hardware verification to infinite-state well-structured transition systems. We show that our procedure is sound, complete, and terminating for downward-finite well-structured transition systems (where each state has a finite number of states below it), a class that contains extensions of Petri nets, broadcast protocols, and lossy channel systems. ...
We give an incremental, inductive (IC3) procedure to check coverability of well-structured transition systems. Our procedure generalizes the IC3 procedure for safety verification that has been successfully applied in finite-state hardware verification to infinite-state well-structured transition systems. We show that our procedure is sound, complete, and terminating for downward-finite well-structured transition systems (where each state has a finite number of states below it), a class that contains extensions of Petri nets, broadcast protocols, and lossy channel systems. We have implemented our algorithm for checking coverability of Petri nets. We describe how the algorithm can be efficiently implemented without the use of SMT solvers. Our experiments on standard Petri net benchmarks show that IC3 is competitive with state-of-the-art implementations for coverability based on symbolic backward analysis or expand-enlarge-and-check algorithms, in both running time and memory usage.
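
For context, the classical symbolic backward analysis mentioned as a baseline is easy to sketch for Petri nets: represent upward-closed sets by their minimal elements and iterate a predecessor-basis operator until the initial marking is covered or a fixed point is reached. The toy Python below does exactly that; it is a sketch of the baseline, not of the IC3-based procedure from the talk.

```python
def pred_basis(m, transitions):
    """Minimal markings from which one transition step reaches a marking >= m.
    Each transition is a pair (pre, post) of per-place token vectors."""
    preds = []
    for pre, post in transitions:
        preds.append(tuple(max(pre[i], m[i] + pre[i] - post[i]) for i in range(len(m))))
    return preds

def minimize(markings):
    """Keep only minimal elements (the basis of the upward-closed set)."""
    def leq(a, b):
        return all(x <= y for x, y in zip(a, b))
    return [m for m in markings if not any(leq(o, m) and o != m for o in markings)]

def coverable(init, target, transitions):
    """Backward coverability: can a marking >= target be reached from init?"""
    basis = {tuple(target)}
    while True:
        if any(all(b[i] <= init[i] for i in range(len(init))) for b in basis):
            return True                      # init covers a basis element
        new = set(minimize(list(basis) + [p for m in basis
                                          for p in pred_basis(m, transitions)]))
        if new == basis:                     # fixed point: target is not coverable
            return False
        basis = new

# Tiny net with places (p0, p1) and one transition moving a token p0 -> p1.
transitions = [((1, 0), (0, 1))]
print(coverable(init=(2, 0), target=(0, 2), transitions=transitions))   # True
print(coverable(init=(1, 0), target=(0, 2), transitions=transitions))   # False
```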
Read more

Practical, Usable, and Secure Authentication and Authorization on the Web

Alexei Czeskis University of Washington, Seattle
07 Mar 2013, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
User authentication is a critical part of many systems.  As strong cryptography has become widespread and vulnerabilities in systems become harder to find and exploit, attackers are turning toward user authentication as a potential avenue for compromising users.  Unfortunately, user authentication on the web has remained virtually unchanged since the invention of the Internet.  I will present three systems that attempt to strengthen user authentication, and its close cousin authorization, on the web while being practical for developers, ...
User authentication is a critical part of many systems.  As strong cryptography has become widespread and vulnerabilities in systems become harder to find and exploit, attackers are turning toward user authentication as a potential avenue for compromising users.  Unfortunately, user authentication on the web has remained virtually unchanged since the invention of the Internet.  I will present three systems that attempt to strengthen user authentication, and its close cousin authorization, on the web while being practical for developers, usable for users, and secure against attackers.  First, I will discuss Origin Bound Certificates -- a mechanism for tweaking Transport Layer Security (TLS) that can then be used to significantly strengthen the authentication of HTTP requests by binding cookies (or other tokens) to a client certificate.  This renders stolen cookies unusable by attackers. Second, I will present PhoneAuth, a system for protecting password-based login by opportunistically providing cryptographic identity assertions from a user's mobile phone while maintaining a simple and usable authentication experience.  Third, I will describe ongoing research into how a class of web vulnerabilities called Cross-Site Request Forgeries (CSRFs) can be fundamentally prevented using Allowed Referrer Lists.  I'll discuss the next big challenges in user authentication and conclude with several examples of where authentication matters beyond the web.
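
The last item can be illustrated with a small server-side check: a state-changing request is honored only if its Origin or Referer header matches an allowlist of trusted origins. The sketch below is illustrative only; the header names are standard, but the allowlist contents are hypothetical and the policy-delivery mechanism of the actual proposal is omitted.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: origins allowed to submit state-changing requests here.
ALLOWED_REFERRERS = {"https://bank.example", "https://partner.example"}

def csrf_check(headers):
    """Reject state-changing requests whose Origin/Referer is not allowlisted."""
    source = headers.get("Origin") or headers.get("Referer")
    if source is None:
        return False                      # conservative: no provenance, no action
    u = urlparse(source)
    return f"{u.scheme}://{u.netloc}" in ALLOWED_REFERRERS

print(csrf_check({"Origin": "https://bank.example"}))        # True
print(csrf_check({"Referer": "https://evil.example/post"}))  # False
```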
Read more

Selected Topics on Wireless Security and Localization

Kasper Bonne Rasmussen University of California, Irvine
04 Mar 2013, 10:30 am - 11:30 am
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
I will cover a couple of my recent contributions to secure localization and distance bounding. Distance bounding protocols have been proposed for many security critical applications as a means of getting an upper bound on the physical distance to a communication partner. I will show some practical examples of problems where distance bounding can provide a unique solution to problems which are otherwise difficult to solve. One such example is in the context of implantable medical devices. ...
I will cover a couple of my recent contributions to secure localization and distance bounding. Distance bounding protocols have been proposed for many security critical applications as a means of getting an upper bound on the physical distance to a communication partner. I will show some practical examples of problems where distance bounding can provide a unique solution to problems which are otherwise difficult to solve. One such example is in the context of implantable medical devices.

One of the main obstacles to the wider deployment of distance bounding using electromagnetic (radio) waves is the lack of hardware platforms that implement and support these protocols. I will show the first prototype system demonstrating that radio distance bounding protocols can be implemented to meet the strict processing-time requirements that these protocols impose. Our system implements a radio that is able to receive, process, and transmit signals in less than 1 ns.
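
The arithmetic behind that requirement is simple: the distance bound is derived from the round-trip time minus the prover's processing time, so every nanosecond of unaccounted processing inflates the bound by roughly 15 cm. A back-of-the-envelope check with illustrative numbers:

```python
C = 299_792_458.0                 # speed of light in m/s

def distance_bound(rtt_s, processing_s):
    """Upper bound on distance from a challenge-response round trip:
    the signal travels out and back, so divide the remaining time by two."""
    return C * (rtt_s - processing_s) / 2.0

# A 1 ns error in the assumed processing time shifts the bound by ~15 cm.
print(distance_bound(100e-9, 0e-9) - distance_bound(100e-9, 1e-9))   # ~0.15 m
```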

Finally I will present an area where I see great potential for future work. In both sensing and actuation applications there is a semantic gap between the electrical system and the physical world. In an adversarial setting this gap can be exploited to make a system believe that, e.g., a switch was activated when in fact it wasn't. There is a plethora of application domains that share this problem, from bio-medical sensors and implantable medical devices to factory control systems and security-critical infrastructures. Some of these challenges can be solved using a traditional cryptographic approach; others are highly interdisciplinary and are best handled in collaboration with experts from other fields.
Read more

Cloud Storage Consistency Explained Through Baseball

Doug Terry Microsoft Research, Silicon Valley
22 Feb 2013, 1:00 pm - 2:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 206
SWS Colloquium
Some cloud storage services, like Windows Azure, replicate data while providing strong consistency to their clients, whereas others, like Amazon, have chosen eventual consistency in order to obtain better performance and availability. A broader class of consistency guarantees can, and perhaps should, be offered to clients that read shared data. During a baseball game, for example, different participants (the scorekeeper, umpire, sportswriter, and so on) benefit from six different consistency guarantees when reading the current score. ...
Some cloud storage services, like Windows Azure, replicate data while providing strong consistency to their clients, whereas others, like Amazon, have chosen eventual consistency in order to obtain better performance and availability. A broader class of consistency guarantees can, and perhaps should, be offered to clients that read shared data. During a baseball game, for example, different participants (the scorekeeper, umpire, sportswriter, and so on) benefit from six different consistency guarantees when reading the current score. Eventual consistency is insufficient for most of the participants, but strong consistency is not needed either.
Read more

Tales from the Jungle

Peter Sewell University of Cambridge
18 Feb 2013, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 206
SWS Distinguished Lecture Series
We rely on a computational infrastructure that is a densely intertwined mass of software and hardware: programming languages, network protocols, operating systems, and processors. It has accumulated great complexity, from a combination of engineering design decisions, contingent historical choices, and sheer scale, yet it is defined at best by prose specifications, or, all too often, just by the common implementations. Can we do better? More specifically, can we apply rigorous methods to this mainstream infrastructure, taking the accumulated complexity seriously, ...
We rely on a computational infrastructure that is a densely intertwined mass of software and hardware: programming languages, network protocols, operating systems, and processors. It has accumulated great complexity, from a combination of engineering design decisions, contingent historical choices, and sheer scale, yet it is defined at best by prose specifications, or, all too often, just by the common implementations. Can we do better? More specifically, can we apply rigorous methods to this mainstream infrastructure, taking the accumulated complexity seriously, and if we do, does it help? My colleagues and I have looked at these questions in several contexts: the TCP/IP network protocols with their Sockets API; programming language design, including the Java module system, the C11/C++11 concurrency model, and the C programming language; the hardware concurrency behaviour of x86, IBM POWER, and ARM multiprocessors; and compilation of concurrent code.

In this talk I will draw some lessons from what did and did not succeed, looking especially at the empirical nature of some of the work, at the social process of engagement with the various different communities, and at the mathematical and software tools we used. Domain-specific modelling languages (based on functional programming ideas) and proof assistants were invaluable for working with the large and loose specifications involved: idioms within HOL4 for TCP, our Ott tool for programming language specification, and Owens's Lem tool for portable semantic definitions, with HOL4, Isabelle, and Coq, for the relaxed-memory concurrency semantics work. Our experience with these suggests something of what is needed to make mathematically rigorous engineering of mainstream computer systems (and in systems research) a commonplace reality.
Read more

Borders of Decidability in Verification of Data-Centric Dynamic Systems

Babak Bagheri UNIBZ
14 Feb 2013, 2:00 pm - 3:00 pm
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
In this talk I present our recent results on data-aware static verification, in which we select the artifact-centric model as a natural vehicle for our investigation. Artifact-centric systems are models for business process systems in which the dynamics and the manipulation of data are equally central. Given the family of variations on this model found in the literature, for the sake of a uniform terminology we introduce our own pristine formalization, which captures existing artifact-centric dialects, ...
In this talk I present our recent results on data-aware static verification, in which we select the artifact-centric model as a natural vehicle for our investigation. Artifact-centric systems are models for business process systems in which the dynamics and the manipulation of data are equally central. Given the family of variations on this model found in the literature, for the sake of a uniform terminology we introduce our own pristine formalization, which captures existing artifact-centric dialects, and which is semantically equivalent to the most expressive ones. We call our business process formalism ``Data-Centric Dynamic Systems'' (DCDS). The process is described in terms of atomic actions that evolve the system. Action execution may involve calls to external services, thus inserting fresh data into the system. As a result, such systems are infinite-state. We show that verification is undecidable in general, and we isolate notable cases where decidability is achieved. More specifically, we show that in a first-order $\mu$-calculus variant that preserves knowledge of objects that have appeared along a run, we get decidability under the assumption that the fresh data introduced along a run are bounded, though they might not be bounded in the overall system. Then, we investigate decidability under the assumption that knowledge of objects is preserved only if they are continuously present. We show that if infinitely many values occur in a run but do not accumulate in the same state, then we again obtain decidability. We give syntactic conditions to avoid this accumulation through the novel notion of ``generate-recall acyclicity'', which ensures that every service call activation generates new values that cannot be accumulated indefinitely. We believe that DCDSs are natural and expressive models for systems powered by an underlying database (i.e., so-called data-centric systems), and thus are an ideal vehicle for foundational research with potential to transfer to alternative models.
Read more

Abstraction for Weakly Consistent Systems

Alexey Gotsman IMDEA Software Institute, Madrid, Spain
14 Feb 2013, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Colloquium
When constructing complex concurrent and distributed systems, abstraction is vital: programmers should be able to reason about system components in terms of abstract specifications that hide the implementation details. Nowadays such components often provide only weak consistency guarantees about the data they manage: in shared-memory systems because of the effects of relaxed memory models, and in distributed systems because of the effects of replication. This makes existing notions of component abstraction inapplicable. In this talk I will describe our ongoing effort to specify consistency guarantees provided by modern shared-memory and distributed systems in a uniform framework and to propose notions of abstraction for components of such systems. ...
When constructing complex concurrent and distributed systems, abstraction is vital: programmers should be able to reason about system components in terms of abstract specifications that hide the implementation details. Nowadays such components often provide only weak consistency guarantees about the data they manage: in shared-memory systems because of the effects of relaxed memory models, and in distributed systems because of the effects of replication. This makes existing notions of component abstraction inapplicable. In this talk I will describe our ongoing effort to specify consistency guarantees provided by modern shared-memory and distributed systems in a uniform framework and to propose notions of abstraction for components of such systems. I will illustrate our results using the examples of the C/C++ memory model and eventually consistent distributed systems. This is joint work with Mark Batty (University of Cambridge), Sebastian Burckhardt (Microsoft Research), Mike Dodds (University of York) and Hongseok Yang (University of Oxford).
Read more

In Search of Truth (on the Deep Web)

Divesh Srivastava AT&T Labs
13 Feb 2013, 10:30 am - 12:00 pm
Kaiserslautern building TU - 48, room 680
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Distinguished Lecture Series
The Deep Web has enabled the availability of a huge amount of useful information and people have come to rely on it to fulfill their information needs in a variety of domains. We present a recent study on the accuracy of data and the quality of Deep Web sources in two domains where quality is important to people's lives: Stock and Flight. We observe that, even in these domains, the quality of the data is less than ideal, ...
The Deep Web has enabled the availability of a huge amount of useful information and people have come to rely on it to fulfill their information needs in a variety of domains. We present a recent study on the accuracy of data and the quality of Deep Web sources in two domains where quality is important to people's lives: Stock and Flight. We observe that, even in these domains, the quality of the data is less than ideal, with sources providing conflicting, out-of-date and incomplete data. Sources also copy, reformat and modify data from other sources, making it difficult to discover the truth. We describe techniques proposed in the literature to solve these problems, evaluate their strengths on our data, and identify directions for future work in this area.
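
One family of techniques evaluated in such studies is iterative truth discovery, in which source trustworthiness and value confidence reinforce each other. The sketch below is a generic, simplified version of that idea (uniform priors, a single data item, no copying detection); it is not any specific algorithm from the talk, and the sources and values are invented.

```python
# claims: source -> value it reports for one data item (e.g. a flight's arrival time)
claims = {"siteA": "18:05", "siteB": "18:05", "siteC": "17:50", "siteD": "18:05"}

def truth_discovery(claims, rounds=10):
    trust = {s: 0.5 for s in claims}                 # initial source trustworthiness
    for _ in range(rounds):
        # value confidence: total trust of the sources asserting it (normalized)
        conf = {}
        for s, v in claims.items():
            conf[v] = conf.get(v, 0.0) + trust[s]
        total = sum(conf.values())
        conf = {v: c / total for v, c in conf.items()}
        # source trust: confidence of the value the source provides
        trust = {s: conf[claims[s]] for s in claims}
    best = max(conf, key=conf.get)
    return best, conf, trust

print(truth_discovery(claims))   # converges toward the value backed by the most trusted sources
```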
Read more

Towards a Secure DNS

Haya Shulman Bar Ilan University, Israel
12 Feb 2013, 1:00 pm - 2:30 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 206
SWS Colloquium
Most caching DNS resolvers still rely, for their security against poisoning, on validating that DNS responses contain ‘unpredictable’ values copied from the request. These values include the 16-bit identifier field and other fields, randomised and validated by different ‘patches’ to DNS. We investigate the prominent patches, and show how off-path attackers can circumvent all of them, exposing the resolvers to cache poisoning attacks. We present countermeasures preventing our attacks; however, we believe that our attacks provide additional motivation for adoption of DNSSEC (or other MitM-secure defenses). ...
Most caching DNS resolvers still rely, for their security against poisoning, on validating that DNS responses contain ‘unpredictable’ values copied from the request. These values include the 16-bit identifier field and other fields, randomised and validated by different ‘patches’ to DNS. We investigate the prominent patches, and show how off-path attackers can circumvent all of them, exposing the resolvers to cache poisoning attacks. We present countermeasures preventing our attacks; however, we believe that our attacks provide additional motivation for adoption of DNSSEC (or other MitM-secure defenses). We then investigate vulnerabilities in DNSSEC configuration among resolvers and zones, which reduce or even nullify the protection offered by DNSSEC. Finally we provide our recommendations and countermeasures to prevent the vulnerabilities.
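
The arithmetic behind those ‘unpredictable’ values helps explain why circumventing the patches matters: with only the 16-bit identifier to guess, an off-path attacker racing a burst of forged responses has a non-negligible chance per attempt, while source-port randomisation adds roughly another 16 bits. A rough, illustrative calculation (ignoring timing windows and birthday effects):

```python
def spoof_success(entropy_bits, forged_responses):
    """Rough chance that at least one of n forged responses matches the
    unpredictable fields (assumes independent uniform guesses)."""
    p = 2 ** -entropy_bits
    return 1 - (1 - p) ** forged_responses

print(spoof_success(16, 1000))        # TXID only: about 1.5% per poisoning attempt
print(spoof_success(16 + 16, 1000))   # TXID + randomised source port: about 2e-7
```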
Read more

Networking: A Killer App for Programming Languages Researchers

David Walker Princeton University
28 Jan 2013, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 206
SWS Distinguished Lecture Series
Modern computer networks perform a bewildering array of tasks, from routing and traffic monitoring, to access control and server load balancing. Moreover, historically, managing these networks has been hideously complicated and error-prone, due to a heterogeneous mix of devices (e.g., routers, switches, firewalls, and middleboxes) and their ad hoc, closed and proprietary configuration interfaces. Software-Defined Networking (SDN) is poised to change this state of affairs by offering a clean, simple and open interface between network devices and the software that controls them. ...
Modern computer networks perform a bewildering array of tasks, from routing and traffic monitoring, to access control and server load balancing. Moreover, historically, managing these networks has been hideously complicated and error-prone, due to a heterogeneous mix of devices (e.g., routers, switches, firewalls, and middleboxes) and their ad hoc, closed and proprietary configuration interfaces. Software-Defined Networking (SDN) is poised to change this state of affairs by offering a clean, simple and open interface between network devices and the software that controls them. In particular, many commercial switches now support the OpenFlow protocol, and a number of campus, data-center, and backbone networks have deployed the new technology.

However, while SDN makes it possible to program the network, it does not make it easy: The first generation of SDN controllers offered application developers the "assembly language" of network programming platforms. To reach SDN’s full potential, research in programming languages and compilers is desperately needed. In this talk, I discuss our work to date in this area, which revolves around the design of a language, compiler and run-time system for SDN programming. The language, called Frenetic, allows programmers to work declaratively, specifying the behavior of a network at a high level of abstraction. The compiler and run-time system take care of the tedious details of compiling and implementing these high-level policies using the OpenFlow protocol.

A key strength of the Frenetic design is its support for modular programming: Complex network applications can be decomposed into logical subcomponents --- an access control policy, a load balancer, a traffic monitor --- and coded independently. Frenetic's rich combinator library makes it possible to stitch such components back together to form a fully functioning whole. Frenetic also contains carefully designed operators that help users transition from one global, high-level network policy to the next while preserving key network invariants. Overall, Frenetic's abstractions make it dramatically easier for programmers to write and reason about SDN applications.
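
The modularity argument can be made concrete with a toy model in Python: a policy is a function from a packet to a set of actions, and parallel composition takes the union of the sub-policies' actions. This mimics the flavour of such combinators but is not Frenetic's actual API; all names and header fields are made up.

```python
def monitor(pkt):
    """Count web traffic."""
    return {("count", "web")} if pkt.get("dst_port") == 80 else set()

def route(pkt):
    """Forward by destination prefix."""
    return {("forward", 1)} if pkt.get("dst_ip", "").startswith("10.0.1.") else {("forward", 2)}

def parallel(*policies):
    """Parallel composition: run every sub-policy and take the union of their actions."""
    return lambda pkt: set().union(*(p(pkt) for p in policies))

app = parallel(monitor, route)
print(app({"dst_ip": "10.0.1.5", "dst_port": 80}))   # {('count', 'web'), ('forward', 1)} (order may vary)
```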
Read more

Scientific Data Management: Not your everyday transaction

Prof. Anastasia Ailamaki EPFL
23 Jan 2013, 2:00 pm - 3:30 pm
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 206
SWS Distinguished Lecture Series
Today's scientific processes heavily depend on fast and accurate analysis of experimental data. Scientists are routinely overwhelmed by the effort needed to manage the volumes of data produced either by observing phenomena or by sophisticated simulations. As database systems have proven inefficient, inadequate, or insufficient to meet the needs of scientific applications, the scientific community typically uses special-purpose legacy software. When compared to a general-purpose data management system, however, application-specific systems require more resources to maintain, ...
Today's scientific processes heavily depend on fast and accurate analysis of experimental data. Scientists are routinely overwhelmed by the effort needed to manage the volumes of data produced either by observing phenomena or by sophisticated simulations. As database systems have proven inefficient, inadequate, or insufficient to meet the needs of scientific applications, the scientific community typically uses special-purpose legacy software. When compared to a general-purpose data management system, however, application-specific systems require more resources to maintain, and in order to achieve acceptable performance they often sacrifice data independence and hinder the reuse of knowledge. With the exponential growth of dataset sizes, data management technologies are no longer a luxury; they are the sole solution for scientific applications.

I will discuss some of the work from teams around the world and the requirements of their applications, as well as how these translate to challenges for the data management community. As an example I will describe a challenging application on brain simulation data, and its needs; I will then present how we were able to simulate a meaningful percentage of the human brain as well as access arbitrary brain regions fast, independently of increasing data size or density. Finally I will present some of the data management challenges that lie ahead in domain sciences, and will introduce NoDB, a new query processor which explores raw, never-before-seen data in-situ, using full querying power.
Read more

Type-Directed Automatic Incrementalization

Yan Chen Max Planck Institute for Software Systems
18 Jan 2013, 10:00 am - 11:00 am
Saarbrücken building E1 5, room 422
SWS Student Defense Talks - Qualifying Exam
Application data often changes slowly or incrementally over time. Since incremental changes to input often result in only small changes in output, it is often feasible to respond to such changes asymptotically more efficiently than by re-running the whole computation. Traditionally, realizing such asymptotic efficiency improvements requires designing problem-specific algorithms known as dynamic or incremental algorithms, which are often significantly more complicated than conventional algorithms to design, analyze, implement, and use. A long-standing open problem is to develop techniques that automatically transform conventional programs so that they correctly and efficiently respond to incremental changes. ...
Application data often changes slowly or incrementally over time. Since incremental changes to input often result in only small changes in output, it is often feasible to respond to such changes asymptotically more efficiently than by re-running the whole computation. Traditionally, realizing such asymptotic efficiency improvements requires designing problem-specific algorithms known as dynamic or incremental algorithms, which are often significantly more complicated than conventional algorithms to design, analyze, implement, and use. A long-standing open problem is to develop techniques that automatically transform conventional programs so that they correctly and efficiently respond to incremental changes.

In this work, we describe a significant step towards solving the problem of automatic incrementalization: a programming language and a compiler that can, given a few type annotations describing what can change over time, compile a conventional program that assumes its data to be static (unchanging over time) to an incremental program. Based on recent advances in self-adjusting computation, including a theoretical proposal for translating purely functional programs to self-adjusting programs, we develop techniques for translating conventional Standard ML programs to self-adjusting programs. By extending the Standard ML language, we design a fully featured programming language with higher-order features, a module system, and a powerful type system, and implement a compiler for this language. The resulting programming language, LML, enables translating conventional programs decorated with simple type annotations into incremental programs that can respond to changes in their data correctly and efficiently.
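
Stripped of the type system and the compiler, the underlying self-adjusting idea is a dependency graph of memoized computations that are re-run only when an input they read has changed. The toy Python sketch below shows only the dependency-tracking mechanics; it re-runs whole thunks, so it does not capture the asymptotic gains of the real approach, and it is unrelated to the LML implementation.

```python
class Cell:
    """A changeable input cell; writing to it invalidates dependent computations."""
    def __init__(self, value):
        self.value, self.readers = value, set()
    def get(self, reader=None):
        if reader is not None:
            self.readers.add(reader)       # record who depends on this cell
        return self.value
    def set(self, value):
        if value != self.value:
            self.value = value
            for r in self.readers:
                r.invalidate()

class Thunk:
    """A memoized computation; re-runs only after one of its inputs changes."""
    def __init__(self, fn):
        self.fn, self.cached, self.valid = fn, None, False
    def invalidate(self):
        self.valid = False
    def get(self):
        if not self.valid:
            self.cached, self.valid = self.fn(self), True
        return self.cached

xs = [Cell(v) for v in [3, 1, 4, 1, 5]]
total = Thunk(lambda t: sum(c.get(t) for c in xs))   # records which cells it read
print(total.get())      # 14: computed from scratch
print(total.get())      # 14: served from the cache, nothing recomputed
xs[2].set(40)           # an incremental change to the input
print(total.get())      # 50: recomputed because a dependency changed
```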

We evaluate the effectiveness of our approach by considering a range of benchmarks involving lists, vectors, and matrices, as well as a ray tracer. For these benchmarks, our compiler incrementalizes existing code with only trivial amounts of annotation. The resulting programs are often asymptotically more efficient, leading to orders of magnitude speedups in practice.
Read more

Building Privacy-Preserving Systems: What Works, What Doesn't, and What Is To Be Done

Vitaly Shmatikov University of Texas, Austin
17 Jan 2013, 10:00 am - 11:30 am
Kaiserslautern building G26, room 206
simultaneous videocast to Saarbrücken building E1 5, room 029
SWS Distinguished Lecture Series
Every time you touch a computer, you leave a trace. Sensitive information about you can be found in the remnants of visited websites and voice-over-IP conversations on your machine; in the allegedly "de-identified" data about your purchases, preferences, and social relationships collected by advertisers and marketers; and, in the not-too-distant future, in the video and audio feeds gathered by sensor-based applications on mobile phones, gaming devices, and household robots.

This talk will describe several research systems developed at UT Austin that aim to provide precise privacy guarantees for individual users. ...
Every time you touch a computer, you leave a trace. Sensitive information about you can be found in the remnants of visited websites and voice-over-IP conversations on your machine; in the allegedly "de-identified" data about your purchases, preferences, and social relationships collected by advertisers and marketers; and, in the not-too-distant future, in the video and audio feeds gathered by sensor-based applications on mobile phones, gaming devices, and household robots.

This talk will describe several research systems developed at UT Austin that aim to provide precise privacy guarantees for individual users. These include (1) the Airavat system for differentially private data analysis, (2) the "Eternal Sunshine of the Spotless Machine" system that runs full-system applications in private sessions and then erases all memories of their execution from the host, and (3) "A Scanner Darkly" system that adds a privacy protection layer to popular vision and image processing libraries, preserving the functionality of sensor-based applications but preventing them from collecting raw images of their users.
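
The differential-privacy building block behind a system like Airavat is the standard Laplace mechanism: perturb a numeric query answer with noise scaled to its sensitivity divided by the privacy parameter epsilon. A minimal sketch with illustrative numbers (not Airavat's implementation):

```python
import random

def laplace_noise(scale):
    """Laplace(0, scale) sampled as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Standard Laplace mechanism: epsilon-differential privacy for a numeric query
    whose answer changes by at most `sensitivity` when one record changes."""
    return true_answer + laplace_noise(sensitivity / epsilon)

# Counting query: one person changes a count by at most 1, so sensitivity = 1.
print(laplace_mechanism(1342, sensitivity=1, epsilon=0.5))
```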
Read more

SecGuru - Symbolic Analysis of Network Connectivity Restrictions

Nikolaj Bjorner Microsoft Research Redmond
09 Jan 2013, 11:00 am - 12:00 pm
Saarbrücken building E1 5, room 002
SWS Distinguished Lecture Series
SecGuru is a tool for automatically validating network connectivity restriction policies in large-scale data centers. The problem solved by SecGuru is acute in data centers offering public cloud services where multiple tenants are hosted in customized isolation boundaries. Our tool supports the following interactions: (1) given a policy and a contract, verify that the policy satisfies the contract; (2) provide a semantic difference between policies. The former facilitates property checking and the latter facilitates identifying configuration drifts. ...
SecGuru is a tool for automatically validating network connectivity restriction policies in large-scale data centers. The problem solved by SecGuru is acute in data centers offering public cloud services where multiple tenants are hosted in customized isolation boundaries. Our tool supports the following interactions: (1) given a policy and a contract, verify that the policy satisfies the contract; (2) provide a semantic difference between policies. The former facilitates property checking and the latter facilitates identifying configuration drifts. We identify bit-vector logic as a suitable basis for policy analysis and use the Z3 theorem prover to solve these constraints. We furthermore develop algorithms for compact enumeration of differences for bit-vector constraints. Finally, we evaluate SecGuru on large-scale production services where it has been used to identify and fix numerous configuration errors.
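
The flavour of the encoding can be sketched with the Z3 Python bindings: packet fields become bit-vectors, a policy and a contract become predicates over them, and "the policy satisfies the contract" becomes an unsatisfiability check of policy AND NOT contract. The rules below are invented for illustration and are not SecGuru's actual policies.

```python
from z3 import BitVec, BitVecVal, Solver, And, Not, LShR, ULE, UGE, unsat

ip   = BitVec('dst_ip', 32)
port = BitVec('dst_port', 16)

def in_subnet(addr, net, prefix_len):
    """addr lies in net/prefix_len, with both treated as 32-bit values."""
    shift = 32 - prefix_len
    return LShR(addr, shift) == LShR(BitVecVal(net, 32), shift)

# Illustrative policy: permit traffic into 10.0.0.0/8 only on ports >= 1024.
policy = And(in_subnet(ip, 0x0A000000, 8), UGE(port, 1024))
# Illustrative contract: no packet may reach a port below 1024 inside 10.0.0.0/8.
contract = Not(And(in_subnet(ip, 0x0A000000, 8), ULE(port, 1023)))

s = Solver()
s.add(policy, Not(contract))      # search for a packet the policy permits
print(s.check() == unsat)         # but the contract forbids; unsat means compliant
```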
Read more