Recent events

veribetrfs: Verification as a Practical Engineering Tool

Jon Howell VMware Research, Bellevue, WA, USA
17 Jan 2020, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: VMR 6312
SWS Distinguished Lecture Series
Recent progress in systems verification has shown that verification techniques can scale to thousands of lines. It is time to ask whether verification can displace testing as an effective path to software correctness. The veribetrfs project is developing a verified high-performance storage system. A primary goal of the project is to reduce verification methodology to engineering practice. Veribetrfs is developed using the Iron★ methodology, a descendant of the Ironclad and IronFleet projects. So far, we have a key-value store with 100k IOPS performance and strong guarantees against data loss. This talk will give an overview of the methodology and describe how we have enhanced it in veribetrfs.

Towards a Tight Understanding of the Complexity of Algorithmic Problems

Dániel Marx MPI-INF - D1
08 Jan 2020, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
-

Computer-Aided Programming Across the Software Stack

Işıl Dillig University of Texas at Austin
16 Dec 2019, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312
SWS Distinguished Lecture Series
Program synthesis techniques aim to generate executable programs from some high-level expression of user intent, such as logical specifications, examples, or a naive reference implementation. This talk will survey different flavors of program synthesis and their applications across the entire software stack, ranging from computer end-users all the way to systems programmers. We will also illustrate how program synthesis is useful for addressing different concerns in software development, including functionality, correctness, performance, and security.
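As a toy illustration of the example-driven flavor of synthesis mentioned above (a minimal sketch over a hypothetical miniature expression grammar, not the speaker's actual tools, which prune the search with types, specifications, and SMT solvers), one can simply enumerate candidate programs until one matches all input/output examples:

```python
# Toy enumerative program synthesis from input/output examples.
# Candidate programs are arithmetic expressions over a variable x.
import itertools

def enumerate_exprs(depth):
    """Yield candidate programs of the given depth as (text, function) pairs."""
    if depth == 0:
        yield ("x", lambda x: x)
        for c in range(4):
            yield (str(c), lambda x, c=c: c)
        return
    subexprs = list(enumerate_exprs(depth - 1))
    for (dl, fl), (dr, fr) in itertools.product(subexprs, repeat=2):
        yield (f"({dl} + {dr})", lambda x, fl=fl, fr=fr: fl(x) + fr(x))
        yield (f"({dl} * {dr})", lambda x, fl=fl, fr=fr: fl(x) * fr(x))

def synthesize(examples, max_depth=2):
    """Return the first enumerated expression consistent with all examples."""
    for depth in range(max_depth + 1):
        for text, f in enumerate_exprs(depth):
            if all(f(x) == y for x, y in examples):
                return text
    return None

# From the examples f(1)=3 and f(2)=5 alone, the search recovers an
# expression equivalent to 2*x + 1.
print(synthesize([(1, 3), (2, 5)]))
```

Even this brute-force version conveys the key idea: the examples act as the specification, and the synthesizer searches a program space for any program consistent with them.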

WebAssembly: Mechanisation, Security, and Concurrency

Conrad Watt University of Cambridge
12 Dec 2019, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
WebAssembly is the first new language to be introduced to the Web ecosystem in over 20 years. Its official specification is given as a formal semantics, making the language a perfect target for further applications of formal methods. This talk highlights recent work which builds on this formal semantics, and discusses the ongoing development of WebAssembly's relaxed memory model, which is complicated by the language's inter-operation with JavaScript.

Understanding and Evolving the Rust Programming Language

Ralf Jung Max Planck Institute for Software Systems
10 Dec 2019, 10:00 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 8796
SWS Student Defense Talks - Thesis Proposal
Rust is a young and actively developed "systems programming language" that promises to overcome the seemingly fundamental tradeoff between high-level safety guarantees and low-level control over resource management. In this talk I will present my thesis work on understanding and evolving this groundbreaking new language.

The first part of the talk is about building formal foundations for the existing Rust language. Rust employs a strong, ownership-based type system to rule out common systems programming anomalies, but then extends the expressive power of this core type system through libraries that internally use unsafe features. Unfortunately, this "extensible" aspect of the Rust language makes Rust's safety claims rather non-obvious, and there is good reason to question whether they actually hold. To address this, we have developed RustBelt, the first formal (and machine-checked) safety proof for a language representing a realistic subset of Rust. Our proof supports the extensible nature of Rust's safety story in the sense that, for each new Rust library that uses unsafe features, we can say what verification condition it must satisfy in order for it to be deemed a safe and compositional extension to the language.

The second part is about evolving the semantics of Rust to better support compiler optimizations. In particular, Rust's reference types provide strong aliasing guarantees, which a clever optimizing compiler should be able to exploit. However, if no care is taken, unsafe code (which plays an essential role in the Rust ecosystem, as mentioned above) can subvert these guarantees. To address this, we have developed Stacked Borrows, a new operational semantics for memory accesses in Rust. On the one hand, Stacked Borrows defines an aliasing discipline and declares programs violating it to have undefined behavior, thus making it possible for us to formally verify the soundness of a number of useful type-based intraprocedural optimizations. On the other hand, we have also implemented the Stacked Borrows semantics in Miri, an interpreter for Rust, and run large parts of the Rust standard library test suite in Miri. In so doing, we have validated that our semantics is sufficiently permissive to account for real-world unsafe Rust code.
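To give loose intuition for the aliasing discipline described above, here is a drastically simplified executable model (an illustration only, not the actual Stacked Borrows semantics, which also distinguishes shared references and raw pointers): each memory location carries a stack of borrow tags, creating a borrow pushes a tag, and accessing through a tag pops every tag above it, so a reference invalidated by a use of an older reference triggers undefined behavior.

```python
# Simplified model of a Stacked Borrows-style discipline: one tag stack
# per memory location; expired tags signal undefined behavior.
class Location:
    def __init__(self):
        self.stack = ["base"]

    def borrow(self, parent, new_tag):
        self.use(parent)          # creating a borrow is itself a use of the parent
        self.stack.append(new_tag)

    def use(self, tag):
        if tag not in self.stack:
            raise RuntimeError(f"undefined behavior: tag {tag!r} expired")
        # Accessing through `tag` invalidates all borrows derived after it.
        while self.stack[-1] != tag:
            self.stack.pop()

loc = Location()
loc.borrow("base", "ref1")   # let ref1 = &mut x;
loc.borrow("ref1", "ref2")   # let ref2 = &mut *ref1;
loc.use("ref2")              # fine: ref2 is on top of the stack
loc.use("ref1")              # fine, but pops ref2 off the stack
try:
    loc.use("ref2")          # ref2 has been invalidated
except RuntimeError as e:
    print(e)
```

Declaring such violations to be undefined behavior is exactly what licenses the type-based optimizations mentioned above: the compiler may assume that no valid program interleaves accesses this way.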

Towards Usability in Private Data Analytics

Reinhard Munz Max Planck Institute for Software Systems
06 Dec 2019, 2:00 pm - 3:00 pm
Kaiserslautern building G26, room 607
simultaneous videocast to Saarbrücken building E1 5, room 005 / Meeting ID: 6312
SWS Student Defense Talks - Thesis Defense
Private data analytics systems preferably provide required analytic accuracy to analysts and specified privacy to individuals whose data is analyzed. Devising a general system that works for a broad range of datasets and analytic scenarios has proven to be difficult.

Differentially private systems allow for proven formal privacy guarantees. To provide these guarantees, however, differentially private mechanisms often need to add large amounts of noise to statistical results, which impairs their usability. Thus, despite the advent of mechanisms with formal guarantees, industry still uses inferior ad-hoc mechanisms that provide better analytic accuracy.
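The accuracy cost described above is visible already in the standard Laplace mechanism (a minimal sketch for intuition, not one of the mechanisms studied in the thesis): a query of sensitivity Δ released under privacy parameter ε receives noise of scale Δ/ε, so halving ε doubles the expected error.

```python
# Laplace mechanism: epsilon-differentially-private release of a numeric query.
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) as the difference of two exponentials.
    noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    return true_value + noise

rng = random.Random(0)
count = 1000  # e.g. number of records matching an analyst's counting query
for eps in (1.0, 0.1, 0.01):
    noisy = laplace_mechanism(count, sensitivity=1, epsilon=eps, rng=rng)
    print(f"epsilon={eps}: noisy count = {noisy:.1f}")
```

Running this shows why analysts balk: at ε = 0.01 the noise routinely swamps small counts, which is precisely the usability gap the thesis targets.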

In my thesis I follow two approaches to improve the usability of private data analytics systems in general and differentially private systems in particular. First, I revisit ad-hoc mechanisms and explore the possibilities of systems that do not provide Differential Privacy or only a weak version thereof. Based on an attack analysis, I devise a set of new protection mechanisms including Query Based Bookkeeping (QBB). In contrast to previous systems, QBB only requires the history of analysts’ queries in order to provide privacy protection, but does NOT require any knowledge about the protected individuals’ data.

In my second approach I use the insights gained with QBB to propose UniTraX, the first differentially private analytics system that allows analysts to analyze part of a protected dataset without affecting the other parts and without giving up on accuracy. I show UniTraX’s usability by way of multiple case studies on real-world datasets across different domains. In all cases I demonstrate that UniTraX allows more queries than previous differentially private data analytics systems at moderate runtime overheads.

Stronger Higher-order Automation

Sophie Tourret MPI-INF - RG 1
04 Dec 2019, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Automated reasoning in first-order logic (FOL) is becoming a mature research domain. It has led to the development of powerful tools such as superposition-based theorem provers and SMT solvers (Satisfiability Modulo Theory solvers), that have found and continue to find many applications in industry and research.

One such application is the automation of proofs in interactive theorem proving (ITP), where proof assistants are used to write computer-checked proofs of theorems, generally expressed in a variant of higher-order logic (HOL). This automation is realised via "hammers", which attempt to delegate the proof obligations to first-order automated provers. However, in the translation from HOL to FOL, hammers obfuscate the structure of terms and formulas through the application of a sequence of encodings, although it is this very structure that the provers exploit to find proofs efficiently.

This situation is less than ideal, and while until a few years ago the ITP and ATP communities were working in parallel, mostly ignoring each other, there is nowadays a trend pushing to bring the two communities closer. The work that I will present in this talk is part of this trend. It includes ongoing research that is either improving higher-order automation with respect to ITP applications or using ITP as a vehicle for ATP research.

Prusti – Deductive Verification for Rust

Alex Summers ETH Zurich
03 Dec 2019, 10:30 am - 12:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312
SWS Colloquium
Producing reliable systems software is a major challenge, plagued by the ubiquitous problems of shared mutable state, pointer aliasing, dynamic memory management, and subtle concurrency issues such as race conditions; even expert programmers struggle to tame the wide variety of reasons why their programs may not behave as they intended. Formal verification offers potential solutions to many of these problems, but typically at a very high price: the mathematical techniques employed are highly-complex, and difficult for even expert researchers to understand and apply.

The relatively-new Rust programming language is designed to help with the former problem: a powerful ownership type system requires programmers to specify and restrict their discipline for referencing heap locations, providing in return the strong guarantee (almost – see the talk, and RustBelt!) that code type-checked by this system will be free from dangling pointers, unexpected aliasing, race conditions and the like. While this rules out a number of common errors, the question of whether a program behaves as intended remains.

In this talk, I’ll give an overview of the Prusti project, which leverages Rust’s type system and compiler analyses for formal verification. By combining the rich information available about a type-checked Rust program with separate user-specification of intended behaviour, Prusti enables a user to verify functional correctness of their code without interacting with a complex program logic; in particular, specifications and all interactions with our implemented tool are at the level of abstraction of Rust expressions.

Personal Knowledge Extraction: What Can Be Inferred From What You Say and Do

Paramita Mirza MPI-INF - D5
06 Nov 2019, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
Despite recent advances in natural language processing and generation, communication between humans and machines is still in its infancy. Existing intelligent home and mobile assistant technologies excel at scripted tasks such as weather or news reports and music control, yet typically fail at more advanced personalization. This calls for a centralized repository of personal knowledge about each user, which can then serve as a distant source of background knowledge for personalization in downstream applications. Such a personal knowledge repository will be beneficial as a reusable asset; it should be both explainable and scrutable, giving the owning user full control over editing and sharing stored information with selected service providers.

In this talk, I will discuss our efforts on automated personal knowledge extraction. We can easily obtain personal knowledge of famous people from biographies or news articles, however, such resources are not available for ordinary users. Hence, we turn to the task of inferring personal attributes from users' utterances in conversations, e.g., guessing a person's occupation from "I was sitting the whole day in front of my computer today, trying to finish a grant proposal for my research." I will highlight our Hidden Attribute Models (HAM) to solve the problem, a neural architecture leveraging attention mechanisms and embeddings, as well as an ongoing work on its extension to address challenging attributes such as hobbies and travel preferences with wide sets of multi-faceted attribute values. Finally, I will present an outlook on what we can further infer from users' activities, particularly in relation with their mood and emotion.

Dealing with Epidemics under Uncertainty

Jessica Hoffmann University of Texas at Austin
04 Nov 2019, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312
SWS Colloquium
Epidemic processes can model anything that spreads. As such, they are a useful tool for studying not only human diseases, but also network attacks, chains of activation in the brain, the propagation of real or fake news, the spread of viral tweets, and other processes. In this talk, we investigate epidemics spreading on a graph in the presence of various forms of uncertainty. We present in particular a result about controlling the spread of an epidemic when there is uncertainty about who exactly is infected. We show first that neither algorithms nor results are robust to uncertainty. In other words, uncertainty fundamentally changes how we must approach epidemics on graphs. We also present two related results about learning the graph underlying an epidemic process when there is uncertainty about when people were infected or what infected them.
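The underlying object is easy to state concretely. Below is a toy susceptible-infected cascade on a hypothetical example graph (an illustration only, not the specific models from the talk): at each step, every infected node infects each susceptible neighbor with some probability, and "uncertainty about who is infected" corresponds to only ever observing a noisy view of the infected set.

```python
# Toy SI (susceptible-infected) epidemic on an undirected graph.
import random

def simulate_si(graph, seeds, p, steps, rng):
    """Spread from `seeds`; each infected node infects each susceptible
    neighbor with probability p per step. Returns the final infected set."""
    infected = set(seeds)
    for _ in range(steps):
        newly = {v for u in infected for v in graph[u]
                 if v not in infected and rng.random() < p}
        infected |= newly
    return infected

# A small hypothetical topology: adjacency lists for six nodes.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4, 5], 4: [3, 5], 5: [3, 4]}
rng = random.Random(1)
infected = simulate_si(graph, seeds={0}, p=0.5, steps=4, rng=rng)
print(sorted(infected))
```

Even in this simple setting, the inference questions from the talk are natural: given only a corrupted snapshot of `infected`, which nodes should be quarantined, and can the edges of `graph` be learned from repeated cascades?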

Knowledge and Information Dissemination: Models and Methods

Utkarsh Upadhyay Max Planck Institute for Software Systems
17 Oct 2019, 4:00 pm - 5:00 pm
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312
SWS Student Defense Talks - Thesis Proposal
In the past, information and knowledge dissemination was confined to brick-and-mortar classrooms, newspapers, radio, and television. As these processes were simple and centralized, the models behind them were well understood, and so were the empirical methods for optimizing them. In today's world, the internet and social media have become powerful tools for information and knowledge dissemination: Wikipedia gets more than 1 million edits per day, Stack Overflow has more than 17 million questions, 25% of the US population visits Yahoo! News for articles and discussions, Twitter has more than 60 million monthly active users, and Duolingo has 25 million users learning languages online.

These developments have introduced a paradigm shift in the process of dissemination. Not only has the nature of the task moved from being centralized to decentralized, but the developments have also blurred the boundary between the creator and the consumer of the content, i.e., information and knowledge. These changes have made it necessary to develop new models, which are better suited to understanding and analysing the dissemination, and to develop new methods to optimize them.

At a broad level, we can view the participation of users in the process of dissemination as falling in one of two settings: collaborative or competitive. In the collaborative setting, the participants work together in crafting knowledge online, e.g., by asking questions and contributing answers, or by discussing news or opinion pieces. In contrast, as competitors, they vie for the attention of their followers on social media. The first part of the thesis will propose models for the complexity of discussions and the evolution of expertise. The latter part of the thesis will explore the competitive setting where I will propose computational methods for measuring, and increasing, the attention received from followers on social media.

Non-Reformist Reform for Haskell Modularity

Scott Kilpatrick Max Planck Institute for Software Systems
15 Oct 2019, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6747
SWS Student Defense Talks - Thesis Defense
Module systems like that of Haskell permit only a weak form of modularity in which module implementations depend directly on other implementations and must be processed in dependency order. Module systems like that of ML, on the other hand, permit a stronger form of modularity in which explicit interfaces express assumptions about dependencies and each module can be typechecked and reasoned about independently.

In this thesis, I present Backpack, a new language for building separately-typecheckable packages on top of a weak module system like Haskell’s. The design of Backpack is the first to bring the rich world of type systems to the practical world of packages via mixin modules. It is inspired by the MixML module calculus of Rossberg and Dreyer, but, by choosing practicality over expressivity, Backpack both simplifies that semantics and supports a flexible notion of applicative instantiation. Moreover, this design is motivated less by foundational concerns and more by the practical concern of integration into Haskell. The result is a new approach to writing modular software at the scale of packages.

The semantics of Backpack is defined via elaboration into sets of Haskell modules and binary interface files, thus showing how Backpack maintains interoperability with Haskell while retrofitting it with interfaces. In my formalization of Backpack I present a novel type system for Haskell modules and I prove a key soundness theorem to validate Backpack’s semantics.

Are We Susceptible to Rowhammer? An End-to-End Methodology for Cloud Providers

Stefan Saroiu Microsoft Research, Redmond
07 Oct 2019, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 002
simultaneous videocast to Kaiserslautern building G26, room 113 / Meeting ID: 6312
SWS Colloquium
Cloud providers are nervous about recent research showing how Rowhammer attacks affect many types of DRAM including DDR4 and ECC-equipped DRAM.  Unfortunately, cloud providers lack a systematic way to test the DRAM present in their servers for the threat of a Rowhammer attack. Building such a methodology needs to overcome two difficult challenges: (1) devising a CPU instruction sequence that maximizes the rate of DRAM row activations on a given system, and (2) determining the adjacency of rows internal to DRAM. This talk will present an end-to-end methodology that overcomes these challenges to determine if cloud servers are susceptible to Rowhammer attacks. With our methodology, a cloud provider can construct worst-case testing conditions for DRAM.

We used our methodology to create worst-case DRAM testing conditions on the hardware used by a major cloud provider for a recent generation of its servers. Our findings show that none of the instruction sequences used in prior work to mount Rowhammer attacks create worst-case DRAM testing conditions. Instead, we construct an instruction sequence that issues non-explicit load and store instructions. Our new sequence leverages microarchitectural side-effects to "hammer" DRAM at a near-optimal rate on modern Skylake platforms. We also designed a DDR4 fault injector capable of reverse engineering row adjacency inside a DRAM device. When applied to our cloud provider's DIMMs, we find that rows inside DDR4 DRAM devices do not always follow a linear map.

Joint work with Lucian Cojocar (VU Amsterdam), Jeremie Kim, Minesh Patel, Onur Mutlu (ETH Zurich), Lily Tsai (MIT), and Alec Wolman (MSR)

Efficient Optimization for Very Large Combinatorial Problems in Computer Vision and Machine Learning

Paul Swoboda MPI-INF - D2
02 Oct 2019, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
In computer vision and machine learning, combinatorial optimization problems are widespread, typically NP-hard, and tend to pose unique challenges due to their very large scale and problem structure. Established techniques from the mathematical optimization community cannot cope with the encountered problem sizes and do not exploit special problem characteristics. In this talk I will present several new solution paradigms for solving large-scale combinatorial problems in computer vision efficiently and to high accuracy. I will discuss how these principles can be applied to classical problems of combinatorial optimization that have found wide use in computer vision, machine learning and computer graphics, namely inference in Markov Random Fields, the quadratic assignment problem, and graph decomposition. Lastly, I will present empirical results demonstrating the strong practical performance of these techniques.

Toward Cognitive Security

Claude Castelluccia Inria
02 Oct 2019, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
Online services, devices or secret services are constantly collecting data and meta-data from users. This data collection is mostly used to target users or customise their services. However, as illustrated by the Cambridge Analytica case, data and technologies are more and more used to manipulate, influence or shape people's opinions online, i.e. to "hack" our brains. In this context, it is urgent to develop the field of "Cognitive security" in order to better comprehend these attacks and provide counter-measures. This talk will introduce the concept of "Cognitive security". We will explore the different types of cognitive attacks and discuss possible research directions.

Human-Centered Design and Data Science for Good

Maria Rauschenberger Universitat Pompeu Fabra
30 Sep 2019, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112 / Meeting ID: 9312
SWS Colloquium
How can we make better applications for social impact issues? For example, the combination of Human-Centered Design (HCD) and Data Science (DS) can be the answer to avoid biases in the collection of data with online-experiments and the analysis of small data. This presentation shows how we combine HCD and DS to design applications and analyze the collected data for Good.  We will focus mainly on the project: "Early screening of dyslexia using a language-independent content game and machine learning". With our two designed games (MusVis and DGames), we collected data sets (313 and 137 participants) in different languages (mainly Spanish and German) and evaluated them with machine learning classifiers. For MusVis, we mainly use content that refers to one single acoustic or visual indicator, while DGames content refers to generic content related to various indicators. Our results open the possibility of low-cost and early screening of dyslexia through the Web. In this talk, we will further address the techniques used from HCD and DS to reach these results. 

Accelerating Network Applications with Stateful TCP Offloading

YoungGyoun Moon KAIST
24 Sep 2019, 10:30 am - 12:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Colloquium
The performance of modern key-value servers or layer-7 load balancers often heavily depends on the efficiency of the underlying TCP stack. Despite numerous optimizations such as kernel-bypassing and zero-copying, performance improvement for TCP applications is fundamentally limited due to the protocol conformance overhead for compatible TCP operations.

In this talk, I will introduce AccelTCP, a hardware-assisted TCP stack architecture that harnesses programmable network interface cards (NICs) as a TCP protocol accelerator. AccelTCP can offload complex TCP operations such as connection setup and teardown completely to NIC, which frees a significant amount of host CPU cycles for application processing. In addition, for layer-7 proxies, it supports running connection splicing on NIC so that the NIC relays all packets of the spliced connections with zero DMA overhead. We showcase the effectiveness of AccelTCP with two real-world applications: (1) Redis, a popular in-memory key-value store, and (2) HAProxy, a widely-used layer-7 load balancer. Our evaluation shows that AccelTCP improves their performance by 2.3x and 11.9x, respectively.

Synthesis from within: implementing automated synthesis inside an SMT solver

Cesare Tinelli University of Iowa
16 Sep 2019, 10:30 am - 11:30 am
Kaiserslautern building G26, room 111
simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312
SWS Colloquium
Recent research in automated software synthesis from specifications or observations has leveraged the power of SMT solvers to explore the space of synthesis conjectures efficiently. In most of this work, synthesis techniques are built around a backend SMT solver that is used as a black-box reasoning engine. In this talk, I will describe a successful multiyear research effort by the developers of the SMT solver CVC4 that instead incorporates synthesis capabilities directly within the solver, and discuss the advances in performance and scope made possible by this approach.

Computational Fabrication: 3D Printing and Beyond

Vahid Babaei MPI-INF - D4
04 Sep 2019, 12:15 pm - 1:15 pm
Saarbrücken building E1 5, room 002
Joint Lecture Series
The objective of my talk is to introduce the audience to the exciting field of computational fabrication. The recent, wide availability of 3D printers has triggered considerable interest in academia and industry. Computer scientists can engage with hands-on 3D printing and quickly recognize the immense but untapped potential of the manufacturing industry for computational methods. In this talk, I will explain the principles of 3D printing (also known as additive manufacturing) from both hardware and software viewpoints. I will then show examples of recent research addressing computational problems in both 3D printing and general manufacturing. I will also discuss my main research interest, namely the computational fabrication of visual appearance. The appearance of objects is among their most important and most complex properties, influencing or in many cases defining their function. I will show that additive manufacturing provides unprecedented opportunities to create products with novel and useful appearance properties.
Read more

A type theory for amortized resource analysis

Vineet Rajani Max Planck Institute for Software Systems
27 Aug 2019, 2:00 pm - 3:00 pm
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312
SWS Student Defense Talks - Thesis Proposal
Amortized analysis is a standard algorithmic technique for estimating upper bounds on the average costs of functions, specifically operations on data structures. This thesis intends to develop λ-amor, a type theory for amortized analysis of higher-order functional programs. A typical amortized analysis works by storing a ghost resource called "potential" with a data structure's internal state. Accordingly, the central idea in λ-amor is a type-theoretic construct to associate potential with an arbitrary type. Additionally, λ-amor relies on standard concepts from substructural and modal type systems: indexed monads, affine types, and indexed exponential types. We show that λ-amor is not only sound (in a very elementary logical relations model) but also very expressive: it can be used to analyze both eager and lazy data structures, and it can embed existing resource analysis frameworks. In fact, λ-amor is complete for the cost analysis of lazy PCF programs. Further, the basic principles behind λ-amor can be adapted (by dropping affineness and adding mutable state) to obtain an expressive type system for a completely unrelated application, namely information flow control.

The proposal talk will cover the broad setting and the motivation of the work and a significant subset of λ-amor, but due to time constraints, it will not cover all of λ-amor or the adaptation to information flow control. Implementation of the two type theories is not in the scope of the thesis.

Modeling and Individualizing Learning in Computer-Based Environments

Tanja Käser Stanford University
21 Aug 2019, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 029
simultaneous videocast to Kaiserslautern building G26, room 112 / Meeting ID: 6312
SWS Colloquium
Learning technologies are becoming increasingly important in today's education. This includes game-based learning and simulations, which produce high-volume output, and MOOCs (massive open online courses), which reach a broad and diverse audience at scale. The users of such systems often have very different backgrounds, for example in terms of age, prior knowledge, and learning speed. Adaptation to the specific needs of the individual user is therefore essential. In this talk, I will present two of my contributions on modeling and predicting student learning in computer-based environments, with the goal of enabling individualization. The first contribution introduces a new model and algorithm for representing and predicting student knowledge. The new approach is efficient and has been demonstrated to outperform previous work in prediction accuracy. The second contribution introduces models that take into account not only the accuracy of the user but also the user's inquiry strategies, improving the prediction of future learning. Furthermore, students can be clustered into groups with different strategies, and targeted interventions can be designed based on these strategies. Finally, I will also describe lines of future research.