Events 2007

Juan Navarro Perez University of Manchester
20 Dec 2007, 2:00 pm - 3:00 pm
Saarbrücken building E1 5, room rotunda 6th floor
SWS Colloquium
We present an encoding that is able to specify LTL bounded model checking problems within the Bernays-Schönfinkel fragment of first-order logic. This fragment, which also corresponds to the category of effectively propositional problems (EPR) of the CASC system competitions, allows a natural and succinct representation of both a software/hardware system and the property that one wants to verify.

This is part of the research I did during my PhD studies at the University of Manchester, which deals with finding problems suitable for encoding within the effectively propositional class of formulas, and aims to encourage interest in theorem proving in this restricted fragment of first-order logic.
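To give the flavour of the fragment (a schematic toy, not the talk's actual encoding): with constants s_0, ..., s_k naming time points, a one-bit system whose state p toggles at each step can be axiomatised without any function symbols,

\[
p(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} N(s_i, s_{i+1}) \;\wedge\; \forall x\, y.\; \bigl( N(x,y) \rightarrow (p(y) \leftrightarrow \neg p(x)) \bigr)
\]

so checking a bounded property reduces to the satisfiability of a function-free, effectively propositional sentence, which is exactly an EPR problem.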


Meeyoung Cha KAIST
13 Dec 2007, 9:00 am - 10:00 am
Saarbrücken building E1 5, room 019
SWS Colloquium
Multimedia streaming is becoming an integral part of people's lives.  In this talk, I will present data analysis of users' viewing behaviors based on two streaming services: YouTube and IPTV.  YouTube is the largest VoD service for user-generated content (UGC).  Nowadays, UGC sites are creating new viewing patterns and social interactions, empowering users to be more creative, and developing new business opportunities.  I will present the intrinsic statistical properties of video popularity based on real traces from YouTube. Understanding these popularity characteristics is important because it can reveal the latent demand created by bottlenecks in the system (e.g., poor search and recommendation engines, lack of metadata, etc.).
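As an illustration of the kind of statistical analysis involved (synthetic data below; the talk uses real YouTube traces), power-law-like popularity shows up as a straight line on a log-log rank/views plot:

```python
import numpy as np

# Hypothetical view counts drawn from a heavy-tailed distribution;
# real traces from a YouTube crawl would take their place.
rng = np.random.default_rng(0)
views = np.sort(rng.zipf(2.0, 10_000))[::-1]   # sorted descending by popularity

ranks = np.arange(1, len(views) + 1)
# log(views) ~ slope * log(rank) + c; slope near -1 for Zipf-like data.
slope, _ = np.polyfit(np.log(ranks), np.log(views), 1)
print(f"fitted log-log slope: {slope:.2f}")
```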

Another popular form of streaming is IPTV.  After many years of academic and industry work, we are finally witnessing IP multicast thrive in large-scale deployments of IPTV services.  In the second part of this talk, I will present previously hidden TV viewing habits, based on real traces, for the first time.  I will also discuss the feasibility of using peer-to-peer distribution for a scalable IPTV system and of supporting advanced viewing controls such as DVD-like functionality, content recommendations, and targeted advertisements.
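To a first approximation, the P2P feasibility question is a bandwidth budget (a back-of-the-envelope sketch, not the talk's analysis):

```python
def p2p_feasible(viewers: int, stream_kbps: int, avg_upload_kbps: int,
                 server_kbps: int = 0) -> bool:
    """Aggregate upload capacity (peers plus any server help) must cover
    the total streaming demand of all viewers."""
    demand = viewers * stream_kbps
    supply = viewers * avg_upload_kbps + server_kbps
    return supply >= demand

# 10,000 viewers of a 400 kbps channel, peers uploading 350 kbps on average:
# the server must contribute the remaining 500 Mbps for the system to keep up.
print(p2p_feasible(10_000, 400, 350, server_kbps=500_000))  # True
```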

From Löwenheim to PSL

Moshe Vardi Rice University
12 Dec 2007, 5:00 pm - 6:00 pm
Kaiserslautern building G26, room building 42, lecture hall 110
SWS Distinguished Lecture Series
One of the surprising developments in the area of program verification is how several ideas introduced by logicians in the first part of the 20th century ended up yielding, at the start of the 21st century, an industrial-standard property-specification language called PSL. This development was enabled by the equally unlikely transformation of the mathematical machinery of automata on infinite words, introduced in the early 1960s for second-order arithmetic, into effective algorithms for model-checking tools. This talk attempts to trace the tangled threads of this development.

Reliable and efficient programming abstractions for sensor networks

Ramki Gummadi MIT
30 Nov 2007, 11:00 pm - 01 Dec 2007, 12:00 am
Saarbrücken building E1 5, room 024
SWS Colloquium
It is currently difficult to build practical and reliable programming systems out of distributed and resource-constrained sensor devices. The state of the art in today's sensornet programming is centered around nesC. nesC is a node-level language---a program is written for an individual node in the network---and nesC programs use the services of the TinyOS operating system. In this talk, I will describe an alternative approach to programming sensor networks that significantly raises the level of abstraction over today's practice. The critical change is one of perspective: rather than writing programs from the point of view of an individual node, programmers implement a central program that conceptually has access to the entire network. This approach pushes to the compiler the task of producing node-level programs that implement the desired behavior.
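To make the change of perspective concrete, here is a toy "whole-network" program (illustrative Python only; Pleiades itself extends C, and all names here are invented). The programmer writes one central loop over every node, and it is the compiler's job to partition such code into node-level programs:

```python
# Node-local state for a small simulated network of 8 sensor nodes.
network = {n: {"temp": 20 + n % 5, "alarm": False} for n in range(8)}

def central_program(threshold: int) -> None:
    for node, state in network.items():    # conceptually spans the whole network
        if state["temp"] > threshold:      # read node-local state by name
            state["alarm"] = True          # write node-local state

central_program(threshold=22)
print([n for n, s in network.items() if s["alarm"]])  # [3, 4]
```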

I will present the Pleiades programming language, its compiler, and its runtime. The Pleiades language extends the C language with constructs that allow programmers to name and access node-local state within the network and to specify simple forms of concurrent execution. The compiler and runtime system cooperate to implement Pleiades programs efficiently and reliably. First, the compiler employs a novel program analysis to translate Pleiades programs into message-efficient units of work implemented in nesC. The Pleiades runtime system orchestrates execution of these units, using TinyOS services, across a network of sensor nodes. Second, the compiler and runtime system employ novel locking, deadlock detection, and deadlock recovery algorithms that guarantee serializability in the face of concurrent execution. We illustrate the readability, reliability, and efficiency benefits of the Pleiades language through detailed experiments, and demonstrate that the Pleiades implementation of a realistic application performs comparably to a hand-coded nesC version that contains more than ten times as much code.
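As a rough illustration of part of the runtime's job (the code below is a generic wait-for-graph check, an assumption rather than the actual Pleiades mechanism), deadlock detection over lock waits might look like:

```python
# Locks held and wanted induce a wait-for graph between execution units;
# a cycle means deadlock, and the runtime would roll back one victim.
def find_deadlock(wait_for: dict[str, set[str]]) -> list[str] | None:
    """Return a cycle in the wait-for graph, or None if there is none."""
    def dfs(node, path, visited):
        if node in path:
            return path[path.index(node):]       # found a cycle
        if node in visited:
            return None
        visited.add(node)
        for nxt in wait_for.get(node, ()):
            cycle = dfs(nxt, path + [node], visited)
            if cycle:
                return cycle
        return None

    visited: set[str] = set()
    for start in wait_for:
        cycle = dfs(start, [], visited)
        if cycle:
            return cycle
    return None

print(find_deadlock({"t1": {"t2"}, "t2": {"t3"}, "t3": {"t1"}}))  # ['t1', 't2', 't3']
```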

Baltic: Service Combinators for Farming Virtual Machines

Andrew D. Gordon Microsoft Research
20 Nov 2007, 2:00 pm - 3:00 pm
Kaiserslautern building G26, room bldg. 57, rotunda
SWS Distinguished Lecture Series
Based on joint work with Karthikeyan Bhargavan (Microsoft Research) and Iman Narasamdya (University of Manchester)

We consider the problem of managing server farms largely automatically, in software. Automated management is gaining in importance with the widespread adoption of virtualization technologies, which allow multiple virtual machines per physical host. We consider the case where each server is service-oriented, in the sense that the services it provides, and the external services it depends upon, are explicitly described in metadata. We describe the design, implementation, and formal semantics of a library of combinators whose types record and respect server metadata. Our implementation consists of a typed functional script built with our combinators, in control of a Virtual Machine Monitor hosting a set of virtual machines. Our combinators support a range of operations including creation of virtual machines, their interconnection using typed endpoints, and the creation of intermediaries for tasks such as load balancing. Our combinators also allow provision and reconfiguration in response to events such as machine failures or spikes in demand. We describe a series of programming examples run on our implementation, based on existing server code for order processing, a typical data centre workload. To obtain a formal semantics for any script using our combinators, we provide an alternative implementation of our interface using a small concurrency library. Hence, the behaviour of the script plus our libraries can be interpreted within a statically typed process calculus. Assuming that server metadata is correct, a benefit of typing is that various server configuration errors are detected statically, rather than sometime during the execution of the script.
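A hedged sketch of what a combinator library in this spirit could look like (all names and signatures below are invented for illustration; the actual Baltic combinators are typed functional code in control of a Virtual Machine Monitor):

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    service: str     # metadata: which service this endpoint speaks
    port: int

@dataclass
class VM:
    name: str
    provides: list[Endpoint] = field(default_factory=list)

def create_vm(name: str, provides: list[Endpoint]) -> VM:
    """Combinator: create a virtual machine with typed service metadata."""
    return VM(name, provides)

def load_balancer(backends: list[VM], service: str) -> Endpoint:
    """Intermediary combinator: only balances VMs that provide the service."""
    assert all(any(e.service == service for e in vm.provides) for vm in backends)
    return Endpoint(service, port=8080)

web = [create_vm(f"web{i}", [Endpoint("http", 80)]) for i in range(3)]
front = load_balancer(web, "http")
print(front)
```

Because the handles carry service metadata, wiring a balancer to machines that do not provide the service fails immediately rather than at some point during execution, which mirrors the static configuration checking the talk describes.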


Transforming RTL Design to Counter Automaton

Ales Smrcka Brno University of Technology
19 Nov 2007, 2:00 pm
Saarbrücken building E1 5, room Rotunda 6th floor
SWS Colloquium
The languages used to describe a hardware design at the RTL level (e.g., VHDL or Verilog) make it possible to write generic descriptions. Such a description contains one or more parameters that account for its generic nature (e.g., the number of items in a buffer, or the width of the inputs and output of an arithmetic operation). A new approach to formal verification of generic hardware designs will be presented. The proposed approach is based on a translation of such designs to counter automata. The following main topics of modelling and verification will be discussed in more detail: the translation of VHDL constructs to counter automata, the specification of the environment of a modelled hardware component, the specification of safety properties over a design, and some experimental results with ARMC and Blast.
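As a toy example of the target formalism (my illustration, not the talk's translation): a parametric FIFO buffer becomes a counter automaton whose counter tracks occupancy, with the buffer size N left symbolic, so "no overflow for any N" becomes a reachability question over the counter:

```python
# One control location, one counter c, parameter N; transitions are
# guarded updates on c, exactly the shape of a counter automaton.
transitions = {
    "push": {"guard": lambda c, N: c < N, "update": lambda c: c + 1},
    "pop":  {"guard": lambda c, N: c > 0, "update": lambda c: c - 1},
}

def run(trace: list[str], N: int) -> int:
    c = 0
    for op in trace:
        t = transitions[op]
        assert t["guard"](c, N), f"{op} violates its guard (overflow/underflow)"
        c = t["update"](c)
    return c

print(run(["push", "push", "pop"], N=2))  # 1
```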


Can Concurrent Software Ever Be Quality Software?

Edward A. Lee UC Berkeley
09 Nov 2007, 2:00 pm - 3:00 pm
Kaiserslautern building G26, room rotunda bldg. 57
SWS Distinguished Lecture Series
The most widely used concurrent software techniques, which are based on threads, monitors (or approximations to monitors), and semaphores, yield incomprehensible and untestable software. Bugs due to race conditions, timing unpredictability, and potential deadlocks can go undetected for a very long time. Unexpected interactions between even loosely coupled software components can destabilize systems. Yet increased parallelism in general-purpose computing (particularly multicore systems), multi-threaded languages such as Java and C#, increased networking in embedded computing, and a growing diversity of attached hardware requiring specialized device drivers mean that a much greater fraction of software is concurrent. Software designers are poorly equipped for this. They use threads because syntactically, threads change almost nothing. They only later discover that semantically, threads change everything. By then it is too late. Yet there is no shortage of theory; there is a mature community with sound, well-developed techniques that the mainstream largely ignores. How can we change that? In this talk, I will make a case that composition languages, which describe program structure only, can be coupled with concurrent models of computation and conventional imperative languages to form a powerful troika. Such heterogeneous combinations of languages do have a chance for acceptance, and in certain niche situations, have already achieved a measure of acceptance.
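A minimal demonstration of the kind of bug at issue (Python standing in for Java/C#): two unsynchronized threads race on a shared counter, and updates are silently lost depending on thread timing.

```python
import threading

counter = 0

def worker() -> None:
    global counter
    for _ in range(100_000):
        counter += 1  # read-modify-write: not atomic, so updates can be lost

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # often less than 200000 without a lock
```

Syntactically this is almost identical to the sequential version, which is exactly the trap the talk describes: the semantic change is invisible in the code.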


Stable Internet Routing Without Global Coordination

Jennifer Rexford Princeton University
29 Oct 2007, 2:00 pm
Saarbrücken building E1 5, room 019
SWS Distinguished Lecture Series
The Border Gateway Protocol (BGP) allows an autonomous system (AS) to apply diverse local policies for selecting routes and propagating reachability information to other domains. However, BGP permits ASes to have conflicting policies that can lead to routing instability. This talk proposes a set of guidelines for an AS to follow in setting its routing policies, without requiring coordination with other ASes. Our approach exploits the Internet's hierarchical structure and the commercial relationships between ASes to impose a partial order on the set of routes to each destination. The guidelines conform to conventional traffic-engineering practices of ISPs, and provide each AS with significant flexibility in selecting its local policies. Furthermore, the guidelines ensure route convergence even under changes in the topology and routing policies. Drawing on a formal model of BGP, we prove that following our proposed policy guidelines guarantees route convergence. We also describe how our methodology can be applied to new types of relationships between ASes, how to verify the hierarchical AS relationships, and how to realize our policy guidelines. Our approach has significant practical value since it preserves the ability of each AS to apply complex local policies without divulging its BGP configurations to others. The end of the talk briefly summarizes follow-up studies that have built on this work.
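A sketch of the kind of local policy the guidelines constrain (a simplification, not the paper's full rule set): routes learned from customers are preferred over routes learned from peers, which are preferred over routes learned from providers, and this relationship-based preference is what induces the partial order:

```python
# Rank routes by the commercial relationship to the next-hop AS.
PREFERENCE = {"customer": 2, "peer": 1, "provider": 0}

def best_route(routes: list[dict]) -> dict:
    """Pick the route with the most-preferred next-hop relationship."""
    return max(routes, key=lambda r: PREFERENCE[r["next_hop_relation"]])

routes = [
    {"dest": "10.0.0.0/8", "next_hop_relation": "provider", "as_path": [3, 7]},
    {"dest": "10.0.0.0/8", "next_hop_relation": "customer", "as_path": [5, 9, 7]},
]
print(best_route(routes)["next_hop_relation"])  # customer, despite the longer path
```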


Formal Models for Side-Channel Attacks

Boris Köpf ETH Zürich
24 Oct 2007, 10:30 am - 11:30 am
Saarbrücken building E1 5, room 023
SWS Colloquium
Side-channel attacks have become so effective that they pose a real threat to the security of cryptographic algorithms. This threat is not covered by traditional notions of cryptographic security and models for proving resistance against it are only now emerging. In this talk, I will present work on such a model. It is based on concrete and realistic assumptions about the attacker and it is tailored to synchronous hardware, where faithful system models are available. The model leads to meaningful metrics for assessing the resistance of a system to side-channel attacks. I will show how these metrics can be computed and be used for analyzing nontrivial hardware implementations for their vulnerability to timing attacks. I will conclude with a number of directions for further research.
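One way to see how such metrics can be computed (an illustrative definition, not necessarily the talk's exact one): timing observations partition the key space into indistinguishability classes, and the expected entropy of that partition measures what remains hidden after one measurement.

```python
import math
from collections import Counter

def remaining_entropy(timing_of_key: dict[int, int]) -> float:
    """Expected Shannon entropy of the key after one timing observation,
    assuming a uniform prior over keys: keys with equal timing are
    indistinguishable to the attacker."""
    n = len(timing_of_key)
    classes = Counter(timing_of_key.values())          # timing -> class size
    return sum((size / n) * math.log2(size) for size in classes.values())

# 8 keys; timing leaks the key's low two bits -> 4 classes of size 2.
leak = {k: k % 4 for k in range(8)}
print(remaining_entropy(leak))  # 1.0 bit left of the original 3 bits
```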


Technology for Developing Regions

Eric Brewer UC Berkeley
28 Sep 2007, 11:00 am - 12:00 pm
Kaiserslautern building G26, room bldg. 42, HS 110
SWS Distinguished Lecture Series
Moore's Law and the wave of technologies it enabled have led to tremendous improvements in productivity and the quality of life in industrialized nations. Yet, technology has had almost no effect on the other five billion people on our planet. In this talk I argue that decreasing costs of computing and wireless networking make this the right time to spread the benefits of technology, and that the biggest missing piece is a lack of focus on the problems that matter. After covering some example applications that have shown very high impact, I present some of our own results, including the use of novel low-cost telemedicine to improve the vision of real people, with over 20,000 patients examined so far. I conclude with some discussion of the role of EECS researchers in this new area.

Structural Abstraction of Software Verification Conditions

Domagoj Babic University of British Columbia
27 Sep 2007, 3:00 pm - 4:00 pm
Saarbrücken building E1 5, room rotunda 6th floor
SWS Colloquium
Precise software analysis and verification require tracking the exact path along which a statement is executed (path-sensitivity), the different contexts from which a function is called (context-sensitivity), and the bit-accurate operations performed. Previously, verification with such precision has been considered too inefficient to scale to large software. In this talk, Domagoj presents a novel approach to solving such verification conditions, based on an automatic abstraction-checking-refinement framework that exploits natural abstraction boundaries present in software. Experimental results show that our approach scales to over 200,000 lines of real C code.
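Schematically, the abstraction-checking-refinement loop can be pictured as follows (a generic sketch with placeholder callables, not the actual tool's API): natural boundaries such as function calls give the places to abstract, and spurious counterexamples drive refinement.

```python
def verify(check, is_spurious, boundaries):
    refined = []                          # abstracted parts restored so far
    while True:
        ok, cex = check(refined)
        if ok:
            return "verified"
        if not is_spurious(cex):
            return ("bug", cex)           # the counterexample is real
        remaining = [b for b in boundaries if b not in refined]
        if not remaining:
            return ("bug", cex)
        refined.append(remaining[0])      # refine along a natural boundary

# Toy instance: the VC only checks out once the call to 'f' is made precise.
print(verify(check=lambda r: ("f" in r, "cex"),
             is_spurious=lambda cex: True,
             boundaries=["f", "g"]))      # -> verified
```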

The Vision and Reality of Ubiquitous Computing

Henning Schulzrinne Columbia University
19 Sep 2007, 3:00 pm
Kaiserslautern building G26, room bldg. 57, rotunda
SWS Distinguished Lecture Series
Simultaneous videocast: Campus of Saarland University, Building E 1.4, room 024 (MPI building)

About ten years ago, the notion of ubiquitous computing first appeared, just as the first truly mobile devices became available. Ten years later, the notion of ubiquitous computing as integrating computing into the environment has not quite panned out, with the emphasis shifting to personal, mobile devices. In this talk, I will try to illustrate some of the user-focused challenges that derive from the goals of ubiquitous computing. We have also started to address some aspects of the problem with our work on service and session mobility, as well as attempts to offer core network services, such as email and web access, in partially disconnected environments.

Loss and Delay Accountability for the Internet

Katerina Argyraki EPFL, Switzerland
12 Sep 2007, 11:00 am - 12:00 pm
Saarbrücken building E1 5, room rotunda 6th floor
SWS Colloquium
The Internet provides no information on the fate of transmitted packets, and end systems cannot determine who is responsible for dropping or delaying their traffic. As a result, they cannot verify that their ISPs are honoring their service level agreements, nor can they react to adverse network conditions appropriately. While current probing tools provide some assistance in this regard, they only give feedback on probes, not actual traffic. Moreover, service providers could, at any time, render their network opaque to such tools.

I will present AudIt, an explicit "accountability interface" for the Internet, through which ISPs can proactively supply feedback to traffic sources on loss and delay, at administrative-domain granularity. AudIt benefits not only end systems, but also ISPs, because---in contrast to probing tools---it allows them to control the amount and quality of information revealed about their internal structure and policy. I will show that AudIt is resistant to ISP lies in a business-sensible threat model and can be implemented with a modest NetFlow modification. Finally, I will discuss a Click-based prototype, which introduced less than 2% bandwidth overhead on real traces from a Tier-1 ISP.
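The flavor of per-domain feedback can be sketched as loss/delay records aggregated by the ISPs along the path (the record format below is invented for illustration):

```python
# (domain, packets_in, packets_out, total_delay_ms) per administrative domain;
# comparing in/out counts localizes where packets were dropped.
reports = [
    ("AS-A", 1000, 1000, 12_000),
    ("AS-B", 1000,  950,  9_500),   # 5% loss inside AS-B
    ("AS-C",  950,  950,  4_750),
]

for domain, p_in, p_out, delay in reports:
    loss = 1 - p_out / p_in
    print(f"{domain}: loss={loss:.1%}, avg delay={delay / p_in:.1f} ms")
```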

Lifetime Driven MAC Protocols for Ad Hoc Networking

Prof. Siva Ram Murthy IIT Madras
25 May 2007, 2:00 pm
Saarbrücken building E1 4, room 021
SWS Colloquium
In the last few years, there has been great interest in Ad Hoc Wireless Networks as they have tremendous military and commercial potential. An Ad Hoc Wireless Network is a wireless network comprising mobile nodes (which can also serve as routers) that use wireless transmission, and having no infrastructure (no central administration such as a Base Station in a Cellular Wireless Network or an Access Point in a Wireless LAN). Ad Hoc Wireless Networks can be set up anywhere and anytime as they eliminate the complexities of infrastructure setup. Ad Hoc Wireless Networks find applications in several areas. Some of these include: military applications (establishing communication among a group of soldiers for tactical operations, where setting up a fixed infrastructure in enemy territories or in inhospitable terrains may not be possible), collaborative and distributed computing, emergency operations, wireless mesh networks, wireless sensor networks, and hybrid (integrated Cellular and Ad Hoc) wireless networks. In this talk, I first present a brief overview of the major issues that influence the design and performance of Ad Hoc Wireless Networks. As the performance of any wireless network hinges on the Medium Access Control (MAC) protocol, more so for Ad Hoc Wireless Networks, I present, in detail, novel distributed homogeneous and heterogeneous battery-aware MAC protocols, which take advantage of the chemical properties and characteristics of batteries to provide fair node scheduling and increased network and node lifetime through uniform discharge of batteries.
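A toy rendering of the uniform-discharge idea (a deliberate simplification, not the talk's actual protocol rules): always grant the channel to the node with the most remaining charge, so no single battery is drained ahead of the others.

```python
def schedule(nodes: dict[str, float], slots: int) -> list[str]:
    """Grant each transmission slot to the node with the highest remaining
    charge; transmitting drains the winner's battery by one unit."""
    order = []
    charge = dict(nodes)
    for _ in range(slots):
        nxt = max(charge, key=charge.get)   # node with most remaining charge
        charge[nxt] -= 1.0
        order.append(nxt)
    return order

print(schedule({"a": 10.0, "b": 9.5, "c": 9.0}, slots=6))
# ['a', 'b', 'a', 'c', 'b', 'a'] -- transmissions spread for uniform discharge
```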


Abstraction for Liveness and Safety

Andrey Rybalchenko Ecole Polytechnique Fédérale de Lausanne and Max Planck Institute for Computer Science
26 Apr 2007, 3:00 pm
Saarbrücken building E1 4, room 019
SWS Colloquium
We present new approaches to verification of liveness and safety properties of software. Proving liveness properties of programs is central to the process of ensuring that software systems can always react. Verification of liveness properties had been an open problem since the 1970s, due to the lack of modular termination arguments and adequate abstraction techniques. We highlight our experience in developing the theoretical foundations for the first software verification tool for termination that can handle large program fragments (of more than 20,000 lines of code) together with support for programming language features such as arbitrarily nested loops, pointers, function pointers, side effects, etc. We also describe our experience in applying the tool to device driver dispatch routines from the Windows operating system. In the second part of the talk, we will focus on the abstraction techniques that are at the heart of state-of-the-art verification tools for safety. We address their limitations, which severely restrict their practical applicability. We propose a new approach for finding an abstraction of a program that overcomes the inherent limitations of current abstraction refinement schemes.
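One well-known shape of modular termination argument in this line of work replaces a single ranking function with a cover of the loop's behavior by several well-founded relations. For a loop that on each iteration either decrements x (while x > 0) or decrements y (while y > 0, possibly resetting x), no single measure decreases every time, yet

\[
T^{+} \;\subseteq\; \{(s,s') \mid x' < x \wedge x > 0\} \;\cup\; \{(s,s') \mid y' < y \wedge y > 0\}
\]

holds for the transitive closure T^+ of the transition relation, and each member of the union is well-founded on its own. This suffices for termination and, crucially, can be checked piecewise, which is what makes the argument modular.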


Feedback-directed random test generation

Michael D. Ernst MIT
24 Apr 2007, 4:00 pm
Saarbrücken building E1 3 - Hörsaal Gebäude, room HS 003
SWS Colloquium
We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously-constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (those that violate one or more contracts) point to potential errors that should be corrected.

Our experimental results indicate that feedback-directed random test generation can outperform systematic and undirected random test generation, in terms of coverage and error detection. On four small but nontrivial data structures (used previously in the literature), our technique achieves higher or equal block and predicate coverage than model checking (with and without abstraction) and undirected random generation. On 14 large, widely-used libraries (comprising 780KLOC), feedback-directed random test generation finds many previously-unknown errors, not found by either model checking or undirected random generation.
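A heavily simplified rendition of the feedback loop (the classification names follow the abstract; everything else is invented for illustration):

```python
import random

def generate(methods, contracts, budget=500, seed=0):
    """Grow call sequences one random call at a time; execution feedback
    classifies each candidate as illegal, redundant, contract-violating,
    or useful for building longer inputs."""
    rng = random.Random(seed)
    pool, failures, seen = [()], [], set()
    for _ in range(budget):
        seq = rng.choice(pool) + (rng.choice(methods),)
        state = None
        try:
            for m in seq:                 # execute the whole sequence
                state = m(state)
        except Exception:
            continue                      # illegal: discard
        if state in seen:
            continue                      # redundant: discard
        seen.add(state)
        if all(c(state) for c in contracts):
            pool.append(seq)              # useful: extend it later
        else:
            failures.append(seq)          # a failing test worth reporting
    return pool, failures

# Toy subject: a stack whose contract caps its size at 3 elements.
def push(s): return (s or ()) + (1,)
def pop(s):
    if not s:
        raise IndexError("pop from empty stack")
    return s[:-1]

pool, failures = generate([push, pop], contracts=[lambda s: len(s) <= 3])
print(f"{len(pool)} useful sequences, {len(failures)} contract violations")
```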

Expanding and Exploiting the Expressive Power of Modules

Derek Dreyer Toyota Technological Institute at Chicago
19 Apr 2007, 3:00 pm
Saarbrücken building E1 4, room 019
SWS Colloquium
Modularity is widely viewed as a central concept in the design of robust software. Programming languages vary widely, however, in the kinds of modularity mechanisms they support, leading to the mistaken impression that there are fundamental tradeoffs between different paradigms of modular programming. The high-level goal of my research is to design languages that overcome these tradeoffs and combine the benefits of existing mechanisms.

In this talk, I will describe my work on using the ML module system as a basis for developing next-generation module languages. ML provides an excellent starting point because of its powerful support for both data abstraction (implementor-side modularity) and generic programming (client-side modularity). Nevertheless, there are ways in which ML's module language is unnecessarily inflexible and in which the expressive power of ML modules has not been exploited to its full potential.

In the first part of the talk, I will compare the support for generic programming offered by ML modules with that offered by Haskell's "type classes". Modules emphasize explicit program configuration and namespace control, whereas type classes emphasize implicit program configuration and overloading. In fact, I show that it is possible to support both styles of programming within one language, by exploiting the expressive power of ML modules and encoding type classes directly in terms of existing ML module mechanisms.
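The explicit, module-style end of this spectrum can be mimicked in Python with dictionary passing (an illustration of the idea only, not ML or Haskell syntax): the "instance" is an ordinary value passed explicitly, where Haskell's type classes would supply it implicitly.

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def eq_list(eq: Callable[[T, T], bool]) -> Callable[[list[T], list[T]], bool]:
    """Functor-like: build list equality from element equality."""
    def eq_l(xs: list[T], ys: list[T]) -> bool:
        return len(xs) == len(ys) and all(eq(x, y) for x, y in zip(xs, ys))
    return eq_l

eq_int = lambda a, b: a == b
eq_int_list = eq_list(eq_int)           # explicit configuration, module style
print(eq_int_list([1, 2], [1, 2]))      # True; Haskell would infer the instance
```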

In the second part of the talk, I will discuss the problem of extending ML with one of the most frequently requested features---recursive modules. The lack of support for this feature is a key stumbling block in incorporating ML-style modules into object-oriented languages, in which recursive components are commonplace. The primary difficulty with recursive modules is something called the "double vision" problem, which concerns the interaction of recursion and data abstraction. I trace the problem back to a deficiency in the classical type-theoretic account of data abstraction (namely, existential types), and I show how a novel interpretation of data abstraction as a computational effect leads to an elegant solution.

Enabling what-if explorations in distributed systems

Eno Thereska Carnegie Mellon University
16 Apr 2007, 3:00 pm
Saarbrücken building E1 4, room 019
SWS Colloquium
With a large percentage of total system cost going to system administration tasks, self-management remains a difficult and important goal in systems. As a step towards the self-management vision, I will present a framework we have developed that enables systems to be self-predicting and answer "what-if" questions about their behavior with little or no administrator involvement. We have built a Resource Advisor inside two real systems: Microsoft's SQL Server database and the Ursa Minor storage system at Carnegie Mellon University. The Resource Advisor helps with upgrade and data placement decisions and provides what-if interfaces to external administrators (and internal tuning modules). The Resource Advisor is based on efficient system behavioral models that enable robust predictions in multi-tier systems.
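One plausible building block for such predictions (an illustration only; the Resource Advisor's actual models are behavioral and more detailed than this queueing formula) is to ask how response time moves when an upgrade raises a component's service rate:

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system for an M/M/1 queue (rates in requests/second)."""
    assert arrival_rate < service_rate, "system would be unstable"
    return 1.0 / (service_rate - arrival_rate)

before = mm1_response_time(arrival_rate=80, service_rate=100)   # 50 ms
after  = mm1_response_time(arrival_rate=80, service_rate=200)   # ~8.3 ms
print(f"what-if upgrade: {before*1000:.1f} ms -> {after*1000:.1f} ms")
```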

Bio:

Eno Thereska is a PhD student at Carnegie Mellon University working with Prof. Greg Ganger. Eno has broad research interests in computer systems. Currently he is investigating ways to make the management of distributed systems easier. An approach he is currently pursuing puts sufficient instrumentation and modeling within the system, enabling it to answer several important what-if questions without outside intervention. He is interested in applying methods from queuing analysis (for components built from scratch) and machine learning (for legacy components) to this problem. As a testbed he is using Ursa Minor, a cluster-based storage system being deployed at Carnegie Mellon for researching system management issues. Concrete what-if questions in this system are about the effect of resource upgrades, service migration, and data distribution. Eno received the Master of Science (MS) degree in Electrical and Computer Engineering in 2003 at Carnegie Mellon University and the Bachelor of Science (BS) degree in Electrical and Computer Engineering and Computer Science in 2002, also at CMU.

Scalable Byzantine Fault Tolerance

Rodrigo Rodrigues INESC-ID, Lisbon
04 Apr 2007, 3:00 pm
Saarbrücken building E1 5, room 019
SWS Colloquium
The research presented in this talk attempts to extend Byzantine fault tolerance protocols to provide better scalability. I will cover two axes of scalability in such systems.

First, I will focus on systems with a large number of nodes, where existing solutions are not well-suited, since they assume a static set of replicas, or provide limited forms of reconfiguration. In the context of a storage system we implemented, I will present a membership service that is part of the overall design, but can be reused by any large-scale Byzantine-fault-tolerant system. The membership service provides applications with a globally consistent view of the set of currently available servers. Its operation is mostly automatic, to avoid human configuration errors; it can be implemented by a subset of the storage servers; it tolerates arbitrary failure of the servers that implement it; and it can itself be reconfigured.

The second aspect of scalability of Byzantine-fault-tolerant systems that this talk discusses is scalability with the size of the replica group (and, consequently, the number of faults that the system tolerates). I present a new replication protocol called HQ that combines quorum-based techniques, which scale better when there is no contention, with traditional state-machine replication protocols such as Castro and Liskov's BFT, which incur quadratic inter-replica communication but are useful for resolving concurrency issues.
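The quorum arithmetic behind such protocols is standard: with n = 3f + 1 replicas and quorums of size 2f + 1, any two quorums overlap in

\[
|Q_1 \cap Q_2| \;\ge\; (2f+1) + (2f+1) - (3f+1) \;=\; f+1
\]

replicas, so with at most f faults at least one replica in the overlap is correct. This intersection property is what lets the quorum-based path commit operations without inter-replica agreement when there is no contention.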