News 2018

Research Spotlight: From Newton to Turing to cyber-physical systems

February 2018
In 1937, a young Englishman by the name of Alan M. Turing published a paper with the obscure title "On computable numbers, with an application to the Entscheidungsproblem" in the Proceedings of the London Mathematical Society. In doing so, he arguably laid the mathematical foundations of modern computer science. Turing's seminal contribution was to show that the famous Entscheidungsproblem, formulated by the great German mathematician David Hilbert several years earlier, could not be solved: more precisely, Turing proved (in modern parlance) that the question of whether a given computer program halts cannot be answered algorithmically; in other words, the famous Halting Problem is undecidable.
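
Turing's argument can be conveyed in a few lines of modern code. Below is a minimal sketch of the diagonalization at its heart; the function halts is hypothetical (that is precisely the point), so the snippet is a thought experiment rather than a working decision procedure:

    # Hypothetical oracle: returns True iff program(arg) terminates.
    # Turing's theorem says no such total, always-correct function can exist.
    def halts(program, arg):
        raise NotImplementedError("undecidable in general")

    # Diagonalization: do the opposite of whatever the oracle predicts
    # about running a program on its own source.
    def paradox(program):
        if halts(program, program):
            while True:      # predicted to halt, so diverge
                pass
        return               # predicted to diverge, so halt

    # If halts were implementable, paradox(paradox) would halt exactly
    # when halts(paradox, paradox) returned False, i.e., exactly when
    # paradox(paradox) does not halt: a contradiction.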

Although at the time a seemingly rather esoteric concern, the Halting Problem (and related questions) has dramatically gained in importance and relevance in more contemporary times. Fast forward to the 21st century: nowadays, it is widely acknowledged that enabling engineers, programmers, and researchers to automatically verify and certify the correctness of the computer systems they design is one of the Grand Challenges of computer science. In increasingly many instances, it is absolutely critical that the software governing various aspects of our daily lives (such as that running on an aircraft controller) behave exactly as intended, lest catastrophic consequences ensue.


What classes of infinite-state programs can be analyzed algorithmically?


Researchers in the Foundations of Algorithmic Verification group are investigating what classes of infinite-state programs can, at least in principle, be fully handled and analyzed algorithmically. Viewing computer programs abstractly as dynamical systems, they seek to design exact algorithms for fully analyzing the behavior of such systems. In particular, they are presently tackling a range of central algorithmic problems from verification, synthesis, performance, and control for linear dynamical systems, drawing on tools from number theory, Diophantine geometry, and algebraic geometry, among others. The overarching goal is to offer a systematic exact computational treatment of various important classes of dynamical systems and other fundamental models used in mathematics, computer science, and the quantitative sciences. Their achievements include several decidability and hardness results for linear recurrence sequences, which can be used to model simple loops in computer programs, answering a number of longstanding open questions in the mathematics and computer science literature.
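
To illustrate the connection with a toy example of our own (not drawn from the group's papers): in the following loop the update is linear, so the value of x after n iterations satisfies a linear recurrence, and deciding whether the loop terminates amounts to deciding whether that sequence ever becomes negative, an instance of the so-called Positivity Problem.

    # Toy "simple loop":  while x >= 0: x, y = x + y, y - 2
    # Because the update is linear, x_n satisfies a linear recurrence.

    def trace_x(x=1, y=0, steps=10):
        """Values taken by x in successive iterations (guard ignored)."""
        values = []
        for _ in range(steps):
            values.append(x)
            x, y = x + y, y - 2
        return values

    xs = trace_x()
    # x_n satisfies the order-3 linear recurrence
    #   x_{n+3} = 3*x_{n+2} - 3*x_{n+1} + x_n
    assert all(xs[n + 3] == 3 * xs[n + 2] - 3 * xs[n + 1] + xs[n]
               for n in range(len(xs) - 3))

    # Termination of the loop = "does x_n ever go negative?"
    print(xs)  # [1, 1, -1, -5, ...] -> the guard fails at n = 2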

In a series of recent papers [1, 2], they have attacked the so-called Zero Problem for linear differential equations: the question of determining algorithmically whether the unique solution to a given linear differential equation has a zero. Such equations, which go back as far as Newton, are ubiquitous in mathematics, physics, and engineering; they are also particularly useful for modeling cyber-physical systems, i.e., digital systems that evolve in and interact with a continuous environment. In their work, the researchers obtained several important partial results. If one is interested in the existence of a zero over a bounded time interval, then the question can be decided algorithmically, provided that a certain hypothesis from number theory, known as Schanuel's Conjecture, is true. They were also able to partially account for the fact that the Zero Problem has hitherto remained open in full generality: if one were able to solve it in dimension 9 or higher, then one could in turn solve various longstanding hard open problems from a field of mathematics known as Diophantine approximation. In doing so, they exhibited surprising and unexpected connections between the modeling and analysis of cyber-physical systems and seemingly completely unrelated deep mathematical theories dealing with questions about whole numbers.
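
As a toy illustration (ours, using the sympy library; the algorithms in [1, 2] work quite differently, and it is their correctness over bounded intervals that rests on Schanuel's Conjecture), one can ask whether the solution of a concrete linear differential equation vanishes on a bounded interval:

    import sympy as sp

    t = sp.symbols('t', real=True)
    f = sp.Function('f')

    # A linear differential equation with rational initial conditions:
    #   f''(t) + f(t) = 0,  f(0) = 1,  f'(0) = 0   (unique solution: cos t)
    sol = sp.dsolve(sp.Eq(f(t).diff(t, 2) + f(t), 0), f(t),
                    ics={f(0): 1, f(t).diff(t).subs(t, 0): 0})
    g = sol.rhs  # cos(t)

    # Does the solution have a zero in the bounded interval [0, 2]?
    zeros = sp.solveset(sp.Eq(g, 0), t, domain=sp.Interval(0, 2))
    print(zeros)  # {pi/2}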

References


[1] Ventsislav Chonev, Joel Ouaknine, and James Worrell. On recurrent reachability for continuous linear dynamical systems. In Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 2016.

[2] Ventsislav Chonev, Joel Ouaknine, and James Worrell. On the Skolem Problem for continuous linear dynamical systems. In Proceedings of the 43rd International Colloquium on Automata, Languages, and Programming (ICALP), 2016.

Björn Brandenburg receives SIGBED Early Career Award

February 2018
MPI-SWS faculty member Björn Brandenburg has received the first-ever SIGBED Early Career Researcher Award. The award is given by ACM SIGBED to recognize outstanding contributions by young investigators in the area of embedded, real-time, and cyber-physical systems.

A week-long school for outstanding undergrad/MS students curious about research in computing. Apply now!

January 2018
Outstanding undergraduate and Master's students are invited to learn about world-class research in security and privacy, social systems, distributed systems, machine learning, programming languages, and verification. Leading researchers will engage with attendees in their areas of expertise: the curriculum will include lectures, projects, and interaction with faculty from participating institutions.

Attendees will be exposed to state-of-the-art research in computer science, have the opportunity to interact one-on-one with internationally leading scientists from three of the foremost academic institutions in research and higher learning in the US and in Europe, and network with like-minded students. They will get a sense of what it is like to pursue an academic or industrial research career in computer science and have a head start when applying for graduate school.

Applications are due by February 7, 2018. Travel and accommodation will be covered for accepted students.

More info can be found on the CMMRS website.

POPLpalooza: Five MPI-SWS papers at POPL 2018 + a new record!

January 2018
In 2018, MPI-SWS researchers authored a total of five POPL papers:
  • Parametricity versus the Universal Type. Dominique Devriese, Marco Patrignani, Frank Piessens.
  • Effective Stateless Model Checking for C/C++ Concurrency. Michalis Kokologiannakis, Ori Lahav, Kostis Sagonas, Viktor Vafeiadis.
  • Monadic refinements for relational cost analysis. Ivan Radicek, Gilles Barthe, Marco Gaboardi, Deepak Garg, Florian Zuleger.
  • Why is Random Testing Effective for Partition Tolerance Bugs? Rupak Majumdar, Filip Niksic.
  • RustBelt: Securing the Foundations of the Rust Programming Language. Ralf Jung, Jacques-Henri Jourdan, Robbert Krebbers, Derek Dreyer.

Furthermore, with the "RustBelt" paper, MPI-SWS faculty member Derek Dreyer cements a 10-year streak of having at least one POPL paper each year, breaking the all-time record of 9 years previously held by John Mitchell at Stanford. Congratulations Derek!

Research Spotlight: Teaching machine learning algorithms to be fair

January 2018
Machine learning algorithms are increasingly being used to automate decision making in several domains such as hiring, lending, and crime-risk prediction. These algorithms have shown significant promise in leveraging large or “big” training datasets to achieve high prediction accuracy, sometimes surpassing even human accuracy.

Unfortunately, some recent investigations have shown that machine learning algorithms can also lead to unfair outcomes. For example, a recent ProPublica study found that COMPAS, a tool used in US courtrooms for assisting judges with crime-risk prediction, was unfair towards black defendants. In fact, several studies from governments, regulatory authorities, researchers, and civil rights groups have raised concerns about machine learning potentially acting as a tool for perpetuating existing unfair practices in society and, worse, introducing new kinds of unfairness in prediction tasks. As a consequence, a flurry of recent research has focused on defining and implementing appropriate computational notions of fairness for machine learning algorithms.



Parity-based fairness


Existing computational notions of fairness in the machine learning literature are largely inspired by the concept of discrimination in social sciences and law. These notions require decision outcomes to ensure parity (i.e., equality) in treatment and in impact.

Notions based on parity in treatment require that the decision algorithm should not take into account the sensitive feature information (e.g., gender, race) of a user. Notions based on parity in impact require that the decision algorithm should give beneficial decision outcomes (e.g., granting a loan) to similar percentages of people from all sensitive feature groups (e.g., men, women).
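
In the standard formalization (our notation, for a binary classifier with sensitive feature z and beneficial outcome ŷ = 1), the two parity notions read:

    % Parity in treatment: the sensitive feature is not an input to the
    % classifier, i.e., \hat{y} = f(x) rather than \hat{y} = f(x, z).
    %
    % Parity in impact (also known as demographic or statistical parity):
    % equal beneficial-outcome rates across all sensitive groups,
    \[
      P(\hat{y} = 1 \mid z = a) \;=\; P(\hat{y} = 1 \mid z = b)
      \qquad \text{for all groups } a, b.
    \]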

However, in many cases these existing notions are too stringent and can have unexpected side effects. For example, ensuring parity has been shown to lead to significant reductions in prediction accuracy. Parity may also lead to scenarios in which none of the groups involved in decision making (e.g., neither men nor women) receives beneficial outcomes. In other words, such scenarios might be preferred neither by the decision maker using the algorithm (due to diminished accuracy) nor by the groups involved (due to very limited benefits).

User preferences and fairness


In recent work, presented at NIPS 2017, researchers at MPI-SWS have introduced two new computational notions of algorithmic fairness: preferred treatment and preferred impact. These notions are inspired by the concepts of envy-freeness and the bargaining problem from economics and game theory, and they leverage these ideas to build more accurate solutions that are preferable for both the decision maker and the user groups.

The new notion of preferred treatment allows decisions to be based on sensitive feature information (thereby relaxing the parity-treatment criterion) as long as the decision outcomes do not lead to envy: each group of users prefers its own group membership over the others, i.e., no group feels that presenting itself to the algorithm as another group would have led to better outcomes for the group.

The new notion of preferred impact allows differences in beneficial outcome rates across groups (thereby relaxing the parity-impact criterion) as long as all groups receive more beneficial outcomes than they would under the parity-impact criterion.
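
Roughly, writing B_z(θ) for the fraction of group z that receives the beneficial outcome under classifier θ (notation adapted from the paper), the two conditions become:

    % Preferred treatment: with one classifier \theta_z per group, no group
    % would benefit from being treated as another group (envy-freeness):
    \[
      B_z(\theta_z) \;\ge\; B_z(\theta_{z'}) \qquad \text{for all groups } z, z'.
    \]
    % Preferred impact: the classifier \theta offers every group at least
    % as much benefit as the impact-parity classifier \theta_{\mathrm{par}}:
    \[
      B_z(\theta) \;\ge\; B_z(\theta_{\mathrm{par}}) \qquad \text{for all groups } z.
    \]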

In their work, MPI-SWS researchers have developed a technique for ensuring that machine learning algorithms satisfy preferred treatment and/or preferred impact. They tested the technique by designing crime-predicting machine-learning algorithms that satisfy the above-mentioned notions. Their experiments show that preference-based fairness notions can provide significant gains in overall decision-making accuracy compared to parity-based fairness, while simultaneously increasing the beneficial outcomes for the groups involved.
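
A minimal sketch of what checking the preferred-impact condition looks like on held-out predictions (the numbers below are made up for illustration; the actual technique trains classifiers subject to such constraints):

    import numpy as np

    def group_benefit(y_pred, groups, g):
        """Fraction of group g receiving the beneficial outcome (prediction 1)."""
        return y_pred[groups == g].mean()

    # Tiny hand-made example (hypothetical numbers).
    groups   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    y_parity = np.array([1, 0, 0, 0, 1, 0, 0, 0])  # 25% benefit rate per group
    y_pref   = np.array([1, 1, 0, 0, 1, 1, 1, 0])  # 50% and 75%

    # Preferred impact: every group receives at least as high a benefit
    # rate as it would under the impact-parity classifier.
    ok = all(group_benefit(y_pref, groups, g) >= group_benefit(y_parity, groups, g)
             for g in (0, 1))
    print(ok)  # True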

This work is one of the most recent additions to an expanding set of techniques developed by MPI-SWS researchers to enable fairness, accountability and interpretability of machine learning algorithms.

References


Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna Gummadi, and Adrian Weller. From Parity to Preference: Learning with Cost-effective Notions of Fairness. In Neural Information Processing Systems (NIPS), Long Beach, CA, USA, December 2017.