News 2019

Girls' Day 2019

April 2019
As in previous years, the MPIs for Informatics and Software Systems jointly participated in the annual Girls' Day on 28 March. We welcomed a record 27 school-aged girls to our institutes to give them some insight into computer science. Together we programmed smartphone apps, soldered blinking smiles and running bugs, and solved hands-on computer science puzzles.

Research Spotlight: A General Data Anonymity Measure

January 2019
A long-standing problem both within research and in society generally is that of how to analyze data about people without risking the privacy of those people. There is an ever-growing amount of data about people: medical, financial, social, government, geo-location, etc. This data is very valuable in terms of better understanding ourselves. Unfortunately, analyzing the data in its raw form carries the risk of exposing private information about people.

The problem of how to analyze data while protecting privacy has been around for more than 40 years---ever since the first data processing systems were developed. Most workable solutions are ad hoc: practitioners try things like removing personally identifying information (e.g. names and addresses), aggregating data, removing outlying data, and even swapping some data between individuals. This process can work reasonably well, but it is time-consuming, requires substantial expertise to get right, and invariably limits the accuracy of the analysis or the types of analysis that can be done.
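To make these steps concrete, here is a minimal Python sketch of the kind of ad hoc pipeline described above. The column names, thresholds, and steps are hypothetical illustrations; a real pipeline would need expert tuning for each dataset.

    # Minimal sketch of the ad hoc anonymization steps described above,
    # using pandas. Column names and thresholds are hypothetical.
    import pandas as pd

    def ad_hoc_anonymize(df: pd.DataFrame) -> pd.DataFrame:
        # 1. Remove personally identifying columns.
        df = df.drop(columns=["name", "address"])

        # 2. Aggregate: generalize exact ages into 10-year buckets.
        df["age"] = (df["age"] // 10) * 10

        # 3. Remove outliers: drop rows whose salary is far from the mean.
        salary_z = (df["salary"] - df["salary"].mean()) / df["salary"].std()
        df = df[salary_z.abs() < 3].copy()

        # 4. Swap: shuffle one column between individuals to break linkage.
        df["zip_code"] = df["zip_code"].sample(frac=1).to_numpy()

        return df

Even this toy version shows the trade-off: every step destroys some information, so each one must be weighed against the analyses the data is meant to support.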

A holy grail within computer science is to come up with an anonymization system that has formal guarantees of anonymity and still provides good analytics. Most effort in this direction has focused on two ideas: k-anonymity and differential privacy. Both can provide strong anonymity, but except in rare cases neither can do so while still providing adequate analytics. As a result, common practice is still to use informal ad hoc techniques with weak anonymization, and to mitigate risk by, for instance, sharing data only with trusted partners.
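As a concrete illustration of the tension between formal guarantees and analytics, here is a minimal sketch of the Laplace mechanism from differential privacy (not part of Diffix or the GDA Score): a count query is answered with noise calibrated to a privacy parameter epsilon, so stronger privacy means noisier, less useful answers.

    # Minimal sketch of differential privacy's Laplace mechanism.
    # A counting query has sensitivity 1, so adding Laplace(1/epsilon)
    # noise yields epsilon-differential privacy; smaller epsilon means
    # stronger privacy but noisier answers.
    import numpy as np

    def noisy_count(values, predicate, epsilon):
        true_count = sum(1 for v in values if predicate(v))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    ages = [23, 35, 41, 52, 67, 29, 44]
    print(noisy_count(ages, lambda a: a > 40, epsilon=1.0))   # modest noise
    print(noisy_count(ages, lambda a: a > 40, epsilon=0.01))  # mostly noise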

The European Union has raised the stakes with the General Data Protection Regulation (GDPR). The GDPR has strict rules on how personal data may be used, and threatens organizations that do not follow the rules with large fines. However, the GDPR says that if data is anonymous, then it is not considered personal and does not fall under the rules. Unfortunately, there are no precise guidelines on how to determine whether data is anonymous. Member states are expected to come up with certification programs for anonymity, but do not know how to do so.

This is where we come in. Paul Francis' group, in research partnership with the startup Aircloak, has been developing an anonymizing technology called Diffix over the last five years. Diffix is an empirical technology, not a formal one, so the question remains: how anonymous is Diffix? While it may not be possible to answer that question precisely, one way we approach it is through a bounty program: we pay attackers who can demonstrate how to compromise anonymity in our system. Last year we ran the first (and still only) bounty program for anonymity. The program was successful in that some attacks were found, and in the process of designing defensive measures, Diffix has improved.

In order to run the bounty program, we naturally needed a measure of anonymity so that we could decide how much to pay attackers. We designed a measure based on, among other things, how much confidence an attacker has in a prediction of an individual's data values. At some point, we realized that our measure applies not just to attacks on Diffix, but to any anonymization system. We also realized that our measure might be useful in the design of certification programs for anonymity.
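As a simplified illustration (not the exact GDA Score formula), a confidence-based measure can be expressed as how much an attack improves over a baseline prediction made from public statistics alone:

    # Illustrative sketch of a confidence-based anonymity measure:
    # compare the attacker's confidence in predicting an individual's
    # value against a baseline guess from public statistics alone.
    # This is a simplification, not the exact GDA Score formula.
    def confidence_improvement(attacker_confidence, baseline_confidence):
        # 0.0: the attack is no better than the baseline guess.
        # 1.0: the attacker predicts the value with certainty.
        return ((attacker_confidence - baseline_confidence)
                / (1.0 - baseline_confidence))

    # Example: a baseline guess is right 60% of the time, while the
    # attack's predictions are right 90% of the time.
    print(confidence_improvement(0.9, 0.6))  # 0.75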

We decided to develop a general score for anonymity, and to build tools that would allow anyone to apply the measure to any anonymization technology. The score is called the GDA Score, for General Data Anonymity Score.

The primary strength of the GDA Score is that it can be applied to any anonymization method, and therefore allows apples-to-apples comparisons. The primary weakness is that it is based on empirical attacks (real attacks against real systems), and therefore the score is only as good as the attacks themselves. If there are unknown attacks on a system, the score won't reflect them and may make the system look more anonymous than it really is.

Our hope is that over time enough attacks will be developed that we can have high confidence in the GDA Score. Towards that end, we've started the Open GDA Score Project. This is a community effort to provide software and databases in support of developing new attacks, and a repository where the scores can be viewed. We recently launched the project in the form of a website, www.gda-score.org, and some initial tools and simple attacks. We will continue to develop tools and new attacks, but our goal is to attract broad participation from the community.

For more information, visit www.gda-score.org.

Francis' group launches Open GDA Score Project

January 2019
MPI-SWS Director Paul Francis and his group have launched the Open GDA Score Project at www.gda-score.org. This is an open project to develop a set of tools and databases for generating anonymity scores for any data anonymization technique. The GDA Score, which stands for General Data Anonymity Score, is the first data anonymization measurement methodology that works with any anonymization technique. It is a generalization of the measurement technique developed by Francis' group for the Diffix bounty program run last year, the first bounty program for anonymity. The GDA Score is of particular interest in Europe, where member states are expected to produce certification programs for anonymity.

MPI-SWS article published in the Proceedings of the National Academy of Sciences (PNAS)

January 2019
The article "Enhancing Human Learning via spaced repetition optimization", coauthored by MPI-SWS and MPI-IS researchers, has been published in the Proceedings of the National Academy of Sciences (PNAS), a highly prestigious journal.

The (open-access) article can be found here: https://www.pnas.org/content/early/2019/01/18/1815156116.
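As a rough illustration of the idea behind spaced repetition optimization, here is a minimal sketch: reviews are scheduled at a rate proportional to the current probability of forgetting, sampled by standard Poisson thinning. The exponential forgetting model and the rate bound c are illustrative assumptions, not the paper's exact formulation.

    # Rough sketch: review an item at a rate proportional to the
    # probability of forgetting, sampled via standard Poisson thinning.
    # The exponential forgetting curve and the rate bound c are
    # illustrative assumptions, not the paper's exact formulation.
    import math
    import random

    def recall_probability(forgetting_rate, time_since_review):
        # Exponential forgetting: recall decays since the last review.
        return math.exp(-forgetting_rate * time_since_review)

    def next_review_time(forgetting_rate, c=1.0):
        # Thin a rate-c Poisson process: accept a candidate time t with
        # probability u(t)/c, where u(t) = c * (1 - recall probability),
        # so reviews cluster where the item is most likely forgotten.
        t = 0.0
        while True:
            t += random.expovariate(c)  # candidate event of bounding process
            u = c * (1.0 - recall_probability(forgetting_rate, t))
            if random.random() < u / c:
                return t

    print(next_review_time(forgetting_rate=0.1))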

Five MPI-SWS papers at POPL 2019!

January 2019
As in 2018, MPI-SWS researchers have authored a total of five papers at POPL 2019:


  • Bridging the Gap Between Programming Languages and Hardware Weak Memory Models by Anton Podkopaev, Ori Lahav, and Viktor Vafeiadis.

  • From Fine- to Coarse-Grained Dynamic Information Flow Control and Back by Marco Vassena, Alejandro Russo, Deepak Garg, Vineet Rajani, and Deian Stefan.

  • Formal verification of higher-order probabilistic programs by Tetsuya Sato, Alejandro Aguirre, Gilles Barthe, Marco Gaboardi, Deepak Garg, and Justin Hsu.

  • Grounding Thin-Air Reads with Event Structures by Soham Chakraborty and Viktor Vafeiadis.

  • On Library Correctness under Weak Memory Consistency by Azalea Raad, Marko Doko, Lovro Rožić, Ori Lahav, and Viktor Vafeiadis.


What's more, the MPI-SWS Software Analysis and Verification group has a whole session to itself at POPL 2019. The weak memory session on Thursday, Jan 17, consists of the three papers coauthored by Viktor Vafeiadis, his students, postdocs, and collaborators.