News 2019

Security & Privacy

Paper by MPI-SWS researchers wins both a 2019 Usenix Security Symposium Distinguished Paper Award and the Usenix/Facebook Internet Defense Prize

The paper "ERIM: Secure, Efficient, In-process Isolation with Memory Protection Keys (MPK)" received a Distinguished Paper Award at the 2019 Usenix Security Symposium. It was selected as one of 6 distinguished papers out of 113 papers that appeared in the conference proceedings.

The work was also selected as the recipient of the Usenix Internet Defense Prize, along with a USD 100k gift from Facebook to support  further development of the technology. ...
The paper "ERIM: Secure, Efficient, In-process Isolation with Memory Protection Keys (MPK)" received a Distinguished Paper Award at the 2019 Usenix Security Symposium. It was selected as one of 6 distinguished papers out of 113 papers that appeared in the conference proceedings.

The work was also selected as the recipient of the Usenix Internet Defense Prize, along with a USD 100k gift from Facebook to support further development of the technology.

The paper was authored by MPI-SWS doctoral students Anjo Vahldiek-Oberwagner, Eslam Elnikety, and Michael Sammler, along with MPI-SWS intern Nuno Duarte and MPI-SWS faculty members Deepak Garg and Peter Druschel.

Read more about ERIM here.

MPI-SWS researcher receives CSF 2019 Distinguished Paper Award

June 2019
MPI-SWS faculty member Deepak Garg, along with his external collaborators Carmine Abate, Roberto Blanco, Catalin Hritcu, Marco Patrignani and Jérémy Thibault, has been awarded a Distinguished Paper Award at the 2019 IEEE Computer Security Foundations Symposium (CSF 2019). Their paper is titled "Journey Beyond Full Abstraction: Exploring Robust Property Preservation for Secure Compilation".

Research Spotlight: A General Data Anonymity Measure

A long-standing problem both within research and in society generally is that of how to analyze data about people without risking the privacy of those people. There is an ever-growing amount of data about people: medical, financial, social, government, geo-location, etc. This data is very valuable in terms of better understanding ourselves. Unfortunately, analyzing the data in its raw form carries the risk of exposing private information about people.

The problem of how to analyze data while protecting privacy has been around for more than 40 years---ever since the first data processing systems were developed. Most workable solutions are ad hoc: practitioners try things like removing personally identifying information (e.g. names and addresses), aggregating data, removing outlying data, and even swapping some data between individuals. This process can work reasonably well, but it is time-consuming, requires substantial expertise to get right, and invariably limits the accuracy of the analysis or the types of analysis that can be done.
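As a rough illustration (not a recommendation, and not any particular practitioner's pipeline), a few of these ad hoc steps might look like the following Python sketch; the column names, bin sizes, and thresholds are all hypothetical:

    import pandas as pd

    def adhoc_anonymize(df: pd.DataFrame) -> pd.DataFrame:
        """Illustrative ad hoc anonymization of a hypothetical table."""
        out = df.drop(columns=["name", "address"])     # remove direct identifiers
        out["age"] = (out["age"] // 10) * 10           # aggregate ages into 10-year bins
        out["zip"] = out["zip"].astype(str).str[:3]    # coarsen ZIP codes to a 3-digit prefix
        lo, hi = out["salary"].quantile([0.01, 0.99])  # locate outlying salaries
        out = out[out["salary"].between(lo, hi)]       # drop the outliers
        # crude data swapping: shuffle one attribute across rows
        out["city"] = out["city"].sample(frac=1).to_numpy()
        return out

Each step trades analytic fidelity for privacy, which is exactly the limitation described above.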

A holy grail within computer science is an anonymization system that has formal guarantees of anonymity and still provides good analytics. Most effort in this direction has focused on two ideas, K-anonymity and Differential Privacy. Both can provide strong anonymity, but, except in rare cases, neither can do so while still providing adequate analytics. As a result, common practice is still to use informal ad hoc techniques with weak anonymization, and to mitigate risk by, for instance, sharing data only with trusted partners.
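To make the two formal notions concrete, here is a minimal sketch under hypothetical column names; neither function is taken from any system mentioned here:

    import numpy as np
    import pandas as pd

    def is_k_anonymous(df: pd.DataFrame, quasi_ids: list, k: int) -> bool:
        """K-anonymity: every combination of quasi-identifier values
        (e.g. age bin, ZIP prefix) must be shared by at least k rows,
        so that no individual stands out."""
        return bool(df.groupby(quasi_ids).size().min() >= k)

    def dp_count(true_count: int, epsilon: float) -> float:
        """Differential Privacy: a counting query has sensitivity 1, so
        adding Laplace noise with scale 1/epsilon suffices. A smaller
        epsilon gives stronger privacy but a noisier answer -- the
        privacy/analytics trade-off noted above."""
        return true_count + float(np.random.laplace(scale=1.0 / epsilon))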

The European Union has raised the stakes with the General Data Protection Regulation (GDPR). The GDPR has strict rules on how personal data may be used, and threatens large fines to organizations that do not follow the rules. However, the GDPR says that if data is anonymous, then it is not considered personal and does not fall under the rules. Unfortunately, there are no precise guidelines on how to determine if data is anonymous or not. Member states are expected to come up with certification programs for anonymity, but do not know how to do so.

This is where we come in. Paul Francis' group, in research partnership with the startup Aircloak, has been developing an anonymizing technology called Diffix over the last five years. Diffix is an empirical, not a formal, technology, so the question remains: how anonymous is Diffix? While it may not be possible to answer that question precisely, one way we approach it is through a bounty program: we pay attackers who can demonstrate how to compromise anonymity in our system. Last year we ran the first (and still only) bounty program for anonymity. The program was successful in that some attacks were found, and in the process of designing defensive measures, Diffix has improved.

In order to run the bounty program, we naturally needed a measure of anonymity so that we could decide how much to pay attackers. We designed a measure based, among other things, on how much confidence an attacker has in a prediction of an individual's data values. At some point, we realized that our measure applies not just to attacks on Diffix, but to any anonymization system. We also realized that our measure might be useful in the design of certification programs for anonymity.
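One ingredient of such a confidence-based measure can be sketched as follows. This is a simplified illustration under our own assumptions, not the full GDA Score definition:

    def confidence_improvement(attacker_confidence: float, baseline: float) -> float:
        """How much an attack improves on a baseline guess made from public
        statistics alone: 0 means the attack learned nothing, 1 means the
        attacker gained certainty entirely from the attack.
        (Simplified sketch; assumes baseline < 1.)"""
        return (attacker_confidence - baseline) / (1.0 - baseline)

    # Hypothetical example: 10% of users share the predicted value (baseline 0.1),
    # and the attack leaves the attacker 70% confident in the prediction.
    print(confidence_improvement(0.7, 0.1))  # ~0.67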

We decided to develop a general score for anonymity, and to build tools that would allow anyone to apply the measure to any anonymization technology. The score is called the GDA Score, for General Data Anonymity Score.

The primary strength of the GDA Score is that it can be applied to any anonymization method, and therefore allows apples-to-apples comparisons. The primary weakness is that it is based on empirical attacks (real attacks against real systems), and therefore the score is only as good as the attacks themselves. If there are unknown attacks on a system, the score won't reflect them and may therefore make a system look more anonymous than it is.

Our hope is that over time enough attacks will be developed that we can have high confidence in the GDA Score. Towards that end, we've started the Open GDA Score Project. This is a community effort to provide software and databases in support of developing new attacks, and a repository where the scores can be viewed. We recently launched the project in the form of a website, www.gda-score.org, and some initial tools and simple attacks. We will continue to develop tools and new attacks, but our goal is to attract broad participation from the community.

For more information, visit www.gda-score.org.

Francis' group launches Open GDA Score Project

We have launched the Open GDA Score Project at www.gda-score.org. This is an open project to develop a set of tools and databases to generate anonymity scores for any data anonymization technique. The GDA Score, which stands for General Data Anonymity Score, is the first data anonymization measurement methodology that works with any anonymization technique. The GDA Score is a generalization of the measurement technique developed by Francis' group for the Diffix bounty program run last year. This was the first bounty program for anonymity. The GDA Score is of particular interest in Europe, where member states are expected to produce certification programs for anonymity.

MPI-SWS researchers have a distinguished paper at POPL 2019

January 2019
Vineet Rajani and Deepak Garg, along with their co-authors Marco Vassena, Alejandro Russo and Deian Stefan, have won a Distinguished Paper Award at the 2019 ACM SIGPLAN Symposium on Principles of Programming Languages (POPL 2019) for their paper titled "From fine- to coarse-grained dynamic information flow control and back".