- Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning
- Learning to Interact with Learning Agents
- Information Gathering with Peers: Submodular Optimization with Peer-Prediction Constraints
- Learning User Preferences to Incentivize Exploration in the Sharing Economy
- Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
- On the Causal Effect of Badges
Unfortunately, some recent investigations have shown that machine learning algorithms can also lead to unfair outcomes. For example, a recent ProPublica study found that COMPAS, a tool used in US courtrooms to assist judges with crime risk prediction, was unfair towards black defendants. In fact, several studies from governments, regulatory authorities, researchers, and civil rights groups have raised concerns about machine learning potentially acting as a tool for perpetuating existing unfair practices in society and, worse, introducing new kinds of unfairness into prediction tasks. As a consequence, a flurry of recent research has focused on defining and implementing appropriate computational notions of fairness for machine learning algorithms.
Existing computational notions of fairness in the machine learning literature are largely inspired by the concept of discrimination in the social sciences and law. These notions require decision outcomes to ensure parity (i.e., equality) in treatment and in impact.
Notions based on parity in treatment require that the decision algorithm not take into account a user's sensitive feature information (e.g., gender, race). Notions based on parity in impact require that the decision algorithm give beneficial decision outcomes (e.g., granting a loan) to similar percentages of people from all sensitive feature groups (e.g., men, women).
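To make these parity notions concrete, the following is a minimal sketch in Python, using a hypothetical toy dataset and an assumed tolerance, of how parity in impact could be checked from a classifier's decisions; parity in treatment constrains the classifier's inputs rather than its outcomes, as noted in the comments.

```python
import numpy as np

# Hypothetical data: 1 = beneficial outcome (e.g., loan granted), 0 = not.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["men", "women", "men", "women",
                   "men", "women", "men", "women"])

# Beneficial-outcome rate per sensitive feature group.
rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
print(rates)  # here: {'men': 0.5, 'women': 0.5}

# Parity in impact holds (up to an assumed tolerance) if all groups'
# beneficial-outcome rates match.
parity_in_impact = max(rates.values()) - min(rates.values()) <= 0.05
print("parity in impact:", parity_in_impact)

# Parity in treatment, by contrast, is a property of the inputs:
# the sensitive feature (here, `groups`) must simply be excluded from
# the features the classifier is trained and evaluated on.
```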
However, in many cases these existing notions are too stringent and can lead to unexpected side effects. For example, ensuring parity has been shown to cause significant reductions in prediction accuracy. Parity may also lead to scenarios where none of the groups involved in decision making (e.g., neither men nor women) receives substantial benefits. In other words, these scenarios may be preferred neither by the decision maker using the algorithm (due to diminished accuracy) nor by the groups involved (due to the small benefits they receive).
User preferences and fairness
In recent work, to appear at NIPS 2017, researchers at MPI-SWS have introduced two new computational notions of algorithmic fairness: preferred treatment and preferred impact. These notions are inspired by ideas related to envy-freeness and the bargaining problem in economics and game theory. Preferred treatment and preferred impact leverage these ideas to build more accurate solutions that are preferable for both the decision maker and the user groups.
The new notion of preferred treatment allows decisions to be based on sensitive feature information (thereby relaxing the parity-treatment criterion) as long as the decision outcomes do not lead to envy. That is, each group of users prefers its own group membership over the others and does not feel that presenting itself to the algorithm as another group would have led to better outcomes for the group.
The new notion of preferred impact allows differences in beneficial outcome rates across groups (thereby relaxing the parity-impact criterion) as long as every group receives more beneficial outcomes than it would have under the parity-impact criterion.
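As a rough illustration of the two preference-based notions, here is a minimal sketch that checks them for given group-conditional classifiers. The setup is assumed for illustration: classifiers are functions from a feature matrix to 0/1 decisions, and a group's benefit is its beneficial-outcome rate; the names (`group_benefit`, `preferred_treatment`, `preferred_impact`) and the toy threshold classifiers are hypothetical. The NIPS 2017 paper itself formulates these notions as constraints within the learning problem rather than as post-hoc checks.

```python
import numpy as np

def group_benefit(classifier, X, groups, group):
    """Beneficial-outcome rate that members of `group` receive
    when judged by `classifier` (decisions are 0/1, 1 = beneficial)."""
    return classifier(X[groups == group]).mean()

def preferred_treatment(classifiers, X, groups):
    """Envy-freeness: no group would obtain a higher benefit by being
    judged with the classifier of another group."""
    names = np.unique(groups)
    return all(
        group_benefit(classifiers[g], X, groups, g)
        >= group_benefit(classifiers[h], X, groups, g)
        for g in names for h in names
    )

def preferred_impact(classifiers, parity_classifier, X, groups):
    """Every group receives at least the benefit it would get
    under a classifier satisfying parity in impact."""
    return all(
        group_benefit(classifiers[g], X, groups, g)
        >= group_benefit(parity_classifier, X, groups, g)
        for g in np.unique(groups)
    )

# Toy usage: hypothetical threshold classifiers on a single score feature.
X = np.array([[0.2], [0.8], [0.5], [0.9], [0.1], [0.7]])
groups = np.array(["a", "b", "a", "b", "a", "b"])
classifiers = {
    "a": lambda X: (X[:, 0] > 0.3).astype(int),
    "b": lambda X: (X[:, 0] > 0.6).astype(int),
}
parity_classifier = lambda X: (X[:, 0] > 0.5).astype(int)
print(preferred_treatment(classifiers, X, groups))                  # True
print(preferred_impact(classifiers, parity_classifier, X, groups))  # True
```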
In their work, MPI-SWS researchers have developed a technique to ensure that machine learning algorithms satisfy preferred treatment and/or preferred impact. They tested this technique by designing crime-predicting machine learning algorithms that satisfy the above notions. Their experiments show that preference-based fairness notions can provide significant gains in overall decision-making accuracy compared to parity-based fairness, while simultaneously increasing the beneficial outcomes for the groups involved.
This work is one of the most recent additions to an expanding set of techniques developed by MPI-SWS researchers to enable fairness, accountability and interpretability of machine learning algorithms.
Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna Gummadi and Adrian Weller. From Parity to Preference: Learning with Cost-effective Notions of Fairness. Neural Information Processing Systems (NIPS), Long Beach (CA, USA), December 2017
WSDM will take place in Los Angeles (CA, USA) in February 2018.
NIPS will take place in Long Beach (CA, USA) in December 2017.
The award citation reads as follows: "This is one of the first papers that examine multiple online social networks at scale. By introducing novel measurement techniques, the paper has had an enduring influence on the analysis, modeling and design of modern social media and social networking services."
Adish Singla is joining us from ETH Zurich, where he completed his Ph.D. in computer science. His research focuses on designing new machine learning frameworks and developing algorithmic techniques, particularly for situations where people are an integral part of computational systems. Adish joins the institute as a tenure-track faculty member, effective October 1, 2017.
Before starting his Ph.D., he worked as a Senior Development Lead in Bing Search for over three years. Adish received his Bachelor's degree from IIT Delhi and his Master's degree from EPFL. He is a recipient of the Facebook Fellowship in the area of Machine Learning, the Microsoft Research Tech Transfer Award, and the Microsoft Gold Star Award.
The 26th International World Wide Web Conference (WWW) took place in Perth (Australia) in April 2017.
- Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
- Modeling the Dynamics of Online Learning Activity
- Distilling Information Reliability and Source Trustworthiness from Digital Traces
- Optimizing the Recency-Relevancy Trade-off in Online News Recommendations
- Predicting the Success of Online Petitions Leveraging Multi-dimensional Time-Series
The 26th International World Wide Web Conference (WWW) will take place in Perth (Australia) in April 2017.
- RedQueen: An Online Algorithm for Smart Broadcasting in Social Networks
- Uncovering the Dynamics of Crowdlearning and the Value of Knowledge
Allen Clement obtained his Ph.D. at the University of Texas at Austin in 2011. Allen's research aims at designing and building systems that continue to work despite the myriad of things that go 'wrong' in deployed systems, including broken components, malicious adversaries, and benign race conditions. His research builds on techniques from distributed systems, security, fault tolerance, and game theory.
Cristian Danescu-Niculescu-Mizil is joining us from Cornell University, where he obtained his Ph.D. in computer science. Cristian's research aims at developing computational frameworks that can lead to a better understanding of human social behavior by unlocking the unprecedented potential of the large amounts of natural language data generated online. His work tackles problems related to conversational behavior, opinion mining, computational semantics, and computational advertising.
A recent WWW 2012 paper by Krishna Gummadi, Bimal Viswanath, and their coauthors was covered by GigaOM, a popular technology news blog, in an article titled "Who's to blame for Twitter spam? Obama, Gaga, and you."
Stevens Le Blond's work on security flaws in Skype and other peer-to-peer applications has been receiving global media attention: WSJ, Le Monde (French), Die Zeit (German), Daily Mail, New Scientist, Slashdot, Wired, and the New Scientist "One Percent" blog.
The study, which will be presented at the ACM Internet Measurement Conference (IMC) in November, looks at the targeting behavior of Google and Facebook. While the goal of the study was to understand targeting in general, the researchers discovered that gay Facebook users can unknowingly reveal to advertisers that they are gay simply by clicking on an ad targeted at gay men. The ads appear innocuous in that they make no mention of targeting gay users (for instance, an ad for a nursing degree). A user's sexual orientation can be leaked even if the user has made that information private using Facebook's privacy settings.
This study was done as part of a broader research project to design techniques for making advertising more private.
Alan Mislove, Bimal Viswanath, Krishna P. Gummadi, and Peter Druschel's work on inferring user profiles in online social networks has received media coverage from Slashdot.