### Discrimination in Algorithmic Decision Making: From Principles to Measures and Mechanisms

Bilal Zafar

Max Planck Institute for Software Systems

SWS Student Defense Talks - Thesis Defense

04 Feb 2019, 6:00 pm - 7:00 pm

Saarbrücken building E1 5, room 029

simultaneous videocast to Kaiserslautern building G26, room 111

The rise of algorithmic decision making in a variety of applications has also raised concerns about its potential for discrimination against certain social groups. However, incorporating nondiscrimination goals into the design of algorithmic decision making systems (or, classifiers) has proven to be quite challenging. These challenges arise mainly due to the computational complexities involved in the process, and the inadequacy of existing measures to computationally capture discrimination in various situations. The goal of this thesis is to tackle these problems.

First, with the aim of incorporating existing measures of discrimination (namely, disparate treatment and disparate impact) into the design of well-known classifiers, we introduce a mechanism of decision boundary covariance that can be included in the formulation of any convex boundary-based classifier as a set of convex constraints.

Second, we propose alternative measures of discrimination. Our first proposed measure, disparate mistreatment, is useful in situations where unbiased ground-truth training data is available. The other two measures, preferred treatment and preferred impact, are useful when the feature and class distributions of different social groups differ significantly, and can additionally help reduce the cost of nondiscrimination compared to the existing measures. We also design mechanisms to incorporate these new measures into the design of convex boundary-based classifiers.
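The decision boundary covariance idea can be sketched in a few lines: constrain the empirical covariance between the sensitive attribute and the signed distance to the decision boundary while minimizing the usual classification loss. The sketch below is a minimal illustration on synthetic data with a plain logistic loss and `scipy`'s SLSQP solver; the data, threshold `c`, and solver choice are assumptions for demonstration, not the thesis's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: the first feature is correlated with a binary
# sensitive attribute z, so an unconstrained classifier would
# exhibit disparate impact with respect to z.
n = 500
z = rng.integers(0, 2, n).astype(float)        # sensitive attribute (group membership)
X = np.column_stack([rng.normal(z, 1.0),       # feature correlated with z
                     rng.normal(0.0, 1.0, n),  # independent feature
                     np.ones(n)])              # intercept term
y = (X[:, 0] + X[:, 1] + rng.normal(0.0, 0.5, n) > 0).astype(float)

def log_loss(theta):
    # Standard logistic loss with labels in {-1, +1}.
    margins = (2 * y - 1) * (X @ theta)
    return np.mean(np.log1p(np.exp(-margins)))

def boundary_covariance(theta):
    # Empirical covariance between z and the signed distance
    # theta^T x to the decision boundary; linear (hence convex) in theta.
    return np.mean((z - z.mean()) * (X @ theta))

c = 0.01  # covariance threshold (assumed); smaller = stricter nondiscrimination
constraints = [
    {"type": "ineq", "fun": lambda t: c - boundary_covariance(t)},   #  cov <= c
    {"type": "ineq", "fun": lambda t: c + boundary_covariance(t)},   # -cov <= c
]

res = minimize(log_loss, x0=np.zeros(3), method="SLSQP", constraints=constraints)
theta = res.x
print("boundary covariance:", boundary_covariance(theta))
```

Because both the loss and the covariance are convex in `theta`, the constrained problem remains a convex program, which is what lets the same constraint slot into any convex boundary-based classifier.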