Increasingly, discrimination by algorithms is perceived as a societal and legal problem. In response, a number of criteria for implementing algorithmic fairness in machine learning have been developed in the literature. The first part of the talk will cover how machines come to discriminate in practice, and survey research attempts to measure and mitigate discrimination in various ways.
The second part will present recent research of mine that proposes the Continuous Fairness Algorithm, which enables a continuous interpolation between different fairness definitions. More specifically, we make three main contributions to the existing literature. First, our approach allows the decision maker to continuously vary between specific concepts of individual and group fairness. As a consequence, the algorithm enables the decision maker to adopt intermediate "worldviews" on the degree of discrimination encoded in algorithmic processes, adding nuance to the extreme cases of "we're all equal" (WAE) and "what you see is what you get" (WYSIWYG) proposed so far in the literature. Second, we use optimal transport theory, and specifically the concept of the barycenter, to maximize decision-maker utility under the chosen fairness constraints. Third, the algorithm is able to handle cases of intersectionality, i.e., of multi-dimensional discrimination against certain groups on grounds of several criteria.
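To give a flavor of the interpolation idea, the following is a minimal, simplified sketch (not the actual Continuous Fairness Algorithm implementation) of how a one-dimensional Wasserstein barycenter of group score distributions can be computed, and how a parameter `theta` can move each group's scores between the raw distribution (`theta = 0`, a WYSIWYG-style worldview) and the common barycenter (`theta = 1`, a WAE-style worldview). The function name and the quantile-grid resolution `n_q` are illustrative choices, not part of the original work:

```python
import numpy as np

def fair_scores(group_scores, theta, n_q=101):
    """Interpolate each group's scores toward the 1-D Wasserstein barycenter.

    theta = 0.0 keeps the raw scores; theta = 1.0 maps every group onto the
    barycenter distribution, so all groups share one score distribution.
    """
    qs = np.linspace(0.0, 1.0, n_q)
    # In one dimension, the barycenter of equally weighted distributions is
    # the pointwise mean of their quantile functions.
    bary = np.mean([np.quantile(s, qs) for s in group_scores], axis=0)
    result = []
    for s in group_scores:
        s = np.asarray(s, dtype=float)
        # Empirical rank of each score within its own group, scaled to [0, 1].
        ranks = np.argsort(np.argsort(s)) / max(len(s) - 1, 1)
        # Value the barycenter distribution assigns at the same quantile.
        target = np.interp(ranks, qs, bary)
        result.append((1.0 - theta) * s + theta * target)
    return result
```

Intermediate values of `theta` then correspond to the intermediate worldviews discussed in the talk: each score is shifted only part of the way toward its barycenter counterpart, trading off within-group ordering against between-group parity.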
Meike Zehlike has been a Ph.D. researcher in the Social & Information Systems group at MPI-SWS Saarbrücken and at Humboldt-Universität zu Berlin, Germany, since 2016. Before that, she worked as a software developer and scrum master in Berlin. She is a 2017 grantee of the Data Transparency Lab Research grant and a 2019 Google WTM scholar. Her research interests center around artificial intelligence and its social impact, automatic discrimination discovery and algorithmic fairness, as well as the use of artificial intelligence in medical applications.
Recommended experience level: All
Registration not required