The advent of powerful prediction algorithms has led to increased automation of high-stakes decisions regarding the allocation of scarce resources. This automation bears the risk of unwanted discrimination against vulnerable and historically disadvantaged groups. We argue that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice. We show that metrics implementing equality of opportunity only apply when resource allocation is based on deservingness, but fail when allocations should reflect concerns about egalitarianism, sufficiency, and priority.

Authors: Matthias Kuppler, Christoph Kern, Ruben L. Bach, Frauke Kreuter

Links: PDF - Abstract

Code:
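No repository is linked here. As an illustration of the kind of fairness metric the abstract discusses, the sketch below checks equality of opportunity as true-positive-rate parity across two groups. The function name and toy data are hypothetical and do not come from the paper.

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rates between two groups.

    Equality of opportunity asks that individuals who deserve the
    resource (y_true == 1) receive it (y_pred == 1) at equal rates
    regardless of group membership.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in np.unique(group):
        deserving = (group == g) & (y_true == 1)
        tprs.append(y_pred[deserving].mean())
    return abs(tprs[0] - tprs[1])

# Toy data: group 0 has a TPR of 2/3, group 1 a TPR of 1/2.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(equal_opportunity_gap(y_true, y_pred, group))  # ≈ 0.167
```

Note that this metric only captures deservingness-based allocation; as the abstract argues, it says nothing about egalitarian, sufficiency, or priority concerns.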

Keywords: prediction - decision - distributive - metrics
