The (Im)possibility of Fairness: Different Value Systems Require Different Mechanisms For Fair Decision Making
cacm.acm.org

Every automated system encodes a value judgment. Accepting training data as given implicitly assumes that structural bias does not appear in the data and that replicating the data as given would be just. Different value judgments can require satisfying contradictory fairness properties, each leading to different societal outcomes.

Our main claim in this work is that discussions about fairness algorithms and measures should make explicit the implicit assumptions about the world being modeled.

Our framework suggests ways in which the current discussion of fairness measures is misleading. First, group and individual notions of fairness reflect fundamentally different underlying goals and are not mechanisms toward the same outcome. Second, group notions of fairness differ based on their implicit axiomatic assumptions: mathematical incompatibilities should be viewed as a formal statement of this more philosophical difference.
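The mathematical incompatibility between group fairness notions can be made concrete with a small sketch. The example below is illustrative and not from the article: it uses a hypothetical toy dataset in which two groups have different base rates of the positive outcome, and shows that even a perfectly accurate classifier that equalizes true-positive rates (a component of equalized odds) must then violate demographic parity.

```python
# Illustrative sketch (hypothetical data, not from the article): when base
# rates differ across groups, equal true-positive rates and demographic
# parity cannot generally both hold.

def selection_rate(preds):
    """Fraction of individuals the classifier selects (predicts 1)."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of truly positive individuals the classifier selects."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives)

# Hypothetical groups: A has a higher base rate of y = 1 than B.
labels_a = [1, 1, 1, 0]   # base rate 0.75
labels_b = [1, 0, 0, 0]   # base rate 0.25

# A perfectly accurate classifier predicts the true label for everyone.
preds_a = list(labels_a)
preds_b = list(labels_b)

# True-positive rates are equal (both 1.0), yet selection rates differ
# by exactly the difference in base rates, violating demographic parity.
tpr_gap = abs(true_positive_rate(preds_a, labels_a)
              - true_positive_rate(preds_b, labels_b))
parity_gap = abs(selection_rate(preds_a) - selection_rate(preds_b))
print(tpr_gap)     # 0.0
print(parity_gap)  # 0.5
```

Under the article's framing, choosing which of these gaps to close is not a technical decision but a value judgment about whether the observed base rates themselves reflect structural bias.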
