One of our areas of interest is bias and discrimination in recommender systems. As machine learning, data mining, and other artificial intelligence techniques become increasingly pervasive in our daily lives, the research community has begun to turn its attention to the question of whether these systems are fair. A growing body of work on algorithmic fairness seeks to assess whether particular systems are unfair or discriminatory and to develop ways of mitigating such problems.
In this project, we are investigating several questions of fairness and bias in recommender systems:
- What does it mean for a recommender to be fair, unfair, or biased?
- What potentially discriminatory biases are present in the recommender’s input data, algorithmic structure, or output?
- How do these biases change over time through the recommender-user feedback loop?
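The feedback loop in the last question can be illustrated with a toy popularity-bias simulation (our own sketch, not code from this project; all names and parameters are illustrative assumptions): items that get recommended accumulate more interactions, which makes them even more likely to be recommended in the future.

```python
import random

# Toy sketch of the recommender-user feedback loop (illustrative only):
# a popularity-based recommender surfaces the most-interacted-with items,
# users sometimes accept those recommendations, and the resulting
# interactions feed back into the popularity ranking.

random.seed(0)

NUM_ITEMS = 50
counts = [1] * NUM_ITEMS  # interaction counts; start out uniform


def recommend(counts, k=5):
    """Recommend the k most popular items (pure popularity ranking)."""
    return sorted(range(len(counts)), key=lambda i: counts[i], reverse=True)[:k]


def simulate(counts, rounds=1000, k=5, accept_prob=0.5):
    """Run the loop: recommend, let users accept with some probability,
    and feed accepted interactions back into the counts."""
    for _ in range(rounds):
        for item in recommend(counts, k):
            if random.random() < accept_prob:
                counts[item] += 1
    return counts


counts = simulate(counts)
top_share = sum(sorted(counts, reverse=True)[:5]) / sum(counts)
print(f"share of interactions held by the top-5 items: {top_share:.2f}")
```

Because only recommended items can gain interactions, the items that start on top stay on top, and after a thousand rounds a handful of items hold nearly all interactions. This is the kind of bias amplification the feedback-loop question aims to measure.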
This project is part of our ongoing effort to make recommenders (and other AI systems) work better for the people they affect.
For more information on this project, contact Michael Ekstrand.