Please use this identifier to cite or link to this item: https://research.matf.bg.ac.rs/handle/123456789/1537
Title: Fairness in Machine Learning: Why and How?
Authors: Nikolić, Mladen 
Petrović, Andrija
Affiliations: Informatics and Computer Science 
Keywords: Fairness; Machine learning; Ethical artificial intelligence
Issue Date: 2022
Rank: M33
Publisher: Kragujevac : University of Kragujevac
Related Publication(s): 1st Serbian International Conference on Applied Artificial Intelligence (SICAAI 2022).
Conference: Serbian International Conference on Applied Artificial Intelligence (SICAAI) (1 ; 2022 ; Kragujevac)
Abstract: 
Services based on machine learning are increasingly present in our everyday lives. While such applications promise to improve our lives, they also pose considerable risks if machine learning models do not perform as expected. One specific issue related to the quality of learnt models that has recently gained considerable visibility is their unfairness. Namely, it has been noted that the decisions of machine learning models sometimes reflect human biases against historically discriminated groups of people, thus unintentionally perpetuating the discrimination. In this paper we discuss why the fairness of machine learning models is important, revisiting some notable examples of discrimination committed by models, and we discuss different notions of fairness. We discuss how to measure the fairness of such models and how to achieve it, reflecting on both algorithmic and non-technical aspects of this effort. We present several fairness-ensuring methods representative of different fairness paradigms, one of them being our own.
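
As an illustration of the kind of fairness measurement the abstract refers to (not the authors' own method from the paper), the following is a minimal Python sketch of one widely used group-fairness metric, demographic parity difference; the function and array names (y_pred, sensitive) are assumptions introduced only for this example.

import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    # Absolute difference in positive-prediction rates between two groups.
    # y_pred    : array of 0/1 model decisions
    # sensitive : array of 0/1 group membership (e.g. a protected attribute)
    # A value near 0 indicates the two groups receive positive decisions
    # at similar rates under this particular notion of fairness.
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # positive rate in group 0
    rate_b = y_pred[sensitive == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Hypothetical toy data: model decisions and group labels
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # prints 0.5

Other notions of fairness mentioned in the literature (e.g. equalized odds) condition on the true label as well; this sketch covers only the simplest group-level rate comparison.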
URI: https://research.matf.bg.ac.rs/handle/123456789/1537
Appears in Collections:Research outputs

Files in This Item:
File: 71 Mladen Nikolic and Andrija Petrovic.pdf (277.91 kB, Adobe PDF)
