Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/36899
Full metadata record
DC Field: Value
dc.contributor.author: KOUTSOVITI-KOUMERI, Lisa
dc.contributor.author: NAPOLES RUIZ, Gonzalo
dc.date.accessioned: 2022-03-11T14:00:22Z
dc.date.available: 2022-03-11T14:00:22Z
dc.date.issued: 2021
dc.date.submitted: 2022-02-28T09:16:09Z
dc.identifier.citation: Tavares, João Manuel R. S.; Papa, João Paulo; Hidalgo, Manuel González (Ed.). Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 25th Iberoamerican Congress, CIARP 2021, Porto, Portugal, May 10–13, 2021, Revised Selected Papers, Springer, p. 351-360
dc.identifier.isbn: 9783030934200
dc.identifier.isbn: 9783030934194
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.uri: http://hdl.handle.net/1942/36899
dc.description.abstract: The need to measure and mitigate bias in machine learning data sets has gained wide recognition in the field of Artificial Intelligence (AI) during the past decade. The academic and business communities call for new general-purpose measures to quantify bias. In this paper, we propose a new measure that relies on fuzzy-rough set theory. The intuition behind our measure is that protected features should not change the fuzzy-rough set boundary regions significantly. The extent to which this happens can be understood as a proxy for bias quantification. Our measure can be categorized as an individual fairness measure since the fuzzy-rough regions are computed using instance-based information. The main advantage of our measure is that it does not depend on any prediction model but only on a distance function. At the same time, our measure offers an intuitive rationale for the bias concept. The results of a proof-of-concept show that our measure can capture bias issues better than other state-of-the-art measures.
dc.language.iso: en
dc.publisher: Springer
dc.subject.other: Fuzzy-rough sets
dc.subject.other: Fairness-aware AI
dc.subject.other: Bias
dc.title: Bias Quantification for Protected Features in Pattern Classification Problems
dc.type: Proceedings Paper
local.bibliographicCitation.authors: Tavares, João Manuel R. S.
local.bibliographicCitation.authors: Papa, João Paulo
local.bibliographicCitation.authors: Hidalgo, Manuel González
local.bibliographicCitation.conferencedate: May 10–13, 2021
local.bibliographicCitation.conferencename: 25th Iberoamerican Congress, CIARP 2021
local.bibliographicCitation.conferenceplace: Porto, Portugal
dc.identifier.epage: 360
dc.identifier.spage: 351
local.bibliographicCitation.jcat: C1
local.publisher.place: Switzerland
local.type.refereed: Refereed
local.type.specified: Proceedings Paper
local.relation.ispartofseriesnr: 12702
dc.identifier.doi: 10.1007/978-3-030-93420-0_33
dc.identifier.eissn: 1611-3349
local.provider.type: CrossRef
local.bibliographicCitation.btitle: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 25th Iberoamerican Congress, CIARP 2021, Porto, Portugal, May 10–13, 2021, Revised Selected Papers
local.uhasselt.international: yes
item.validation: vabb 2023
item.contributor: KOUTSOVITI-KOUMERI, Lisa
item.contributor: NAPOLES RUIZ, Gonzalo
item.accessRights: Restricted Access
item.fullcitation: KOUTSOVITI-KOUMERI, Lisa & NAPOLES RUIZ, Gonzalo (2021) Bias Quantification for Protected Features in Pattern Classification Problems. In: Tavares, João Manuel R. S.; Papa, João Paulo; Hidalgo, Manuel González (Ed.). Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: 25th Iberoamerican Congress, CIARP 2021, Porto, Portugal, May 10–13, 2021, Revised Selected Papers, Springer, p. 351-360.
item.fulltext: With Fulltext
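The abstract's core intuition — that removing a protected feature should not significantly change the fuzzy-rough boundary regions — can be sketched in code. The snippet below is a minimal, hypothetical illustration only, not the paper's actual formula: the normalized-Manhattan similarity, the Kleene-Dienes implicator, and the function names (`boundary_shift`, etc.) are all assumptions made for the sketch. Features are assumed to be scaled to [0, 1].

```python
import numpy as np

def similarity(X):
    """Pairwise fuzzy similarity: 1 - mean absolute feature difference.
    Assumes features are scaled to [0, 1] (an assumption of this sketch)."""
    diff = np.abs(X[:, None, :] - X[None, :, :]).mean(axis=2)
    return 1.0 - diff

def lower_approx(sim, y):
    """Fuzzy-rough lower approximation of each instance's own class,
    using the Kleene-Dienes implicator I(a, b) = max(1 - a, b)."""
    same = (y[:, None] == y[None, :]).astype(float)
    return np.min(np.maximum(1.0 - sim, same), axis=1)

def upper_approx(sim, y):
    """Fuzzy-rough upper approximation using the minimum t-norm."""
    same = (y[:, None] == y[None, :]).astype(float)
    return np.max(np.minimum(sim, same), axis=1)

def boundary_shift(X, y, protected_col):
    """Proxy for bias (hypothetical, not the paper's exact measure):
    mean change in per-instance boundary-region membership when the
    protected feature column is removed."""
    def boundary(Xs):
        s = similarity(Xs)
        return upper_approx(s, y) - lower_approx(s, y)
    full = boundary(X)
    reduced = boundary(np.delete(X, protected_col, axis=1))
    return float(np.mean(np.abs(full - reduced)))
```

A large `boundary_shift` would indicate that the protected feature strongly reshapes the fuzzy-rough regions — the situation the abstract flags as a bias signal. Note that this sketch needs no prediction model, only a distance (here, similarity) function, matching the advantage the abstract highlights.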
Appears in Collections:Research publications
Files in This Item:
File: Pages from 2021_Book_ProgressInPatternRecognitionIm.pdf
Description: Published version (Restricted Access)
Size: 668.36 kB
Format: Adobe PDF

Page view(s): 34 (checked on Sep 6, 2022)
Download(s): 4 (checked on Sep 6, 2022)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.