The rising adoption of machine-learning-driven decision support systems underscores the need to ensure fairness for all stakeholders. This work proposes a novel approach to increasing a neural network model's fairness during the training phase. We offer a framework for creating a family of diverse fairness-enhancing regularization components that can be used in tandem with the widely accepted binary cross-entropy accuracy loss. We use the Bias Parity Score (BPS), a metric that quantifies model bias with a single value, to build loss functions pertaining to different statistical measures, including measures that have yet to be developed. We analyze the behavior and impact of the new regularization components on bias, and explore their effects on recidivism prediction and census-based adult income prediction. The results show that suitable fairness loss functions can mitigate bias without sacrificing accuracy, even on imbalanced datasets.
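The combination described above, a standard binary cross-entropy term plus a parity-based fairness penalty, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names (`bias_parity_score`, `combined_loss`), the ratio form of the parity score, the penalty shape `1 - BPS`, the choice of per-group statistic, and the weight `lam` are all assumptions made for this sketch.

```python
import numpy as np

def bce_loss(y_true, y_prob, eps=1e-7):
    # Standard binary cross-entropy, averaged over samples.
    y_prob = np.clip(y_prob, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(y_prob)
                          + (1 - y_true) * np.log(1 - y_prob)))

def bias_parity_score(stat_a, stat_b, eps=1e-7):
    # Illustrative ratio-style parity score: 1.0 when a statistic is
    # identical across two groups, approaching 0 as they diverge.
    return min(stat_a, stat_b) / (max(stat_a, stat_b) + eps)

def combined_loss(y_true, y_prob, group, stat_fn, lam=1.0):
    # Accuracy term plus a fairness penalty of (1 - parity), computed
    # on any per-group statistic supplied via stat_fn; lam is a
    # hypothetical trade-off weight.
    stat_a = stat_fn(y_true[group == 0], y_prob[group == 0])
    stat_b = stat_fn(y_true[group == 1], y_prob[group == 1])
    fairness_penalty = 1.0 - bias_parity_score(stat_a, stat_b)
    return bce_loss(y_true, y_prob) + lam * fairness_penalty
```

Because `stat_fn` is a free parameter, swapping in a different per-group statistic (for example, the mean predicted positive rate, or a false-positive-rate proxy) yields a different member of the family of fairness losses without changing the training loop.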