Breaking Biases

Machine learning models have demonstrated powerful predictive capabilities, but they have also shown bias against certain demographic groups. Canyu Chen joined a research group that is working to limit that bias and build public trust in machine learning models.

“Fairness and privacy are two important aspects in trustworthy artificial intelligence,” Canyu says. “When reading papers, I noticed that it is a critical but underexplored problem to study fair AI algorithms considering real-world privacy constraints.”

Machine learning models are increasingly used in healthcare to diagnose disease, develop treatment plans, and model the spread of viruses. Addressing bias against specific ethnic groups, genders, or age groups will help mitigate existing healthcare disparities among minority groups.

“The goal of this work is to explore how to make fair predictions under privacy constraints,” Canyu says. “More specifically, conventional bias mitigation algorithms are not applicable to real-world scenarios where privacy mechanisms such as local differential privacy are enforced. We aim to design novel techniques to largely improve the fairness of machine learning models considering the privacy constraints in the real world.”

The team studied a new and practical approach to fair classification in a semi-private setting, where most of the sensitive attributes are private and only a small number of clean ones are available. They developed a novel framework, FairSP, which can achieve fair prediction under this semi-private setting. FairSP learns to correct noise-protected sensitive attributes by exploiting the limited clean sensitive attributes. It then jointly models the corrected and clean data in an adversarial way for debiasing and prediction. Initial analysis shows that the proposed model can ensure fairness under mild assumptions in the semi-private setting, and experimental results on real-world datasets demonstrate its effectiveness at making fair predictions under privacy constraints while maintaining high accuracy.
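The sketch below illustrates this kind of semi-private fair classification pipeline in broad strokes; it is a minimal, hypothetical example, not the team's actual FairSP implementation. It protects a binary sensitive attribute with randomized response (a standard local differential privacy mechanism), trains a small corrector on the limited clean attributes, and then debiases a classifier adversarially. The synthetic data, network sizes, the trade-off weight lam, and the demographic parity measurement at the end are all assumptions made for illustration.

```python
# Hypothetical sketch (not the authors' code): correct LDP-noised sensitive
# attributes with a small clean subset, then debias a classifier adversarially.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
torch.manual_seed(0)

# Synthetic data: features X, label y, binary sensitive attribute s (assumed)
n = 2000
s = rng.integers(0, 2, n).astype(np.float32)
X = rng.normal(0, 1, (n, 5)).astype(np.float32) + 0.5 * s[:, None]
y = ((X[:, 0] + 0.8 * s + rng.normal(0, 1, n)) > 0.5).astype(np.float32)

# Local differential privacy via randomized response on s
eps = 1.0
p_keep = np.exp(eps) / (np.exp(eps) + 1.0)   # probability of reporting the true value
s_noisy = np.where(rng.random(n) < p_keep, s, 1.0 - s).astype(np.float32)

# Only a small clean subset of the true sensitive attribute is available
clean_idx = rng.choice(n, size=100, replace=False)

X_t = torch.from_numpy(X)
y_t = torch.from_numpy(y).unsqueeze(1)
s_noisy_t = torch.from_numpy(s_noisy).unsqueeze(1)
s_clean_t = torch.from_numpy(s[clean_idx]).unsqueeze(1)
bce = nn.BCEWithLogitsLoss()

# Step 1: learn to correct the noisy sensitive attribute from (X, s_noisy),
# supervised only on the small clean subset
corrector = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 1))
opt_c = torch.optim.Adam(corrector.parameters(), lr=1e-2)
inp_clean = torch.cat([X_t[clean_idx], s_noisy_t[clean_idx]], dim=1)
for _ in range(300):
    opt_c.zero_grad()
    bce(corrector(inp_clean), s_clean_t).backward()
    opt_c.step()
with torch.no_grad():
    s_corrected = (torch.sigmoid(corrector(torch.cat([X_t, s_noisy_t], dim=1))) > 0.5).float()

# Step 2: adversarial debiasing -- the predictor learns y while an adversary
# tries to recover the corrected sensitive attribute from its hidden layer
encoder = nn.Sequential(nn.Linear(5, 16), nn.ReLU())
head = nn.Linear(16, 1)
adversary = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
lam = 1.0  # fairness/accuracy trade-off weight (assumed hyperparameter)

for _ in range(300):
    # Adversary step: predict s_corrected from a frozen representation
    opt_a.zero_grad()
    bce(adversary(encoder(X_t).detach()), s_corrected).backward()
    opt_a.step()

    # Predictor step: fit y while making the adversary fail
    opt_p.zero_grad()
    h = encoder(X_t)
    (bce(head(h), y_t) - lam * bce(adversary(h), s_corrected)).backward()
    opt_p.step()

with torch.no_grad():
    y_hat = (torch.sigmoid(head(encoder(X_t))) > 0.5).float().squeeze(1).numpy()
# Demographic parity gap: difference in positive prediction rates across groups
gap = abs(y_hat[s == 1].mean() - y_hat[s == 0].mean())
print(f"accuracy={(y_hat == y).mean():.3f}  demographic parity gap={gap:.3f}")
```

In a sketch like this, increasing lam pushes the learned representation to carry less information about the (corrected) sensitive attribute, which typically shrinks the demographic parity gap at some cost in accuracy.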

“In the long run, my aspiration is to make AI accessible, responsible, and reliable to the general public,” Canyu says. “This work is the first step to my long-term goal and builds my confidence to continue solving the real-world challenges of trustworthy AI.”
