Tackling Machine Learning Vulnerabilities with NSF CAREER Award


By Casey Moffitt
Headshot of Assistant Professor of Computer Science Binghui Wang.

Binghui Wang, assistant professor of computer science at Illinois Institute of Technology, has earned the prestigious CAREER Award from the National Science Foundation, which will fund his research to make machine learning models more trustworthy.

Wang has dedicated much of his research to building trustworthy machine learning models, and he says this award validates his past work and the potential for future research.

“Earning the NSF CAREER Award signifies that my past work has had, and proposed research will have, the potential to advance the field of trustworthy machine learning,” he says. “It can be a catalyst for further career development and offers increased visibility in this field.”

Over the past decade, extensive work has shown that machine learning models are vulnerable to privacy and security attacks. Wang says the objective of this project is to develop new methods that make machine learning models, especially deep learning models built on deep neural networks, more secure against these attacks and therefore more trustworthy.

For example, email spam filters can be compromised by data poisoning attacks, in which attackers confuse a machine learning model by feeding it bogus training data. This allows adversaries to send malicious emails containing malware or other security threats without being flagged. Attackers can also query a model repeatedly and, by analyzing its outputs, reconstruct the data used to build it. In health care, a successful data reconstruction attack could expose patients' private medical details.
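To make the poisoning idea concrete, here is a minimal sketch of a label-flipping poisoning attack against a toy spam filter, written with scikit-learn. The emails, labels, and number of flipped examples are illustrative placeholders, and this is not the specific attack model studied in Wang's project; it only shows how mislabeled training data can change what a filter learns.

```python
# Illustrative label-flipping data poisoning attack on a toy spam filter.
# All data below is made up for demonstration purposes.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "win a free prize now", "cheap meds online", "claim your reward today",
    "meeting moved to 3pm", "please review the attached report", "lunch tomorrow?",
]
labels = np.array([1, 1, 1, 0, 0, 0])  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Clean model: trained on correctly labeled data.
clean_model = LogisticRegression().fit(X, labels)

# Poisoned model: the attacker relabels some spam examples as legitimate,
# so the filter learns to let similar messages through.
poisoned_labels = labels.copy()
poisoned_labels[:2] = 0  # two spam emails flipped to "not spam"
poisoned_model = LogisticRegression().fit(X, poisoned_labels)

# On this toy data, the poisoned model may now misclassify a spam-like message.
test = vectorizer.transform(["free prize, claim now"])
print("clean model flags as spam:   ", bool(clean_model.predict(test)[0]))
print("poisoned model flags as spam:", bool(poisoned_model.predict(test)[0]))
```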

“If the models are trained while under security attacks, the results produced by the models can be compromised,” Wang says. “On the other hand, privacy attacks mainly extract the private information used to train ML models, but do not compromise the results.”

Existing defense methods to mitigate these attacks face several limitations. They often aren’t effective in real-world applications with strict confidentiality requirements, or they degrade the performance of the models. Many defenses are aimed at specific attack types, making it hard to deal with multiple concurrent attacks. Wang’s goal is to address these limitations by designing a trustworthy learning framework based on information theory.

The work will be structured around three thrusts. Thrust One will design novel information-theoretic representation learning methods against common privacy attacks, including membership inference, property inference, and data reconstruction attacks. Thrust Two will design novel information-theoretic representation learning methods against common security attacks, including test-time evasion attacks, training-time poisoning attacks, and training- and test-time backdoor attacks. Thrust Three will generalize Thrust One and Thrust Two to handle diverse attack types, data types, and learning types.
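For readers curious what an information-theoretic training objective can look like, the sketch below shows one well-known idea from the broader literature, a variational information-bottleneck-style loss, written in PyTorch. It trains a representation to stay predictive of the label while penalizing how much information the representation retains about the raw input. The network sizes, the trade-off weight, and the random placeholder data are all assumptions made for illustration; this is not the framework proposed in the CAREER project.

```python
# Sketch of an information-theoretic representation learning objective
# (variational information bottleneck style). Illustrative only; not the
# specific method developed in this project.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IBEncoderClassifier(nn.Module):
    def __init__(self, in_dim=20, z_dim=8, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(64, z_dim)   # log-variance of q(z|x)
        self.classifier = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample a stochastic representation z.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.classifier(z), mu, logvar

def ib_loss(logits, y, mu, logvar, beta=1e-3):
    # Cross-entropy keeps z predictive of the label; the KL term is a
    # variational upper bound on the information z carries about the input,
    # discouraging the representation from memorizing training data.
    ce = F.cross_entropy(logits, y)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return ce + beta * kl

# Toy training loop on random placeholder data.
model = IBEncoderClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
for _ in range(100):
    logits, mu, logvar = model(x)
    loss = ib_loss(logits, y, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The weight `beta` controls the trade-off between accuracy and how compressed (and thus less memorizing) the representation is; similar trade-offs between utility and privacy or robustness guarantees are a central theme in this research area.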

“The research requires extensive knowledge of applied mathematics, computer science, programming, as well as intensive computational resources,” Wang says. “All the parts are equally important and could be a challenge.”

Machine learning models and techniques, especially deep learning built on deep neural networks, are being rapidly adopted across businesses and industries. These models have produced remarkable breakthroughs in research domains including computer vision, natural language processing, biology, and mathematics, and their potential to advance these fields further is great. As the use of machine learning continues to grow, the need to make these models more trustworthy becomes increasingly urgent.

“The ultimate goal of this research is to ensure that current machine learning methods and techniques, including those deployed by industry, as well as future ones, are ‘trustworthy,’ that is, robust against the worst-case security and privacy attacks,” Wang says. “The project will provide a completely new angle on studying trustworthy machine learning. In particular, it will link trustworthy machine learning with information theory and design a practical, accurate, flexible, and generalizable information-theoretic trustworthy representation learning framework with robustness and privacy guarantees.”

Disclaimer: Research reported in this publication is supported by the National Science Foundation under Award Number 2230757. This content is solely the responsibility of the authors and does not necessarily represent the official views of the National Science Foundation.