Developing Innovative Elixir for Graph Data Poisoning

Developing a defense against a new backdoor attack, one that will make training a federated graph learning (FedGL) framework safe from present and future dangers, has earned an Illinois Tech researcher a Best Paper Award at the Association for Computing Machinery Conference on Computer and Communications Security.
Binghui Wang, assistant professor of computer science, and his collaborators earned the Best Paper Award in the conference’s Artificial Intelligence Security and Privacy Track for “Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses.” The paper was submitted to the conference, which is organized by the ACM’s Special Interest Group on Security and accepts about 20 percent of submissions after a rigorous peer review.
“What excites me most about this project is how it masterfully bridges the gap between deep theoretical rigor and practical accessibility,” Wang says. “The provable defense mechanism is both elegant in its mathematical foundation and effective in real-world applications—while remaining comprehensible to the general public. It represents a rare and valuable achievement in AI security research.”
FedGL allows multiple users to train a shared algorithm with their graph data, while maintaining the privacy of that data. Problems could arise should a bad actor intentionally inject data to skew the results of the algorithm.
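In rough terms, that federated setup looks like the sketch below, in which each client trains on its own data and only model updates are shared with the server; the toy data, the stand-in local training step, and all function names are illustrative rather than drawn from the paper.

```python
# Minimal sketch of FedAvg-style federated training: each client holds private
# "graph" data and only sends back updated model weights (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def train_locally(global_weights, client_data):
    """Toy stand-in for local training: nudge weights toward a client-specific target."""
    target = np.mean(client_data, axis=0)  # pretend each row is a graph's feature summary
    return global_weights + 0.1 * (target - global_weights)

# Three clients, each with private data that never leaves the client.
clients = [rng.normal(loc=i, size=(5, 4)) for i in range(3)]
global_weights = np.zeros(4)

for _ in range(10):
    # Each client trains on its own data and reports only the updated weights.
    local_weights = [train_locally(global_weights, data) for data in clients]
    # The server averages the updates; the raw data stays with the clients.
    global_weights = np.mean(local_weights, axis=0)

print("final shared model weights:", global_weights)
```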
The research team developed a new attack, the optimized distributed graph backdoor attack (Opt-GDBA), which embeds a hidden trigger in the training graph data. The attack learns a customized trigger, automatically finding the best spots in a graph to hide the malicious pattern and adapting to different types of networks. The technique achieved a 90 percent success rate across different data sets.
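The basic poisoning step can be pictured with a simplified sketch like the one below, which hard-codes a small trigger subgraph and flips the graph’s label to the attacker’s target class. Opt-GDBA instead learns where the trigger goes and what it looks like, so the node choices and names here are purely hypothetical.

```python
# Illustrative sketch of injecting a backdoor trigger into one training graph:
# densely connect a few chosen nodes to form a trigger pattern and relabel the
# graph with the attacker's target class. (Not the paper's learned Opt-GDBA trigger.)
import numpy as np

def inject_trigger(adj, label, trigger_nodes, target_label):
    poisoned = adj.copy()
    # Densely connect the chosen nodes so the model can latch onto the pattern.
    for i in trigger_nodes:
        for j in trigger_nodes:
            if i != j:
                poisoned[i, j] = 1
    # The original label is discarded; backdoored graphs map to the attacker's class.
    return poisoned, target_label

rng = np.random.default_rng(1)
adj = (rng.random((8, 8)) < 0.2).astype(int)  # a small random training graph
adj = np.triu(adj, 1); adj = adj + adj.T      # symmetric, no self-loops

poisoned_adj, poisoned_label = inject_trigger(adj, label=0, trigger_nodes=[1, 3, 5], target_label=1)
print("edges added by the trigger:", int((poisoned_adj - adj).sum() // 2))
```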
“The Opt-GDBA is an optimized and learnable attack that considers all aspects of FedGL, including the graph data’s structure, the node features, and the unique clients’ information,” Wang says.
The team further developed a provable defense against this new backdoor attack, one that can also be applied to deter other attacks. It works by breaking all incoming graph data into smaller pieces. Each piece is run through a mini detector that determines whether the piece looks suspicious, and a mathematical proof guarantees that the system will catch the attack.
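The divide-and-vote idea behind such a defense can be sketched roughly as follows; the splitting rule, the toy per-piece classifier, and the simple vote-margin certificate are placeholders for illustration, not the paper’s exact construction.

```python
# Rough sketch of "break the graph into pieces and vote": each piece gets its
# own prediction, the majority label wins, and the gap between the top two vote
# counts bounds how many pieces an attacker must corrupt to change the outcome.
from collections import Counter

def split_into_pieces(nodes, num_pieces):
    # Deterministically assign each node to a piece (here: by node id modulo).
    pieces = [[] for _ in range(num_pieces)]
    for n in nodes:
        pieces[n % num_pieces].append(n)
    return pieces

def classify_piece(piece):
    # Hypothetical per-piece classifier: returns a class label for a subgraph.
    return 1 if sum(piece) % 2 == 0 else 0

def certified_predict(nodes, num_pieces=7):
    votes = Counter(classify_piece(p) for p in split_into_pieces(nodes, num_pieces) if p)
    (top_label, top), *rest = votes.most_common()
    runner_up = rest[0][1] if rest else 0
    # With at most `margin` corrupted pieces, the top label keeps a strictly larger vote count.
    margin = (top - runner_up - 1) // 2
    return top_label, margin

label, margin = certified_predict(list(range(20)))
print(f"predicted class {label}; prediction cannot flip if at most {margin} pieces are corrupted")
```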
The research team’s defense blocked every Opt-GDBA attack while still correctly handling more than 90 percent of legitimate data.
“The most significant challenge was developing a provable defense robust against both known attacks and future unknown threats capable of arbitrarily manipulating graph data,” Wang says. “Our team leveraged more than five years of pioneering work in provable defenses for AI models and systems by combining insights from robust statistics to develop attack-agnostic certification frameworks, graph theory to design topology-aware robustness bounds, and through collaborative research by partnering with domain experts in cybersecurity and AI.”
Wang was joined by Yuxin Yang, a Ph.D. student at Jilin University and Illinois Tech; Qiang Li, full professor of computer science at Jilin University; Jinyuan Jia, assistant professor of information sciences and technology at Pennsylvania State University; and Yuan Hong, associate professor of computing at the University of Connecticut and former assistant professor of computer science at Illinois Tech.