Computer Science Professor to Expand Award-Winning Fake News Research

By Casey Moffitt

An Illinois Institute of Technology computer science professor is expanding his award-winning research on flagging the false or misleading news stories that litter social media channels.

Kai Shu, Gladwin Development Chair Assistant Professor of Computer Science, earned the 2020 Dean’s Dissertation Award from the Ira A. Fulton Schools of Engineering at Arizona State University for his research on fake news. He has developed a computational model that uses social-context-aware artificial intelligence and machine learning to identify and explain fake news in real-world Twitter datasets with an average accuracy of 85 percent.

“We proposed a model called ‘dEFEND,’ which can predict fake news accurately and with explanation,” Shu says. “The idea of dEFEND is to create a transparent fake-news detection algorithm for decision-makers, journalists, and stakeholders to understand why a machine-learning algorithm makes such a prediction.”

dEFEND works to identify stories circulating in social media channels that contain intentionally false information. Social media is a ripe environment to disseminate fake news, as it is a low-cost and easily accessible forum that attracts a large audience who can quickly spread the information. A prototype of dEFEND for real-world use is in development.
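The core idea behind dEFEND, scoring how news sentences and user comments attend to each other and using those scores as an explanation, can be illustrated with a minimal sketch. The embeddings, affinity function, and pooling here are simplified stand-ins for illustration, not the published architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def co_attention(sentences, comments):
    """Score mutual attention between news sentences and user comments.

    sentences: (m, d) array of sentence embeddings
    comments:  (n, d) array of comment embeddings
    Returns attention weights over sentences and over comments.
    """
    affinity = np.tanh(sentences @ comments.T)   # (m, n) pairwise affinity
    sent_attn = softmax(affinity.max(axis=1))    # weight per news sentence
    comm_attn = softmax(affinity.max(axis=0))    # weight per user comment
    return sent_attn, comm_attn
```

The highest-weighted sentences and comments can then be surfaced as the "explanation" for a prediction; in a full model these weights would feed a trained classifier rather than being hand-pooled as in this sketch.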

Although social media contains a wealth of data, Shu says this data presents challenges. Social media data is incomplete: users unwilling to share their profiles leave gaps, and platforms provide only partial samples to authenticated user requests. It is noisy: social media users are both passive consumers and active producers, so the quality of user-generated content varies. It is also unstructured, since some of it cannot fit directly into relational tables for computers to process. Beyond the data itself, the social networks are rife with malicious users such as spammers and bots.

Fake news is also diverse in topics, content, publishing methods, and media platforms, and it uses sophisticated linguistic styles to mimic real news.

As a result, obtaining expert-annotated data is nearly impossible, leaving Shu to rely on weak social supervision signals derived from user engagement. For example, user engagement with fake news tends to be more polarized and less neutral. This signal may not be as reliable as expert annotation, but it provides a useful component for detecting fake news.
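One way to turn that observation into a weak label is to measure how polarized the engagement around a story is. The sketch below uses a toy sentiment-spread heuristic with a hypothetical threshold, purely to illustrate the idea of weak social supervision:

```python
import statistics

def weak_label(comment_sentiments, spread_threshold=0.5):
    """Assign a weak 'suspect' label when engagement is highly polarized.

    comment_sentiments: per-comment sentiment scores in [-1, 1]
    spread_threshold: hypothetical cutoff; a real system would tune this
    """
    if len(comment_sentiments) < 2:
        return "unlabeled"  # too little engagement to judge
    spread = statistics.pstdev(comment_sentiments)
    return "suspect" if spread > spread_threshold else "unlabeled"
```

Labels produced this way are noisy, which is exactly why they are treated as weak supervision rather than ground truth.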

“I am using explainable algorithms that can not only provide prediction results to indicate whether a piece of news is fake, but also provide additional explanations to the users,” Shu says. “This is important because if we can find explanations from the dataset, then users can understand which part of the news is more fake than others.”
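Presenting such an explanation can be as simple as surfacing the sentences with the largest attention weights. The helper below is a hypothetical illustration of that final step, assuming the weights already come from a trained model:

```python
def top_explanations(sentences, weights, k=2):
    """Return the k sentences with the highest attention weights."""
    ranked = sorted(zip(weights, sentences), reverse=True)
    return [sentence for _, sentence in ranked[:k]]
```

A user would see these top-ranked sentences highlighted as the parts of the story most responsible for the model's prediction.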

Shu says he plans to expand his research beyond fake news detection and toward challenges in attribution, characterization, and mitigation. The goal of attribution is to verify the source of fake news. Characterization efforts will determine whether the information has malicious intent, is harmless, or contains other insightful traits. Mitigation will aim to proactively block target users or contain disinformation at early stages.

“The study of disinformation and fake news is still in the early stage, and there are a lot of promising research directions,” Shu says. “There is a pressing need for unified theoretical and computational research with complementary efforts from different disciplines to understand and tackle the issue of disinformation.”

Photo: Gladwin Development Chair Assistant Professor of Computer Science Kai Shu