Diminishing Damage in the Disinformation Age

By Casey Moffitt

The distribution and consumption of misinformation and disinformation pose a threat to many aspects of a free society, and the emergence of generative artificial intelligence multiplies that threat through the sheer volume of false content it can produce.

To help counter it, Kai Shu, Gladwin Development Chair Assistant Professor of Computer Science at Illinois Institute of Technology, has received a United States Department of Homeland Security grant through the Center for Accelerating Operational Efficiency to create new techniques for combating the effects of misinformation and disinformation.

“With the powerful capacity of generative AI such as ChatGPT to generate human-like content, it may pose more challenges and potentially be more harmful than human-written misinformation,” Shu says. “Existing misinformation detection models that are heavily trained using human-written misinformation may be less effective when identifying misinformation generated by large language models.”

Shu argues that we live in a disinformation age: false and misleading content has littered news feeds across social media platforms and infiltrated more traditional, mainstream media outlets. People act on the misinformation they absorb in areas such as health care, finance, and politics. Large language models (LLMs) could compound the problem because of how easily, and at what vast scale, they can generate misinformation.

“LLMs have shown promising capacities in generating human-like content,” Shu says. “For example, we can ask ChatGPT to ‘write a piece of news,’ and it will generate a piece of news with possibly false dates and locations due to its intrinsic generation strategies and the lack of up-to-date information in the training data. LLMs can follow user instructions and generate misinformation of different types, domains, and errors.”

Shu’s research could yield new techniques that advance misinformation detection and improve the attribution of misinformation to human and LLM authors. The research will also emphasize explainability, ensuring that the developed models are transparent and understandable enough to facilitate public adoption.
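To make the detection side concrete, here is a minimal sketch of the standard supervised paradigm that detection research builds on: a classifier trained on labeled examples. The toy texts, labels, and the TF-IDF-plus-logistic-regression pipeline are illustrative assumptions, not the models being developed in Shu's project.

```python
# A minimal sketch of supervised misinformation detection; everything here
# (toy data, features, model) is an illustrative assumption, not the
# project's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy training data: 1 = misinformation, 0 = reliable.
texts = [
    "Miracle cure eliminates all disease overnight, doctors stunned",
    "City council approves budget for new public library branch",
    "Secret memo proves election results were fabricated",
    "Study finds moderate exercise improves cardiovascular health",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus a linear classifier: a common, transparent baseline.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Score a new article: estimated probability that it is misinformation.
print(detector.predict_proba(["Scientists hid the truth about vaccines"])[0][1])
```

A baseline like this, trained largely on human-written examples, is exactly the kind of model Shu cautions may falter on LLM-generated text.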

The research will harness some of the same LLM capabilities that are used to generate the misinformation itself.

“LLMs have demonstrated strong capacities in various tasks such as summarization and question answering,” he says. “We will investigate novel methods to differentiate human authors and ‘AI authors’ of misinformation.”
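As one illustration of what differentiating human authors from "AI authors" can involve, the sketch below computes a few crude stylometric features and fits a linear classifier over them. The features, toy documents, and labels are all hypothetical; the project's actual "novel methods" are not described at this level of detail.

```python
# A hedged sketch of human-vs-LLM authorship attribution using crude
# stylometric features; all data and features here are hypothetical.
from sklearn.linear_model import LogisticRegression

def stylometric_features(text: str) -> list[float]:
    """Mean word length, mean sentence length, and type-token ratio."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    mean_word_len = sum(len(w) for w in words) / len(words)
    mean_sent_len = len(words) / len(sentences)
    type_token_ratio = len({w.lower() for w in words}) / len(words)
    return [mean_word_len, mean_sent_len, type_token_ratio]

# Hypothetical labeled corpus: 0 = human-written, 1 = LLM-generated.
docs = [
    "Honestly, I dunno. The meeting ran long and nobody agreed on anything.",
    "The committee convened to deliberate; consensus, however, remained elusive.",
    "We grabbed coffee after. It was fine. Kind of quiet, actually.",
    "Subsequently, participants reconvened informally to continue the discussion.",
]
y = [0, 1, 0, 1]

clf = LogisticRegression().fit([stylometric_features(d) for d in docs], y)
print(clf.predict([stylometric_features("The outcome remained thoroughly ambiguous.")]))
```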

Shu says the research presents many challenges, such as making detection more efficient and developing explanations for why a piece of information is believed to be false or misleading.
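One simple, well-known way to produce such explanations, sketched below under the assumption of a linear detector like the one above, is to surface the terms whose weights most push the model toward a "misinformation" label. This is only a baseline form of transparency, not the explanation techniques the project will develop.

```python
# A minimal sketch of explanation-by-feature-weights for a linear detector;
# the toy corpus and labels are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "Shocking miracle cure banned by doctors, share before deleted",
    "Local school board schedules public hearing on new curriculum",
    "Leaked document exposes hidden plot behind rising fuel prices",
    "University researchers publish peer-reviewed study on sleep habits",
]
labels = [1, 0, 1, 0]  # 1 = misinformation, 0 = reliable

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Terms with the largest positive weights are the ones nudging the model
# toward a "misinformation" verdict; listing them is a crude rationale.
terms = np.array(vec.get_feature_names_out())
top = np.argsort(clf.coef_[0])[-5:][::-1]
print(list(terms[top]))
```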

“The different and new characteristics of LLM-generated misinformation are understudied, as is how we can potentially combat LLM-generated misinformation,” he says. “The proposed research will systematically investigate the detection, attribution, and explanation of LLM-generated misinformation.”

Misinformation and disinformation research is both highly important and deeply challenging, Shu says. The challenges range from human vulnerability to misinformation, to bias among information providers, to the arms race between how misinformation is generated and how detection techniques are developed.

“Misinformation in the age of LLMs remains an underexplored problem for humanity, though it is pressing to investigate with multidisciplinary research,” Shu says. “I am excited about this research project because I see the potential of leveraging trustworthy AI techniques for social good, to detect and intervene in misinformation that is written by humans or even by AI models themselves.”