LLMs Are Better Than People at Creating Convincing Misinformation, Illinois Tech Researchers’ Paper Shows
Computer scientists have found that misinformation generated by large language models (LLMs) is more difficult to detect than false claims hand-crafted by humans. Researchers Canyu Chen, a doctoral student at Illinois Institute of Technology, and Kai Shu, an assistant professor in its Department of Computer Science, set out to examine whether LLM-generated misinformation can cause more harm than the human-generated variety.