Combating Misinformation and Disinformation with AI: Insights from Matthew Lease and Good Systems
In this episode of “Open AI Changes Everything,” host Stephen Walther talks with Matthew Lease, Professor in the School of Information at the University of Texas at Austin and a leader of the Good Systems initiative, about the challenges of misinformation and disinformation in today’s information-saturated world and potential solutions to them.
Understanding the Problem of Misinformation and Disinformation
Matthew Lease distinguishes disinformation (strategic, intentional falsehoods) from misinformation (the unintentional spreading of false information), along with related forms of harmful content such as conflicting information and malinformation. He notes that misinformation isn’t always malicious but can still cause significant societal harm by fueling division, promoting violence, or spreading public health risks.
The Good Systems Project
Lease leads Good Systems, an eight-year “moonshot” program at the University of Texas at Austin designed to create responsible and ethical AI technologies. With a team of more than 120 researchers across disciplines, Good Systems tackles complex societal challenges, including protecting information integrity.
AI and the Information Landscape
Lease’s specific Good Systems project, titled “Designing Responsible AI Technologies to Support Information Integrity,” aims to build AI-powered tools to help journalists, fact-checkers, and analysts combat misinformation more effectively. He explains that the sheer scale of misinformation on the internet makes it impossible for traditional, human-only fact-checking methods to keep pace.
Potential Solutions to the Misinformation Crisis
The discussion explores three primary approaches to combat misinformation:
- Expert Fact-checking: Highly accurate but limited by scalability.
- Crowdsourced Solutions: More scalable and democratic but require careful management to avoid amplifying biases.
- AI-driven Systems: Extremely scalable but currently limited in accuracy and nuanced understanding.
Lease emphasizes a blended approach that combines AI’s scalability, crowdsourced moderation’s breadth, and expert fact-checkers’ accuracy as the most promising solution.
Free Speech, AI, and Content Moderation
Lease addresses common fears surrounding content moderation and AI’s role in potentially suppressing free speech. He argues that responsible content moderation is necessary, comparing harmful misinformation to dangerous speech such as yelling “fire” in a crowded theater. Lease stresses the importance of transparency and careful oversight of AI systems to avoid unintended censorship.
Future of AI in Fact-checking
The episode explores AI’s potential to create a fairer, more consistent approach to information verification. Lease is cautiously optimistic, stating that while AI itself isn’t inherently good or bad, its impact depends on how responsibly and transparently it’s developed and implemented.
Staying Connected to Good Systems
Listeners are encouraged to follow the Good Systems project on social media or visit the Good Systems website for updates and opportunities to engage directly.
The discussion offers valuable insight into the complex relationship between AI, misinformation, and society, highlighting how responsible, innovative approaches to AI can support truth and integrity in the digital age.