Researchers turn to machine learning to detect and dispel disinformation that proliferates online
WILTSHIRE, England – Disinformation has become a central feature of the COVID-19 crisis. According to a recent poll, false or misleading information about the pandemic reaches close to half of all online news consumers in the United Kingdom, C4ISRnet reports.
The Military & Aerospace Electronics take:
8 July 2020 -- Because this kind of malign information and high-tech “deepfake” imagery can spread so quickly online, it poses a risk to democratic societies worldwide by increasing public mistrust of governments and public authorities, a phenomenon referred to as “truth decay.” New research, however, points to ways of detecting and dispelling disinformation online.
There are several factors that may account for the rapid spread of disinformation during the COVID-19 pandemic. Given the global nature of the pandemic, more groups are using disinformation to further their agendas. Advances in machine learning also contribute to the problem, as disinformation campaigns powered by artificial intelligence extend the reach of malign information online and on social media platforms.
Research from Carnegie Mellon University suggests that social media “bots” may account for 45 to 60 percent of all reviewed Twitter activity related to COVID-19, in contrast to the 10 to 20 percent of Twitter activity for other events such as U.S. elections and natural disasters. These bots can automatically generate messages, advocate ideas, follow other users and use fake accounts to gain followers themselves.
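Bot-detection research of this kind typically combines many behavioral signals with machine learning classifiers. As a rough illustration of the idea only, the Python sketch below scores an account against a few hand-picked heuristics of the sort such work builds on (posting rate, account age, follower ratio, repeated messages). The Account fields, thresholds, and function names here are illustrative assumptions, not the Carnegie Mellon methodology.

```python
# Illustrative sketch only: a simple heuristic "bot score".
# Features, thresholds, and names are hypothetical and chosen for clarity;
# real systems use many more signals and learned models.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float         # average posting rate
    followers: int                # accounts following this one
    following: int                # accounts this one follows
    account_age_days: int         # time since account creation
    duplicate_tweet_ratio: float  # share of near-identical tweets (0..1)

def bot_score(acct: Account) -> float:
    """Return a rough 0..1 score; higher means more bot-like behavior."""
    score = 0.0
    if acct.tweets_per_day > 50:                      # sustained high-volume posting
        score += 0.3
    if acct.account_age_days < 30:                    # newly created account
        score += 0.2
    if acct.following > 10 * max(acct.followers, 1):  # follows far more than it is followed
        score += 0.2
    if acct.duplicate_tweet_ratio > 0.5:              # repeats the same message
        score += 0.3
    return min(score, 1.0)

# Example: a week-old account posting 200 near-identical tweets a day
suspect = Account(tweets_per_day=200, followers=12, following=4800,
                  account_age_days=7, duplicate_tweet_ratio=0.8)
print(f"bot score: {bot_score(suspect):.2f}")  # prints 1.00 for this account
```

In practice such hand-written rules serve only as a starting point; research groups replace them with classifiers trained on large labeled datasets of account behavior.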
Related: DISA looks for ways of using artificial intelligence (AI) to detect malware
John Keller, chief editor
Military & Aerospace Electronics