Using AI to Combat AI-Generated Disinformation

How AI Can Combat Disinformation

The 2019 Canadian Federal Election and the 2020 US Presidential Race

As citizens worry about election outcomes and interference in the democratic process generally, and in elections specifically, some governments are attempting to mitigate the risks. In December 2018, the Government of Canada’s Standing Committee on Access to Information, Privacy and Ethics released a report, Democracy under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly. This report, initiated in response to the Facebook/Cambridge Analytica scandal, examines, among other things, the risks posed to the Canadian electoral process by the manipulation of big data and artificial intelligence (AI). A year earlier, in 2017, the Senate Intelligence Committee published a report titled Background to “Assessing Russian Activities and Intentions in Recent US Elections”: The Analytic Process and Cyber Incident Attribution, a full review that produced a comprehensive intelligence assessment of Russian activities and intentions in the 2016 U.S. elections.

Social media, and the big data it generates, is now so ubiquitous that it’s easy to forget it’s a relatively recent phenomenon. As such, it has been hard for legislators to track how these technological developments could be used to influence the Canadian electoral process, and how they should be regulated. Big data, and its manipulation, played a significant role in both the 2016 US election and the Brexit vote in the UK earlier that year. Fake news, deliberately fabricated, edged into Facebook feeds alongside legitimate sources and was shared by unsuspecting users. Fake Twitter accounts pushed extreme views, shifting and polarizing public discourse. According to Elodie Vialle of Reporters Without Borders, false information spreads six times faster than accurate information.[1]

It is well known that AI plays a key role in the spread of disinformation. It powers social media algorithms. It can be programmed to generate content, including automated trolling, and it facilitates the micro-targeting of demographic groups on specific topics: all basic disinformation practices.

Yet what is less widely discussed is that AI can also be used as a tool to combat disinformation.

Data science can locate trolls and fraudulent accounts: algorithms can be trained to identify likely bots and unusual political material.[2] While their reach can be enormous, the actual number of perpetrators is very small, and we have the scientific means to track down who they are. Existing hate speech laws can then be used to prosecute them.
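To make this concrete, the sketch below shows one common way such a detector can be built: a supervised classifier trained on simple account-level behavioural features. It is a minimal illustration under stated assumptions, not the method of any study cited here; the file name, feature columns, and flagging threshold are hypothetical placeholders.

```python
# Minimal sketch: training a classifier to flag likely bot accounts.
# Assumes a hypothetical labelled CSV of accounts with simple behavioural
# features; column names, file name, and threshold are illustrative only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical labelled data: one row per account.
df = pd.read_csv("labelled_accounts.csv")  # assumed file
features = [
    "tweets_per_day",          # posting frequency
    "followers_to_following",  # follower/following ratio
    "account_age_days",        # how recently the account was created
    "retweet_ratio",           # share of posts that are retweets
    "url_share_ratio",         # share of posts containing links
]
X = df[features]
y = df["is_bot"]  # 1 = known bot, 0 = authentic account

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# Evaluate, then flag accounts whose predicted bot probability is high.
print(classification_report(y_test, clf.predict(X_test)))
suspects = X_test[clf.predict_proba(X_test)[:, 1] > 0.9]
print(f"{len(suspects)} accounts flagged for manual review")
```

In practice a model like this is only one signal among several, and flagged accounts would still require human review before any enforcement or legal action.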

In today’s increasingly febrile global political climate, disinformation is a real and growing problem, abroad as well as in Canada and the United States. A solution is available. Given the upcoming Canadian federal election in October 2019 and the US presidential election in 2020, proactive use of data science to counter manipulation efforts is both timely and necessary.

References:

[1] Staff, “Artificial Intelligence and Disinformation: Examining challenges and solutions,” Modern Diplomacy, March 8, 2019. Online at: https://moderndiplomacy.eu/2019/03/08/artificial-intelligence-and-disinformation-examining-challenges-and-solutions/.

[2] European Parliamentary Research Service, Regulating disinformation with artificial intelligence, March 2019. Online at: https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624279/EPRS_STU(2019)624279_EN.pdf.