MONITORING POLITICAL FINANCING ISSUES ON SOCIAL MEDIA PART V

Monitoring Online Discourse During An Election: A 5-Part Series

How social media monitoring can help Electoral Management Bodies (EMBs) to ascertain, measure, and validate political spending.

How politicians and their supporters invest in political messaging is rapidly changing. In recent years, spending on online political advertising has grown dramatically. New technologies also present new advertising opportunities: automated agents such as bots amplify political messaging. All of these developments create challenges for EMBs.


An EMB’s political financing team can use AI-based social media analytics to track political spending 

As both technologies and transparency reporting rules are in flux, legal regulation and national directives are often a few steps behind what is technologically possible, and significant loopholes emerge.

In many jurisdictions, after an election all candidates, whether they won or lost, must submit a financial spending report to the EMB, to ensure that they have remained within applicable spending limits. Enforcement is often complaints-based: the EMB will investigate an issue if a complainant has alerted them to it. Social media monitoring can help the EMB to ascertain, measure, and validate whether overspending or infractions of elections laws have occurred.


Breaches of political financing rules include both overspending and under-reporting

With the advent of bot technology, many candidates are using open-source bot widgets or hiring consultants to build bots for them. It’s a cost-effective strategy: rather than spending $2,000 on a newspaper ad, a person can procure a bot for $50 and achieve the same message reach. This creates a volume of online messaging that in many jurisdictions would be considered equivalent to advertising. However, bots often aren’t accounted for in political financing regulations, so this kind of electoral messaging can fly under the radar.

Social media monitoring can help political financing departments within EMBs to track the amount of online advertising spending by candidates, local constituency associations, registered third parties (bodies spending money on an issue or candidate), and by political parties themselves.

Social media data can be leveraged in several ways to support this work.

Many major internet advertising platforms, pushed by regulators in various jurisdictions, now provide transparency reports on political spending on their platforms. Some, like Twitter, have banned political advertising altogether.


MAJOR INTERNET PLATFORMS AND POLITICAL TRANSPARENCY

Facebook: Allows political advertising. Does not fact-check ads. Provides a sophisticated tool that reveals political spending on the platform.

Google: Limits audience targeting of election ads to age, gender, and general location (postal code level).[1] Provides a transparency report.

Microsoft: No political advertising is allowed on its Bing search platform.

Reddit: Under new rules released in April 2020,[2] ahead of the US elections that November, the platform:

  • manually reviews each political ad for messaging and creative content
  • does not accept ads from parties outside the US
  • only allows political ads at the federal level

It also lists spending on political ad campaigns that have run on Reddit since January 1, 2019.

Twitter: Bans all political advertising.


The data from these transparency reports, while valuable, is incomplete. The Campaign Legal Center (CLC) notes that only “4 percent of digital spending reported to the Federal Election Commission (FEC) by two secretly-funded Democratic groups appear in public archives maintained by Facebook, Google, Snapchat, or Twitter.”[3]

How does this happen? The problem is that platforms don’t have authoritative information that would let them group all investments for a particular candidate together. It’s easy enough to use a different account or credit card to pay for advertising. As well, third-party interest groups invest in promoting their favoured candidate, and they don’t always register as political advertisers.

In these and other ways, despite regulations requiring buyers of political advertising – whether candidate-based or issue-based – to be registered, such activities can often slip through the net.

KI Design’s experience shows that, by using the following three parameters while interrogating platforms’ transparency data, an EMB will get a better estimate of political financing – both of the actual money invested, and of who is spending it:

  1. Query a candidate’s own expenditure within their constituency;
  2. Query how much the candidate’s constituency party has spent;
  3. Query the platform for topics related to election issues: for example, farming subsidies in rural areas, pipeline creation, or tariffs on the export of certain commodities. As platform transparency reports don’t include the geographical location of advertisers, queries should include multiple keywords to track spending in a particular location. For example, if pipeline creation is an issue in that area, then by searching for pipeline-related keywords an EMB can discover who the payers are and check whether they are registered. If findings show that money has been spent by other parties that aren’t registered, and are neither a political party, a candidate, nor a registered lobby group – that’s a violation. (A sketch of how such a query might be automated appears below.)
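
To illustrate what that third type of query could look like in practice, here is a minimal sketch that searches an ad transparency archive for issue-related keywords and tallies reported spend by payer. It assumes access to Meta’s Ad Library API; the endpoint version, field names, and the FB_ACCESS_TOKEN environment variable are illustrative assumptions and should be checked against current platform documentation.

```python
import os
import requests

# Illustrative sketch: query an ad transparency archive (here, Meta's Ad
# Library API) for issue-related ads and tally reported spend by payer.
# Field and parameter names should be verified against current documentation.
AD_ARCHIVE_URL = "https://graph.facebook.com/v19.0/ads_archive"

def spend_by_payer(keywords, country="CA"):
    params = {
        "search_terms": " ".join(keywords),          # e.g. pipeline-related terms
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": country,
        "fields": "page_name,bylines,spend,ad_delivery_start_time",
        "access_token": os.environ["FB_ACCESS_TOKEN"],  # assumed to be set
        "limit": 100,
    }
    totals = {}
    url, query = AD_ARCHIVE_URL, params
    while url:
        resp = requests.get(url, params=query)
        resp.raise_for_status()
        payload = resp.json()
        for ad in payload.get("data", []):
            payer = str(ad.get("bylines") or ad.get("page_name") or "unknown")
            # Spend is reported as a range; use the lower bound as a floor.
            lower = float((ad.get("spend") or {}).get("lower_bound", 0) or 0)
            totals[payer] = totals.get(payer, 0.0) + lower
        # Follow pagination links, if any; the "next" URL carries the query string.
        url, query = payload.get("paging", {}).get("next"), None
    return totals

if __name__ == "__main__":
    for payer, spend in sorted(spend_by_payer(["pipeline", "oléoduc"]).items(),
                               key=lambda kv: -kv[1]):
        print(f"{payer}: at least ${spend:,.0f}")
```

The resulting payer names can then be cross-checked against the EMB’s registry of candidates, parties, and registered third parties.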

While transparency reports won’t give an EMB a complete picture of online political spending, leveraged skillfully they can be a useful resource in an investigation.

We recommend that EMBs work with legislators to ensure that platforms include geographic location as part of their transparency reports. Furthermore, any page names, group names, or room names on platforms that are associated with a real entity, whether an individual or a corporation, should be made public.

The CLC advocates updating campaign finance regulation with “across-the-board rules for digital ad transparency.” In KI Design’s opinion, these rules should be clear and specific, ensuring that platforms report:

WHO is spending money?
WHEN did the spending occur?
WHAT topics were spent on?

Any new legislation should also require platforms to publish post-mortem transparency reports: an enumeration of every single page, group, or room that ran advertising, and its association with a real entity (individual or registered corporation). This would include pages that have since been taken down.
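
As a rough illustration of what such a reporting requirement could capture, the sketch below defines a minimal per-buyer record covering the who/when/what questions above plus the page-to-entity association. The field names are our own assumptions for illustration, not an existing reporting standard.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative sketch only: a minimal transparency record covering WHO paid,
# WHEN the spending occurred, WHAT it was about, and which real entity stands
# behind the advertising page. Field names are assumptions, not a standard.
@dataclass
class AdTransparencyRecord:
    payer_name: str                        # WHO: the buyer of the advertising
    payer_registration_id: Optional[str]   # registry number, if the buyer is registered
    spend_amount: float                    # reported spend for the period
    spend_start: date                      # WHEN the spending began
    spend_end: date                        # WHEN the spending ended
    topics: List[str] = field(default_factory=list)  # WHAT: declared topics or issue keywords
    target_region: str = ""                # geographic targeting, e.g. a constituency
    page_name: str = ""                    # the page/group/room that ran the ads
    real_entity: str = ""                  # individual or registered corporation behind the page
    page_removed: bool = False             # reported even if the page was later taken down
```

A post-mortem report would then simply be the full set of such records for the election period, including entries for pages that no longer exist.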

Content on Telegram, WhatsApp, and WeChat poses a challenge for EMBs, as the data on these platforms is not publicly available. We suggest that EMBs create a policy covering these platforms, indicating whether, and how, the EMB will monitor them.

The prevalence of social media raises a number of political financing-related issues for EMBs.

Many EMBs will benefit from consultation on the possibilities of social media monitoring, and companies such as KI Design can advise on and implement monitoring tools. There are no off-the-shelf solutions, as laws are in flux and vary by jurisdiction, but KI Design can guide EMBs in understanding the capabilities monitoring offers and the issues it can be used to address. By knowing what questions to ask, we can help you find the answers you need.


[1] See: https://www.blog.google/technology/ads/update-our-political-ads-policy/

[2] See: https://www.reddit.com/r/announcements/comments/g0s6tn/changes_to_reddits_political_ads_policy/

[3] Brendan Fischer, “New CLC Report Highlights Digital Transparency Loopholes in the 2020 Elections” (April 8, 2020), online at: https://campaignlegal.org/update/new-clc-report-highlights-digital-transparency-loopholes-2020-elections


Categories: Election

Monitoring Online Discourse During An Election: A 5-Part Series

PART I: INTRODUCTION

Online interference with elections is a reality of 21st century politics. Social media disinformation campaigns have targeted citizens of democracies across the globe and impacted public perceptions and poll results, often dramatically.

Disinformation: False information spread with the intent to deceive.

Misinformation: Inaccurate information spread without an intent to deceive.

Political campaigns, some less committed to accuracy than in the past, pay for online ads microtargeting particular demographics. Fake news, deliberately fabricated, edges into Facebook feeds alongside legitimate sources, and is shared by unsuspecting users. False Twitter accounts push extreme views, shifting and polarizing public discourse. Fake news spreads rapidly; according to Elodie Vialle of Reporters Without Borders, false information spreads six times faster than accurate information.[1]

“Domestic and international, state and non-state actors manipulate information online in order to shape voters’ choices or simply confuse and disorient citizens, paralyze democratic debate and undermine confidence in electoral processes.”[2]

This phenomenon has developed so rapidly, and is so pervasive, that it has been hard for legislators to know how to regulate it. NGOs and governmental agencies have stepped into the gap. Their primary weapon is social media analytics, powered by AI. Data science can locate trolls and fraudulent accounts: programs can be trained, via machine learning algorithms, to flag potential bots and unusual political material.[3]
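
As a simplified illustration of how such a classifier might be trained, the sketch below fits a logistic regression on a handful of account-level features (posting rate, account age, follower-to-following ratio, share of reposts). The features, data, and threshold are invented for illustration and do not describe any particular organization’s system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch: train a simple bot-likelihood classifier from
# account-level features. A real system would use far richer features
# and thousands of labelled accounts.
# Feature columns: posts_per_day, account_age_days,
#                  follower-to-following ratio, share of posts that are reposts.
X = np.array([
    [310.0,   12.0, 0.05, 0.97],   # hypothetical bot-like accounts
    [450.0,    8.0, 0.02, 0.99],
    [120.0,   30.0, 0.10, 0.90],
    [  2.5, 2400.0, 1.10, 0.20],   # hypothetical human-like accounts
    [  0.8, 3100.0, 0.90, 0.05],
    [  5.0,  700.0, 1.40, 0.35],
])
y = np.array([1, 1, 1, 0, 0, 0])   # 1 = likely bot, 0 = likely human

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new, unseen account; anything above a chosen threshold (say 0.8)
# would be queued for human review rather than automatically labelled a bot.
new_account = np.array([[200.0, 15.0, 0.07, 0.95]])
print(f"Estimated bot probability: {model.predict_proba(new_account)[0, 1]:.2f}")
```

In practice, automated scores like these are a triage tool: flagged accounts and content are passed to human reviewers, as the organizations profiled below do.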

Many of these initiatives are based in the European Union. They track disinformation produced by local and/or foreign actors. Here are a few such organizations, with a brief summary of their work:

Debunk.eu

(Lithuania)

Uses AI to analyse 20,000 online articles a day, using variables such as keywords, social interaction, and publication across multiple domains. The 2% deemed most likely to be disinformation are then analysed by volunteer factcheckers, and journalists write rebuttals of the most egregious.
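
A toy version of that triage step might look like the sketch below, which ranks articles by a disinformation-likelihood score and forwards only the top 2 percent for manual fact-checking. The scoring is represented here by random placeholder numbers; in a real pipeline it would come from an upstream model.

```python
import numpy as np

# Illustrative sketch: forward only the highest-scoring ~2% of articles
# to human fact-checkers, as in a Debunk.eu-style triage pipeline.
def select_for_review(article_ids, scores, fraction=0.02):
    scores = np.asarray(scores)
    cutoff = np.quantile(scores, 1.0 - fraction)
    return [aid for aid, s in zip(article_ids, scores) if s >= cutoff]

# Example with 20,000 hypothetical daily articles and placeholder scores.
rng = np.random.default_rng(0)
ids = [f"article-{i}" for i in range(20_000)]
daily_scores = rng.random(20_000)
review_queue = select_for_review(ids, daily_scores)
print(f"{len(review_queue)} articles queued for manual review")
```
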
Prague Security Studies Institute

(Czech Republic)

Uses the web crawler Versus to monitor suspicious sites, starting four to five weeks before an election; manual coders then analyse the content using variables such as message type, sentiment, and number of shares. The Institute produces weekly summaries of its findings, which are distributed to media outlets.
Computational Propaganda Project at the Oxford Internet Institute

(UK)

Focuses on disinformation campaigns and social media bots. Its in-house platform scrapes public posts, which are then classified by human coders who are familiar with the monitored state’s political culture. A Junk News Aggregator also tracks fake stories spreading on Facebook.

Other agencies monitor and analyse social media around elections in Africa, Eastern Europe, and the Americas. Here’s one example:

Getúlio Vargas Foundation – Digital Democracy Room

(Brazil)

During the 2018 elections, DDR tracked data from Twitter, Facebook, and YouTube to analyse bot activity and international influence. Their Twitter analysis was facilitated by the ease of API access. DDR’s analysis was hampered by lack of access to data from WhatsApp, increasingly popular in Brazil.
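
For context, pulling election-related tweets for this kind of analysis is typically done through Twitter’s search API. The sketch below uses the tweepy client and assumes a bearer token in a TWITTER_BEARER_TOKEN environment variable; the query string is a hypothetical example, and access tiers and quotas have changed over time, so treat this purely as an illustration.

```python
import os
import tweepy

# Illustrative sketch: pull recent election-related tweets for downstream
# bot-activity analysis. Query, token handling, and quotas are assumptions.
client = tweepy.Client(bearer_token=os.environ["TWITTER_BEARER_TOKEN"])

response = client.search_recent_tweets(
    query="(eleição OR eleições) -is:retweet lang:pt",  # hypothetical Brazil-focused query
    tweet_fields=["created_at", "author_id", "public_metrics"],
    max_results=100,
)

for tweet in response.data or []:
    metrics = tweet.public_metrics
    print(tweet.created_at, tweet.author_id, metrics["retweet_count"])
```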

Here in Canada, KI Design, a big data analytics solutions and research firm, built analytics tools to detect and identify dis- and misinformation around the 2019 Canadian federal election.

 

KI Design

(Canada)

KI Design utilized full firehose access to Twitter, as well as posts and comments on online news, blogs, forums, social networks, Facebook pages, Reddit, Tumblr, and YouTube. Using AI analytics and classification mining, we were able to identify disinformation and misinformation sources, discourses, content, and frequency of posting. We mapped relationships between various disinformation sources and within content.
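
To give a feel for the relationship-mapping step, here is a minimal sketch that links accounts that shared the same suspect URLs and then surfaces the most connected sources. The sample data is invented, and the approach shown (a simple bipartite projection) is one common technique, not necessarily the exact method used.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Illustrative sketch: map relationships between disinformation sources by
# linking accounts that shared the same suspect URL. Sample data is invented.
shares = [
    ("account_a", "http://example.com/fake-story-1"),
    ("account_b", "http://example.com/fake-story-1"),
    ("account_b", "http://example.com/fake-story-2"),
    ("account_c", "http://example.com/fake-story-2"),
]

g = nx.Graph()
for account, url in shares:
    g.add_edge(account, url)

# Project onto accounts: two accounts are connected if they shared a common URL.
accounts = {a for a, _ in shares}
account_graph = bipartite.projected_graph(g, accounts)

# Degree centrality highlights the most connected (potentially coordinated) sources.
for account, centrality in sorted(nx.degree_centrality(account_graph).items(),
                                  key=lambda kv: -kv[1]):
    print(account, round(centrality, 2))
```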

This forthcoming series will dig deeper into how to monitor electoral disinformation, and the different issues and challenges involved.

 

 

Part of a 5-part series on

Monitoring Online Discourse During An Election:

PART ONE:  Introduction

PART TWO:  Identifying Disinformation

PART THREE:  Managing Operational issues

PART FOUR:  KI Design National Election Social Media Monitoring Playbook

PART FIVE:  Monitoring Political Financing Issues

 

[1] Staff, “Artificial Intelligence and Disinformation: Examining challenges and solutions,” Modern Diplomacy, March 8, 2019; online at: https://moderndiplomacy.eu/2019/03/08/artificial-intelligence-and-disinformation-examining-challenges-and-solutions/.

[2] Open Society, “Experiences of Social Media Monitoring During Elections: Cases and Best Practice to Inform Electoral Observation Missions,” May 2019; online at: https://www.opensocietyfoundations.org/publications/social-media-monitoring-during-elections-cases-and-best-practice-to-inform-electoral-observation-missions.

[3] European Parliamentary Research Service, Regulating disinformation with artificial intelligence, March 2019; online at: https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624279/EPRS_STU(2019)624279_EN.pdf.


Categories: social

Should Laws Regulate Online Discourse?

@rcmpgrcpolice

Most Canadians woke up to news of the #RCMP #GRC launching a probe into hate speech by an alt-right group leader who is seeking national party status. The probe came after an anti-hate group filed a report.

The news article was shared hundreds of times.

The RCMP probe is timely, because online hate is a virus attacking democratic society.

While online hate is not new, fake news and online hate together have the potential to impact democracy.

At this moment, a disinformation article is among the most-shared posts. It claims that more than 100,000 foreigners are registered to vote, and has been shared more than 4,000 times.

#fakenews article

Is disinformation online persistent?


The answer is that it fluctuates; however, there is always a considerable amount of disinformation. The graph above shows how the volume of content claiming that “illegals” or foreigners are voting in Canada has varied over time.

The bad news: currently there is no legal mechanism to address this kind of discourse.

What do you think?

Tell us your opinion – here or through a Twitter poll.


Categories: Innovation

Using AI to Combat AI-Generated Disinformation

AI can impact election outcomes. How can this be combatted?