Categories: Election

Can disinformation be traced? #USelection2020

Tracing disinformation

Political parties in the United States, and around the world, recognize fake news as a threat to the democratic process. News media often partake in the conversation about disinformation and voter suppression. Each party considers the culprit to be the opposing party or foreign actors (state or independent).

Election management bodies consider disinformation a serious threat, but usually lack precise tools to address it. We know that disinformation exists and is affecting modern democracy, but can we trace the sources of disinformation? The simple answer is “Yes, but”.

Using Social Media AI Analytics we can trace disinformation sources – given a clear definition and a mandate

The challenge of addressing disinformation isn’t the technology, but establishing a clear definition and a mandate. Put simply,

  1. What is fake news, disinformation, or misinformation?
  2. What can we do about it?

I have discussed this terminology and the challenges of tracing disinformation in an e-book, Using Social Media Data to Transform Election Monitoring.

In 2019, we spent some time analyzing social media accounts that spread untrustworthy or unethical information. By untrustworthy, we mean unreliable or false. By unethical, we mean racist or hateful content that targets a group of people. We referred to such accounts as polluters, simply because there isn’t a Canadian or an American legal term for the culprits. We observed that polluters share three properties. They:

  1. Have similar or identical arguments
  2. Create a tight web of connection to other polluters
  3. Target the ‘Five Eyes’ countries: Australia, Canada, New Zealand, the United Kingdom, and the United States

The list below presents the top 40 polluters on Twitter in the last few months of 2019. Fortunately, the public continues to report these accounts to @Twitter, and many of them have since been disabled or deleted.

About the Author

Dr. Wael Hassan is the CEO of KI Design. He has publications on privacy, de-identification, and social media analytics for election monitoring. Follow him at @drwhassan. Visit our blog at waelhassan.com or get a copy of Using Social Media Data to Transform Election Monitoring from Amazon.

Top 40 Polluters on Twitter

1) @SteveRedgrave4 2) @oldstocknews 3) @Evenings_Star 4) @Tjooitink 5) @vivianmtl 6) @SteveRedgrave4 7) @calgarykiaguy 8) @heathrodgirs 9) @MousseauJim 10) @SergeHalytsky 11) @SusanIverach 12) @chattycathy1226 13) @RoboHoward 14) @Maureenhommagm1 15) @MikeMw86 16) @KAreYouSerious 17) @lafleurmtl 18) @realmarekfe 19) @wavetossed 20) @Dekenfrank1 21) @jspoupart 22) @NtoAlaska 23) @peterdiane01 24) @1loriking 25) @Ez4u2say_Janis 26) @FouadBoussetta 27) @Real_Dr_Roy 28) @schwarzengel88 29) @bgallagb 30) @hydroqueen 31) @Kimdeerhunter 32) @mmaureen7 33) @OFASDRACING 34) @stephandprissy 35) @AlbertGoldie 36) @CONDESCENDANT 37) @HopeAldridge 38) @MurfAD 39) @TheSeaFarmer


Categories: Privacy

Outbreak Notification App Design

From Contact Tracing to Outbreak Notification

Call for Participation

This post is a call for participation for design thinkers – please email or tweet @drwhassan.

As countries assess how best to respond to the COVID-19 pandemic, many have introduced smartphone apps to help identify which users have been infected. These apps vary from country to country. Some are mandatory for citizens to use, such as apps released by the governments of China and Qatar (see inset images); most are not. Some are based on user tracking; others focus on contact tracing. Some utilize a central database; others use APIs from Apple and Google. At least one has already experienced a data breach. But all of them are coming under scrutiny for violating personal privacy.

Wherever personal data is shared, privacy becomes an issue.  In countries where use is voluntary, citizens are reluctant to download these apps. A poll by digital ad company Ogury showed that in France, where a centralised database approach has been adopted, only 2% of the population have downloaded the app, and only 33% would be willing to share any data with the government via the app. [1]

Public trust is a huge issue – given the frequency of data breaches, people are wary of uploading their personal information, even for the purposes of combatting COVID-19. In the USA, only 38% were prepared to share their data, and only 33% trusted the government to protect it. In the UK, the stats told a similar story, with only 40% believing that their data would be safe.[2]

In Canada, Alberta’s ABTraceTogether app was slammed by the provincial Privacy Commissioner for posing a “significant security risk.” The federal government’s COVID Alert app, released in Ontario and pending elsewhere, is promising, but the voluntary contact tracing app has user experience issues which may prevent it from being widely adopted.

In a recent informal poll which I conducted on my page, the proportion of people who were comfortable with installing a contact tracing application was 26%. Most of the people in the No Way camp were experienced professionals with in-depth knowledge of privacy, security, and information technology.

The private sector is bringing a different approach to contact tracing. Several developers have released customer registry systems to support contact tracing and outbreak notification at the level of individual businesses. Some of these are applications; some are online platforms. Privacy remains a concern, and seeing both a privacy gap and an adoption gap, I have designed an outbreak notification system for businesses, IoPlus.

Outbreak notification vs contact tracing

Contact Tracing

Simply put, contact tracing attempts to build a network of every physical interaction, and to trace it backwards in the event that a person tests positive for COVID-19.

Contact tracing apps are state-centric, requiring centralized storage and control

Most implementations of contact tracing require a centralized data store with varying levels of power given to officials and businesses.

Outbreak Notification

Outbreak notification, on the other hand, is a subscription based model, in which citizens are notified if there has been an outbreak in places they have visited. The goal of this solution is to notify the individual to allow them to take action.

Outbreak notifications are citizen-centric and do not require the installation of a mobile application. The individual is in the driver’s seat.

Technical Differences

In mathematical modelling terms, contact tracing resembles a neural network, in which every citizen can have as many connections as the population size. This model is subject to computing challenges. Outbreak notification, on the other hand, is a distributed model connected at the edges: the computational load sits at the business/location level.
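The contrast above can be sketched in a few lines of Python. The visit data, names, and structures below are purely illustrative assumptions, not any real implementation:

```python
# Contact tracing builds a person-to-person graph whose edges grow
# roughly with the square of venue occupancy; outbreak notification
# only keeps a per-location visit log.
from collections import defaultdict
from itertools import combinations

visits = [  # (person, location) check-ins -- illustrative data only
    ("alice", "cafe"), ("bob", "cafe"), ("carol", "cafe"),
    ("bob", "gym"), ("dave", "gym"),
]

by_location = defaultdict(list)
for person, loc in visits:
    by_location[loc].append(person)

# Contact-tracing model: every pair of co-located people is an edge.
contact_graph = defaultdict(set)
for loc, people in by_location.items():
    for a, b in combinations(people, 2):
        contact_graph[a].add(b)
        contact_graph[b].add(a)

# Outbreak-notification model: just look up visitors of an affected site.
def visitors(location):
    return set(by_location[location])

print(sorted(contact_graph["bob"]))  # ['alice', 'carol', 'dave']
print(sorted(visitors("cafe")))      # ['alice', 'bob', 'carol']
```

The pairwise loop is what makes the centralized model expensive at national scale, while the per-location lookup stays proportional to a single business’s traffic.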

Privacy in Design Principles

Outbreak notification has been designed and built with privacy and security as a top priority. The IoPlus notification system relies on an individual mobile device (phone or tablet) leaving a digital “breadcrumb” at a visited location. Patrons and employees scan a posted barcode when they enter and leave a business, to “check in” and “check out.” Users can sign up for notifications via email or a social media account. After an infection is recorded, individuals who self-register will receive a notification through email or via their social media network, based on the contact information given. Those who do not subscribe can check whether they have been in contact with an infected person by simply going to the IoPlus web page. Based on the unique, encrypted “breadcrumb” generated during their visit, a patron can go to the IoPlus web page and can privately see whether they have been in contact with an infected person. No-one else can access that notification.

This method avoids tracking users via location data, and gives them the choice to check in and check out of participating businesses only when they wish.
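Under the stated design principles (and without claiming to reproduce the actual IoPlus implementation), the breadcrumb flow might look like this minimal Python sketch; all identifiers here are hypothetical:

```python
# Hypothetical sketch: each check-in yields an opaque, salted token
# ("breadcrumb"), so the server stores no names or phone numbers and
# tokens from different visits cannot be linked to each other.
import hashlib
import secrets

visit_log = {}    # token -> (location, day); no identities stored
outbreaks = set() # (location, day) pairs with a reported infection

def check_in(location: str, day: str) -> str:
    """Return a unique, unlinkable breadcrumb the patron keeps."""
    salt = secrets.token_hex(16)  # fresh salt per visit
    token = hashlib.sha256(f"{salt}:{location}:{day}".encode()).hexdigest()
    visit_log[token] = (location, day)
    return token

def was_exposed(token: str) -> bool:
    """Patron privately checks their own breadcrumb on the web page."""
    visit = visit_log.get(token)
    return visit is not None and visit in outbreaks

t = check_in("cafe-123", "2020-08-01")
outbreaks.add(("cafe-123", "2020-08-01"))
print(was_exposed(t))  # True
```

Because only the breadcrumb holder knows their token, no one else can query their exposure status, which matches the "no-one else can access that notification" property described above.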

Next Steps

KI Design is building an outbreak notification service that is:

  1. App-less: it doesn’t require users to install any apps.
  2. Server-less: it does not store user or tracking data in a hosting environment.
  3. Privacy in Design: design artifacts are built with privacy in mind.

We are calling for contributors to participate in the design and promotion of IoPlus.

For more information, please reach out to me at wael@kidesign.io or via Twitter @drwhassan.

Dr. Wael Hassan,

Founder and CEO of KI Design


Categories: Privacy

Do ‘Contact Tracing Apps’​ need a Privacy Test?

We are asking readers to contribute to this post – please comment in line or send your thoughts directly to me at wael@kidesign.io.

The Coronavirus continues to cause serious damage to humanity: loss of life, employment, and economic opportunity. In an effort to restart economic activity, governments at every level, local, regional, and national, have been working on a phased approach to re-opening. However, with re-opening comes a substantial risk of outbreaks (see a map of outbreaks across the world). Epidemiological studies are showing that shutdowns have been effective in preventing contagion, and recent reports from the United States indicate that some areas are reversing course back to a shutdown.

Why Contact Tracing?

One of the main strategies to support safer re-opening is the use of contact tracing apps. The World Health Organization (WHO) defines contact tracing as follows:

Contact tracing is the process of identifying, assessing, and managing people who have been exposed to a disease to prevent onward transmission. When systematically applied, contact tracing will break the chains of transmission of COVID-19 and is an essential public health tool for controlling the virus.

The Privacy Issue?

It is rather simple:

A data warehouse of sensitive personal information from multiple sources, with wide access, is a recipe for privacy failure.

A contact tracing data warehouse contains a uniquely sensitive combination of data types: location and movement, relationships between people, and medical information. This combination doesn’t exist in any other national database, which makes it a prime target for hackers, aggressive advertisers, and well-intentioned but uninformed users.

Examples of Failures:

Two countries with advanced technologies, namely Norway and the UK, have pulled their contact tracing applications due to privacy concerns.

Norway

Norway’s health authorities said on Monday that they had suspended an app designed to help trace the spread of the new coronavirus after the national data protection agency said it was too invasive of privacy.

Launched in April, the smartphone app Smittestopp (“Infection stop”) was set up to collect location data to help authorities trace the spread of COVID-19, and inform users if they had been exposed to someone carrying the novel coronavirus.

The United Kingdom

A smartphone app to track the spread of Covid-19 may never be released, ministers admitted yesterday, as they abandoned a three-month attempt to create their own version of the technology.

The indication that the app “may never be released” suggests that the design was fundamentally incompatible with privacy. Articles in The Times and Wired discuss the cancellation of the UK contact tracing app.

Alberta

Alberta’s COVID-19 contact-tracing app is a ‘security risk’ on Apple devices, according to the provincial privacy commissioner. The report can be found here.

Why are we failing – are designers reckless?

Designers of contact tracing applications are prioritizing speedy development and data sharing over privacy. No doubt, if we need contact tracing, we need it now, and the ability to share data quickly is paramount. So how can contact tracing be reconciled with privacy?

Do we need a privacy test?

I believe that the privacy issue goes beyond testing. We need a privacy framework/charter at the national level to ensure that any contact tracing application follows a set of rules.

Are there solutions?

Absolutely. Solutions start with implementing Privacy in Design; privacy must be considered early in the application design process. Data minimization, data distribution, and anonymization are a few of the tools that can be very effective at managing privacy in a public health situation.

My book, Privacy in Design, is available free on Kindle for prime subscribers.

Follow @drwhassan for more information on privacy, social media analytics, and the ethics of AI computing.


Categories: Privacy, Security

Blackbaud breach – Executive Options in light of Reports to OPC & ICO

Three Executive Actions to help mitigate further risk

If your company leverages Blackbaud CRM, this article will provide you with three actions that will help mitigate risk.

Blackbaud, a reputable company that offers a customer relationship management system, was hit by ransomware and paid the ransom. According to G2, Blackbaud CRM is a cloud fundraising and relationship management solution built on Microsoft Azure specifically for enterprise-level fundraising and marketing needs. The company released an official statement on their website, available here: https://www.blackbaud.com/securityincident.

As a client, whether or not you have been notified of the breach, your organization has an opportunity to follow breach mitigation and notification protocols.

Blackbaud has already notified its clients about which data was breached. That said, whether or not you have received the notice, you have been affected. These are three actions that will help ensure you limit your liability:

1- Request contract review and third-party review: Review your service contract with Blackbaud and with any other third party managing your Raiser’s Edge systems to ensure that it includes notification and risk assessment clauses.

2- Seek a confirmation from Blackbaud: Request a confirmation that ascertains whether donor data or any other identity credentials have been compromised.

3- Post a statement: If your aggregate data or credentials have been compromised, follow your internal breach notification protocol.

In all cases, your information security or IT department should follow breach mitigation protocols, including but not limited to: resetting passwords, enabling two-factor authentication for administrators, and enabling off-cloud backups.

Since the publication of this article the Office of Privacy Commissioner of Canada and the Information Commissioner’s Office of the United Kingdom have received notices of the breach.

You are invited to contribute to this article in the comments or by sending me a direct email at wael@kidesign.io. Visit waelhassan.com for more articles on Privacy, Security, and Social Media Analytics.

Waël is on Twitter @drwhassan


MONITORING POLITICAL FINANCING ISSUES ON SOCIAL MEDIA PART V

Monitoring Online Discourse During An Election: A 5-Part Series

How social media monitoring can help Electoral Management Bodies (EMBs) to ascertain, measure, and validate political spending.

How politicians and their supporters invest in political messaging is rapidly changing. For the last few years, the amount of money spent on political advertising on the Internet has been growing exponentially. As well, new technologies present new advertising opportunities; automated agents such as bots amplify political messaging. All these developments create challenges for EMBs.


An EMB’s political financing team can use AI-based social media analytics to track political spending 

As both technologies and transparency reporting rules are in flux, legal regulation and national directives are often a few steps behind what is technologically possible, and significant loopholes emerge.

In many jurisdictions, after an election all candidates, whether they won or lost, must submit a financial spending report to the EMB, to ensure that they have remained within applicable spending limits. Enforcement is often complaints-based: the EMB will investigate an issue if a complainant has alerted them to it. Social media monitoring can help the EMB to ascertain, measure, and validate whether overspending or infractions of elections laws have occurred.


Breaches of political financing rules include both overspending and under-reporting

With the advent of bot technology, many candidates are utilizing open-source online bot widgets or hiring consultants to create them. It’s a cost-effective strategy: Rather than putting an ad in a newspaper for $2,000, a person could, by procuring a bot, spend $50 for the same message reach. This creates a volume of online messaging that in many jurisdictions would be considered equivalent to advertising. However, bots often aren’t accounted for in political financing regulations, so this kind of electoral messaging can fly under the radar.

Social media can help political financing departments within EMBs to track the amount of online advertising spending by candidates, local constituency associations, registered third party bodies (spending money on an issue or candidate), and by a political party itself.

Many major internet advertising hubs, pushed by regulators in various jurisdictions, now provide transparency reports on political spending on their platform; this data can be leveraged by EMBs. Some platforms, like Twitter, have banned political advertising altogether.


MAJOR INTERNET PLATFORMS AND POLITICAL TRANSPARENCY

Facebook: Allows political advertising. Does not fact-check ads. Provides a sophisticated tool that reveals political spending on the platform.

Google: Limits audience targeting of election ads to age, gender, and general location (postal code level).[1] Provides a transparency report.

Microsoft: No political advertising allowed on their Bing search platform.

Reddit: Under new rules released in April 2020,[2] preparatory to the US elections in November, the platform:

  • manually reviews each political ad for messaging and creative content
  • does not accept ads from parties outside the US
  • only allows political ads at the federal level

It also lists spending on political ad campaigns that have run on Reddit since January 1, 2019.

Twitter: Bans all political advertising.


The data from these transparency reports, while valuable, is incomplete. The Campaign Legal Center (CLC) notes that only “4 percent of digital spending reported to the Federal Election Commission (FEC) by two secretly-funded Democratic groups appear in public archives maintained by Facebook, Google, Snapchat, or Twitter.”[3]

How does this happen? The problem is that platforms don’t have authoritative information to group all investments for a particular candidate together. It’s easy enough to use a different account or credit card to pay for advertising. As well, third party interest groups invest in promoting their favoured candidate, and they don’t always register as political advertisers.

In these ways, despite regulations that buyers of political advertising – whether candidate-based or issue-based – should be registered, such activities can often slip through the net.

KI Design’s experience shows that, by using the following three parameters while interrogating platforms’ transparency data, an EMB will get a better estimate of political financing – both of the actual money invested, and of who is spending it:

  1. Query a candidate’s own expenditure within their constituency;
  2. Query how much the candidate’s constituency party has spent;
  3. Query the platform for topics related to election issues: for example, farming subsidies in rural areas, pipeline creation, or tariffs on the export of certain commodities. As platform transparency reports don’t include the geographical location of advertisers, queries should include multiple keywords to track spending in a particular location. For example, if pipeline creation is an issue in that area, then by searching for pipeline-related keywords an EMB can discover who the payers are, and can see whether they are registered. If findings show that money has been spent through other parties that aren’t registered, and are neither a political party, a candidate, nor a registered lobby group – that’s a violation.
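As a rough illustration of interrogating transparency data with these three kinds of query, the following Python sketch filters a hypothetical CSV export; the column names, payers, and figures are invented for the example:

```python
# Hedged sketch: querying a platform transparency report (modelled as
# a CSV export) by candidate, region, and issue keywords. All data and
# field names below are illustrative assumptions.
import csv
import io

report = io.StringIO("""payer,candidate,region,keywords,amount_usd
Jane Smith Campaign,Jane Smith,Northville,healthcare,1200
Friends of Pipelines,,Northville,pipeline;energy,800
Acme PAC,,Capital City,tariffs,500
""")

rows = list(csv.DictReader(report))

def spend(rows, **filters):
    """Sum spending over rows matching all substring filters."""
    total = 0
    for r in rows:
        if all(value.lower() in r[field].lower()
               for field, value in filters.items()):
            total += int(r["amount_usd"])
    return total

# 1. A candidate's own expenditure within their constituency
print(spend(rows, candidate="Jane Smith", region="Northville"))  # 1200
# 3. Issue-based spending: who is paying for pipeline messaging?
print(spend(rows, keywords="pipeline"))                          # 800
```

The issue-keyword query is what surfaces unregistered payers: any payer returned by the topic filter can then be checked against the register of political advertisers.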

While an EMB won’t be able to get a complete picture of online political spending patterns from transparency reports, leveraged skillfully they can be a useful resource in an investigation.

We recommend that EMBs work with legislators to ensure that platforms include geographic location as part of their transparency reports. Furthermore, any page names, group names, or room names on platforms that are associated with a real entity, whether an individual or a corporation, should be made public.

The CLC advocates updating campaign finance regulation with “across-the-board rules for digital ad transparency.” In KI Design’s opinion, these rules should be clear and specific, ensuring that platforms report:

WHO is spending money?
WHEN did the spending occur?
WHAT topics were spent on?

Any new legislation should also mandate platforms to include obligatory post-mortem transparency reports: an enumeration of every single page/group/room that was advertised, and its association with a real entity (individual or registered corporation). This would include pages that have been taken down.

Content on Telegram, WhatsApp, and WeChat poses a challenge for EMBs, as the data within them is not publicly available. We suggest that EMBs create a policy covering these platforms and the extent to which the EMB will monitor them.

The prevalence of social media causes a number of political financing-related issues for EMBs.

Many EMBs will benefit from consultation around the possibilities of social media monitoring, and companies such as KI Design can advise and implement tools. There are no prefab solutions, as laws are in flux and vary by jurisdiction, but KI Design can guide EMBs to understand the capacities monitoring offers, and the issues it can be utilized to address. By knowing what questions to ask, we can help you find the answers you need.


[1] See: https://www.blog.google/technology/ads/update-our-political-ads-policy/

[2] See: https://www.reddit.com/r/announcements/comments/g0s6tn/changes_to_reddits_political_ads_policy/

[3] Brendan Fischer, “New CLC Report Highlights Digital Transparency Loopholes in the 2020 Elections” (April 8, 2020), online at: https://campaignlegal.org/update/new-clc-report-highlights-digital-transparency-loopholes-2020-elections


Categories: Innovation

KI Live Video Conferences

Description

KI Live Video enables online communication for audio meetings, video meetings, and seminars, with built-in features such as chat, screen sharing, and recording. The plugin enables long-distance and international communication, enhances collaboration, and reduces travel costs. Employees at every level within an organization can use video conferencing tools to host or attend virtual meetings with fellow employees, company partners, or customers, no matter where the attendees are physically located.

KI Live video conferencing eliminates the need for in-person attendance in both quick scrums and important meetings, adding convenience to daily schedules for all involved, improving client relationships, and ensuring open and consistent communication between teams.

BASIC FEATURES OFFERED BY KI:

KI LIVE PROVIDES E-VISIT CAPABILITY.

It enables health care practitioners to effectively evaluate, diagnose, and treat patients remotely. eVisit is a web conferencing and mobile-ready application that provides an alternative mode of treatment to in-person clinic visits. With KI Live eVisit, health care professionals can provide real-time treatment and care to their patients around the clock, in addition to offering their patients convenience, bettering patient follow-up and engagement, and reducing the number of missed appointments and cancellations. KI Live can be utilized by any health care professional in any health care specialty.
KI Live Features:
a) Encrypts patient data to remain in compliance with HIPAA, PHIPA, and HIA regulations
b) Utilizes technology based on public communication networks
c) Adheres to strict reporting and transparency requirements.

WHAT YOU SHOULD KNOW ABOUT KI LIVE VIDEO

KI Live Video can be an incredibly flexible tool in a business’s software ecosystem. You can use it for internal check-ins, conference calls, external meetings, and presentations.

WHAT ARE THE BENEFITS OF USING KI LIVE VIDEO

Save money and resources with cheaper long-distance and international communication options.
Eliminate geographic barriers and allow your team to work remotely.
Enhance team collaboration by allowing for increased engagement through screen sharing and file sharing.
Reduce travel costs by allowing people to join meetings from the comfort of their office.

WHY USE KI LIVE VIDEO CONFERENCING

KI Live Video offers all the benefits that come with face-to-face communication, without the cost of commuting or travelling. For businesses, this means no hiccups in communication for remote employees, potential prospects, or outside stakeholders, regardless of travel capability. This can be especially useful for small businesses looking to grow without exorbitant travel costs.
KI Live Video can also be used as webinar software.

WHO USES KI LIVE VIDEO

In the post-COVID-19 era, businesses of all types and sizes find multiple uses for KI Live Video. It has proved to be a powerful check-in tool for companies with multiple branches or upper management in separate locations. It’s also used by growing businesses to expand their prospects and check in with employees without the need for travel. Even mid-market and enterprise-level businesses use video conferencing in their daily operations to connect with stakeholders and prospects.
KI Live Video is appealing to freelancers or other self-employed individuals.

KI LIVE VIDEO CONFERENCING SOFTWARE FEATURES

Video calling – Offers high-quality video for one-to-one or conference calls.

Audio calling – Allows participants to join conference calls with audio only, or with video and audio.

Recording – Allows users to record a video or audio conference call so it can be reviewed later.

Screen share – Allows participants to share their screens alongside, or instead of, a webcam feed.

Text chat – Offers live text chat for participants to use alongside or instead of audio. These text chats can be recorded and referred to later. Peer-to-peer and peer-to-group text chatting are also available outside of video meetings.

Scheduling – Provides the ability to schedule meetings in-app.

Presentations – Allows for presentation hosting.

HOW TO USE

Start work with KI Live Video Conference for WordPress
https://youtu.be/68bJetc-C4o

How to attach the KI Live Video Conference plugin to Zoom
https://youtu.be/aGjy6OTFIok

How to attach the KI Live Video Conference plugin to Rainbow
https://youtu.be/aB6T7-asj2k

Create KI Live Video Conference – Zoom
https://youtu.be/S6G-J3ESwwk

Create KI Live Video Conference – Rainbow
https://youtu.be/0znvFar4gGw


Categories: Election

KI DESIGN NATIONAL ELECTION SOCIAL MEDIA MONITORING PLAYBOOK — PART IV of V

Monitoring Online Discourse During An Election: A 5-Part Series

How to monitor social media with AI-based tools during an election campaign

 Traditional election monitoring is a formalized process in democratic countries, set out in the mandate of the national Electoral Management Body (EMB). As social and digital cultures change, however, EMBs are finding it useful to expand their monitoring capacities to include social media.

Given the media coverage of interference with the 2016 US and UK elections, and the fallout from the Cambridge Analytica debacle, politicians and the public are wary of the impact social media manipulation can have on electoral processes.

As well, automated tools like bots, created locally or outside the country, disrupt the existing system. They amplify political messaging, yet are not currently covered by political financing regulations; and they can disseminate disinformation.

Tracking social media allows an EMB to stay on top of operational issues during an election period (see Part III: Managing Operational Issues) and also to detect and track disinformation and its spread (see Part II: Identifying Disinformation).

This Playbook is designed for EMBs. As a template, it will obviously need to be adapted depending on the jurisdiction. It describes social media monitoring as a function within an EMB, and assumes that the EMB has access to an AI-based social media monitoring tool, such as KI Social.

To be effective, this process should be put in place well before the first milestone of the election period.

THE PLAYBOOK

1.    Setting up

Before the project begins:

·         Ensure there is clarity regarding the goals of the social media monitoring. A key issue is scope: does the EMB mandate include monitoring of operational issues, or disinformation and misinformation, or all of the above? (see Part III: Managing Operational Issues).

·         Does the mandate include tracking voter issues expressed within national borders, or will it also include voters travelling or living abroad? This is important for the many nations with large expatriate communities. If results should include social media posts from voting citizens residing outside their country, it won’t be possible to limit data by geographic location.

·         Will the monitoring function be active (occurring continuously), passive (occurring once a week, for example), or retroactive (taking place after each election milestone is completed)?

·         In this phase, the EMB should inventory its internal staff capacity; for example, does its staff include social media content producers, policy personnel, social media analysts, social media monitors, or data scientists? If not, arrange with an experienced vendor such as KI Design to provide these services.

 

Technical set-up: The technical team within the EMB should document:

a.    All EMB web and social media assets

b.    All relevant national and international news sites

c.    Lessons learned from the previous election

d.    A list of all confirmed candidates when it is finalized

e.    All political party data and web assets

f.     Details of political spending on Twitter, Reddit, Facebook pages, and other platforms

2.    Acquire a social media service provider with the following capabilities:

a.    Full firehose access to data, going back at least to one previous election. It’s vital to be able to analyse data from the previous election, to understand what potential issues may occur in the current one. For example, there may be specific complaints related to a particular location, or to the capacity of polling station staff. That said, historic analysis will not provide a complete picture; new issues and ambiguities will arise.

b.    Ability to track keywords that are not necessarily related to elections; for example, power outages, roadblocks, protests, etc.

c.    Geolocation capacities:

i.    For posts with geolocation tags, the tool should display post locations on a map. For example, posts may complain about ballot non-delivery in a certain region.

ii.    For posts without geolocation tags, the tool should have the ability to group them and map them visually, to show concentration; for example, a post without a geolocation tag may state “unable to find [named] polling station on EMB website”; the visualisation component is important so that logistical issues can be dealt with collectively rather than as individual instances.

d.    The tool should permit custom views for various EMB staff skillsets. For example: content producers would want to measure the volume of incoming and outgoing messages on the EMB’s social media channel; data scientists may want to write sophisticated queries; issue managers may need views that show whether or not posts have been responded to.

e.    Your vendor should be able to provide data science analytics and application dashboard customization expertise.

f.     Your vendor needs to have experience in provisioning data science services to EMBs.

3.    Noise elimination

A query contains an expression composed of keywords, emojis, and URLs to be tracked. It must cover both the keywords you are looking for and the many keywords you want excluded from the search. For example: in a national election, if the query contains the phrase “election monitor,” the results could include any and all election monitoring occurring anywhere in the world, as well as election monitoring within municipalities, cities, unions, associations, or the UN, or from other regional-level elections.

In an election context, without noise elimination, some 90% of the results of a query are irrelevant. Hence, a large portion of the query should be dedicated to eliminating these irrelevant results.

Issues to be aware of include:

a.    Elections in other countries may be taking place concurrently; for example, an Indian provincial election and a national UK election. Especially if the countries share a common language, online discourse may include overlapping content, such as place names or street names.

b.    Name similarities of candidates with other citizens.

c.    Election talk on social media will be dominated by countries with higher per capita Internet access, and in particular by those whose citizens use Twitter most heavily, such as the US, UK, and France.

d.    In multi-lingual countries, where many unofficial languages are spoken, queries should aim to capture election discourse in languages other than the official one/s. With that comes the need for noise elimination related to the nation/s where those other languages are dominant.
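A minimal sketch of this kind of query with noise elimination, using hypothetical include and exclude lists (a production query language would support phrase operators, geo filters, and language filters):

```python
# Hypothetical keyword lists; real queries would be far larger.
include_terms = ["election monitor", "polling station", "#GE2019"]
exclude_terms = ["union election", "UN election", "India", "Ivory Coast"]

def matches_query(text):
    """Keep a post only if it mentions an include term and no exclude term."""
    lowered = text.lower()
    has_include = any(term.lower() in lowered for term in include_terms)
    has_exclude = any(term.lower() in lowered for term in exclude_terms)
    return has_include and not has_exclude

posts = [
    "Election monitor reports long queues in Leeds #GE2019",
    "Union election monitor announces results of shop-steward vote",
    "Polling station moved due to flooding",
]

# The union-election post matches an include term but is dropped as noise.
relevant = [p for p in posts if matches_query(p)]
```

Note how the second post contains “election monitor” yet is excluded: in practice most of the query’s bulk goes into the exclusion side, not the inclusion side.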

4.    Your social media monitoring tool should include three distinct functions:

a.    Dashboards and reports: Real time, periodic (e.g., every four hours), daily, weekly, or monthly

b.    Data feeds: Each feed is dedicated to:

i.    Operational issue/s

ii.    Capturing the EMB’s footprint; this feed would be dedicated to finding any posts that mention the EMB, its leadership, or the relevant legislation

iii.    All parties and all candidates (including any events or investments or announcements by the political parties)

iv.    Data feeds specific to disinformation and misinformation

c.    Alerts of any media mentions that are of particular interest to the EMB
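As a sketch, the three functions above might be expressed as a single configuration object; every feed name, keyword, and handle here is an illustrative assumption, not a real EMB’s configuration:

```python
# Illustrative monitoring configuration covering dashboards, feeds, and alerts.
monitoring_config = {
    "dashboards": {
        "real_time": True,
        "periodic_hours": 4,   # e.g. a report every four hours
        "daily": True,
        "weekly": True,
        "monthly": True,
    },
    "feeds": {
        # One feed per concern, as listed in the playbook above.
        "operational_issues": ["power outage", "roadblock", "long lineup"],
        "emb_footprint": ["@EMB_Official", "Elections Act",
                          "chief electoral officer"],
        "parties_and_candidates": ["Party A", "Party B", "Candidate X"],
        "disinformation": ["robocall", "election date changed",
                           "vote by text"],
    },
    "alerts": {
        # Mentions that should notify an analyst immediately.
        "keywords": ["EMB hacked", "ballots destroyed"],
        "channel": "email",
    },
}
```

Keeping dashboards, feeds, and alerts as separate top-level keys mirrors the separation of duties between content producers, data scientists, and issue managers described earlier.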

5.    Create specific filters to target election milestones

a.    Content regarding election steps prior to election day:

i.    Voter registration

ii.    Ballot mailing

iii.    Citizens moving residence

iv.    Allowed pieces of identification

v.    Election date

vi.    References to bias by EMB officials

vii.    Impersonation of EMB or political candidates

b.    On voting days (advance polling and election day):

i.    Lineups

ii.    Availability of paper ballots

iii.    Registration issues

iv.    Staff issues

v.    Directions to polling station

vi.    Power outages

vii.    Poll relocation

c.  Ballot counting hours: Analysing concerns and content appearing after polling stations close and before the results are issued.

d.  Post-election reporting: providing aggregate data on the monitoring activity and the number of situations averted, mitigated, or responded to.
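The milestone-specific filters above can be represented as keyword sets keyed by election phase, with a simple rule choosing the active set; the keywords and the phase rule are illustrative assumptions:

```python
from datetime import date

# Illustrative keyword filters keyed by election phase.
milestone_filters = {
    "pre_election": [
        "voter registration", "ballot mailing", "moved residence",
        "accepted ID", "election date", "biased official", "impersonation",
    ],
    "voting_days": [
        "lineup", "no paper ballots", "registration problem",
        "staff shortage", "directions", "power outage", "poll relocated",
    ],
    "ballot_counting": ["count delayed", "recount", "results"],
}

def active_filter(today, election_day):
    """Pick the filter set for the current phase. Simplified: advance
    polling days are folded into the pre-election period here."""
    if today < election_day:
        return milestone_filters["pre_election"]
    if today == election_day:
        return milestone_filters["voting_days"]
    return milestone_filters["ballot_counting"]

# The day before the vote, the pre-election filter is active.
keywords = active_filter(date(2019, 10, 20), date(2019, 10, 21))
```

Rotating filters by milestone keeps the query focused: “ballot mailing” complaints are signal in September and noise on counting night.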

 

Part of a 5-part series on

Monitoring Online Discourse During An Election:

PART ONE:  Introduction

PART TWO:  Identifying Disinformation

PART THREE:  Managing Operational issues

PART FOUR:  KI Design National Election Social Media Monitoring Playbook

PART FIVE:  Monitoring Political Financing Issues



MANAGING OPERATIONAL ISSUES DURING AN ELECTION PART III of V

Monitoring Online Discourse During An Election: A 5-Part Series

The advantages of managing election logistical issues through social media.

Organizing the logistics of an election is a complex process. It’s a question of scale; the sheer numbers involved – of voters, of polling options and locations, and of election materials – means that things can, and will, go wrong.

POTENTIAL OPERATIONAL ISSUES

  • Delay in receiving ballots in the mail
  • Questions about options of voting electronically or by mail
  • Incorrect name or address on ballot
  • How to find information on where to vote
  • Confusion regarding polling station location
  • Confusion regarding the hours that polling stations are open
  • Accessibility issues
  • Confusion regarding what ID to bring
  • Power outages that impact polling stations
  • Road blocks and construction impeding access to a polling station
  • Whether polling hours are delayed
  • Whether there is a long line-up and voting is delayed
  • Availability, and courtesy, of EMB personnel
  • Conflicts at the polling station
  • Issues re third-party election monitors (if applicable)
  • Police presence
  • Dead people and non-citizens voting

Operational issues can be divided into two types. First, there are logistical concerns, such as those listed above. Second, there are problems caused by the propagation of disinformation (or misinformation).

What role is played by Disinformation?

Operational Issues, Disinformation, and Misinformation often overlap. Tweets regarding the location of a particular polling station fall into the Operational Issues category, but that information may be mistaken (Misinformation) or deliberately misleading (Disinformation). Disinformation and Misinformation overlap almost completely; the only difference is the intent behind the sharing of inaccurate information.

As the table below demonstrates, many Operational Issues may also become targets of Disinformation or Misinformation.

 

Why should EMBs monitor social media?

EMBs have a formal complaints process, and if concerns are raised outside that process, EMB staff are not obligated to respond. However, given the pervasive nature of social media, vexed voters are much more likely to grouse on the Internet than to file a formal complaint. Social media, and Twitter in particular, has become an informal complaints channel. With its use of hashtags, Twitter dominates social media election discourse. (Election discussion on Facebook, Telegram, and WhatsApp takes place on private pages.) The chart below shows social media discourse around the 2019 UK general election with the hashtag #GE2019, by volume.

What can EMBs do about social media-based complaints?

Complaints can fall into one of several categories:

Social media as a mass communication tool: Social media messaging can mitigate public discontent, respond proactively to problems, and send broad messaging demonstrating that the EMB is in control of the situation. For example: after complaints of robocalls which state the election date has changed, the EMB could tweet that these robocalls are giving false information and should be ignored. Such messaging will be picked up by traditional media.

When should EMBs monitor social media?

Monitoring should occur throughout the election period. Election milestones tend to be flashpoints when online discourse increases; these are highlighted in the diagram below.

 


Identifying Disinformation — Part II

Monitoring Online Discourse During An Election: A 5-Part Series

Using AI to track disinformation during an election campaign.

 

How can online disinformation be identified and tracked? KI Design provided social media monitoring solutions for the 2019 Canadian federal election.[1] KI Social is a suite of tools designed to support three main areas of Electoral Management Body (EMB) electoral monitoring as it pertains to social media:

Disinformation: False information spread deliberately to deceive, including:

Operational issues: Problems related to practical aspects of the voting process.

Political financing issues: This can be divided into two main categories:

Online voter discourse occurs in waves. It generally peaks in the event of any significant political incidents, and around election milestones such as:

In providing monitoring of online electoral discourse, KI Social analyses posts originating on Facebook and Instagram pages, Twitter, Reddit, Tumblr, Vimeo, YouTube (including comments), blogs, forums, and online news sources (including comments). The platform provides real-time monitoring, sentiment and emotion analysis, and geo-location mapping.

Using AI analytics and classification mining, KI Social can identify disinformation and misinformation sources, discourses, content, and frequency of posting. The platform maps relationships between various disinformation sources, and within content.

How Disinformation Impacts Elections, and Voters

Disinformation undermines democracy. That’s its purpose. Usually extreme in nature, fake news polarizes people, creating or exacerbating social and political divisions, and breeding cynicism and political disengagement. Even the reporting of disinformation campaigns (Russian electoral interference, for example) adds to the destabilization, making people wary of what to believe.

“Over the past five years, there has been an upward trend in the amount of cyber threat activity against democratic processes globally…. Against elections, adversaries use cyber capabilities to suppress voter turnout …”[2]

Disinformation campaigns often focus on election processes. The aim is to lower voter turnout, by preventing people from voting, or simply by making them less inclined to do so. This can play out in different ways.

Below, I list examples of some of the topics of disinformation, taken from posts from different countries, that can be found in the social media universe during an election period.

Making it harder for people to physically vote

False information gets spread about the location of a polling station, or its hours of operation, or power outages on site causing long line-ups. Fake news like this creates confusion, making people less likely to vote.

 Undermining voter trust in the EMB

Findings included:

 

This was fake news – no one can vote in a federal election in Canada without being a citizen. Other posts criticized the government for allowing prisoners to vote, even though this is a well-established right under the Canadian Charter of Rights and Freedoms. Both these types of posts foster a sense of disenchantment with “the system.”

Other posts:

Undermining voter trust in the electoral process generally

Findings included:

Polarizing the electorate

 

Other sources of disinformation include:

 

Identifying Disinformation, and Those Who Disseminate It

In monitoring elections, the social media analyst is confronted with an enormous amount of data. The key to accessing and interpreting that data is the keywords and queries the analyst chooses to use. To be effective, these must be shaped by a close understanding of the political context. This process is “highly selective,” notes Democracy Reporting International. “It is not possible to have a comprehensive view of what happens on all social media in an election. Making the right choices of what to look for is one of the main challenges of social media monitoring.”[4]

 

To help circumvent this challenge, the KI Social platform:

 

When analysts track disinformation, they usually have preconceived ideas of what it will look like. EMBs are very familiar with mainstream media, and how to track potential disinformation within it (false claims by politicians, for example). Traditional manual modes of monitoring rely on this history of prior examples, and require that human monitors read and analyze every single post to decide whether it’s disinformation.

 

Some EMBs expand their capabilities by leveraging automated tools. However, standard automated data queries are still based on prior examples and thus are error prone.

 

A third way of tracking disinformation is via AI-based tools. Such tools avoid these pitfalls by allowing the analyst to track unprecedented volumes of data, along with its location, its context, and the sentiment being expressed. Negative emotion is key, as it is the main marker of disinformation in this approach.

 

The diagram below illustrates how KI Social’s methodology can be used by EMBs to track disinformation.

 

Removing unwanted data is conducted at every stage of the process (for example, if analysts are studying an election in France, and there are elections in Ivory Coast at the same time, the Ivory Coast data will need to be filtered out of the results).

 

Disinformation can be expressed as an algorithm:

 

Disinformation = [(hate OR distrust OR obfuscation) × volume] ÷ [Election context × (anger OR disgust OR sadness OR fear)]
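Read as a scoring heuristic rather than a strict equation, the formula might be sketched as follows. The post labels, emotion tags, and normalisation below are hypothetical assumptions for illustration, not KI Social’s actual implementation:

```python
def disinformation_score(posts):
    """Heuristic from the formula: a signal of hate, distrust, or
    obfuscation, multiplied by volume, normalised against election
    context and negative emotion (anger, disgust, sadness, fear)."""
    signal = sum(1 for p in posts
                 if p["labels"] & {"hate", "distrust", "obfuscation"})
    in_context = sum(1 for p in posts if p["election_context"])
    negative = sum(1 for p in posts
                   if p["emotions"] & {"anger", "disgust", "sadness", "fear"})
    if in_context == 0 or negative == 0:
        return 0.0  # no election context or no negative emotion: no score
    return (signal * len(posts)) / (in_context * negative)

# Two hypothetical labelled posts: one distrustful and angry, one benign.
posts = [
    {"labels": {"distrust"}, "emotions": {"anger"}, "election_context": True},
    {"labels": set(), "emotions": {"joy"}, "election_context": True},
]
score = disinformation_score(posts)  # (1 x 2) / (2 x 1) = 1.0
```

The point of the division is normalisation: raw volume alone should not drive the score unless it sits squarely in an election context.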

 

Animated by these queries, KI Social provides the ability to answer the following questions:

 


 

Follow me at @drwhasssan

 


[1] This article reflects the views of KI Design, and not those of Elections Canada. The full report on how Elections Canada uses social media monitoring tools, including those created by KI Design, in the 2019 federal election can be found here: Office of the Chief Electoral Officer of Canada, Report on the 43rd General Election of October 21, 2019: https://www.elections.ca/res/rep/off/sta_ge43/stat_ge43_e.pdf.

[2] Communications Security Establishment, Cyber Threats to Canada’s Democratic Process, Government of Canada 2017, page 5.

[3] All the posts below are in the public domain; nevertheless, we have removed the identity of the poster in the screenshots we provide.

[4] Democracy Reporting International, “Discussion Paper: Social Media Monitoring in Elections,” December 2017, page 2; online at: https://democracy-reporting.org/wp-content/uploads/2018/02/Social-Media-Monitoring-in-Elections.pdf



Monitoring Online Discourse During An Election: A 5-Part Series

PART I: INTRODUCTION

Online interference with elections is a reality of 21st century politics. Social media disinformation campaigns have targeted citizens of democracies across the globe and impacted public perceptions and poll results, often dramatically.

Disinformation: False information spread with the intent to deceive.

Misinformation: Inaccurate information spread without an intent to deceive.

Political campaigns, some less committed to accuracy than in the past, pay for online ads microtargeting particular demographics. Fake news, deliberately fabricated, edges into Facebook feeds alongside legitimate sources, and is shared by unsuspecting users. Fake Twitter accounts push extreme views, shifting and polarizing public discourse. Fake news spreads rapidly; according to Elodie Vialle of Reporters Without Borders, false information spreads six times faster than accurate information.[1]

“Domestic and international, state and non-state actors manipulate information online in order to shape voters’ choices or simply confuse and disorient citizens, paralyze democratic debate and undermine confidence in electoral processes.”[2]

This phenomenon has developed so rapidly, and is so pervasive, that it has been hard for legislators to know how to regulate it. NGOs and governmental agencies have stepped into the gap. Their primary weapon is social media analytics, powered by AI. Data science can locate trolls and fraudulent accounts: via algorithm, programs can be trained to identify potential bots and unusual political material.[3]
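As a toy illustration of that kind of algorithmic flagging, here is a rule-based sketch with hypothetical account features and thresholds; real systems train models on many more signals, such as network structure and content similarity:

```python
def looks_like_bot(account):
    """Flag an account as a potential bot using simple heuristics:
    very high posting frequency, a young account, and few followers
    relative to accounts followed. Thresholds here are illustrative."""
    score = 0
    if account["posts_per_day"] > 100:
        score += 1
    if account["account_age_days"] < 30:
        score += 1
    if account["followers"] < account["following"] / 10:
        score += 1
    return score >= 2  # flag when at least two heuristics fire

# A hypothetical suspicious account: new, hyperactive, few followers.
suspect = {"posts_per_day": 250, "account_age_days": 12,
           "followers": 40, "following": 5000}
```

Requiring two of the three signals, rather than any single one, reduces false positives on legitimate high-volume accounts such as news outlets.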

Many of these initiatives are based in the European Union. They track disinformation produced by local and/or foreign actors. Here are a few such organizations, with a brief summary of their work:

Debunk.eu

(Lithuanian)

Uses AI to analyse 20,000 online articles a day, using variables such as keywords, social interaction, and publication across multiple domains. The 2% deemed most likely to be disinformation are then analysed by volunteer factcheckers, and journalists write rebuttals of the most egregious.
Prague Security Studies Institute

(Czech Republic)

Uses the web crawler Versus to monitor suspicious sites, starting four to five weeks before an election; manual coders then analyse the content using variables such as message type, sentiment, and number of shares. The Institute produces weekly summaries of its findings, which are distributed to media outlets.
Computational Propaganda Project at the Oxford Internet Institute

(UK)

Focuses on disinformation campaigns and social media bots. Its in-house platform scrapes public posts, which are then classified by human coders familiar with the monitored state’s political culture. A Junk News Aggregator also tracks fake stories spreading on Facebook.

Other agencies monitor social media around elections in Africa, Eastern Europe, and the Americas. Here’s one example:

Getúlio Vargas Foundation – Digital Democracy Room

(Brazil)

During the 2018 elections, DDR tracked data from Twitter, Facebook, and YouTube to analyse bot activity and international influence. Their Twitter analysis was facilitated by the ease of API access. DDR’s analysis was hampered by lack of access to data from WhatsApp, increasingly popular in Brazil.

Here in Canada, KI Design, a big data analytics and research firm, built analytics to detect and identify dis- and misinformation around the 2019 Canadian federal election.

 

KI Design

(Canada)

KI Design utilized full firehose access to Twitter, as well as posts and comments on online news, blogs, forums, social networks, Facebook pages, Reddit, Tumblr, and YouTube. Using AI analytics and classification mining, we were able to identify disinformation and misinformation sources, discourses, content, and frequency of posting. We mapped relationships between various disinformation sources and within content.

This forthcoming series will dig deeper into how to monitor electoral disinformation, and the different issues and challenges involved.

 

 


 

[1] Staff, “Artificial Intelligence and Disinformation: Examining challenges and solutions,” Modern Diplomacy, March 8, 2019: online at: https://moderndiplomacy.eu/2019/03/08/artificial-intelligence-and-disinformation-examining-challenges-and-solutions/.

[2] Open Society, “Experiences of Social Media Monitoring During Elections: Cases and Best Practice to Inform Electoral Observation Missions,” May 2019; online at: https://www.opensocietyfoundations.org/publications/social-media-monitoring-during-elections-cases-and-best-practice-to-inform-electoral-observation-missions.

[3] European Parliamentary Research Service, Regulating disinformation with artificial intelligence, March 2019: online at: https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624279/EPRS_STU(2019)624279_EN.pdf.