Categories: Election

KI DESIGN NATIONAL ELECTION SOCIAL MEDIA MONITORING PLAYBOOK — PART IV of V

Monitoring Online Discourse During An Election: A 5-Part Series

How to monitor social media with AI-based tools during an election campaign

 Traditional election monitoring is a formalized process in democratic countries, set out in the mandate of the national Electoral Management Body (EMB). As social and digital cultures change, however, EMBs are finding it useful to expand their monitoring capacities to include social media.

Given the media coverage of interference with the 2016 US and UK elections, and the fallout from the Cambridge Analytica debacle, politicians and the public are wary of the impact social media manipulation can have on electoral processes.

As well, automated tools like bots, created locally or outside the country, disrupt the existing system. They amplify political messaging yet are not currently covered by political financing regulations, and they can disseminate disinformation.

Tracking social media allows an EMB to stay on top of operational issues during an election period (see Part III: Managing Operational Issues) and also to detect and track disinformation and its spread (see Part II: Identifying Disinformation).

This Playbook is designed for EMBs. As a template, it will obviously need to be adapted, depending on jurisdiction. It describes social media monitoring as a function within an EMB, and assumes that the EMB has access to an AI-based social media monitoring tool, such as KI Social.

To be effective, this process should be put in place well before the first milestone of the election period.

THE PLAYBOOK

1.    Setting up

Before the project begins:

·         Ensure there is clarity regarding the goals of the social media monitoring. A key issue is scope: does the EMB mandate include monitoring of operational issues, or disinformation and misinformation, or all of the above? (see Part III: Managing Operational Issues).

·         Does the mandate include tracking voter issues expressed within national borders, or will it also include voters travelling or living abroad? This is important for the many nations with large expatriate communities. If results should include social media posts from voting citizens residing outside their country, it won’t be possible to limit data by geographic location.

·         Will the monitoring function be active (occurring continuously), passive (occurring once a week, for example), or retroactive (taking place after each election milestone is completed)?

·         In this phase, the EMB should inventory its internal staff capacity: for example, does its staff include social media content producers, policy personnel, social media analysts, social media monitors, or data scientists? If not, arrange for an experienced vendor such as KI Design to provide these services.

 

Technical set-up: The technical team within the EMB should document:

a.    All EMB web and social media assets

b.    All relevant national and international news sites

c.    Lessons learned from the previous election

d.    A list of all confirmed candidates when it is finalized

e.    All political party data and web assets

f.     Details of political spending on Twitter, Reddit, Facebook pages, and other platforms

2.    Acquire a social media service provider with the following capabilities:

a.    Full firehose access to data, going back at least as far as the previous election. It’s vital to be able to analyse data from the previous election, to understand what potential issues may occur in the current one. For example, there may be specific complaints related to a particular location, or to the capacity of polling station staff. That said, historic analysis will not provide a complete picture; new issues and ambiguities will arise.

b.    Ability to track keywords that are not necessarily related to elections; for example, power outages, roadblocks, protests, etc.

c.    Geolocation capacities:

i.    For posts with geolocation tags, the tool should display post locations on a map. For example, posts may complain about ballot non-delivery in a certain region.

ii.    For posts without geolocation tags, the tool should have the ability to group them and map them visually, to show concentration; for example, a post without a geolocation tag may state “unable to find [named] polling station on EMB website”. The visualisation component is important so that logistical issues can be dealt with collectively rather than as individual instances (see the sketch after this list).

d.    The tool should permit custom views for various EMB staff skillsets. For example: content producers would want to measure the volume of incoming and outgoing messages on the EMB’s social media channel; data scientists may want to write sophisticated queries; issue managers may need views that show whether or not posts have been responded to.

e.    Your vendor should be able to provide data science analytics and application dashboard customization expertise.

f.     Your vendor needs to have experience in provisioning data science services to EMBs.
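As an illustration of the grouping capability described in point (c), the sketch below clusters posts that lack geolocation tags by the place names they mention, so that concentrations of similar complaints surface together. The place list and posts are hypothetical, and a production tool would rely on a proper gazetteer and geocoder rather than simple string matching.

```python
# Minimal sketch of grouping untagged posts by mentioned place names so that
# clusters of logistical complaints show up together (place list is hypothetical;
# a production tool would use a proper gazetteer and geocoder).
from collections import Counter

KNOWN_PLACES = {"springfield", "riverdale", "lakeside"}  # hypothetical polling regions

def mentioned_places(post_text: str) -> set[str]:
    """Return the known place names mentioned in a post."""
    text = post_text.lower()
    return {place for place in KNOWN_PLACES if place in text}

posts = [
    "Unable to find the Riverdale polling station on the EMB website",
    "Riverdale station address is wrong on the map",
    "No ballot received yet here in Lakeside",
]

concentration = Counter(place for p in posts for place in mentioned_places(p))
print(concentration)  # Counter({'riverdale': 2, 'lakeside': 1})
```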

3.    Noise elimination

A query contains an expression composed of keywords, emojis, and URLs to be tracked. It will include both the keywords you are looking for and the many keywords you want excluded from the search. For example: In a national election, if the query contains the phrase “election monitor,” the result could include any and all election monitoring occurring anywhere in the world, as well as any election monitoring within municipalities, cities, unions, associations, or the UN, or from other regional-level elections.

In an election context, without noise elimination, some 90% of the results of a query are irrelevant. Hence, a large portion of the query should be dedicated to eliminating these irrelevant results.
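As a minimal illustration of this include/exclude logic, the sketch below filters a handful of posts against hypothetical keyword lists. In practice these filters would be expressed in the query language of the monitoring tool or data provider rather than applied client-side as shown here.

```python
# Minimal sketch of query-side noise elimination (keyword lists are hypothetical).
# A real monitoring tool would push these filters into the provider's query
# language; here we filter plain post strings client-side for illustration.

INCLUDE = {"election monitor", "ballot", "polling station", "#ge2019"}
EXCLUDE = {"union election", "student council", "un observer mission"}

def is_relevant(post_text: str) -> bool:
    """Keep a post only if it matches an include term and no exclude term."""
    text = post_text.lower()
    has_include = any(term in text for term in INCLUDE)
    has_exclude = any(term in text for term in EXCLUDE)
    return has_include and not has_exclude

posts = [
    "Long lineups reported at my polling station this morning #GE2019",
    "Student council election monitor needed for next week",
]
relevant = [p for p in posts if is_relevant(p)]
print(relevant)  # only the first post survives the noise filter
```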

Issues to be aware of include:

a.    Elections in other countries may be taking place concurrently; for example, an Indian state election and a national UK election. Especially if both countries share a common language, online discourse may include overlapping content, such as place names or street names.

b.    Name similarities of candidates with other citizens.

c.    Election talk on social media will be dominated by countries with higher per capita access to the Internet, and in particular those whose citizens use Twitter most, such as the US, UK, and France.

d.    In multi-lingual countries, where many unofficial languages are spoken, queries should aim to capture election discourse in languages other than the official one(s). With that comes the need for noise elimination related to the nation(s) where those other languages are dominant.

4.    Your social media monitoring tool should include three distinct functions (a configuration sketch follows this list):

a.    Dashboards and reports: Real time, periodic (e.g., every four hours), daily, weekly, or monthly

b.    Data feeds: Each feed is dedicated to:

i.    Operational issue/s

ii.    Capturing the EMB’s footprint; this feed is dedicated to finding any posts that mention the EMB, its leadership, or the relevant legislation

iii.    All parties and all candidates (including any events or investments or announcements by the political parties)

iv.    Data feeds specific to disinformation and misinformation

c.    Alerts of any media mentions that are of particular interest to the EMB
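A rough configuration sketch of these three functions is shown below. The feed names, keywords, schedules, and alert thresholds are illustrative assumptions, not KI Social’s actual schema.

```python
# Hypothetical configuration sketch for the three functions described above:
# dashboards/reports, dedicated data feeds, and alerts. Names, keywords, and
# thresholds are illustrative only.

FEEDS = {
    "operational_issues": ["lineup", "ballot", "polling station", "power outage"],
    "emb_footprint":      ["EMB", "chief electoral officer", "elections act"],
    "parties_candidates": ["party convention", "candidate announcement"],
    "disinformation":     ["voting machines hacked", "election postponed"],
}

REPORT_SCHEDULE = {"realtime_dashboard": "continuous", "summary_report": "every 4 hours"}

ALERTS = [
    # Fire an alert when a feed's hourly post volume exceeds its threshold.
    {"feed": "disinformation", "hourly_volume_threshold": 500},
    {"feed": "emb_footprint", "hourly_volume_threshold": 1000},
]

def check_alerts(hourly_counts: dict[str, int]) -> list[str]:
    """Return the names of feeds whose volume crossed their alert threshold."""
    return [a["feed"] for a in ALERTS
            if hourly_counts.get(a["feed"], 0) > a["hourly_volume_threshold"]]

print(check_alerts({"disinformation": 750, "emb_footprint": 200}))  # ['disinformation']
```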

5.    Create specific filters to target election milestones (a filter sketch follows this list)

a.    Content regarding election steps prior to election day:

i.    Voter registration

ii.    Ballot mailing

iii.    Citizens moving residence

iv.    Allowed pieces of identification

v.    Election date

vi.    References to bias by EMB officials

vii.    Impersonation of EMB or political candidates

b.    On voting days (advance polling and election day):

i.    Lineups

ii.    Availability of paper ballots

iii.    Registration issues

iv.    Staff issues

v.    Directions to polling station

vi.    Power outages

vii.    Poll relocation

c.  Ballot counting hours: analysing concerns and content appearing after polling stations are closed and before the results are issued.

d.  Post-election reporting: providing aggregate data on the monitoring activity and the number of situations averted, mitigated, or responded to.
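The sketch below shows one way such milestone-specific filters could be organised, keyed to the phase of the election period. The keyword lists are examples only, not an exhaustive or official taxonomy.

```python
# Illustrative sketch of milestone-specific filters (keyword lists are examples,
# not an exhaustive or official taxonomy).
from datetime import date

MILESTONE_FILTERS = {
    "pre_election": ["voter registration", "ballot mailing", "accepted ID", "election date"],
    "voting_days":  ["lineup", "paper ballots", "registration desk", "power outage",
                     "poll relocation", "directions to polling station"],
    "count_hours":  ["ballot counting", "recount", "results delayed"],
}

def active_filter(today: date, election_day: date) -> list[str]:
    """Pick the keyword set for the current election phase (simplified)."""
    if today < election_day:
        return MILESTONE_FILTERS["pre_election"]
    if today == election_day:
        return MILESTONE_FILTERS["voting_days"]
    return MILESTONE_FILTERS["count_hours"]

print(active_filter(date(2019, 10, 21), date(2019, 10, 21)))  # voting-day keywords
```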

 

Part of a 5-part series on

Monitoring Online Discourse During An Election:

PART ONE:  Introduction

PART TWO:  Identifying Disinformation

PART THREE:  Managing Operational issues

PART FOUR:  KI Design National Election Social Media Monitoring Playbook

PART FIVE:  Monitoring Political Financing Issues


Categories: Election

MANAGING OPERATIONAL ISSUES DURING AN ELECTION PART III of V

Monitoring Online Discourse During An Election: A 5-Part Series

The advantages of managing election logistical issues through social media.

Organizing the logistics of an election is a complex process. It’s a question of scale; the sheer numbers involved – of voters, of polling options and locations, and of election materials – means that things can, and will, go wrong.

POTENTIAL OPERATIONAL ISSUES

  • Delay in receiving ballots in the mail
  • Questions about options of voting electronically or by mail
  • Incorrect name or address on ballot
  • How to find information on where to vote
  • Confusion regarding polling station location
  • Confusion regarding the hours that polling stations are open
  • Accessibility issues
  • Confusion regarding what ID to bring
  • Power outages that impact polling stations
  • Road blocks and construction impeding access to a polling station
  • Whether polling hours are delayed
  • Whether there is a long line-up and voting is delayed
  • Availability, and courtesy, of EMB personnel
  • Conflicts at the polling station
  • Issues re third-party election monitors (if applicable)
  • Police presence
  • Dead people and non-citizens voting

Operational issues can be divided into two types. First, there are logistical concerns, such as those listed above.

Second, there are problems caused by the propagation of disinformation (or misinformation).

What role is played by Disinformation?

There can often be an overlap between Operational Issues, Disinformation, and Misinformation. Tweets regarding the location of a particular polling station fall into the Operational Issues category, but that information may be mistaken (Misinformation) or deliberately misleading (Disinformation). There is an almost complete overlap between Disinformation and Misinformation – the only difference is the intent behind the sharing of inaccurate information.

As the table below demonstrates, many Operational Issues may also become targets of Disinformation or Misinformation.

 

Why should EMBs monitor social media?

EMBs have a formal complaints process, and if concerns are raised outside that process, EMB staff are not obligated to respond. However, given the pervasive nature of social media, vexed voters are much more likely to grouse on the Internet than to file a formal complaint. Social media has become an informal complaints process; Twitter in particular. With its use of hashtags, Twitter dominates social media election discourse. (Election discussion on Facebook, Telegram, and WhatsApp takes place on private pages.) The chart below shows social media discourse around the 2019 UK general election with the hashtag #GE2019, by volume.

What can EMBs do about social media-based complaints?

Complaints can fall into one of several categories:

Social media as a mass communication tool: Social media messaging can mitigate public discontent, respond proactively to problems, and send broad messaging demonstrating that the EMB is in control of the situation. For example: after complaints of robocalls which state the election date has changed, the EMB could tweet that these robocalls are giving false information and should be ignored. Such messaging will be picked up by traditional media.

When should EMBs monitor social media?

Monitoring should occur throughout the election period. Election milestones tend to be flashpoints when online discourse increases – these are highlighted in the diagram below.

 

Part of a 5-part series on

Monitoring Online Discourse During An Election:

PART ONE:  Introduction

PART TWO:  Identifying Disinformation

PART THREE:  Managing Operational issues

PART FOUR:  KI Design National Election Social Media Monitoring Playbook

PART FIVE:  Monitoring Political Financing Issues

 

 

 


Categories: Election

Identifying Disinformation — Part II

Monitoring Online Discourse During An Election: A 5-Part Series

Using AI to track disinformation during an election campaign.

 

How can online disinformation be identified and tracked? KI Design provided social media monitoring solutions for the 2019 Canadian federal election.[1] KI Social is a suite of tools designed to support three main areas of Electoral Management Body (EMB) electoral monitoring as it pertains to social media:

Disinformation: False information spread deliberately to deceive, including:

Operational issues: Problems related to practical aspects of the voting process.

Political financing issues: This can be divided into two main categories:

Online voter discourse occurs in waves. It generally peaks in the event of any significant political incidents, and around election milestones such as:

In providing monitoring of online electoral discourse, KI Social analyses posts originating on Facebook and Instagram pages, Twitter, Reddit, Tumblr, Vimeo, YouTube (including comments), blogs, forums, and online news sources (including comments). The platform provides real-time monitoring, sentiment and emotion analysis, and geo-location mapping.

Using AI analytics and classification mining, KI Social can identify disinformation and misinformation sources, discourses, content, and frequency of posting. The platform maps relationships between various disinformation sources, and within content.
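One of the analyses described above, mapping relationships between sources, can be illustrated with a minimal sketch that builds a mention/retweet graph and surfaces the most-amplified accounts. The account names and post structure are hypothetical; KI Social’s internal pipeline is not shown here.

```python
# Minimal sketch of mapping relationships between posting accounts via mentions
# and retweets (account names and the `posts` structure are hypothetical).
from collections import defaultdict

posts = [
    {"author": "account_a", "mentions": ["account_b"], "retweet_of": None},
    {"author": "account_c", "mentions": [], "retweet_of": "account_a"},
    {"author": "account_d", "mentions": ["account_a"], "retweet_of": "account_a"},
]

edges = defaultdict(int)  # (source, target) -> interaction count
for p in posts:
    for m in p["mentions"]:
        edges[(p["author"], m)] += 1
    if p["retweet_of"]:
        edges[(p["author"], p["retweet_of"])] += 1

# Accounts that are amplified most often are candidates for closer review.
amplified = defaultdict(int)
for (_, target), n in edges.items():
    amplified[target] += n
print(sorted(amplified.items(), key=lambda kv: -kv[1]))  # account_a is amplified most
```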

How Disinformation Impacts Elections, and Voters

Disinformation undermines democracy. That’s its purpose. Usually extreme in nature, fake news polarizes people, creating or exacerbating social and political divisions, and breeding cynicism and political disengagement. Even the reporting of disinformation campaigns (Russian electoral interference, for example) adds to the destabilization, making people wary of what to believe.

“Over the past five years, there has been an upward trend in the amount of cyber threat activity against democratic processes globally…. Against elections, adversaries use cyber capabilities to suppress voter turnout …”[2]

Disinformation campaigns often focus on election processes. The aim is to lower voter turnout, by preventing people from voting, or simply by making them less inclined to do so. This can play out in different ways.

Below are examples of disinformation topics, drawn from posts from different countries, that can be found on social media during an election period.

Making it harder for people to physically vote

False information gets spread about the location of a polling station, or its hours of operation, or power outages on site causing long line-ups. Fake news like this creates confusion, making people less likely to vote.

 Undermining voter trust in the EMB

Findings included:

 

This was fake news – no-one can vote in a federal election in Canada unless they have become a citizen. Other posts criticized the government for allowing prisoners to vote, even though this is a well-established right under the Canadian Charter of Rights and Freedoms. Both these types of posts foster a sense of disenchantment with “the system.”

Other posts:

Undermining voter trust in the electoral process generally

Findings included:

Polarizing the electorate

 

Other sources of disinformation include:

 

Identifying Disinformation, and Those Who Disseminate It

In monitoring elections, the social media analyst is confronted with an enormous amount of data. The key to accessing and interpreting that data is the keywords and queries the analyst chooses to use. To be effective, these must be shaped by a close understanding of the political context. This process is “highly selective,” notes Democracy Reporting International. “It is not possible to have a comprehensive view of what happens on all social media in an election. Making the right choices of what to look for is one of the main challenges of social media monitoring.”[4]

 

To help circumvent this challenge, the KI Social platform:

 

When analysts track disinformation, they usually have preconceived ideas of what it will look like. EMBs are very familiar with mainstream media, and how to track potential disinformation within it (false claims by politicians, for example). Traditional manual modes of monitoring rely on this history of prior examples, and require that human monitors read and analyze every single post to decide whether it’s disinformation.

 

Some EMBs expand their capabilities by leveraging automated tools. However, standard automated data queries are still based on prior examples and thus are error prone.

 

A third way of tracking disinformation is via AI-based tools. Such tools avoid these pitfalls by allowing the analyst to track unprecedented volumes of data, as well as its location, context, and the sentiment being expressed. Negative emotion is key, as it is the main determinant for disinformation.

 

The diagram below illustrates how KI Social’s methodology can be used by EMBs to track disinformation.

 

Removing unwanted data is conducted at every stage of the process (for example, if analysts are studying an election in France, and there are elections in Ivory Coast at the same time, the Ivory Coast data will need to be filtered out of the results).

 

Disinformation can be expressed as an algorithm:

 

                 (hate OR distrust OR obfuscation) x volume
Disinformation = -----------------------------------------------------------
                 Election context x (anger OR disgust OR sadness OR fear)
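One possible reading of this expression, sketched below as a heuristic, treats the election-context and negative-emotion terms as filters and the hate/distrust/obfuscation terms, weighted by volume, as the signal. The term lists, weighting, and interpretation are illustrative assumptions, not KI Social’s actual model.

```python
# One possible reading of the expression above, heavily simplified: within posts
# that match the election context and express negative emotion, score clusters of
# content by hate/distrust/obfuscation terms weighted by volume. Term lists and
# weights are illustrative assumptions.

HATE_DISTRUST = {"rigged", "fraud", "stolen", "corrupt"}
NEGATIVE_EMOTION = {"angry", "disgusting", "afraid", "sad"}
ELECTION_CONTEXT = {"election", "ballot", "vote", "polling"}

def disinformation_score(posts: list[str]) -> float:
    """Crude cluster-level score: signal terms x volume, gated on context and emotion."""
    in_scope = [p.lower() for p in posts
                if any(t in p.lower() for t in ELECTION_CONTEXT)
                and any(t in p.lower() for t in NEGATIVE_EMOTION)]
    signal = sum(1 for p in in_scope for t in HATE_DISTRUST if t in p)
    return signal * len(in_scope)  # weight by volume of in-scope posts

sample = [
    "I'm angry, the vote is rigged and the ballots were stolen",
    "Lovely weather at my polling station today",
]
print(disinformation_score(sample))  # only the first post contributes
```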

 

Animated by these queries, KI Social provides the ability to answer the following questions:

 

Part of a 5-part series on

Monitoring Online Discourse During An Election:

PART ONE:  Introduction

PART TWO:  Identifying Disinformation

PART THREE:  Managing Operational issues

PART FOUR:  KI Design National Election Social Media Monitoring Playbook

PART FIVE:  Monitoring Political Financing Issues

 

Follow me at @drwhassan

 


[1] This article reflects the views of KI Design, and not those of Elections Canada. The full report on how Elections Canada uses social media monitoring tools, including those created by KI Design, in the 2019 federal election can be found here: Office of the Chief Electoral Officer of Canada, Report on the 43rd General Election of October 21, 2019: https://www.elections.ca/res/rep/off/sta_ge43/stat_ge43_e.pdf.

[2] Communications Security Establishment, Cyber Threats to Canada’s Democratic Process, Government of Canada 2017, page 5.

[3] All the posts below are in the public domain; nevertheless, we have removed the identity of the poster in the screenshots we provide.

[4] Democracy Reporting International, “Discussion Paper: Social Media Monitoring in Elections,” December 2017, page 2; online at: https://democracy-reporting.org/wp-content/uploads/2018/02/Social-Media-Monitoring-in-Elections.pdf


How do I permanently delete my Facebook account?

Although there is a Facebook page that leads people to believe their account has been deleted, the information is in actual fact never deleted. What’s their excuse? Someone else may have liked your picture or article.

From Facebook’s pages:

If you don’t think you’ll use Facebook again, you can request to have your account permanently deleted. Please keep in mind that you won’t be able to reactivate your account or retrieve anything you’ve added. Before you do this, you may want to download a copy of your info from Facebook. Then, if you’d like your account permanently deleted with no option for recovery, log into your account and let us know.

When you delete your account, people won’t be able to see it on Facebook. It may take up to 90 days from the beginning of the deletion process to delete all of the things you’ve posted, like your photos, status updates or other data stored in backup systems. While we are deleting this information, it is inaccessible to other people using Facebook.

Some of the things you do on Facebook aren’t stored in your account. For example, a friend may still have messages from you even after you delete your account. That information remains after you delete your account.


Categories: social

Social Media’s Big Data Collection

By  Aydin Farrokhi and Dr. Wael Hassan

To highlight a few areas in which big data has helped improve organizational processes, the following real examples are worth mentioning. In education, some institutions have used big data to identify student candidates for advanced classes. In finance, big data has been used to provide access to credit through non-traditional methods; for example, LexisNexis created an alternative credit scoring system (RiskView) that provides alternative ways to assess creditworthiness. In healthcare, tailored medicine is a new approach to disease treatment and prevention based on an individual’s environment and lifestyle. In human resources, Google is using big data to help promote a more diverse workforce.

 

That said, a concern arises that certain groups of people will be categorized and excluded through the use of big data. In some cases, customers’ credit limits have been lowered not because of their payment history but because of where they had shopped.

 

Also of concern is the exposure of people’s sensitive data. The results of a study that combined data on Facebook “Likes” with limited survey information were staggering. The researchers were able to accurately predict the following (a sketch of the general technique follows the list):

 

  • Male users’ sexual orientation: 88% of the time
  • Users’ ethnic origin: 95% of the time
  • Users’ religion (Christian or Muslim): 82% of the time
  • Democrat or Republican affiliation: 85% of the time
  • Use of alcohol, drugs, or cigarettes: 65% to 75% of the time
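For readers interested in how such predictions are made, the sketch below shows the general technique: fitting a classifier on a binary user-by-“Like” matrix to predict a surveyed trait. The data here is tiny and synthetic; the referenced study used millions of real Likes and more sophisticated models.

```python
# Sketch of the general technique behind such findings: fit a classifier on a
# binary user-by-"Like" matrix to predict a surveyed trait. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_likes = 200, 50
likes = rng.integers(0, 2, size=(n_users, n_likes))     # 1 = user liked page j
trait = (likes[:, 0] | likes[:, 1]).astype(int)          # toy trait tied to two pages

model = LogisticRegression(max_iter=1000).fit(likes[:150], trait[:150])
accuracy = model.score(likes[150:], trait[150:])          # held-out accuracy
print(f"held-out accuracy: {accuracy:.2f}")
```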

 

Big data may even increase the incidence of fraud. Fraudsters can target vulnerable consumers and offer disingenuous services or goods for scamming purposes. Big data analytics allows organizations (or fraudsters) to more easily and accurately identify people who are drawn to sweepstakes offers or who are otherwise vulnerable prospects.

Other malicious uses are also possible, such as companies quoting or inferring misleading conclusions from a like-minded, pre-selected group of people that big data identified for them.


Overcoming the Challenges of Privacy of Social Media in Canada

By Aydin Farrokhi and Dr. Wael Hassan

In Canada, data protection is regulated by both federal and provincial legislation, and organizations that capture and store personal information are subject to several laws. The federal Personal Information Protection and Electronic Documents Act (PIPEDA), which governs personal information handled in the course of commercial activities, came fully into force in 2004. PIPEDA requires organizations to obtain consent from the individuals whose data is being collected, used, or disclosed to third parties. By definition, personal data includes any information that can be used to identify an individual, other than information that is publicly available. Personal information can only be used for the purpose for which it was collected, and individuals have the right to access their personal information held by an organization.

Amendments to PIPEDA 

The compliance and enforcement provisions in PIPEDA may not be strong enough to address the privacy aspects of big data. The Digital Privacy Act (also known as Bill S-4) received Royal Assent and is now law. Under this law, once it is fully in force, the Privacy Commissioner can bring a motion against a violating company, and fines of up to $100,000 can apply.

The Digital Privacy Act amends and expands PIPEDA in several respects:

 

  1. The definition of “consent” is updated: the DPA adds to PIPEDA’s consent and knowledge requirement a reasonable expectation that the individual understands what they are consenting to, namely the nature, purpose, and consequences of the collection, use, or disclosure of their personal data. Children and vulnerable individuals require specific consideration.

There are some exceptions to this rule; managing employees, fraud investigations, and certain business transactions are a few examples.

  2. Breach reporting to the Commissioner is mandatory (not yet in force).
  3. Timely breach notifications must be sent to the impacted individuals: the mandatory notification must explain the significance of the breach and what can be done, or has been done, to lessen the risk of harm.
  4. Breach record keeping is mandated: records must be kept of all breaches affecting personal information, whether or not there was a real risk of significant harm. These records may be requested by the Commissioner, required in discovery by a litigant, or requested by an insurance company to assess premiums for cyber insurance.
  5. Failure to report a breach to the Commissioner or the impacted individuals may result in significant fines.

Cross-Border Transfer of Big Data

The federal Privacy Commissioner’s position on personal information transferred to a foreign third party is that the transferred information is subject to the laws and regulations of the foreign country, and no contract can override those laws. Consent is not required to transfer personal data to a foreign third party. However, depending on the sensitivity of the personal data, affected individuals may need to be notified that their information may be stored or accessed outside of Canada, and of the potential impact this may have on their privacy rights.

Personal Information: Ontario Privacy Legislation

The Freedom of Information and Protection of Privacy Act, the Municipal Freedom of Information and Protection of Privacy Act, and the Personal Health Information Protection Act are the three major pieces of legislation that organizations such as government ministries, municipalities, police services, health care providers and school boards must comply with when collecting, using and disclosing personal information. The Office of the Information and Privacy Commissioner of Ontario (IPC) is responsible for monitoring and enforcing these acts.

The IPC works closely with government institutions to ensure that big data projects comply with these laws. In such projects, information collected for one purpose may be combined with information acquired for another. If not properly managed, big data projects may contravene Ontario’s privacy laws.


Categories: social

A Lesson to Know: The Unforgiving Culture of Social Media

For better or for worse, public figures ranging from celebrities to CEOs to politicians to athletes have for decades been remembered for 10- or 15-second snippets of speech, endlessly repeated as quotes in newspapers or clips in television commercials and news reports, until they come to encapsulate the person.

For example, back in 2014, Elon Musk, the billionaire genius behind SpaceX, PayPal, Tesla Motors and a host of other companies, was asked what he was looking for in potential hires, and specifically which colleges or universities he was partial to. He responded that you do not need to go to college or university, “or even high school”. This was later followed by his now-famous 2015 quote, “If you don’t make it at Tesla, you go work at Apple”. Yahoo’s former CEO Marissa Mayer also came under intense scrutiny when she made the rather obnoxious comment: “The baby’s been way easier than everyone made it out to be.” These are only minor examples of words spoken within seconds that have had a profound impact on public opinion for a lifetime.

With the advent of social media, there is no longer a curtain public figures can hide behind to protect their privacy, or to shelter them when they say things their publicists probably wish they had not. Nowadays, everything is seen or heard, caught on some form of recording, and shared over and over again through social media. Rooted in platforms such as YouTube, Facebook and, later, Twitter, viral content allows messages to spread widely and quickly, ensuring that even those previously unaware of a public figure’s words are brought up to speed.

Digging deeper into the media coverage that surrounded the viral spread of these messages reveals one of the largest challenges of social media: its negative and unfairly reactive nature. In today’s era, dominated by an overt need to be politically correct, social media has amplified our desire to react without appropriate research or context.

Now, more than ever before, public figures have a responsibility to be careful about their actions, because social media is unforgiving. They should always remember that they will be scrutinized every day, in front of masses of people, and even more so when their offensive words and actions are placed on social media for the entire world to see. You can never be too careful when your public image affects thousands of people, especially young people who look up to you. Public figures must be more responsible with what they share online, and always remember that what they say publicly can come back to haunt them. Public figures are influencers for a reason; their reach extends to many, and thanks to social media it will do so for a long, long time.


Categories: Security, social

Social Media as Political Warfare

The rise of social media has brought immeasurable power. Whereas in the past people got their news updates from television or radio, now it is regular, lay people (often in 140 characters or less) spreading the news. While sharing opinions on social media has the power to liberate and empower people, the messages spread can be harmful and downright abusive.
Social media has changed history. The creation of tactical narratives spread through social media channels is now at the core of modern strategic communication in business, politics and even warfare. Particularly in politics around the world, the ease of spreading messages from your fingertips has led to a form of political warfare that has shaped public opinion and influenced election outcomes. Such digital manipulation has gone so far that policy makers, military leaders and intelligence agencies struggle to adapt to the changing climate.

One example is Iran’s election season. At the time, the majority of the posts were bashing the election and did not even come from the people of Iran themselves. In particular, Western media outlets were awash with coverage of protesters using Twitter, blogs, and other social media outlets to spread propaganda, coordinate rallies, share information (which may or may not have been true), and locate supporters. Extensive media coverage highlighted the role of social networking, both in helping organize activities and in driving a rise in cyber activism around the Iranian protests, resulting in an unprecedented global debate. The sheer volume of information published in real time through social networks shows that this was one of the biggest world events to be broadcast worldwide almost entirely via social media. While social media allowed an international community of protesters (and some supporters) an unprecedented peek into the turmoil afflicting Iran, politicians were also found to be using social media to mobilize voters, as the societal messages discussed on social media became campaign themes for presidential candidates. One of the largest issues, however, was how politicians in Iran used social networks to advance their own political agendas while still opposing free access to the Internet for all.

A second example of political warfare stemming from social media is the more than 8,000 tweets containing terrorist and racist comments that came within hours of Saudi Arabia’s announcement, on a Saturday, of a new terrorist-monitoring center. The Ideological War Center, which launched operations in April of this year, stated that it would correct what it calls “misguidance” about Islam through its channels on Facebook, Twitter and YouTube. Within hours of its announcement, the Ideological War Center attracted numerous people, including individuals who were born, lived and grew up in non-Muslim countries. However, there was no time to distinguish false stories from real ones about what the Islamic faith really entails. Instead, each new post contributed to some element of racist thought, which seems counterproductive to the Center’s aim of exposing the mistakes, allegations, suspicions and deceptive techniques promoted by extremists and terrorists. As a result, an ideological war of sorts has been created, in which the aim of deterring terrorist and extremist organizations has been met with the continuous breeding of false, racist ideas that linger and thrive on social media platforms.

Lastly, Saudi Arabia and other Arab states that have severed ties with Qatar have declared severe penalties for those who support Qatar. The Attorney General has made it very clear that it is now punishable by law to show sympathy for Qatar on social media or by any other means of communication. The cybercrime law came into effect in December 2012 and covers a comprehensive scope of offences in categories including undermining state security, political stability, morality and proper conduct. The Federal Public Prosecution also announced that, according to the Federal Penal Code and the Federal law decree on Combating Information Technology Crimes, anyone who threatens the interests, national unity and stability of the UAE will face a jail term of three to 15 years and a fine of not less than AED 500,000 ($136,000). “Strict and firm action will be taken against anyone who shows sympathy or any form of bias towards Qatar, or against anyone who objects to the position of the United Arab Emirates, whether it be through the means of social media, or any type of written, visual or verbal form,” United Arab Emirates Attorney General Hamad Saif al-Shamsi was quoted as saying in a statement.

From Facebook to Twitter, YouTube, Snapchat and Instagram, social media users are besieged with political content, and participate in it readily, which has led to social media being seen as a form of political warfare. Unlike traditional media, social media has heightened reach, frequency, permanence and immediacy. As such, it has become a loudspeaker of beliefs, a designer of meaning and a producer of conflict.

Three things need to be done to help minimize the negative effects of this for politicians, and for those concerned with upholding true information: better filtering of chatbots that post negative and disruptive content; better identification of fake news; and better identification of mass manipulation. Developments in artificial intelligence, such as chatbots that serve as conversational entities and rely on AI to spread information (in most cases, concerted and repeatedly skewed information), have become important factors in this war of words. Further, fake news is growing, fostering a culture of digital anonymity that facilitates hate speech and misinformation and manipulates people at scale. Fortunately, companies that use social media analytics tools such as KI Social have teams with the technical and intellectual means to detect fake news and chatbots, and the knowledge to identify mass manipulation (and how to respond).
The political landscape has changed quite a bit in the last couple of decades, and social media is partly responsible for this change. While social media can be a source of good, it has also come at a price: as a commodity of political warfare.


Categories: social

Building a Social Media Pipeline


A Presentation by Dr. Waël Hassan at Boston University School of Media & Communications

Abstract: Companies that have developed their Success Criteria, established their Style, decided on their Sources, set up a business process, and continually Survey their results are winning big on social media. The least understood part of building an enterprise social media service is how to build a social media pipeline. This presentation describes how to do that.
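As a rough sketch of what a pipeline of this kind looks like in practice, the example below wires together collection, filtering, and analysis stages. The stage contents and function names are illustrative assumptions and are not taken from the presentation slides.

```python
# Minimal sketch of the pipeline stages the presentation describes: sources in,
# filtering against topic keywords, and analysis against success criteria.
# Stage contents are illustrative and stubbed.

def collect(sources: list[str]) -> list[dict]:
    """Pull raw posts from configured sources (stubbed here)."""
    return [{"source": s, "text": f"example post from {s}"} for s in sources]

def filter_posts(posts: list[dict], keywords: list[str]) -> list[dict]:
    """Keep posts that match the brand/topic keywords."""
    return [p for p in posts if any(k in p["text"].lower() for k in keywords)]

def analyse(posts: list[dict]) -> dict:
    """Roll up simple metrics to survey results against success criteria."""
    return {"matched_posts": len(posts), "by_source": {p["source"] for p in posts}}

pipeline_sources = ["twitter", "facebook", "news"]
report = analyse(filter_posts(collect(pipeline_sources), keywords=["example"]))
print(report)
```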

 

[flipbook pdf=”https://waelhassan.com/wp-content/uploads/2017/06/Building-Your-Social-Media-Pipeline-by-@Drwhassan.pdf”]

 


Categories: social

Why Social Media Matters

As the global uptake of social media continues to climb, a new outlet for targeting a wide range of demographics has emerged. While traditional forms of media such as print advertisements, billboards and networking remain prominent, the use of social media as a form of marketing has increased exponentially as more users join social networks. A paradigm shift is evident with the rise of social media: although its primary purpose was to connect like-minded individuals, networks have evolved to compete with newspapers, television and magazines. Ninety per cent of Twitter users rely on tweets for news, while Facebook and Snapchat host videos garnering billions of views. Approximately 100 million photos are posted to Instagram per day, and some 50-70% of consumers make purchases based on social media opinions and marketing.

Quick Facts:

For a business, social media can create awareness and exposure, drive conversation and generate demand. Regardless of product niche, the social network environment will undoubtedly host a space for your target audience. With the prowess of networks such as Facebook, Instagram and Twitter, information can be disseminated at fast paces and often at a low cost to unique audiences across the world. Through activities such as social posts, promotions, responses, shares and outreach to social influencers, businesses can expose their brand to target audiences, increasing engagement and customer retention in the online sphere.

Evaluating Social Media’s Benefits

Before drafting a social media strategy, it is critical to assess what social media can produce for your company and what your end-goal is. Secondary questions that can be asked are:

Understanding Social Media Engagement

Engagement acts as a measure for effective content – it determines in simple terms what materials resonate with audiences and what is ineffective in creating awareness. Although engagement is not directly correlated with a high return on investment, it is an indicator of performance success.

Creating positive impressions of your brand – engaging potential clients – is a crucial step in spreading brand awareness. Successful engagement builds momentum among audiences, persuading them to view your product or services as desirable and moving potential clients to action. Once customers are gained, engagement allows for ongoing dialogue with the client, gauging their brand perception and satisfaction levels.

Balancing Your Social Media Potential

Sharing and engagement are important aspects of social media. Delivering well-liked content on specific platforms to the desired audience can bring a multitude of benefits, if conducted properly. However, social media isn’t just about posting; it’s about executing a well-devised strategy aimed at meeting a goal. It isn’t effective to simply push content into the social sphere with the expectation that a positive reception is guaranteed. A successful social media strategy requires continual growth and maintenance. While social media does not generally provide a primary ROI, it can help you connect with distant audiences or demographics and, as a result, generate increased brand awareness and affinity.

KI Social can help ensure your social media presence is a success – contact us to see how we can help.