Using AI to Combat AI-Generated Disinformation

Can AI impact election outcomes? How can this be combatted?

Canada General Election 2019 and US Presidential Race 2020

As citizens worry about interference in the democratic process in general and elections in particular, some governments are attempting to mitigate the risks. In December 2018, the Government of Canada’s Standing Committee on Access to Information, Privacy and Ethics released a report, Democracy under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly. This report, initiated in response to the Facebook/Cambridge Analytica scandal, examines, among other things, the risks posed to the Canadian electoral process by the manipulation of big data and artificial intelligence (AI). A year earlier, in 2017, the Senate Intelligence Committee published a report titled Background to “Assessing Russian Activities and Intentions in Recent US Elections”: The Analytic Process and Cyber Incident Attribution, a comprehensive intelligence assessment of Russian activities and intentions in the 2016 U.S. elections.

Social media, and the big data it generates, is now so ubiquitous that it’s easy to forget it’s a relatively recent phenomenon. As such, it’s been hard for legislators to track how such technological developments could be used to influence the Canadian electoral process, and how they should be regulated. Big data, and its manipulation, played a significant role in both the 2016 US election and the Brexit vote in the UK earlier that year. Fake news, deliberately fabricated, edged into Facebook feeds alongside legitimate sources, and was shared by unsuspecting users. Fake Twitter accounts pressed extreme views, shifting and polarizing public discourse. According to Elodie Vialle, of Reporters Without Borders, false information spreads six times faster than accurate information.[1]

It is well known that AI plays a key role in the spread of disinformation. It powers social media algorithms. It can be programmed to generate content, including automated trolling, and it facilitates the micro-targeting of demographic groups on specific topics: all basic disinformation practices.

Yet what is less widely discussed is that

AI can also be used as a tool to combat disinformation.

Data science can locate trolls and fraudulent accounts: programs can be trained, using machine-learning algorithms, to identify potential bots and unusual political material.[2] While their reach can be enormous, the actual number of perpetrators is very small, and we have the technical ability to track down who they are. Existing hate speech laws can then be used to prosecute them.
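To make this concrete, here is a minimal, purely illustrative sketch of the idea. The feature names, thresholds, and weights below are assumptions invented for the example; real detection systems train statistical classifiers on large sets of labelled accounts rather than hand-written rules.

```python
# Hypothetical illustration of bot detection via account features.
# All thresholds and weights are made up for the sketch, not a real model.

def bot_score(posts_per_day: float, followers: int, following: int,
              account_age_days: int, duplicate_post_ratio: float) -> float:
    """Return a 0..1 score; higher suggests more bot-like behaviour."""
    score = 0.0
    if posts_per_day > 50:          # automated accounts post at inhuman rates
        score += 0.3
    if following > 0 and followers / following < 0.1:
        score += 0.2                # follows many accounts, few follow back
    if account_age_days < 30:       # fresh accounts are common in campaigns
        score += 0.2
    if duplicate_post_ratio > 0.5:  # repeats the same content verbatim
        score += 0.3
    return round(min(score, 1.0), 2)

# A young, hyperactive account repeating identical posts scores high:
print(bot_score(posts_per_day=120, followers=15, following=900,
                account_age_days=10, duplicate_post_ratio=0.8))  # 1.0
```

In practice, scores like this would be one input to human review rather than grounds for automatic action, since legitimate accounts (news aggregators, for instance) can share some of these traits.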

In today’s increasingly febrile global political climate, disinformation is a real and growing problem, both abroad and at home in Canada and the United States. A solution is available. Given the upcoming Canadian federal election in October 2019 and the US presidential election in 2020, proactive use of data science to counter manipulation efforts is both timely and necessary.




[1] Staff, “Artificial Intelligence and Disinformation: Examining challenges and solutions,” Modern Diplomacy, March 8, 2019: online at:

[2] European Parliamentary Research Service, Regulating disinformation with artificial intelligence, March 2019: online at:

Smart Privacy Auditing – An Ontario Healthcare Case Study

IPCS Smart Privacy Auditing Seminar

On September 13, Dr. Waël Hassan was a panelist at the Innovation Procurement Case Study Seminar on Smart Privacy Auditing, hosted by the Mackenzie Innovation Institute (Mi2) and the Ontario Centres of Excellence (OCE). The seminar attracted leaders from the health care sector, the private information and technology industry, and privacy authorities. It explored the concept of innovative procurement via the avenue of competitive dialogue, and demonstrated the power and benefits of using artificial intelligence to automate the auditing of all personal health information (PHI) accesses within a given hospital or health network.

What are the benefits of participating in an innovative procurement process, particularly competitive dialogue?

An innovative procurement partnership between Mi2, Mackenzie Health, Michael Garron Hospital, and Markham Stouffville Hospital was supported by the OCE’s REACH grant and sought to identify an innovative approach to auditing that could be applicable to the privacy challenges faced by numerous hospitals with different practices, policies, and information systems. Rather than focus on how the solution should operate, the partners collaboratively identified six outcome-based specifications the procured audit tool would be required to meet.

By identifying key priorities and specifying the outcomes a solution should achieve, Competitive Dialogue establishes a clear and mutual understanding of expectations. This can help the private sector narrow down solution options to a model best-suited for the contracting authority’s unique context. The feedback loop provided by the iterative rounds (if used) enables vendors to clarify any confusion and customize proposals to the contracting authority’s unique needs, staff workflows, and policy contexts.

Competitive Dialogue is an opportunity for transparent communication that gives vendors the opportunity to learn more intimate details of what the contracting authority, in this case Mackenzie Health, needs from a solution. Because hospitals are not tech or security experts, they often struggle to accurately identify and define what solutions they need to solve a particular issue; a traditional procurement process is rarely ideal, since there is little to no room for clarification or feedback. Competitive Dialogue is more flexible and thereby allows for more creativity and innovative thinking during initial proposal development. Encouraging creativity, and creating a competitive environment in which vendors can bounce ideas off one another, results in higher-quality proposals and final solutions.

 Mackenzie Health Case Study

Mackenzie Health employs over 450 physicians and 2,600 other staff members, processes nearly 55,000 patient medical record accesses every day, and has just one privacy officer to monitor everything. Mackenzie Health’s privacy needs far outweigh its capacity, so it turned to the private sector for an innovative solution.

Section 37(1) of Ontario’s Personal Health Information Protection Act (PHIPA) outlines the permitted uses of personal health information, based on the purpose underlying each activity. Because the legal framework is centred on purpose, KI Design’s approach is to explain the purpose behind each access to a given medical record. The core of this technology is an explanation-based auditing system (EBAS), designed and patented by Dr. Fabbri of Maize Analytics.

To detect unauthorized accesses, the technology identifies an intelligible connection between the patient and the employee accessing the patient’s records. AI changes the fundamental question underlying auditing tools from “who is accessing patient records without authorization?” to “for what purpose are hospital staff accessing patient records?” Asking this question helps the technology break down staff workflows and identify common and unique purposes for accessing any given medical record; each access is then categorized as either authorized or unexplained, and unexplained accesses may be flagged as potentially unauthorized behaviour. The technology is able to filter out the authorized accesses, which usually account for 98% to 99% of all accesses, so that the privacy officer can focus on the much smaller number of unexplained and flagged accesses.
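The filtering step above can be sketched in a few lines. This is a toy illustration of the explanation-based idea only: the staff and patient identifiers, the data structures, and the two explanation sources (appointments and care teams) are assumptions made for the example, and the real EBAS technology is far more sophisticated.

```python
# Toy sketch of explanation-based audit filtering: explain each access,
# keep only the unexplained ones for the privacy officer. All data and
# identifiers below are invented for illustration.
from typing import Optional

appointments = {("dr_lee", "patient_42"), ("nurse_kim", "patient_42")}
care_teams = {("dr_lee", "patient_7")}

def explain_access(staff_id: str, patient_id: str) -> Optional[str]:
    """Return the purpose that explains an access, or None if unexplained."""
    if (staff_id, patient_id) in appointments:
        return "appointment"
    if (staff_id, patient_id) in care_teams:
        return "care team member"
    return None

access_log = [("dr_lee", "patient_42"), ("clerk_9", "patient_42"),
              ("dr_lee", "patient_7")]

# Authorized accesses are filtered out; only unexplained ones remain.
flagged = [(s, p) for s, p in access_log if explain_access(s, p) is None]
print(flagged)  # [('clerk_9', 'patient_42')]
```

Here two of the three accesses are explained by a documented relationship and drop out of the review queue, leaving one access for the privacy officer to investigate, which mirrors the 98–99% filtering rate described above.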

Why is the private sector interested in health care?

Health care is an extremely complex system operated by the province and service providers. The province is a specialist in governance and regulation, and the service providers are specialists in medicine; neither is an expert in privacy or security. Companies such as KI Design are interested in filling the expertise gap within the health care sector by working in tandem with health care providers and the Information and Privacy Commissioner to adapt privacy and security solutions to their working realities. There is undeniable value in having a privacy and security expert working directly with hospitals and other health service providers to refine privacy best practices and implement a privacy tool that improves privacy and security outcomes without restricting the workflows of health practitioners.

To learn more about how AI solutions improve auditing, visit