Big Data’s Big Privacy Leak: Metadata and Data Lakes

Bringing Privacy Regulation into an AI World, Part 3

This seven-part series explores, from a Canadian perspective, options for effective privacy regulation in an AI context.

For a long time, access control has been the principal means of protecting personal information. Fifty years ago, this meant locked file cabinets. Today, much of our personal data is protected by passwords. This system, refined over decades, has been effective at securing data and limiting data breaches. But the advent of big data and AI has moved the goalposts. Access control cannot protect all of the personal data we reveal as we navigate the internet. Moreover, most internet users are now more concerned about how the companies to which they entrust their personal information are using it than about the risk of data theft. To adapt to this rapidly evolving digital environment, it will be necessary to rethink access control and develop stronger practices for controlling the use of personal data.


Many of our daily online activities are regulated by passwords. They safeguard our online lives, giving us access to everything from our smartphones and bank accounts to the many websites where we shop and entertain ourselves. Passwords are keys securing our personal information and property. The security they give us is known as access control.

Yet there is one type of personal data that passwords cannot protect: the traces we leave every time we use the Internet and phone networks.  The details of our activity as network users are known as metadata. We can keep our personal information under cyber-lock and key, but not our metadata. We can erase browser cookies, but the search engine’s log of our browsing patterns and search keywords remains.

The ground rules of personal data protection have not changed, despite general confusion about how they apply in rapidly-changing contexts. Fair information principles, the bedrock of Canadian privacy legislation, state that organizations should only collect, use, share, and retain personal information for specific purposes to which individuals have consented. Any information, or combination of information, that is detailed enough to potentially identify a person is considered personal information, and these rules apply.

Yet as larger and larger volumes of data are collected and aggregated by big data initiatives, it becomes more and more difficult to define precisely what is considered personal information. “Data lakes” – massive repositories of relatively unstructured data collected from one or several sources, often without a specific purpose in mind – are a highly valuable asset for companies, providing a wide variety of data for potential future analysis, or for sale to other companies.

Data lakes often contain a mix of metadata and personal content. In combination, these can frequently identify specific individuals. For example, publicly available and searchable databases of Twitter activity show tweets by geographic location – positions so specific as to reveal street addresses. In the commercial realm, big box retailers use customers’ debit and credit card numbers to link their various purchases, and have developed customer sales algorithms so refined that they can identify the purchase patterns of pregnant women and send them coupons for baby products. Personally-identifiable data is of far more value to marketers than aggregate data, and powerful AI technologies can be harnessed to re-identify anonymous data.

Legally, personal data can only be collected and used for specific purposes to which individuals have given consent. AI systems, however, blur the line between anonymous data and personal information by making it possible to identify individuals and infer more detailed personal information by combining data from multiple sources. Controlling access to data does not address the most significant privacy risks of AI initiatives. To protect privacy in a big data world, it will be necessary to develop more sophisticated strategies to govern the use and sharing of personal data, as I will explore in my next posts.


Moving from Access Control to Use Control

In an AI world, the conversation can no longer be about access control alone; it must shift to use control.


Is AI Compatible with Privacy Principles?

Bringing Privacy Regulation into an AI World, Part 2

This seven-part series explores, from a Canadian perspective, options for effective privacy regulation in an AI context.

Many experts on privacy and artificial intelligence (AI) have questioned whether AI technologies such as machine learning, predictive analytics, and deep learning are compatible with basic privacy principles. It is not difficult to see why; while privacy is primarily concerned with restricting the collection, use, retention and sharing of personal information, AI is all about linking and analyzing massive volumes of data in order to discover new information.

“AI presents fundamental challenges to all foundational privacy principles as formulated in PIPEDA [Canada’s Personal Information Protection and Electronic Documents Act].”

Office of the Privacy Commissioner of Canada

The Office of the Privacy Commissioner (OPC) of Canada recently stated that “AI presents fundamental challenges to all foundational privacy principles as formulated in PIPEDA [Canada’s Personal Information Protection and Electronic Documents Act].”[1] The OPC notes that AI systems require large amounts of data to train and test algorithms, and that this conflicts with the principle of limiting collection of personal data.[2] In addition, organizations that use AI often do not know ahead of time how they will use data or what insights they will find.[3] This certainly appears to contradict the PIPEDA principles of identifying the purposes of data collection in advance (purpose specification), and collecting, using, retaining, and sharing data only for these purposes (data minimization).[4]

So, is it realistic to expect that AI systems respect the privacy principles of purpose specification and data minimization?

I will begin by stating clearly that I believe that people have the right to control their personal data. To abandon the principles of purpose specification and data minimization would be to allow organizations to collect, use, and share personal data for their own purposes, without individuals’ informed consent. These principles are at the core of any definition of privacy, and must be protected. Doing so in an AI context, however, will require creative new approaches to data governance.

I have two suggestions towards implementing purpose specification and data minimization in an AI context:

  1. Require internal and third-party auditing

Data minimization – the restriction of data collection, use, retention and disclosure to specified purposes – can be enforced by adding regular internal auditing and third-party auditability to legal requirements.

As currently formulated, the Ten Fair Information Principles upon which PIPEDA is based do not specifically include auditing and auditability. The first principle, Accountability, should be amended to include requirements for auditing and auditability. Any company utilizing AI technologies – machine learning, predictive analytics, and deep learning – should be required to perform technical audits to ensure that all data collection, retention, use, and disclosure complies with privacy principles. AI systems should be designed in such a way that third party auditors can perform white box assessments to verify compliance.

  2. Tie accountability to purpose of collection

The core of the concept of data minimization is that personal data should only be collected for purposes specified at the time of collection, to which data subjects have given consent. In AI contexts, data is increasingly unstructured and more likely to be used and shared for multiple purposes, but data use and disclosure can still be limited to specified purposes. Data minimization can be enforced by implementing purpose-based systems that link data to specific purposes and capture event sequences – that is, the internal uses of the data in question – as sketched below.
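To make the idea concrete, here is a minimal sketch of a purpose-based use-control layer. The record structure, function names, and purposes are hypothetical illustrations rather than a prescribed implementation: each record carries the purposes consented to at collection, every use is checked against them, and an event log captures the sequence of uses for later audit.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PersonalRecord:
    subject_id: str
    data: dict
    consented_purposes: set                      # purposes specified at collection
    events: list = field(default_factory=list)   # audit trail of uses

class PurposeViolation(Exception):
    """Raised when data is used for a purpose the subject never consented to."""

def use(record: PersonalRecord, purpose: str, action: str) -> dict:
    """Allow a use only if it maps to a consented purpose, and log the event."""
    if purpose not in record.consented_purposes:
        raise PurposeViolation(f"'{purpose}' not consented for {record.subject_id}")
    record.events.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "purpose": purpose,
        "action": action,
    })
    return record.data

# Example: membership data consented for billing and service notices only.
rec = PersonalRecord("member-042", {"age": 14, "home_club": "Downtown"},
                     {"billing", "service_notices"})
use(rec, "billing", "generate monthly invoice")         # allowed, and logged
# use(rec, "targeted_marketing", "share with partner")  # would raise PurposeViolation
print(rec.events)  # the event sequence a third-party auditor could inspect
```

An auditor verifying compliance would inspect the event log and confirm that every recorded use maps to a purpose the data subject actually consented to.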

To that end, I suggest the following:

i) Canadian privacy law very clearly states that the collection, retention, use, and disclosure of personal data must be for a specified purpose. As I mentioned above, the fair information principle of accountability should be revised to require audits that demonstrate that all collection, use, retention and disclosure is tied to a specified purpose, and otherwise complies with all other fair information principles.

ii) Organizations should be required to prove and document that the sequences of events involved in data processing are tied to a specified purpose.

To continue with the example from my previous post on legislating AI:

The robotics club of which Alex is a member announces it has a partnership with Aeroplan. Under current regulations, notifying members of this data sharing partnership is sufficient, as long as the club points to Aeroplan’s privacy policy. However, given the advanced capacities of AI-enhanced data processing, the club should spell out which specific data processing activities will be applied to the data.

For example, the club’s privacy policy could include the following:

“As part of our partnership with Aeroplan, we may share the data we collect on you with Aeroplan, including your demographic data (your age and address, for example), and the frequency of your visits to our various club locations.

Aeroplan will provide us with information about you, including your income class metrics (your approximate gross earnings per year, and the band of your gross annual earnings) and information regarding your online activities and affinities; for example, your preferred gas station brand and favourite online stores, combined with the volume of your purchases.”

This notification provides a much clearer explanation of the purpose of the club’s partnership with Aeroplan than is currently standard in privacy policy text. It informs clients about data collection and sharing practices, as is required, but also describes the types of personal information that are being inferred using data analytics. With this information, clients are in a much better position to decide whether they are comfortable sharing personal data with organizations that will use it for targeted marketing.

AI will require new approaches to enforcing the data protection principles of data minimization and purpose specification. While AI systems have the capacity greatly to increase the scope of data collection, use, retention and sharing, they also have the capacity to track the purposes of these data processing activities. Maintaining the link between data and specified purposes is the key to enforcing privacy principles in a big data environment.


[1] Office of the Privacy Commissioner of Canada, Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence, 2020.

[2] Centre for Information Policy Leadership, First Report: Artificial Intelligence and Data Protection in Tension, Oct 2018, pp. 12-13; The Office of the Victorian Information Commissioner, Artificial Intelligence and Privacy, 2018.

[3] The Office of the Victorian Information Commissioner, Artificial Intelligence and Privacy, 2018. See also the blog post by lawyer Doug Garnett, AI & Big Data Question: What happened to the distinction between primary and secondary research?, 22 Mar 2019.

[4] The Office of the Victorian Information Commissioner, Artificial Intelligence and Privacy, 2018.


Categories: Privacy

A 3D Test for Evaluating COVID Alert: Canada’s Official Coronavirus App

Great news – Canada has just released its free COVID-19 exposure notification app[1], COVID Alert. Several questions now arise: Is it private and secure? Will it be widely adopted? And how effective will it be at slowing the spread of the virus? We have evaluated the COVID Alert app against three dimensions: Concept, Implementation, and User Experience. We grade the concept as leading-edge (A+), the implementation as just adequate (C), and the user experience as less than satisfactory (D).

Ontario Digital Service (ODS) and Canadian Digital Service (CDS) built the app based on a reference implementation by Shopify, with CDS taking operational responsibility and ownership. The security architecture was reviewed by BlackBerry and Cylance. Health Canada performed an Application Privacy Assessment[2], which was reviewed by the Office of the Privacy Commissioner of Canada[3] and the Information and Privacy Commissioner of Ontario.

HOW COVID ALERT WORKS

  1. Via Bluetooth, the app keeps a record of the phones that have come into close physical proximity with yours.
  2. A person who tests positive for COVID-19 can enter a code into the app to declare their status.
  3. The app checks daily whether anyone you’ve been near has reported testing positive.
  4. If you’ve been near an infected person in the past two weeks, you’ll get a notification.
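As a rough illustration of the matching step, here is a simplified sketch. It glosses over the rotating cryptographic identifiers the real exposure-notification framework uses; the token names and dates below are made up.

```python
from datetime import date, timedelta

# Identifiers this phone observed over Bluetooth in the last 14 days (in the real
# protocol these are rotating, non-identifying cryptographic tokens).
observed = {
    "token-a17": date(2020, 8, 1),
    "token-c03": date(2020, 8, 4),
}

def check_exposure(observed_tokens: dict, positive_tokens: set,
                   today: date, window_days: int = 14) -> bool:
    """Daily check: was any recently observed token published by a positive case?"""
    cutoff = today - timedelta(days=window_days)
    return any(seen >= cutoff and token in positive_tokens
               for token, seen in observed_tokens.items())

# Tokens uploaded (via a one-time code) by users who reported a positive test.
positive = {"token-c03", "token-zz9"}

if check_exposure(observed, positive, date(2020, 8, 10)):
    print("Possible exposure in the last 14 days - follow public health guidance.")
```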

At present, there isn’t enough data to provide a proper assessment of COVID Alert.

However, I can offer my thoughts on the three aspects of design mentioned above:

CONCEPT

Canada got the concept right – an app that focuses primarily on its benefit to users, i.e. notification, rather than on tracking. A tracking app needs to track everyone’s routes and interactions all the time; this captures far too much private data, making it a tempting treasure-trove for hackers. Privacy concerns will impede adoption of tracking apps.

COVID Alert side-steps these concerns by focusing only on notification. Most other countries that have developed an app have built a tracking tool to be installed on a cell phone, with a notification feature added on. Canada, on the other hand, has built a notification app. The fact that its use is voluntary will further boost public confidence.

Grade for concept: A+

IMPLEMENTATION

Apps may be built for the public, for healthcare providers, or for business use. Canada has chosen to build an app for the public. For apps created for the business or healthcare sectors, adoption is a given. The main challenge for a public app is: Will the public adopt it? It will need to reach a critical mass of adoptees to be successful. Without that critical mass, the app will provide little to no benefit.

COVID Alert’s server and app are both open source. This is an encouraging decision, as it makes it business-friendly, and improves public trust through expert scrutiny of the code.

The choice to focus on adoption by individuals is a strong point for privacy, but a challenge to effective implementation. In contrast, an app designed for business, aimed at detecting outbreaks connected to particular business locations, might raise more complex privacy issues, but could be implemented much more widely with support from the private sector.

The Canadian government had the option of implementing a COVID-19 data network between citizens, businesses, and public health. This app, unfortunately, only covers the individual, with a manual link to public health. How could this have been improved? A data exchange platform would have been a wiser choice, as it would help boost business adoption.

Grade for implementation: C

USER EXPERIENCE

While I’m not an expert, I’d say that the app user experience is marked by three things:

Grade for usability: D

Takeaway and Next Steps

The COVID Alert app is a positive and important concept; from a conceptual standpoint, Canada is ahead of all other solutions to date. Ideally, its implementation would go beyond the boundaries of an app. The current approach creates a basis for expansion. I intend to fully leverage the federal app by building an end-to-end solution, IoPlus, that focuses on business adoption.

References:

[1] https://www.canada.ca/en/public-health/services/diseases/coronavirus-disease-covid-19/covid-alert.html

[2] https://www.canada.ca/en/public-health/services/diseases/coronavirus-disease-covid-19/covid-alert/privacy-policy/assessment.html

[3] https://nationalpost.com/news/canada/hackers-target-canadians-with-fake-covid-19-contact-tracing-app-disguised-as-official-government-software

To read more about Wael’s outbreak notification design, follow this link. To learn about corporate privacy compliance, feel free to download Privacy in Design: A Practical Guide to Corporate Compliance from the Kindle Store.


Categories: Privacy

Outbreak Notification App Design

From Contact Tracing to Outbreak Notification

Call for Participation

This post is a call for participation for design thinkers – please email or tweet @drwhassan.

As countries assess how best to respond to the COVID-19 pandemic, many have introduced smartphone apps to help identify which users have been infected. These apps vary from country to country. Some are mandatory for citizens to use, such as apps released by the governments of China and Qatar; most are not. Some are based on user tracking; others focus on contact tracing. Some utilize a central database; others use APIs from Apple and Google. At least one has already experienced a data breach. But all of them are coming under scrutiny for violating personal privacy.

Wherever personal data is shared, privacy becomes an issue.  In countries where use is voluntary, citizens are reluctant to download these apps. A poll by digital ad company Ogury showed that in France, where a centralised database approach has been adopted, only 2% of the population have downloaded the app, and only 33% would be willing to share any data with the government via the app. [1]

Public trust is a huge issue – given the frequency of data breaches, people are wary of uploading their personal information, even for the purposes of combatting COVID-19. In the USA, only 38% were prepared to share their data, and only 33% trusted the government to protect it. In the UK, the stats told a similar story, with only 40% believing that their data would be safe.[2]

In Canada, Alberta’s ABTraceTogether app was slammed by the provincial Privacy Commissioner for posing a “significant security risk.” The federal government’s COVID Alert app, released in Ontario and pending elsewhere, is promising, but this voluntary app has user experience issues which may prevent it from being widely adopted.

In a recent informal poll which I conducted on my page, the proportion of people who were comfortable with installing a contact tracing application was 26%. Most of the people in the No Way camp were experienced professionals with in-depth knowledge of privacy, security, and information technology.

The private sector is bringing a different approach to contact tracing. Several developers have released customer registry systems to support contact tracing and outbreak notification at the level of individual businesses. Some of these are applications; some are online platforms. Privacy remains a concern, and seeing both a privacy gap and an adoption gap, I have designed an outbreak notification system for businesses, IoPlus.

Outbreak notification vs contact tracing

Contact Tracing

Simply said, contact tracing attempts to build a network of every physical interaction, and trace it backwards in the event that a person tests positive for COVID-19.

Contact tracing apps are state-centric: they require a centralized data store and centralized control.

Most implementations of contact tracing require a centralized data store with varying levels of power given to officials and businesses.

Outbreak Notification

Outbreak notification, on the other hand, is a subscription based model, in which citizens are notified if there has been an outbreak in places they have visited. The goal of this solution is to notify the individual to allow them to take action.

Outbreak notification is citizen-centric and does not require the installation of a mobile application. The individual is in the driver’s seat.

Technical Differences

In terms of mathematical modelling, contact tracing resembles a densely connected network in which every citizen can have as many connections as there are people in the population; this model is subject to serious computing challenges. Outbreak notification, on the other hand, is a distributed model connected through locations, so the load and computation sit at the business/location level.
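A back-of-the-envelope comparison illustrates the scaling argument (the numbers below are hypothetical):

```python
# Contact tracing: a person-to-person graph. In the worst case every pair of people
# who crossed paths becomes an edge, so the edge count grows roughly with n^2.
def contact_tracing_edges(n_people: int) -> int:
    return n_people * (n_people - 1) // 2        # all possible pairwise links

# Outbreak notification: a person-to-location (bipartite) graph. Each visit is one
# edge, so the load grows with the number of visits and is handled per location.
def outbreak_notification_edges(n_people: int, avg_visits_per_person: int) -> int:
    return n_people * avg_visits_per_person

n = 1_000_000
print(contact_tracing_edges(n))               # ~5 x 10^11 potential links
print(outbreak_notification_edges(n, 30))     # 3 x 10^7 visit records
```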

Privacy in Design Principles

Outbreak notification has been designed and built with privacy and security as a top priority. The IoPlus notification system relies on an individual mobile device (phone or tablet) leaving a digital “breadcrumb” at a visited location. Patrons and employees scan a posted barcode when they enter and leave a business, to “check in” and “check out.” Users can sign up for notifications via email or a social media account; after an infection is recorded, those who self-registered receive a notification through the contact channel they provided. Those who do not subscribe can still check whether they have been in contact with an infected person: using the unique, encrypted “breadcrumb” generated during their visit, a patron can go to the IoPlus web page and privately see whether they were exposed. No one else can access that notification.

This method avoids tracking users via location data, and gives them the choice to check in and check out of participating businesses only when they wish.
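The following is a minimal sketch of how such a breadcrumb lookup could work. The function names and storage layout are illustrative assumptions, not the actual IoPlus implementation.

```python
import hashlib
import secrets
from datetime import datetime, timezone

visits = {}        # hashed breadcrumb -> (location_id, timestamp), held per location
outbreaks = set()  # location_ids with a reported infection (time window omitted)

def check_in(location_id: str) -> str:
    """Scanning the posted barcode generates a random breadcrumb for this visit.
    The patron keeps it; no name, phone number, or GPS location is recorded."""
    breadcrumb = secrets.token_urlsafe(16)
    key = hashlib.sha256(breadcrumb.encode()).hexdigest()
    visits[key] = (location_id, datetime.now(timezone.utc))
    return breadcrumb

def was_exposed(breadcrumb: str) -> bool:
    """Only the holder of the breadcrumb can look up their own exposure status."""
    key = hashlib.sha256(breadcrumb.encode()).hexdigest()
    visit = visits.get(key)
    return bool(visit) and visit[0] in outbreaks

mine = check_in("cafe-123")
outbreaks.add("cafe-123")    # an infection is later reported at this location
print(was_exposed(mine))     # True - but only the breadcrumb holder can check this
```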

Next Steps

KI Design is building an outbreak notification service that is:

  1. App-less: it doesn’t require users to install any apps.
  2. Server-less: it does not store user or tracking data in a hosted environment.
  3. Privacy in Design: design artifacts are built with privacy in mind.

We are calling for contributors to participate in the design and promotion of IoPlus.

For more information, please reach out to me at wael@kidesign.io or via Twitter @drwhassan.

Dr. Wael Hassan,

Founder and CEO of KI Design


Categories: Privacy

Do ‘Contact Tracing Apps’ need a Privacy Test?

We are asking readers to contribute to this post – please comment in line or send directly to me wael@kidesign.io.

The Coronavirus continues to cause serious damage to humanity: loss of life, employment, and economic opportunity. In an effort to restart economic activity, governments at every level, local, regional, and national, have been working on a phased approach to re-opening. However, with re-opening comes a substantial risk of outbreaks (see a map of outbreaks across the world). Epidemiological studies are showing that shutdowns have been effective in preventing contagion, and recent reports from the United States indicate that some areas are reversing course back to a shutdown.

Why Contact Tracing?

One of the main strategies to support safer re-opening is the use of contact tracing apps. The World Health Organization (WHO) defines contact tracing as follows:

Contact tracing is the process of identifying, assessing, and managing people who have been exposed to a disease to prevent onward transmission. When systematically applied, contact tracing will break the chains of transmission of COVID-19 and is an essential public health tool for controlling the virus.

The Privacy Issue?

It is rather simple:

A data warehouse of sensitive personal information from multiple sources, with wide access, is a recipe for privacy failure.

A contact tracing data warehouse contains a uniquely sensitive combination of data types: location and movement, relationships between people, and medical information. This combination doesn’t exist in any other national database, which makes it a prime target for hackers, aggressive advertisers, and well-intentioned but uninformed users.

Examples of Failures:

Two countries with advanced technologies, namely Norway and the UK, have pulled their contact tracing applications due to privacy concerns.

Norway

Norway’s health authorities said on Monday that they had suspended an app designed to help trace the spread of the new coronavirus after the national data protection agency said it was too invasive of privacy.

Launched in April, the smartphone app Smittestopp (“Infection stop”) was set up to collect location data to help authorities trace the spread of COVID-19, and inform users if they had been exposed to someone carrying the novel coronavirus.

The United Kingdom

A smartphone app to track the spread of Covid-19 may never be released, ministers admitted yesterday, as they abandoned a three-month attempt to create their own version of the technology.

The indication that the app “may never be released” suggests that the design was fundamentally incompatible with privacy. Articles in The Times and Wired discuss the cancellation of the UK contact tracing app.

Alberta

Alberta’s COVID-19 contact-tracing app is a “security risk” on Apple devices, according to the provincial privacy commissioner. The report can be found here.

Why are we failing – are designers reckless?

Designers of contact tracing applications are prioritizing speedy development and data sharing over privacy. No doubt, if we need contact tracing, we need it now, and the ability to share data quickly is paramount. So how can contact tracing be reconciled with privacy?

Do we need a privacy test?

I believe that the privacy issue goes beyond testing. We need a privacy framework/charter at the national level to ensure that any contact tracing application follows a set of rules.

Are there solutions?

Absolutely. Solutions start with implementing Privacy in Design; privacy must be considered early in the application design process. Data minimization, data distribution, and anonymization are a few of the tools that can be very effective at managing privacy in a public health situation.
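As one illustration, pseudonymization (replacing direct identifiers with keyed hashes so records can still be linked for public-health purposes without exposing who the person is) might look like the sketch below. The identifiers and field names are hypothetical, and a real deployment would need careful key management and a re-identification risk assessment.

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)   # held by the data custodian, never shared

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (phone number, email) with a keyed hash."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# A contact-tracing record that never stores the raw phone number.
record = {
    "visitor": pseudonymize("+1-416-555-0199"),
    "location": "gym-7",
    "date": "2020-07-15",
}
print(record)
```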

My book, Privacy in Design, is available free on Kindle for Prime subscribers.

Follow @drwhassan for more information on privacy, social media analytics, and the ethics of AI computing.


Categories: Privacy, Security

Blackbaud breach – Executive Options in light of Reports to OPC & ICO

Three Executive Actions to help mitigate further risk

If your company uses Blackbaud CRM, this article provides three actions that will help mitigate risk.

Blackbaud, a reputable company that offers a customer relationship management system, was hit by ransomware and paid the attackers. According to G2, Blackbaud CRM is a cloud fundraising and relationship management solution built on Microsoft Azure specifically for enterprise-level fundraising and marketing needs. The company released an official statement on its website, available at https://www.blackbaud.com/securityincident.

As a client, whether or not you have been notified of the breach, your organization has an opportunity to follow breach mitigation and notification protocols.

Blackbaud has already notified its clients about which data was breached; that said, whether or not you have received the notice, you should assume you have been affected. These three actions will help ensure that you limit your liability:

1- Request contract and third-party review: Review your service contract with Blackbaud, and with any other third party managing your Raiser’s Edge systems, to ensure that it includes notification and risk assessment clauses.

2- Seek confirmation from Blackbaud: Request a confirmation of whether donor data or any other identity credentials have been compromised.

3- Post a statement: If your aggregate data or credentials have been compromised, follow your internal breach notification protocol.

In all cases, your information security or IT department should follow breach mitigation protocols, including but not limited to: resetting passwords, enabling two-factor authentication for administrators, and enabling off-cloud backups.

Since the publication of this article, the Office of the Privacy Commissioner of Canada and the Information Commissioner’s Office of the United Kingdom have received notices of the breach.

You are invited to contribute to this article in the comments or by sending me a direct email at wael@kidesign.io. Visit waelhassan.com for more articles on Privacy, Security, and Social Media Analytics.

Waël is on twitter @drwhassan


Categories: Privacy

Police use of AI-based facial recognition – Privacy threats and opportunities

This article describes the issue of police use of AI-based facial recognition technology, discusses why it poses a problem, outlines the methodology of assessment, and proposes a solution.

The CBC reported on March 3[1]  that the federal privacy watchdog in Canada and three of its provincial counterparts will jointly investigate police use of facial-recognition technology supplied by US firm Clearview AI.

Privacy Commissioner Daniel Therrien will be joined in the probe by ombudsmen from British Columbia, Alberta, and Quebec.

Meanwhile, in Ontario, the Information and Privacy Commissioner has requested that any Ontario police service using Clearview AI’s tool stop doing so.[2]

The Privacy Commissioners have acted following media reports raising concerns that the company is collecting and using personal information without consent.

The investigation will check whether the US technology company scrapes photos from the internet without consent. “Clearview can unearth items of personal information — including a person’s name, phone number, address or occupation — based on nothing more than a photo,” reported the CBC.[1] Clearview AI is also under scrutiny in the US, where senators are querying whether its scraping of social media images puts it in violation of online child privacy laws.

In my opinion, there are three factors that could get Clearview AI, and its Canadian clients, in hot water. Here are the issues as I see them:

  1. The first issue: Collecting and aggregating data without consent. Even though the photos may have been procured under contract from social media networks, the linking of database photos to demographic information is a big no-no from an individual privacy perspective. Facebook’s infamous experience with the now-dissolved Cambridge Analytica was another example of data being repurposed. It’s possible that, through “contract engineering” (drafting complex contracts with lots of caveats and conditional clauses), Clearview has gained contractually permissible access to Canadians’ photos. However, linking that data with demographic information would be considered a violation of Twitter and Facebook’s terms of use.
  2.  The second issue: Not providing evidence of a Privacy Impact Assessment. A Privacy Impact Assessment is used to measure the impact of a technology or updated business process on personal privacy. Governments at all levels go through these assessments when new tools are being introduced. It’s reasonable  to expect that Canadian agencies, such as police services, would go through the federal government’s own Harmonized Privacy and Security Assessment before introducing a new technology.
  3. The third issue: Jurisdiction. Transferring data about Canadians into the United States may be a violation of citizens’ privacy, especially if the data contains personal information. Certain provinces, including British Columbia and Nova Scotia, have explicit rules about preventing personal data from going south of the border.

How will Privacy Commissioners decide if this tool is acceptable?

The R v. Oakes test[3] will be used to assess the tool’s impact. This “four part test” is used by courts and legal advisors to ascertain whether a law or program can justifiably intrude upon privacy rights. The elements of the test are necessity, proportionality, effectiveness, and minimization; all four requirements must be met.

My assessment of the use of Clearview AI’s technology from the Oakes Test perspective:

  1. Necessity: Policing agencies will have no problem proving that looking for and identifying a suspect is necessary. However …
  2. Proportionality: Identifying all individuals, and exposing their identities to a large group of people, is by no means proportional.
  3. Effectiveness: The tool’s massive database might be effective in catching suspects; however, known criminals don’t usually have social media accounts.
  4. Minimization: Mass data capturing and linking doesn’t appear to be a minimal approach.

The federal Privacy Commissioner publishes its methodology at this link[4].

Are there any solutions?

Yes, AI-based solutions are available. Here at KI Design, we are developing a vision application that allows policing agencies to watch surveillance videos with everyone blurred out except the person for whom they have a surveillance warrant. For more information, reach out to us.
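As a rough illustration of the blurring half of such a system, the sketch below uses OpenCV’s bundled Haar-cascade face detector to blur every detected face; exempting the one warranted individual would require an additional face-recognition step that is not shown. The file names are placeholders.

```python
import cv2

# Classical face detector shipped with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    """Blur every detected face; a warrant-matching step would exempt one person."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

frame = cv2.imread("surveillance_frame.jpg")      # placeholder input frame
if frame is not None:
    cv2.imwrite("blurred_frame.jpg", blur_faces(frame))
```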

References:

  1. https://www.cbc.ca/news/canada/windsor/windsor-police-clearview-ai-1.5483550
  2. https://www.ipc.on.ca/information-and-privacy-commissioner-of-ontario-statement-on-toronto-police-service-use-of-clearview-ai-technology/
  3. https://scc-csc.lexum.com/scc-csc/scc-csc/en/item/117/index.do
  4. https://www.priv.gc.ca/en/privacy-topics/surveillance/police-and-public-safety/gd_sec_201011/

Categories: Privacy

Best-Practice Data Transfers for Canadian Companies – III – Vendor Contracts

PREPARING FOR DATA TRANSFER – CLAUSES FOR VENDOR CONTRACTS

A three-part series from KI Design:

Part I: Data Outsourcing

Part II: Cross-border Data Transfers

The following guidelines are best-practice recommendations for ensuring that transferred data is processed in compliance with standard regulatory privacy laws.

While a contract creates legal obligations for a Vendor, your company must still take proactive measures to oversee data protection, as it retains legal responsibility for transferred data. So where the Vendor is providing services that involve data transfer, include the following clauses in your contract:

Privacy and Security Standards

  1. The Vendor confirms that it will manage the data through the data lifecycle according to the privacy standards followed by [your company]. The Vendor will provide documentation to confirm that these standards are being followed.
  2. The Vendor will demonstrate that it has audited, high-level technical and organizational security practices in place.
  3. The Vendor will ensure that all data to be transferred is encrypted or de-identified as needed (see the sketch following this list).
  4. If the Vendor will be using another downstream data processor to fulfill part of the contract, the Vendor will inform [your company] of this, and will implement with that third party a contract containing data protection measures equal to those in the contract between [your company] and the Vendor.
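To illustrate clause 3, here is a minimal sketch of encrypting an extract before it leaves your environment, using the widely available Python cryptography library. The file names are placeholders, and key management and de-identification choices remain your company’s responsibility.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # stored securely by your company, never sent to the Vendor
cipher = Fernet(key)

def encrypt_for_transfer(path_in: str, path_out: str) -> None:
    """Encrypt a data extract locally so the Vendor only ever receives ciphertext."""
    with open(path_in, "rb") as f:
        token = cipher.encrypt(f.read())
    with open(path_out, "wb") as f:
        f.write(token)

encrypt_for_transfer("data_extract.csv", "data_extract.csv.enc")   # placeholder files
```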

Integrity of Data

Data Breaches

Data Ownership

Auditing

 

OTHER THINGS TO CONSIDER

Have you:

Focusing on data protection issues from the procurement process onward will diminish data breach and other security risks. Create a Request For Proposals template that ensures security elements are included in the evaluation process, and audit and monitor outsourcing operating environments for early detection of any suspicious activity. Limit data transfers across company, provincial, or national borders, and avoid any unintended cross-border data transfers.

REMEMBER: Your company is still legally responsible for transferred data

A three-part series from KI Design: Part I: Data Outsourcing, Part II: Cross-border Data Transfers

For further information on data transfers, and privacy compliance matters generally, see Waël Hassan’s book Privacy in Design: A Practical Guide to Corporate Compliance, available on Amazon.

 


Categories: Privacy

Best-Practice Data Transfers for Canadian Companies – Part II

CROSS-BORDER DATA TRANSFERS

A three-part series from KI Design: Part I: Data Outsourcing , Part III: Preparing for Data Transfer – Clauses for Vendor Contracts

When personal information (PI) is moved across national or provincial boundaries in the course of commercial activity, it’s considered a cross-border data transfer.

Transferring data brings risk. As well as increasing the dangers of unauthorized access and use, it raises legal complications: the data will become subject to the laws of the country to which it’s being transferred. Your company will need to take legal advice to make sure you’re aware of what laws are applicable, and what that may mean in terms of compliance.

Remember: Once the data is transferred, your organization will continue to have the same legal obligations to data subjects. Even when the PI is in a different jurisdiction, privacy requirements laid down by the federal Personal Information Protection and Electronic Documents Act (PIPEDA), such as obtaining a data subject’s consent for sharing their data, are still in play.

If your organization chooses to transfer PI to a company outside Canada, you’ll need to notify any affected individuals, ideally at the time of data collection. Depending on the type of information involved, these individuals may be customers or employees. The notice must make it clear to the data subject that their personal information may be processed by a foreign company, and thus become subject to foreign laws. Data subjects should be advised that foreign legislation (such as the USA PATRIOT Act) might grant that country’s courts, law enforcement, or national security authorities the power to access their PI without their knowledge or consent.

Once an individual has consented to the terms and purposes of data collection, they don’t then have the right to refuse to have their information transferred, as long as the transfer is in accordance with the original intended purpose of collection.

Legal Requirements: Data Outsourcing across Jurisdictions

CANADA: PIPEDA regulates all personal data that flows across national borders in the course of private sector commercial transactions, regardless of other applicable provincial privacy laws.[i]

Outsourcing personal data processing activities is allowed under PIPEDA, but all reasonable steps must be taken to protect the data while it is abroad.

Because of the high standards PIPEDA sets for protecting Canadians’ personal information, the privacy risks of sharing data with non-EU-based foreign companies are greater than if your company were sharing data with a Canadian organization.

When personal information is transferred internationally, it also becomes subject to the laws of the new jurisdiction. These cannot be bypassed by contractual terms asserting protection from data surveillance; foreign laws cannot be overridden by contract.

US privacy law is constantly evolving, through a series of individual cases and a patchwork of federal and state laws. This piecemeal approach to privacy regulation makes it challenging to evaluate privacy compliance.

For Canadian organizations using US-based data processing services, the differences between Canadian and US privacy models raise valid concerns about enforcement. Canadians do not have access to Federal Trade Commission complaint processes (unless a US consumer law has been broken). Despite signing contracts that include privacy provisions, Canadian organizations rarely have the resources to pursue litigation against major US Internet companies. In practical terms, this means that US companies may not be legally accountable to Canadian clients.

Recent US data surveillance laws make Canadian PI held by US companies even more vulnerable. Several provinces have passed legislation prohibiting public bodies, such as healthcare and educational institutions, from storing personal information outside Canada. Alberta’s Personal Information Protection Act creates statutory requirements regarding private sector outsourcing of data. The Act requires that organizations transferring PI across Canadian borders for processing (rather than a simple transfer of PI) must have given affected individuals prior notice of the transfer, as well as the opportunity to contact an informed company representative with any questions. It also imposes a mandatory data breach reporting obligation. BC’s Personal Information Protection Act contains similar requirements. Quebec’s stricter private-sector privacy law restricts the transfer of data outside the province.[ii]

“Organizations must be transparent about their personal information handling practices. This includes advising customers that their personal information may be sent to another jurisdiction for processing and that while the information is in another jurisdiction it may be accessed by the courts, law enforcement and national security authorities.” 

– Office of the Privacy Commissioner

Sector-specific Canadian operations may face additional legal requirements. Outsourcing the processing of health information will be regulated by the various provincial health information laws, for example. While the Ontario Personal Health Information Protection Act doesn’t limit cross-border PI transfers, it does prohibit the disclosure of PI to persons outside Ontario without the consent of affected individuals.

 

UNITED STATES: The USA PATRIOT Act declares that all information collected by US companies or stored in the US is subject to US government surveillance. Foreign data subjects have little recourse to protect the privacy of their personal information held by US multinational corporations, which include most cloud computing service providers.

 

EUROPE: The European approach to data sharing across jurisdictions is based on territory: foreign companies must comply with the laws of the countries in which their customers reside.

 

The EU’s General Data Protection Regulation (GDPR) generally prohibits the transfer of personal information to recipients outside the EU unless:

  1. the European Commission has issued an adequacy decision for the destination country;
  2. appropriate safeguards are in place, such as standard contractual clauses or binding corporate rules; or
  3. a specific derogation applies, such as the data subject’s explicit, informed consent to the transfer.[iii]

For foreign companies to operate in Europe, national regulators in each jurisdiction within the EU will have to assess the legal compliance of company codes of conduct. These will have to contain satisfactory Privacy Principles (e.g., transparency, data quality, security) and effective implementation tools (e.g., auditing, training, complaints management), and demonstrate that they are binding. Codes of conduct must apply to all parties involved in the business of the data controller or the data processor, including employees, and all parties must ensure compliance. (For instance, under the GDPR, cloud computing service providers will almost certainly have to locate servers outside the US to protect data from American surveillance privacy violations.)

Canada is currently deemed an “adequate” jurisdiction by the EU because of the privacy protections provided by PIPEDA (although be aware that adequacy decisions are reviewed every four years, and so that may change). Your company will still need to make sure that data transfer protocols follow the GDPR’s requirements, which are stricter than those mandated by PIPEDA. Consent is something you’ll need to pay particular attention to. The GDPR does not allow an opt-out option; consent to data processing must be informed and specific.

Given the scale of financial penalties under the GDPR, it’s best to consult legal counsel to ensure that you have dotted your i’s and crossed your t’s.

Regulating Data Sharing between Organizations: A Cross-border Analysis

EU and North American laws around data sharing reflect very different understandings of responsibility for protecting privacy. At first glance, US and Canadian laws mandate that personal data shared with a third party be bound by a policy, the provisions of which ought to be equally or more stringent than the terms to which data subjects agreed when they initially released their personal information. However, these North American privacy laws only hold accountable the primary service provider that first collected the data; privacy breaches by data recipients are considered to be violations of contractual obligations, but not violations of privacy rights.

The European Union’s General Data Protection Regulation, in contrast, adopts a shared responsibility model for data sharing; both service providers (in this context, data collectors) and subcontractors (data processors or other third-party vendors) are responsible for enforcing privacy provisions. Data collectors are not permitted to share personal data with a third party unless it is possible to guarantee the enforcement of equal or stronger privacy provisions than those found in the original agreements with data subjects. This shared responsibility model reflects greater privacy maturity, by shifting from an exclusive focus on adequate policy and contracts to ensuring effective implementation through monitoring and governance of all data holders.

For further information on data transfers, and privacy compliance matters generally, see Waël Hassan’s book Privacy in Design: A Practical Guide to Corporate Compliance, available on Amazon.

A three-part series from KI Design: Part I: Data Outsourcing, Part III: Preparing for Data Transfer – Clauses for Vendor Contracts

[i] For further information, see Office of the Privacy Commissioner, “Businesses and Your Personal Information,” online at: https://www.priv.gc.ca/en/privacy-topics/your-privacy-rights/businesses-and-your-personal-information/.

[ii] For further information, see George Waggott, Michael Reid, & Mitch Koczerginski, “Cloud Computing: Privacy and Other Risks,” McMillan LLP, December 2013, online at: https://mcmillan.ca/Files/166506_Cloud%20Computing.pdf.

[iii] For further information, see the analysis by Dr. Detlev Gabel & Tim Hickman in Unlocking the EU General Data Protection Regulation: A Practical Handbook on the EU’s New Data Protection Law, Chapter 13, White & Case website, 22 Jul 2016, online at: https://www.whitecase.com/publications/article/chapter-13-cross-border-data-transfers-unlocking-eu-general-data-protection.