Series: Bringing Privacy Regulation into an AI World

Bringing Privacy Regulation into an AI World

Over the past decade, privacy has become an increasing concern for the public as data analytics have expanded exponentially in scope. Big data has become a part of our everyday lives in ways that most people are not fully aware of and don’t understand. Governments are struggling to keep up with the pace of innovation and figure out how to regulate a big data sector that supersedes national borders.


Different jurisdictions have taken different approaches to privacy regulation in the new context of big data, machine learning, and artificial intelligence (AI). The European Union is in the lead, having updated its privacy legislation, established a “digital single market” across Europe, and resourced a strong enforcement system. In the United States, privacy remains governed by a patchwork of federal and state legislation, largely sector-specific and often referencing outdated technologies. The US Federal Trade Commission is powerful and assertive in punishing corporations that fail to protect data from theft, but has rarely attempted to regulate the big data market. Canada’s principle-based privacy legislation remains relevant, but the Office of the Privacy Commissioner (OPC) acknowledged recently that “PIPEDA [the Personal Information Protection and Electronic Documents Act] falls short in its application to AI systems.”[1] As the OPC states, AI creates new privacy risks with serious human rights implications, including automated bias and discrimination.[2] Given the pace of technological innovation, there may not be much time left to establish a “human-centered approach to AI.”[3]

This series will explore, from a Canadian perspective, options for effective privacy regulation in an AI context. I will discuss the following topics: 

  1. Do we need to legislate AI?
  2. Are privacy principles compatible with AI?
  3. Big data’s big privacy leak – metadata and data lakes
  4. Access control in a big data context
  5. Moving from access control to use control
  6. Implementing use control – the next generation of data protection
  7. Why Canadian privacy enforcement needs teeth

[1] Office of the Privacy Commissioner of Canada, Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence, 2020.

[2] Ibid.

[3] G20 Ministerial Statement on Trade and Digital Economy, 2019.


Is AI Compatible with Privacy Principles?

Bringing Privacy Regulation into an AI World, Part 2

This seven-part series explores, from a Canadian perspective, options for effective privacy regulation in an AI context.

Many experts on privacy and artificial intelligence (AI) have questioned whether AI technologies such as machine learning, predictive analytics, and deep learning are compatible with basic privacy principles. It is not difficult to see why: while privacy is primarily concerned with restricting the collection, use, retention, and sharing of personal information, AI is all about linking and analyzing massive volumes of data in order to discover new information.


The Office of the Privacy Commissioner (OPC) of Canada recently stated that “AI presents fundamental challenges to all foundational privacy principles as formulated in PIPEDA [Canada’s Personal Information Protection and Electronic Documents Act].”[1] The OPC notes that AI systems require large amounts of data to train and test algorithms, and that this conflicts with the principle of limiting collection of personal data.[2] In addition, organizations that use AI often do not know ahead of time how they will use data or what insights they will find.[3] This certainly appears to contradict the PIPEDA principles of identifying the purposes of data collection in advance (purpose specification), and collecting, using, retaining, and sharing data only for these purposes (data minimization).[4]

So, is it realistic to expect that AI systems respect the privacy principles of purpose specification and data minimization?

I will begin by stating clearly that I believe that people have the right to control their personal data. To abandon the principles of purpose specification and data minimization would be to allow organizations to collect, use, and share personal data for their own purposes, without individuals’ informed consent. These principles are at the core of any definition of privacy, and must be protected. Doing so in an AI context, however, will require creative new approaches to data governance.

I have two suggestions towards implementing purpose specification and data minimization in an AI context:

  1. Require internal and third-party auditing

Data minimization – the restriction of data collection, use, retention, and disclosure to specified purposes – can be enforced by adding regular internal auditing and third-party auditability to legal requirements.

As currently formulated, the Ten Fair Information Principles upon which PIPEDA is based do not specifically include auditing and auditability. The first principle, Accountability, should be amended to include requirements for both. Any company utilizing AI technologies – machine learning, predictive analytics, and deep learning – should be required to perform technical audits to ensure that all data collection, retention, use, and disclosure complies with privacy principles. AI systems should be designed in such a way that third-party auditors can perform white-box assessments to verify compliance.

  2. Tie accountability to purpose of collection

The core of the concept of data minimization is that personal data should only be collected for purposes specified at the time of collection, to which data subjects have given consent. While data in AI contexts is increasingly unstructured and more likely to be used and shared for multiple purposes, data use and disclosure can still be limited to specified purposes. Data minimization can be enforced by implementing purpose-based systems that link data to specific purposes and capture event sequences – that is, the internal uses of the data in question.

To that end, I suggest the following:

i) Canadian privacy law very clearly states that the collection, retention, use, and disclosure of personal data must be for a specified purpose. As I mentioned above, the fair information principle of accountability should be revised to require audits demonstrating that all collection, use, retention, and disclosure is tied to a specified purpose and otherwise complies with the fair information principles.

ii) Organizations should be required to prove and document that the sequences of events involved in data processing are tied to a specified purpose (a minimal sketch of such a purpose-linked event log follows below).
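To make this concrete, here is a minimal sketch, in Python, of what a purpose-linked event log and its audit check might look like. All of the names (PurposeRegistry, DataEvent, audit) are hypothetical, invented for illustration rather than drawn from any existing system; the point is simply that every processing event cites a registered purpose, so an internal or third-party auditor can mechanically surface violations.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration: every data-processing event must cite a
# purpose that was registered (and consented to) at collection time.

@dataclass
class PurposeRegistry:
    purposes: set[str] = field(default_factory=set)

    def register(self, purpose_id: str) -> None:
        self.purposes.add(purpose_id)

@dataclass
class DataEvent:
    subject_id: str   # data subject the event concerns
    action: str       # "collect", "use", "retain", or "disclose"
    purpose_id: str   # the specified purpose this event serves
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def audit(events: list[DataEvent], registry: PurposeRegistry) -> list[DataEvent]:
    """Return every event that is not tied to a registered purpose.

    An internal or third-party auditor could run this check against the
    full event log to demonstrate (or refute) purpose-specification
    compliance.
    """
    return [e for e in events if e.purpose_id not in registry.purposes]

# Usage: one registered purpose, one compliant event, one violation.
registry = PurposeRegistry()
registry.register("membership-administration")

log = [
    DataEvent("alex", "collect", "membership-administration"),
    DataEvent("alex", "disclose", "partner-marketing"),  # never registered
]
print(audit(log, registry))  # -> the unregistered "disclose" event
```

In a real deployment, the log itself would need to be tamper-evident (append-only and signed, for instance) for a third-party audit to carry weight.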

To continue with the example from my previous post on legislating AI:

The robotics club of which Alex is a member announces it has a partnership with Aeroplan. Under current regulations, notifying members of this data sharing partnership is sufficient, as long as the club points to Aeroplan’s privacy policy. However, given the advanced capacities of AI-enhanced data processing, the club should spell out which specific data processing activities will be applied to the data.

For example, the club’s privacy policy could include the following:

“As part of our partnership with Aeroplan, we may share the data we collect on you with Aeroplan, including your demographic data (your age and address, for example), and the frequency of your visits to our various club locations.

Aeroplan will provide us with information about you, including your income class metrics (your approximate gross earnings per year, and the band of your gross annual earnings) and information regarding your online activities and affinities; for example, your preferred gas station brand and favourite online stores, combined with the volume of your purchases.”

This notification provides a much clearer explanation of the purpose of the club’s partnership with Aeroplan than is currently standard in privacy policy text. It informs clients about data collection and sharing practices, as is required, but also describes the types of personal information that are being inferred using data analytics. With this information, clients are in a much better position to decide whether they are comfortable sharing personal data with organizations that will use it for targeted marketing.

AI will require new approaches to enforcing the data protection principles of data minimization and purpose specification. While AI systems can greatly increase the scope of data collection, use, retention, and sharing, they can also track the purposes of these data processing activities. Maintaining the link between data and specified purposes is the key to enforcing privacy principles in a big data environment.


[1] Office of the Privacy Commissioner of Canada, Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence, 2020.

[2] Centre for Information Policy Leadership, First Report: Artificial Intelligence and Data Protection in Tension, Oct 2018, pp. 12–13; Office of the Victorian Information Commissioner, Artificial Intelligence and Privacy, 2018.

[3] Office of the Victorian Information Commissioner, Artificial Intelligence and Privacy, 2018. See also the blog post by lawyer Doug Garnett, AI & Big Data Question: What happened to the distinction between primary and secondary research?, Mar 22, 2019.

[4] Office of the Victorian Information Commissioner, Artificial Intelligence and Privacy, 2018.


Categories: Privacy

A 3D Test for Evaluating COVID Alert: Canada’s Official Coronavirus App

Great news – Canada has just released its free COVID-19 exposure notification app, COVID Alert.[1] Several questions now arise: Is it private and secure? Will it be widely adopted? And how effective will it be at slowing the spread of the virus? We have evaluated the COVID Alert app against three dimensions: concept, implementation, and user experience. We grade the concept as leading-edge (A+), the implementation as just adequate (C), and the user experience as less than satisfactory (D).

Ontario Digital Service (ODS) and Canadian Digital Service (CDS) built the app based on a reference implementation by Shopify, with CDS taking operational responsibility and ownership. The security architecture was reviewed by BlackBerry and Cylance. Health Canada performed an Application Privacy Assessment,[2] which was reviewed by the Office of the Privacy Commissioner of Canada[3] and the Information and Privacy Commissioner of Ontario.

HOW COVID ALERT WORKS

  1. Via Bluetooth, the app remembers which phones have come into close physical proximity with yours.
  2. A person who contracts COVID-19 can enter a code into the app declaring their status.
  3. The app checks daily to see if anyone you’ve been near has reported testing positive.
  4. If you’ve been near an infected person in the past two weeks, you’ll get a notification. (A simplified sketch of this matching step follows below.)
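Underneath, COVID Alert uses the Apple/Google exposure notification framework: phones broadcast rotating random identifiers, diagnosed users upload only their daily keys, and matching happens on each user’s own device. The sketch below is a heavily simplified model of that matching step; the helper names are invented, and SHA-256 stands in for the framework’s actual AES-based key derivation.

```python
import hashlib

# Simplified model: each phone broadcasts rotating identifiers derived
# from a daily random key, and remembers the identifiers it has heard.
# Diagnosed users publish only their daily keys; matching is local.

def rolling_ids(daily_key: bytes, intervals: int = 144) -> set[bytes]:
    """Derive a day's worth of broadcast identifiers from a daily key.
    (Real systems use AES-based derivation; SHA-256 stands in here.)"""
    return {hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)}

def check_exposure(heard_ids: set[bytes], published_keys: list[bytes]) -> bool:
    """Re-derive identifiers from diagnosed users' published keys and
    compare against identifiers this phone actually heard via Bluetooth."""
    return any(rolling_ids(key) & heard_ids for key in published_keys)

# Usage: my phone heard one identifier from an infected person's phone.
infected_key = b"\x01" * 16
heard = {next(iter(rolling_ids(infected_key)))}
print(check_exposure(heard, [infected_key]))  # True -> notify the user
```

The privacy payoff of this design is that no central server ever learns who met whom; only short-lived random identifiers ever leave the phone.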

At present, there isn’t enough data to provide a proper assessment of COVID Alert.

However, I can offer my thoughts on the three aspects of design mentioned above:

CONCEPT

Canada got it right – a successful COVID-19 app focuses primarily on its benefit to users, i.e. notification, rather than tracking. A tracking app needs to track everyone’s routes and interactions all the time; this captures far too much private data, making it a tempting treasure trove for hackers. Privacy concerns will impede adoption of tracking apps.

COVID Alert side-steps these concerns by focusing only on notification. Most other countries that have developed an app have built a tracking device to be installed on a cell phone, with a notification feature included. Canada, on the other hand, has built a notification app. The fact that its use is voluntary will further boost public confidence.

Grade for concept: A+

IMPLEMENTATION

Apps may be built for the public, for healthcare providers, or for business use. Canada has chosen to build an app for the public. For apps created for the business or healthcare sectors, adoption is a given. The main challenge for a public app is: Will the public adopt it? It will need to reach a critical mass of adoptees to be successful. Without that critical mass, the app will provide little to no benefit.

COVID Alert’s server and app are both open source. This is an encouraging decision, as it makes the app business-friendly and improves public trust through expert scrutiny of the code.

The choice to focus on adoption by individuals is a strong point for privacy, but a challenge to effective implementation. In contrast, an app designed for business, aimed at detecting outbreaks connected to particular business locations, might raise more complex privacy issues, but could be implemented much more widely with support from the private sector.

The Canadian government had the option of implementing a COVID-19 data network between citizens, businesses, and public health. This app, unfortunately, only covers the individual, with a manual link to public health. How could this have been improved? A data exchange platform would have been a wiser choice, as it would help boost business adoption.

Grade for implementation: C

USER EXPERIENCE

While I’m not an expert, I’d say that the app’s user experience is marked by three things:

Grade for usability: D

Takeaway and Next Steps

The COVID Alert app is a positive and important concept; from a conceptual standpoint, Canada is ahead of all other solutions to date. Ideally, its implementation would go beyond the boundaries of an app. The current approach creates a basis for expansion. I intend to fully leverage the federal app by building an end-to-end solution, IoPlus, that focuses on business adoption.

References:

[1] https://www.canada.ca/en/public-health/services/diseases/coronavirus-disease-covid-19/covid-alert.html

[2] https://www.canada.ca/en/public-health/services/diseases/coronavirus-disease-covid-19/covid-alert/privacy-policy/assessment.html

[3] https://nationalpost.com/news/canada/hackers-target-canadians-with-fake-covid-19-contact-tracing-app-disguised-as-official-government-software

To read more about Wael’s outbreak notification design, follow this link. To learn about enterprise corporate compliance feel free to download Privacy in Design: a Practical Guide for Corporate Compliance from the Kindle Store.


Categories: Privacy

Outbreak Notification App Design

From Contact Tracing to Outbreak Notification

Call for Participation

This post is a call for participation for design thinkers – please email or tweet @drwhassan.

As countries assess how best to respond to the COVID-19 pandemic, many have introduced smartphone apps to help identify which users have been infected. These apps vary from country to country. Some are mandatory for citizens to use, such as apps released by the governments of China and Qatar; most are not. Some are based on user tracking; others focus on contact tracing. Some utilize a central database; others use APIs from Apple and Google. At least one has already experienced a data breach. But all of them are coming under scrutiny for violating personal privacy.

Wherever personal data is shared, privacy becomes an issue. In countries where use is voluntary, citizens are reluctant to download these apps. A poll by digital ad company Ogury showed that in France, where a centralised database approach has been adopted, only 2% of the population had downloaded the app, and only 33% would be willing to share any data with the government via the app.[1]

Public trust is a huge issue – given the frequency of data breaches, people are wary of uploading their personal information, even for the purposes of combatting COVID-19. In the USA, only 38% were prepared to share their data, and only 33% trusted the government to protect it. In the UK, the stats told a similar story, with only 40% believing that their data would be safe.[2]

In Canada, Alberta’s ABTraceTogether app was slammed by the provincial Privacy Commissioner for posing a “significant security risk.” The federal government’s COVID Alert app, released in Ontario and pending elsewhere, is promising, but the voluntary exposure notification app has user experience issues which may prevent it from being widely adopted.

In a recent informal poll which I conducted on my page, only 26% of respondents were comfortable with installing a contact tracing application. Most of the people in the No Way camp were experienced professionals with in-depth knowledge of privacy, security, and information technology.

The private sector is bringing a different approach to contact tracing. Several developers have released customer registry systems to support contact tracing and outbreak notification at the level of individual businesses. Some of these are applications; some are online platforms. Privacy remains a concern, and seeing both a privacy gap and an adoption gap, I have designed an outbreak notification system for businesses, IoPlus.

Outbreak notification vs contact tracing

Contact Tracing

Simply put, contact tracing attempts to build a network of every physical interaction, and to trace it backwards in the event that a person tests positive for COVID-19.

Contact tracing apps are state-centric, requiring centralized data storage and control.

Most implementations of contact tracing require a centralized data store with varying levels of power given to officials and businesses.

Outbreak Notification

Outbreak notification, on the other hand, is a subscription-based model, in which citizens are notified if there has been an outbreak in places they have visited. The goal of this solution is to notify individuals so that they can take action.

Outbreak notification is citizen-centric and does not require the installation of a mobile application. The individual is in the driver’s seat.

Technical Differences

In mathematical terms, contact tracing resembles a dense network in which every citizen can have as many connections as the population size; this model faces computing challenges at scale. Outbreak notification, on the other hand, is a distributed model connected at the edges: the computational load sits at the business/location level.
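A toy sketch makes the difference concrete (the data structures here are assumptions for illustration, not a description of any deployed system): contact tracing must maintain a person-to-person graph whose edge count can approach N², while outbreak notification needs only a per-location visit log, so notification work is bounded by a single location’s visitors.

```python
from collections import defaultdict

# Contact tracing: a person-to-person graph. With N citizens, the
# number of edges can approach N*(N-1)/2, and tracing walks the graph.
contact_graph: dict[str, set[str]] = defaultdict(set)

def record_contact(a: str, b: str) -> None:
    contact_graph[a].add(b)
    contact_graph[b].add(a)

# Outbreak notification: a per-location visit log. The computational
# load is bounded by one location's visitors, not the whole population.
visits: dict[str, set[str]] = defaultdict(set)

def record_visit(person: str, location: str) -> None:
    visits[location].add(person)

def notify_outbreak(location: str) -> set[str]:
    """Everyone who must be notified after an outbreak at one location."""
    return visits[location]

record_contact("amy", "ben")
record_visit("amy", "cafe-123")
record_visit("ben", "cafe-123")
print(notify_outbreak("cafe-123"))  # {'amy', 'ben'}
```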

Privacy in Design Principles

Outbreak notification has been designed and built with privacy and security as top priorities. The IoPlus notification system relies on an individual mobile device (phone or tablet) leaving a digital “breadcrumb” at a visited location. Patrons and employees scan a posted barcode when they enter and leave a business, to “check in” and “check out.” Users can sign up for notifications via email or a social media account. After an infection is recorded, individuals who self-register receive a notification through email or their social media network, based on the contact information given. Those who do not subscribe can still check their status: using the unique, encrypted “breadcrumb” generated during their visit, a patron can go to the IoPlus web page and privately see whether they have been in contact with an infected person. No one else can access that notification.

This method avoids tracking users via location data, and gives them the choice to check in and check out of participating businesses only when they wish.
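IoPlus’s internal design is not spelled out here, so the following is only a minimal sketch of one way the “breadcrumb” idea could work; every name in it is hypothetical. Each check-in issues a random, opaque token that only the visitor holds; the service stores just token-to-location-window records, and only the token holder can look up their own exposure status.

```python
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a "breadcrumb" check-in/check-out flow.
# The visitor keeps a random opaque token; the service stores only
# which location/time window each token was present in -- no identity.

@dataclass
class Breadcrumb:
    token: str
    location: str
    check_in: datetime
    check_out: datetime | None = None

crumbs: dict[str, Breadcrumb] = {}                    # token -> visit
outbreaks: list[tuple[str, datetime, datetime]] = []  # location, window

def check_in(location: str) -> str:
    token = secrets.token_urlsafe(16)  # unguessable; only visitor holds it
    crumbs[token] = Breadcrumb(token, location, datetime.now(timezone.utc))
    return token

def check_out(token: str) -> None:
    crumbs[token].check_out = datetime.now(timezone.utc)

def report_outbreak(location: str, start: datetime, end: datetime) -> None:
    outbreaks.append((location, start, end))

def exposed(token: str) -> bool:
    """Only the token holder can ask about their own visit."""
    c = crumbs.get(token)
    if c is None:
        return False
    end = c.check_out or datetime.now(timezone.utc)
    return any(loc == c.location and c.check_in <= w_end and w_start <= end
               for loc, w_start, w_end in outbreaks)

# Usage: visit a shop; an outbreak is later reported for that window.
t = check_in("shop-42")
check_out(t)
now = datetime.now(timezone.utc)
report_outbreak("shop-42", now - timedelta(hours=1), now + timedelta(hours=1))
print(exposed(t))  # True -> the visitor sees this privately
```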

Next Steps

KI Design is building an outbreak notification service that is:

  1. App-less: it doesn’t require users to install any apps.
  2. Server-less: it does not store user or tracking data in a hosting environment.
  3. Privacy in Design: design artifacts are built with privacy in mind.

We are calling for contributors to participate in the design and promotion of IoPlus.

For more information, please reach out to me at wael@kidesign.io or via Twitter @drwhassan.

Dr. Wael Hassan,

Founder and CEO of KI Design


Categories: Privacy

Police use of AI-based facial recognition – privacy threats and opportunities

This article describes police use of AI-based facial recognition technology, discusses why it poses a problem, describes the methodology of assessment, and proposes a solution.

The CBC reported on March 3[1] that the federal privacy watchdog in Canada and three of its provincial counterparts will jointly investigate police use of facial-recognition technology supplied by US firm Clearview AI.

Privacy Commissioner Daniel Therrien will be joined in the probe by ombudsmen from British Columbia, Alberta, and Quebec.

Meanwhile, in Ontario, the Information and Privacy Commissioner has requested that any Ontario police service using Clearview AI’s tool stop doing so.[2]

The Privacy Commissioners have acted following media reports raising concerns that the company is collecting and using personal information without consent.

The investigation will check whether the US technology company scrapes photos from the internet without consent. “Clearview can unearth items of personal information — including a person’s name, phone number, address or occupation — based on nothing more than a photo,” reported the CBC.[1] Clearview AI is also under scrutiny in the US, where senators are querying whether its scraping of social media images puts it in violation of online child privacy laws.

In my opinion, there are three factors that could get Clearview AI, and its Canadian clients, in hot water. Here are the issues as I see them:

  1. The first issue: Collecting and aggregating data without consent. Even though the photos may have been procured under contract from social media networks, the linking of database photos to demographic information is a big no-no from an individual privacy perspective. Facebook’s infamous experience with the now-dissolved Cambridge Analytica was another example of data being repurposed. It’s possible that, through “contract engineering” (drafting complex contracts with lots of caveats and conditional clauses), Clearview has gained contractually permissible access to Canadians’ photos. However, linking that data with demographic information would be considered a violation of Twitter and Facebook’s terms of use.
  2. The second issue: Not providing evidence of a Privacy Impact Assessment. A Privacy Impact Assessment is used to measure the impact of a technology or updated business process on personal privacy. Governments at all levels go through these assessments when new tools are being introduced. It’s reasonable to expect that Canadian agencies, such as police services, would go through the federal government’s own Harmonized Privacy and Security Assessment before introducing a new technology.
  3. The third issue: Jurisdiction. Transferring data about Canadians into the United States may be a violation of citizens’ privacy, especially if the data contains personal information. Certain provinces, including British Columbia and Nova Scotia, have explicit rules about preventing personal data from going south of the border.

How will Privacy Commissioners decide if this tool is acceptable?

The R v. Oakes four-part test[3] will be used to assess the tool’s impact. Courts and legal advisors use this test to ascertain whether a law or program can justifiably intrude upon privacy rights. Its elements are necessity, proportionality, effectiveness, and minimization; all four requirements must be met.

My assessment of the use of Clearview AI’s technology from the Oakes Test perspective:

  1. Necessity: Policing agencies will have no problem proving that looking for and identifying a suspect is necessary. However …
  2. Proportionality: Identifying all individuals, and exposing their identities to a large group of people, is by no means proportional.
  3. Effectiveness: The tool’s massive database might be effective in catching suspects; however, known criminals don’t usually have social media accounts.
  4. Minimization: Mass data capturing and linking doesn’t appear to be a minimal approach.

The federal Privacy Commissioner publishes its methodology at this link[4].

Are there any solutions?

Yes, AI-based solutions are available. Here at KI Design, we are developing a vision application that allows policing agencies to watch surveillance videos with everyone blurred out except the person for whom they have a surveillance warrant. For more information, reach out to us.

References:

  1. https://www.cbc.ca/news/canada/windsor/windsor-police-clearview-ai-1.5483550
  2. https://www.ipc.on.ca/information-and-privacy-commissioner-of-ontario-statement-on-toronto-police-service-use-of-clearview-ai-technology/
  3. https://scc-csc.lexum.com/scc-csc/scc-csc/en/item/117/index.do
  4. https://www.priv.gc.ca/en/privacy-topics/surveillance/police-and-public-safety/gd_sec_201011/

Categories: Privacy

Best-Practice Data Transfers for Canadian Companies – III – Vendor Contracts

PREPARING FOR DATA TRANSFER – CLAUSES FOR VENDOR CONTRACTS

A three-part series from KI Design:

Part I: Data Outsourcing

Part II: Cross-border Data Transfers

The following guidelines are best-practice recommendations for ensuring that transferred data is processed in compliance with applicable privacy laws.

While a contract creates legal obligations for a Vendor, your company must still take proactive measures to oversee data protection, as it retains legal responsibility for transferred data. So where the Vendor is providing services that involve data transfer, include the following clauses in your contract:

Privacy and Security Standards

  1. The Vendor confirms that it will manage the data through the data lifecycle according to the privacy standards followed by [your company]. The Vendor will provide documentation to confirm that these standards are being followed.
  2. The Vendor will demonstrate that it has audited, high-level technical and organizational security practices in place.
  3. The Vendor will ensure that all data to be transferred is encrypted or de-identified as needed.
  4. If the Vendor will be using another downstream data processor to fulfill part of the contract, the Vendor will inform [your company] of this, and will implement with that third party a contract containing data protection measures equal to those in the contract between [your company] and the Vendor.

Integrity of Data

Data Breaches

Data Ownership

Auditing

 

OTHER THINGS TO CONSIDER

Have you:

Focusing on data protection issues from the procurement process onward will diminish data breach and other security risks. Create a Request For Proposals template that ensures security elements are included in the evaluation process, and audit and monitor outsourcing operating environments for early detection of any suspicious activity. Limit data transfers across company, provincial, or national borders, and avoid any unintended cross-border data transfers.

REMEMBER: Your company is still legally responsible for transferred data


For further information on data transfers, and privacy compliance matters generally, see Waël Hassan’s book Privacy in Design: A Practical Guide to Corporate Compliance, available on Amazon.

 


Categories: Privacy

Best-Practice Data Transfers for Canadian Companies – Part II

CROSS-BORDER DATA TRANSFERS

A three-part series from KI Design: Part I: Data Outsourcing, Part III: Preparing for Data Transfer – Clauses for Vendor Contracts

When personal information (PI) is moved across national or provincial borders in the course of commercial activity, it’s considered a cross-border data transfer.

Transferring data brings risk. As well as increasing the dangers of unauthorized access and use, it raises legal complications: the data will become subject to the laws of the country to which it’s being transferred. Your company will need to seek legal advice to make sure you’re aware of which laws apply, and what that may mean in terms of compliance.

Remember: Once the data is transferred, your organization will continue to have the same legal obligations to data subjects. Even when the PI is in a different jurisdiction, privacy requirements laid down by the federal Personal Information Protection and Electronic Documents Act (PIPEDA), such as obtaining a data subject’s consent for sharing their data, are still in play.

If your organization chooses to transfer PI to a company outside Canada, you’ll need to notify any affected individuals, ideally at the time of data collection. Depending on the type of information involved, these individuals may be customers or employees. The notice must make it clear to the data subject that their personal information may be processed by a foreign company, and thus become subject to foreign laws. Data subjects should be advised that foreign legislation (such as the USA PATRIOT Act) might grant that country’s courts, law enforcement, or national security authorities the power to access their PI without their knowledge or consent.

Once an individual has consented to the terms and purposes of data collection, they don’t then have the right to refuse to have their information transferred, as long as the transfer is in accordance with the original intended purpose of collection.

Legal Requirements: Data Outsourcing across Jurisdictions

CANADA: PIPEDA regulates all personal data that flows across national borders in the course of private sector commercial transactions, regardless of other applicable provincial privacy laws.[i]

Outsourcing personal data processing activities is allowed under PIPEDA, but all reasonable steps must be taken to protect the data while it is abroad.

Because of the high standards PIPEDA sets for protecting Canadians’ personal information, the privacy risks of sharing data with non-EU-based foreign companies are greater than if your company were sharing data with a Canadian organization.

When personal information is transferred internationally, it also becomes subject to the laws of the new jurisdiction; these cannot be bypassed or overridden by contractual terms asserting protection from data surveillance.

US privacy law is constantly evolving, through a series of individual cases and a patchwork of federal and state laws. This piecemeal approach to privacy regulation makes it challenging to evaluate privacy compliance.

For Canadian organizations using US-based data processing services, the differences between Canadian and US privacy models raise valid concerns about enforcement. Canadians do not have access to Federal Trade Commission complaint processes (unless a US consumer law has been broken). Despite signing contracts that include privacy provisions, Canadian organizations rarely have the resources to pursue litigation against major US Internet companies. In practical terms, this means that US companies may not be legally accountable to Canadian clients.

Recent US data surveillance laws make Canadian PI held by US companies even more vulnerable. Several provinces have passed legislation prohibiting public bodies, such as healthcare and educational institutions, from storing personal information outside Canada. Alberta’s Personal Information Protection Act creates statutory requirements regarding private sector outsourcing of data. The Act requires that organizations transferring PI across Canadian borders for processing (rather than a simple transfer of PI) must have given affected individuals prior notice of the transfer, as well as the opportunity to contact an informed company representative with any questions. It also imposes a mandatory data breach reporting obligation. BC’s Personal Information Protection Act contains similar requirements. Quebec’s stricter private-sector privacy law restricts the transfer of data outside the province.[ii]

“Organizations must be transparent about their personal information handling practices. This includes advising customers that their personal information may be sent to another jurisdiction for processing and that while the information is in another jurisdiction it may be accessed by the courts, law enforcement and national security authorities.” 

– Office of the Privacy Commissioner

Sector-specific Canadian operations may face additional legal requirements. Outsourcing the processing of health information will be regulated by the various provincial health information laws, for example. While the Ontario Personal Health Information Protection Act doesn’t limit cross-border PI transfers, it does prohibit the disclosure of PI to persons outside Ontario without the consent of affected individuals.

 

UNITED STATES: The USA PATRIOT Act declares that all information collected by US companies or stored in the US is subject to US government surveillance. Foreign data subjects have little recourse to protect the privacy of their personal information held by US multinational corporations, which include most cloud computing service providers.

 

EUROPE: The European approach to data sharing across jurisdictions is based on territory: foreign companies must comply with the laws of the countries in which their customers reside.

 

The EU’s General Data Protection Regulation (GDPR) generally prohibits the transfer of personal information to recipients outside the EU unless:

  1. the destination jurisdiction benefits from an EU adequacy decision;
  2. appropriate safeguards are in place, such as standard contractual clauses or binding corporate rules; or
  3. a specific derogation applies, such as the data subject’s explicit, informed consent.[iii]

For foreign companies to operate in Europe, national regulators in each EU jurisdiction will have to assess the legal compliance of company codes of conduct. These must contain satisfactory privacy principles (e.g., transparency, data quality, security) and effective implementation tools (e.g., auditing, training, complaints management), and demonstrate that they are binding. Codes of conduct must apply to all parties involved in the business of the data controller or the data processor, including employees, and all parties must ensure compliance. (For instance, under the GDPR, cloud computing service providers will almost certainly have to locate servers outside the US to protect data from American surveillance.)

Canada is currently deemed an “adequate” jurisdiction by the EU because of the privacy protections provided by PIPEDA (although be aware that adequacy decisions are reviewed every four years, and so that may change). Your company will still need to make sure that data transfer protocols follow the GDPR’s requirements, which are stricter than those mandated by PIPEDA. Consent is something you’ll need to pay particular attention to. The GDPR does not allow an opt-out option; consent to data processing must be informed and specific.

Given the scale of financial penalties under the GDPR, it’s best to consult legal counsel to ensure that you have dotted your i’s and crossed your t’s.

Regulating Data Sharing between Organizations: A Cross-border Analysis

EU and North American laws around data sharing reflect very different understandings of responsibility for protecting privacy. At first glance, US and Canadian laws mandate that personal data shared with a third party be bound by a policy, the provisions of which ought to be equally or more stringent than the terms to which data subjects agreed when they initially released their personal information. However, these North American privacy laws only hold accountable the primary service provider that first collected the data; privacy breaches by data recipients are considered to be violations of contractual obligations, but not violations of privacy rights.

The European Union’s General Data Protection Regulation, in contrast, adopts a shared responsibility model for data sharing; both service providers (in this context, data collectors) and subcontractors (data processors or other third-party vendors) are responsible for enforcing privacy provisions. Data collectors are not permitted to share personal data with a third party unless it is possible to guarantee the enforcement of equal or stronger privacy provisions than those found in the original agreements with data subjects. This shared responsibility model reflects greater privacy maturity, by shifting from an exclusive focus on adequate policy and contracts to ensuring effective implementation through monitoring and governance of all data holders.

For further information on data transfers, and privacy compliance matters generally, see Waël Hassan’s book Privacy in Design: A Practical Guide to Corporate Compliance, available on Amazon.


[i] For further information, see Office of the Privacy Commissioner, “Businesses and Your Personal Information,” online at: https://www.priv.gc.ca/en/privacy-topics/your-privacy-rights/businesses-and-your-personal-information/.

[ii] For further information, see George Waggott, Michael Reid, & Mitch Koczerginski, “Cloud Computing: Privacy and Other Risks,” McMillan LLP, December 2013, online at: https://mcmillan.ca/Files/166506_Cloud%20Computing.pdf.

[iii] For further information, see the analysis by Dr. Detlev Gabel & Tim Hickman in Unlocking the EU General Data Protection Regulation: A Practical Handbook on the EU’s New Data Protection Law, Chapter 13, White & Case website, 22 Jul 2016, online at: https://www.whitecase.com/publications/article/chapter-13-cross-border-data-transfers-unlocking-eu-general-data-protection.


Categories: Privacy

Best-Practice Data Transfers for Canadian Companies – I – Outsourcing

DATA OUTSOURCING

In our digitally interconnected world, most organizations that handle personal information will transfer it to a third party at some stage of the data life cycle. Your company may send personal information (PI) to an external service provider such as PayPal to process customer payments – that’s a data transfer. Perhaps you hired a data disposal company to destroy data at the end of its life span – that’s a data transfer. Your company may outsource payroll – that means you’re transferring employee data. Any sharing or transmitting of data, electronic or hard copy, is considered a transfer.

But remember: all transfers of personal information must be compliant with the Personal Information Protection and Electronic Documents Act (PIPEDA) and any relevant provincial and sector-specific privacy laws. So, be aware that the many business advantages of data outsourcing are offset by increased security risks, as we’ll see below. And when PI flows into another jurisdiction, the situation becomes more complex.

The key take-away is this:

When you transfer personal information, even if it passes into another jurisdiction, you retain accountability for its care.

A common type of data transfer is outsourcing: handing over aspects of the provision and management of data computing and storage to a third party. A cloud database managed by a third party is a common example of data outsourcing. Within a data outsourcing design, data sets are often stored together with an application that connects to an external server, which then assumes data management.

There are many advantages to delegating a business process to an external service provider; these can include efficiency, lower labour costs, and enhanced quality and innovation. (Data processing is often outsourced offshore, to foreign businesses: this raises other issues, which are addressed in Part II: Cross-border Data Transfers.)

However, data outsourcing brings its own challenges and security risks. Can you guarantee that your data processor will not misuse the data in its care? Can you ensure that access controls will be enforced, and policy updates supported, by your processor? Will the processor commit to as rigorous a Privacy Framework as your company has?

The greatest danger with data outsourcing is the risk of a security breach. According to Trustwave’s 2013 Global Security Report, in 63% of global data breach investigations, “a third party responsible for system support, development and/or maintenance introduced the security deficiencies exploited by attackers.”[i] Patrick Thibodeau, senior editor of Computerworld, stresses that companies utilizing the advantages of data outsourcing “need to go through an exhaustive due-diligence process and examine every possible contingency.”[ii]

Encrypting the data to be outsourced can prevent both outside attacks and inappropriate access from the server itself. It’s also helpful to combine authorization policies with encryption methods, so that access control requirements are bundled together with the data.
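One way to bundle access-control requirements with the data itself is authenticated encryption with associated data (AEAD): the authorization policy travels alongside the ciphertext as associated data, so decryption fails if the policy is stripped or altered. Below is a minimal sketch using the Python cryptography library; the policy fields and key handling are assumptions for illustration, not a prescribed scheme.

```python
import json
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Sketch: bind an authorization policy to the data itself. The policy
# is the AEAD "associated data": it is not secret, but decryption fails
# if anyone tampers with it or detaches it from the ciphertext.

key = AESGCM.generate_key(bit_length=256)   # in practice: from a KMS
aesgcm = AESGCM(key)

policy = json.dumps({
    "purpose": "payroll-processing",        # assumed policy fields
    "allowed_processor": "vendor-x",
    "retention_days": 90,
}).encode()

record = b'{"employee_id": 1234, "salary": 70000}'
nonce = os.urandom(12)                      # 96-bit nonce, never reused
ciphertext = aesgcm.encrypt(nonce, record, policy)

# The processor must present the exact policy to decrypt:
plaintext = aesgcm.decrypt(nonce, ciphertext, policy)

# A stripped or modified policy makes decryption fail:
tampered = policy.replace(b"vendor-x", b"vendor-y")
try:
    aesgcm.decrypt(nonce, ciphertext, tampered)
except InvalidTag:
    print("policy altered -> access denied")
```

The design choice here is that enforcement is cryptographic rather than purely contractual: the data is unusable unless the agreed policy accompanies it intact.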

Before transferring data, think carefully: is the personal information component actually needed? If you can ensure that the data is (irreversibly) anonymized, and keep careful records of having done so, the personal information will disappear and data protection principles will no longer apply.
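In practice, anonymization starts with dropping direct identifiers and generalizing quasi-identifiers, as in the minimal sketch below. The field names and generalization rules are assumptions for illustration; true irreversibility also demands a documented re-identification risk assessment.

```python
# Sketch: drop direct identifiers, generalize quasi-identifiers.
# Field names and rules are illustrative only; real anonymization
# requires a documented re-identification risk assessment.

DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def anonymize(record: dict) -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "postal_code" in out:            # keep only the region prefix
        out["postal_code"] = out["postal_code"][:3]
    if "age" in out:                    # generalize to a 10-year band
        low = out["age"] // 10 * 10
        out["age"] = f"{low}-{low + 9}"
    return out

row = {"name": "Alex", "email": "a@x.io", "age": 34, "postal_code": "M5V 2T6"}
print(anonymize(row))  # {'age': '30-39', 'postal_code': 'M5V'}
```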

PIPEDA doesn’t prevent organizations from outsourcing the processing of data, but the Office of the Privacy Commissioner cautions that organizations outsourcing PI need to take “all reasonable steps to protect that information from unauthorized uses and disclosures while it is in the hands of the third-party processor.”[iii]

Legal Requirements

CANADA: Under PIPEDA, the “transfer” of data is considered a “use” by a company, as opposed to a “disclosure” – this is because the processing of information by a third party is still done for the purposes for which the PI was originally collected. “Processing” is interpreted as any use of the information by a third party for its intended purpose at the time of collection.

PIPEDA’s first Privacy Principle, Accountability, states:

“An organization is responsible for personal information in its possession or custody, including information that has been transferred to a third party for processing. The organization shall use contractual or other means to provide a comparable level of protection while the information is being processed by a third party.”

This statement has three key clauses; we’ll look at each in turn.

1) “An organization is responsible for personal information in its possession or custody, including information that has been transferred to a third party for processing.” The onus of responsibility lies with your organization, even once information has been transferred to a third party; you cannot outsource legal liability. This means that you’ll need to know exactly what data protection safeguards your data processor has in place, and be able to monitor them during the transfer process.

2) An organization needs to ensure a “comparable level of protection while the information is being processed by a third party.” According to the Office of the Privacy Commissioner, this means that the third party must provide a level of data protection comparable to the protection that would have been in place had the data not been transferred.[iv] (The protection should be generally equivalent, but it doesn’t necessarily have to be exactly the same across the board.)

3) “The organization shall use contractual or other means” to comply with legal privacy requirements. There should be a written agreement in every instance where personal information is transferred to a third party. A contract cannot transfer responsibility, but it can describe necessary measures a data processor must take to optimally safeguard personal information, and clearly delineate the responsibilities of each party.

In an effort to protect PI and reduce risks, PIPEDA’s restrictions encourage organizations to minimize data transfers, and only to use them for reasonable purposes.

Quebec has passed legislation[v] that imposes strict rules on private-sector organizations using, transferring, or disclosing personal information outside Quebec, even if the PI is being transferred to another Canadian province. Under the law, data transfer or disclosure is prohibited unless it can be guaranteed that the PI will not be used or disclosed for purposes other than those for which it was transferred, or disclosed to third parties without consent.

UNITED STATES: While no federal law creates a general requirement for data owners regarding data protection during transfer, sectoral laws may do so: for example, the Health Insurance Portability and Accountability Act imposes strict regulations on covered entities seeking to disclose personal health information to a service provider. State laws may also impose security standards; for example, California requires organizations transferring data to third parties to contractually oblige those third parties to maintain reasonable security protocols.

EUROPE: Free transfer of personal data within member states is integral to the founding principles of the EU. As long as the data is transferred in compliance with the strict requirements of the General Data Protection Regulation, the Regulation does not restrict data flows within the European Union or European Economic Area.

For further information on data transfers, and privacy compliance matters generally, see Waël Hassan’s book Privacy in Design: A Practical Guide to Corporate Compliance, available on Amazon.


ENDNOTES

[i] Trustwave 2013 Global Security Report, p. 10, online at: https://www.trustwave.com/Resources/Library/Documents/2013-Trustwave-Global-Security-Report/.

[ii] Patrick Thibodeau, “Offshore risks are numerous, say those who craft contracts,” Computerworld, 3 November 2003, p. 12, online at: https://www.computerworld.com/article/2573865/it-outsourcing/offshore-risks-are-numerous–say-those-who-craft-contracts.html.

[iii] For more information, see the OPC’s “Privacy and Outsourcing for Businesses” guidelines, online at: https://www.priv.gc.ca/en/privacy-topics/outsourcing/02_05_d_57_os_01/.

[iv] Office of the Privacy Commissioner, “Guidelines for Processing Personal Data Across Borders,” January 2009, online at: https://www.priv.gc.ca/en/privacy-topics/personal-information-transferred-across-borders/gl_dab_090127/.

[v] P-39.1 – Act respecting the protection of personal information in the private sector, online at: http://www.legisquebec.gouv.qc.ca/en/showdoc/cs/P-39.1.


Categories: About Waël, Privacy

Privacy in Design: A Practical Guide to Corporate Compliance

I am pleased to announce that Privacy in Design is now available for preorder on Amazon and Kindle.

The book describes a journey toward achieving corporate compliance while maintaining and improving business objectives.

It is an essential resource for privacy officers, executive leadership, and boards of directors.

From developing a privacy program, through data handling and protection, to auditing and monitoring privacy maturity, Waël Hassan presents his expert advice, clearly delineating applicable legal obligations and increased regulatory requirements, including the GDPR. He explores privacy best practices on numerous topical issues, such as workplace and employment privacy practices, data transfer and cloud computing, data analytics, and data breach avoidance and management.

The book is divided into four sections:

  1. Part I: Navigating the Legal Landscape of Privacy Protection
  2. Part II: Bringing Your Organization into Best-practices Privacy
  3. Part III: How to De-identify Personal Information
  4. Part IV: Privacy and Big Data

By implementing the principles and practices outlined in this book, you’ll make privacy compliance benefit your business and become a competitive advantage.

About Waël Hassan:

Dr. Waël Hassan is the founder of KI Design; his full bio is available on the About page.

Categories: Privacy

Urban Data Responsibility – The Battle for Toronto

The initial excitement over Alphabet’s smart city may be dwindling because of the perception that the tech giant will use the new Harbourfront development to collect personal data. The special attention interest groups have given to a project that has actually engaged the public and shown good faith may be teaching companies the wrong lesson: don’t engage the public, and no one will care.

For several years, Turnstyle, now Yelp Wifi, has captured, linked, and shared confidential consumer data with no public engagement and no protest from advocates.

By protesting against companies that are engaging the public, interest groups may be doing privacy a disservice.

The project, run by Sidewalk Labs, is set to be an ambitious feat that combines innovation with sustainability to build a “smart” city: technology that responds to users to create a highly efficient and responsive landscape. On the one hand, the public is excited at the opportunity to live in a highly efficient neighbourhood built around sustainability and innovation; on the other, advocates warn that the project’s data collection and sharing practices are cause for alarm. Feelings of excitement are progressively being overtaken by feelings of fear.

The real question we should be asking is whether the data being collected is much different from the surveillance we interact with daily. Traffic cameras – the low-tech version of Alphabet’s sensors – already track our movements. Our Presto cards, now increasingly necessary to use public transportation, store our travel data and can reveal where we live and work, and who we travel with. Yelp Wifi is a known data predator, which indiscriminately and without consent tracks Torontonians’ entry into and exit from 600+ local businesses. We sign onto unsecured servers to gain access to Wi-Fi at a café or shopping mall, and most of us already give access to more information than we realize via the cellphones we carry and use to share personal information at all times. Yelp’s retention policy is effectively indefinite, and opting out of its services is not accessible even for the tech-savvy.

Here is an excerpt of Yelp Wifi’s data retention policy:

We (Yelp) retain all Personally Identifiable Information and Non-Personally Identifiable Information until the date you first access the Services or the time at which you instruct us in accordance with the terms hereof to remove your Personally Identifiable Information or Non-Personally Identifiable Information.

I have been following Yelp Wifi’s record on privacy since it was a startup on King West called Turnstyle, whose CEO was quoted as saying, “I want to desensitize people’s need for privacy.” Its record has been disappointing. Reviewing its policies over the years, I found that:

Yelp Wifi’s retention policy is confusing and inaccessible; it violates the reasonable expectation of privacy.

In my opinion, and despite all the noise, Sidewalk Labs’ proposal is reasonable. Its Digital Governance Proposal contains principles that demonstrate good faith. Meanwhile, advocates are pushing for anonymization, a technique that removes personal identity from sensor data.

In this discussion, some argue that the issue is not that Sidewalk Labs will collect data, but that a corporation is cementing itself – literally – in the place of local government. What access will it have, and with whom will it share our information?

As with any good government, for citizens to have a voice and to prevent any giant – political or corporate – from taking over, there need to be checks and balances in place to ensure compliance. In reviewing the proposal, I found that:

The Sidewalk proposal is reasonable; however, it is missing an important tenet of data protection: audit.

Much as a public corporation exposes its financial documents to a third party for a financial audit, Sidewalk Labs should open its systems to a technical audit by a third-party assessor – a possibility its proposal overlooks.

I also find the counterpoints made by advocates to be lacking. The argument that anonymization offers protection in a big data context is misplaced:

Anonymization may be moot, because data will be released to companies that can blend it with other sources.

The current negotiations are important, but unless we understand how agreements will be enforced and regulated over time, they remain policy when action is needed. This is new territory from a legal, political, and business standpoint, and the truth is that Canadians do not have robust protections in place to safeguard them from privacy exploitation. As the law unfortunately lags behind, we must be proactive in how we build our security governance. Privacy audit companies have long been in the business of protecting our data: they ensure information is being stored and shared responsibly, and in the way it’s intended.

As we continue to debate the Harbourfront project, we must resist falling back on tropes of progress versus preservation of the status quo. First, we must realize that our existing norms most likely expose more of our data than we would like. Then we must understand that change is inevitable, but that we have a chance to be part of that change and direct its course. Privacy auditing gives us the opportunity to consistently ensure that our data is being used in the ways we intend.

Now is not the time to step away from negotiations, particularly with a company that is welcoming feedback. How the project is developed and instituted will set a precedent and influence not only the Harbourfront area, but also what we can expect from corporate governance and the future of privacy laws. It is in our utmost interest to engage as extensively as we can to ensure an outcome that keeps its promise of innovation and sustainability.

As a private citizen, I welcome businesses that are open to listening and that engage the public in expressing its opinions. The effort by Sidewalk Toronto and its partners is a work in progress that will need more attention and third-party attestation.

Stay tuned for our upcoming pieces that continue to inform on Privacy by Design in the Big Data environment.