Do We Need to Legislate AI?

Bringing Privacy Regulation into an AI World, Part 1

In recent years, artificial intelligence (AI) and big data have subtly changed many aspects of our daily lives in ways that we may not yet fully understand. As the Office of the Privacy Commissioner (OPC) of Canada states, “AI has great potential in improving public and private services, and has helped spur new advances in the medical and energy sectors among others.”[1] The same source, however, notes that AI has created new privacy risks with serious human rights implications, including automated bias and discrimination. Most countries’ privacy legislation, including Canada’s, was not written with these technologies in mind. The privacy principles on which Canada’s privacy laws are based remain relevant, but they can be difficult to apply to these complex new technologies. Given these difficulties, the OPC has questioned whether Canada should define AI in law and create specific rules to govern it.[2]

I would argue, in contrast, that the technological neutrality of Canada’s privacy legislation is the reason it has aged better than laws in other jurisdictions, notably the US, that reference specific technologies. The European Union’s recently updated and exemplary privacy legislation deliberately takes a principle-based approach rather than focusing on particular technologies. 

I thoroughly support the principle of technological neutrality, and I do not recommend the creation of specific rules for AI. Technologies are ephemeral: what “the cloud” meant ten years ago is very different from what it means today, and AI is evolving just as quickly. Writing a legal definition of AI into privacy legislation would make that legislation hard to draft and harder to adjudicate, and could easily turn any court case on privacy and AI into a fist-fight between expert witnesses offering competing interpretations.

AI adds a new element to the classic data lifecycle of collection, use, retention and disclosure: the creation of new information through linking or inference. For privacy principles to be upheld, data created by AI processes must be subject to the same limitations on use, retention and disclosure as the raw personal data from which it was generated. It is important to note that, conceptually, AI analytics is not merely a form of data processing; it is a form of collection. Its significance for privacy lies in its impact: it expands the pool of personal data beyond what was collected directly from individuals.

An example:

Alex is a patron of a robotics club. She provided her personal information when she signed up. The club, which has locations in several cities, offers its patrons a mobile app for finding the nearest venue, and Alex has installed it. The club’s AI analytics systems can track the stops that Alex makes en route to the club and infer her preferred stopping places – the library, a tea shop, a gas station.

The robotics club has a café and wants to know what its patrons like so it can cater to their tastes. The club’s data processing has ascertained that Alex stops frequently at a tea shop on the way to the club; it infers that she likes tea. Patrons share ideas and book recommendations through the app, and the club makes recommendations in return. Alex has recommended Pride and Prejudice; the club’s AI infers that she would also enjoy Jane Eyre, and recommends it to her. The club’s AI system also searches her public Facebook posts to analyze her interests and recommend other books and products she might like.
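To make that distinction concrete, here is a minimal sketch in Python of a pipeline like the club’s; every name and inference rule is hypothetical, invented purely for illustration. The point is that the output records are, as personal data, indistinguishable from the input records; only their provenance differs.

```python
from dataclasses import dataclass

@dataclass
class Record:
    subject: str     # the individual the data is about
    value: str       # the personal information itself
    source: str      # "direct" if the subject provided it, else the process that created it
    consented: bool  # whether the subject consented to this item being held

# Data Alex knowingly provided at sign-up and through the app.
provided = [
    Record("alex", "stops at a tea shop en route to the club", "direct", True),
    Record("alex", "recommended Pride and Prejudice", "direct", True),
]

def infer(records):
    """Stand-in for the club's analytics. Every rule that fires here *creates*
    a new piece of personal data about Alex that she never supplied."""
    created = []
    for r in records:
        if "tea shop" in r.value:
            created.append(Record(r.subject, "likes tea", "inference", False))
        if "Pride and Prejudice" in r.value:
            created.append(Record(r.subject, "would enjoy Jane Eyre", "inference", False))
    return created

for new_record in infer(provided):
    print(new_record)  # records now exist that no act of consent covers
```

Nothing in the inferred records marks them as less sensitive than the data Alex provided herself; if anything, they reveal more about her.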

AI systems go far beyond analyzing data that individuals have voluntarily provided. They frequently collect data indirectly, for example by harvesting public social media posts without individuals’ knowledge or consent. Through linking and inference, AI combines data from various sources to create new data, almost always without the consent of data subjects. This creation of knowledge is a form of data collection. If regulation deals with privacy at the level of collection, it has also dealt with use, since collection is the gateway to use.

Therefore, I recommend changing the legal definition of data collection to include the creation of data through linking or inference, as well as indirect collection. Under Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), organizations may only collect personal data for purposes identified to individuals before or at the time of collection. Defining the creation of data as a form of data collection would mean that information could be created through AI analytics only for specified purposes to which data subjects have consented. 
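To illustrate what this redefinition would mean in practice, here is a second minimal sketch, again with hypothetical names and purposes invented for illustration, in which creation through inference must pass exactly the same purpose check as any other form of collection:

```python
class CollectionNotPermitted(Exception):
    """Raised when data would be collected (or created) without a consented purpose."""

# Purposes each subject consented to before or at the time of collection,
# as PIPEDA's identifying-purposes principle requires.
CONSENTED_PURPOSES = {
    "alex": {"locate nearest venue", "share book recommendations"},
}

def collect(subject: str, value: str, purpose: str, via: str = "direct") -> dict:
    """Apply one rule to direct collection, indirect collection, and creation
    through linking or inference: no consented purpose, no data."""
    if purpose not in CONSENTED_PURPOSES.get(subject, set()):
        raise CollectionNotPermitted(
            f"{via} collection of {value!r} for purpose {purpose!r} was never consented to"
        )
    return {"subject": subject, "value": value, "purpose": purpose, "via": via}

# Permitted: Alex agreed to the venue-finder purpose at sign-up.
collect("alex", "nearest club location", "locate nearest venue")

# Blocked: creating a taste profile is collection for a purpose Alex never agreed to.
try:
    collect("alex", "likes tea", "profile cafe preferences", via="inference")
except CollectionNotPermitted as err:
    print(err)
```

With one gate guarding every path by which personal data comes into existence, the purpose-limitation principle extends automatically to AI-created data.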

To summarize, I do not believe that it is advisable or necessary to create specific legislation to govern privacy in the context of AI systems. The creation of new information through data analytics can be governed effectively by the same principles that govern the direct collection of personal data. 


[1] Office of the Privacy Commissioner of Canada, Consultation on the OPC’s Proposals for Ensuring Appropriate Regulation of Artificial Intelligence, 2020.

[2] Ibid.


Bringing Privacy Regulation into an AI World:

PART ONE: Do We Need to Legislate AI?

PART TWO: Is AI Compatible with Privacy Principles?

PART THREE: Big Data’s Big Privacy Leak – Metadata and Data Lakes

PART FOUR: Access Control in a Big Data Context

PART FIVE: Moving from Access Control to Use Control

PART SIX: Implementing Use Control – The Next Generation of Data Protection

PART SEVEN: Why Canadian Privacy Enforcement Needs Teeth