Series: Bringing Privacy Regulation into an AI World
Over the past decade, privacy has become a growing public concern as data analytics have expanded exponentially in scope. Big data has become part of our everyday lives in ways that most people neither fully perceive nor understand. Governments are struggling to keep up with the pace of innovation and to figure out how to regulate a big data sector that transcends national borders.
Different jurisdictions have taken different approaches to privacy regulation in the new context of big data, machine learning, and artificial intelligence (AI). The European Union is in the lead, having updated its privacy legislation, established a “digital single market” across Europe, and resourced a strong enforcement system. In the United States, privacy remains governed by a patchwork of federal and state legislation, much of it sector-specific and often tied to outdated technologies. The US Federal Trade Commission is powerful and assertive in punishing corporations that fail to protect data from theft, but it has rarely attempted to regulate the big data market itself. Canada’s principle-based privacy legislation remains relevant, but the Office of the Privacy Commissioner (OPC) recently acknowledged that “PIPEDA [the Personal Information Protection and Electronic Documents Act] falls short in its application to AI systems.” As the OPC states, AI creates new privacy risks with serious human rights implications, including automated bias and discrimination. Given the pace of technological innovation, there may not be much time left to establish a “human-centered approach to AI.”
This series explores, from a Canadian perspective, options for effective privacy regulation in an AI context. In seven parts, I discuss:
 Office of the Privacy Commissioner of Canada, Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence, 2020.