Bringing Privacy Regulation into an AI World, Part 4
This seven-part series explores, from a Canadian perspective, options for effective privacy regulation in an AI context.
Access control, the mechanism that determines who may see or modify information, has long been the primary tool of online security. Its great advantage is that it tells you who has access to what information at any given time. Its limitation is that it cannot tell you what a user does with the data, or in what context. Once write or editorial access is granted, an authorized user can in principle do anything with the data – rewrite it, delete it, or copy it and pass it on. Simply put, access control is effective only as long as data stays within its source realm, the sphere of influence of whoever granted access in the first place.
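The limitation described above can be sketched in a few lines of code. An access-control check answers only one question, "may this user perform this action on this resource?"; once the check passes, nothing in the model constrains what happens to the data next. This is a minimal illustrative sketch, and all names in it are hypothetical, not drawn from any real system:

```python
# Minimal sketch of an access-control list (ACL).
# All names here are illustrative assumptions, not from the article.

ACL = {
    "alice": {"customer_emails": {"read", "write"}},
    "bob":   {"customer_emails": {"read"}},
}

def has_access(user: str, resource: str, action: str) -> bool:
    """Answer the only question access control can: who may do what."""
    return action in ACL.get(user, {}).get(resource, set())

# The check succeeds or fails at the moment of access...
assert has_access("bob", "customer_emails", "read")
assert not has_access("bob", "customer_emails", "write")

# ...but once the data has been read with authorization, the model is
# blind to what follows: copying, combining, or onward sale all happen
# outside the ACL's view.
data = ["user@example.com"]   # bob reads legitimately
resold = list(data)           # this copy is invisible to the access check
```

The point of the sketch is that `has_access` is the model's entire vocabulary: it has no way to express "may be read, but not resold," which is exactly the gap the rest of this post describes.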
As AI and data use continue to evolve, the access control model is becoming outmoded. Data moves far beyond its origination point, often passing rapidly from collector to purchaser to further purchasers in a chain of unregulated data trading. Data brokers buy web and social media traffic, combining it with data from sources such as customer support records, public records, and phone and Internet metadata to create highly detailed profiles of hundreds of millions of consumers. AI allows a data scientist to make inferences about consumer preferences, sort consumers into multiple categories, and pass these profiles on to retailers for targeted advertising.
Individuals usually grant access to their information so that a business may provide services. Often the customer cannot access these services without opting in to information sharing. If you give your email address to a clothing store, you can expect to receive advertising related to the products it sells. But your information rapidly travels far beyond the company you made your initial agreement with. There is a world of difference between sharing your data with your favourite store and consenting to its unrestricted use. Yet these are the only options access control offers. Until we mature that model, data protection will be all or nothing: if you don't want to share your data, don't sign up for services like Amazon or Google – and if you do choose to use them, your personal data may be used and shared for nearly any purpose that serves them.
By using access control to regulate AI and big data, we’re trying to solve today’s challenges with yesterday’s tools. Access control only addresses the problem of unauthorized access. We need different tools to address the problem of unauthorized use of data. In my next post I will explore options for better control of the use of data in an AI context.
Bringing Privacy Regulation into an AI World:
PART ONE: Do We Need to Legislate AI?
PART TWO: Is AI Compatible with Privacy Principles?
PART THREE: Big Data’s Big Privacy Leak – Metadata and Data Lakes
PART FOUR: Access Control in a Big Data Context
PART FIVE: Moving from Access Control to Use Control
PART SIX: Implementing Use Control – The Next Generation of Data Protection
PART SEVEN: Why Canadian Privacy Enforcement Needs Teeth