This article describes the issue of police use of AI-based facial-recognition technology, discusses why it poses a problem, outlines the methodology for assessing it, and proposes a solution.
The CBC reported on March 3[1] that the federal privacy watchdog in Canada and three of its provincial counterparts will jointly investigate police use of facial-recognition technology supplied by US firm Clearview AI.
Privacy Commissioner Daniel Therrien will be joined in the probe by his provincial counterparts in British Columbia, Alberta, and Quebec.
Meanwhile, in Ontario, the Information and Privacy Commissioner has requested that any Ontario police service using Clearview AI’s tool stop doing so.[2]
The Privacy Commissioners have acted following media reports raising concerns that the company is collecting and using personal information without consent.
The investigation will check whether the US technology company scrapes photos from the internet without consent. “Clearview can unearth items of personal information — including a person’s name, phone number, address or occupation — based on nothing more than a photo,” reported the CBC.[1] Clearview AI is also under scrutiny in the US, where senators are querying whether its scraping of social media images puts it in violation of online child privacy laws.
In my opinion, there are three factors that could get Clearview AI, and its Canadian clients, in hot water. Here are the issues as I see them:
- The first issue: Collecting and aggregating data without consent. Even if the photos were procured under contract from social media networks, linking database photos to demographic information is a big no-no from an individual privacy perspective (a minimal sketch of this linking follows the list below). Facebook’s infamous experience with the now-dissolved Cambridge Analytica was another example of data being repurposed. It’s possible that, through “contract engineering” (drafting complex contracts with lots of caveats and conditional clauses), Clearview has gained contractually permissible access to Canadians’ photos. However, linking that data with demographic information would likely violate Twitter’s and Facebook’s terms of use.
- The second issue: Not providing evidence of a Privacy Impact Assessment. A Privacy Impact Assessment is used to measure the impact of a technology or updated business process on personal privacy. Governments at all levels go through these assessments when new tools are being introduced. It’s reasonable to expect that Canadian agencies, such as police services, would go through the federal government’s own Harmonized Privacy and Security Assessment before introducing a new technology.
- The third issue: Jurisdiction. Transferring data about Canadians into the United States may be a violation of citizens’ privacy, especially if the data contains personal information. Certain provinces, including British Columbia and Nova Scotia, have explicit rules about preventing personal data from going south of the border.
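To make the aggregation concern concrete, here is a minimal illustrative sketch of what “linking” amounts to in practice. All names, fields, and values are hypothetical; the point is that joining two individually obtainable records produces a profile that no one consented to.

```python
# Hypothetical illustration of data aggregation. Each record below may be
# individually innocuous, but joining them builds a rich identity profile.

# A scraped photo reduced to a biometric template (hypothetical values)
scraped_photo = {"photo_id": "img_4821", "face_embedding": [0.12, 0.57, 0.33]}

# Demographic data pulled from a public profile (hypothetical fields)
public_profile = {
    "name": "Jane Doe",
    "occupation": "Nurse",
    "city": "Windsor",
}

# The "linking" step: attaching identity attributes to a biometric record
linked_record = {**scraped_photo, **public_profile}

print(linked_record)
# {'photo_id': 'img_4821', 'face_embedding': [...], 'name': 'Jane Doe', ...}
```

Neither record alone identifies much; the joined record lets a photo alone unearth a name, occupation, and location, which is exactly the capability the CBC report describes.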
How will Privacy Commissioners decide if this tool is acceptable?
The four-part test from R. v. Oakes[3] will be used to assess the tool’s impact. Courts and legal advisors apply this test to ascertain whether a law or program can justifiably intrude upon privacy rights. Its elements are necessity, proportionality, effectiveness, and minimal intrusiveness, and all four requirements must be met (the sketch after the list below makes this conjunctive logic explicit).
- Necessity: There must be a clearly defined necessity for the use of the measure, in relation to a pressing societal concern (in other words, some substantial, imminent problem that the security measure seeks to treat);
- Proportionality: The measure must be carefully targeted and suitably tailored, so that the curtailment of the individual’s privacy (or any other rights) is reasonably proportionate to the objective;
- Effectiveness: The measure must be shown to be empirically effective at treating the issue, and so clearly connected to solving the problem; and
- Minimal intrusiveness: The measure must be the least invasive alternative available (in other words, all other less intrusive avenues of investigation have been exhausted).
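Because the test is conjunctive, it can be modelled as a simple checklist. The sketch below is purely illustrative; the class and field names are my own invention, and the boolean values anticipate my assessment in the next section.

```python
from dataclasses import dataclass

@dataclass
class OakesAssessment:
    """One boolean per branch of the four-part Oakes test."""
    necessity: bool
    proportionality: bool
    effectiveness: bool
    minimal_intrusiveness: bool

    def passes(self) -> bool:
        # The test is conjunctive: failing any single branch fails the whole test.
        return all((self.necessity, self.proportionality,
                    self.effectiveness, self.minimal_intrusiveness))

# My assessment of Clearview AI's tool (see the section below):
clearview = OakesAssessment(necessity=True, proportionality=False,
                            effectiveness=False, minimal_intrusiveness=False)
print(clearview.passes())  # False
```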
My assessment of the use of Clearview AI’s technology from the Oakes test perspective:
- Necessity: Policing agencies will have no problem proving that looking for and identifying a suspect is necessary. However …
- Proportionality: Identifying all individuals, and exposing their identities to a large group of people, is by no means proportional.
- Effectiveness: The tool’s massive database might be effective in catching suspects; however, known criminals don’t usually have social media accounts.
- Minimal intrusiveness: Mass data capture and linking does not appear to be the least invasive alternative available.
The federal Privacy Commissioner publishes its assessment methodology online.[4]
Are there any solutions?
Yes, AI-based solutions are available. Here at KI Design, we are developing a vision application that allows policing agencies to watch surveillance videos with everyone blurred out except the person for whom they have a surveillance warrant. For more information, reach out to us.
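For readers curious what warrant-scoped redaction can look like, here is a minimal sketch built on OpenCV’s stock face detector. This is not KI Design’s implementation: matches_warrant_target() and the input filename are hypothetical placeholders, and a production system would use a proper face-recognition model rather than a stub.

```python
# Sketch: blur every detected face except ones matching the warranted subject.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def matches_warrant_target(face_img) -> bool:
    """Hypothetical placeholder: compare the face against the warranted
    subject's reference photo using a real face-recognition model."""
    return False  # in this demo, everyone gets blurred

def redact_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = frame[y:y+h, x:x+w]
        if not matches_warrant_target(face):
            # Gaussian blur destroys identifying detail for non-warranted people
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(face, (51, 51), 30)
    return frame

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("warrant-scoped view", redact_frame(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The design choice worth noting is that the default is privacy: a face is exposed only on a positive match against the warranted subject, so detection or matching failures err toward blurring.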
References:
[1] https://www.cbc.ca/news/canada/windsor/windsor-police-clearview-ai-1.5483550
[2] https://www.ipc.on.ca/information-and-privacy-commissioner-of-ontario-statement-on-toronto-police-service-use-of-clearview-ai-technology/
[3] https://scc-csc.lexum.com/scc-csc/scc-csc/en/item/117/index.do
[4] https://www.priv.gc.ca/en/privacy-topics/surveillance/police-and-public-safety/gd_sec_201011/