News

Research group calls for pause on emerging use of algorithmic policing in Canada

Vancouver Police Department officers at a demonstration outside the Trump International Hotel & Tower in Vancouver in February 2017. Ben Nelms/Bloomberg via Getty Images

A growing number of Canadian police forces are using or considering algorithm-based tools to forecast where crimes may be committed or to surveil social media. Now, a prominent University of Toronto research group is calling for a halt in their use until the federal government completes a judicial inquiry into the practice.

Talking Point

Canadian police forces are starting to use algorithmic policing technologies including crime forecasting systems, facial recognition and social media surveillance, according to a new Citizen Lab report. It warns such tools could violate residents’ privacy and Charter rights and exacerbate state discrimination against marginalized groups, calling for a moratorium until a national judicial inquiry is complete.

In a report released Tuesday, the Citizen Lab warned such algorithmic policing technology may infringe on residents’ privacy and Charter rights, and risk disproportionately targeting marginalized communities. 

Policymakers need to intervene now, before the use of algorithmic policing expands in Canada as it controversially has in the U.S. and U.K., said co-author Cynthia Khoo, a Citizen Lab research fellow and technology and human rights lawyer. “We have to make sure that we’re not putting forward technology as a solution to a problem that technology actually cannot solve.”

While algorithmic policing isn’t currently widespread in Canada, several forces are using automated systems or technology with such capabilities. For example, the Vancouver Police Department (VPD) uses a machine learning-based system, first launched in April 2016, to forecast where break-and-enter crimes are likely to happen, then dispatches officers to patrol those areas. In Saskatoon, a partnership among the provincial government, the local university and the police force is feeding police data into a model that identifies youth at risk of going missing. Law enforcement agencies in Toronto and Calgary have previously used social media surveillance tools, while the RCMP recently put out a tender for such a service.

Other countries have seen wider adoption. Forces in the U.K. have used facial recognition and automated risk-assessment systems, while those in some of the largest U.S. cities have deployed and then decommissioned predictive tools. In April 2019, the Los Angeles Police Department (LAPD) ended a program that used Palantir’s technology after years of protests and criticism from the Stop LAPD Spying Coalition, although another crime-location prediction system remains in place. While the Calgary Police Service uses the Silicon Valley firm’s software for data analysis and organization, it hasn’t turned on the forecasting capabilities, according to the Citizen Lab report. 

By contrast, authorities in Saskatchewan chose to build their system in-house “to avoid the issues that they know can come with proprietary software,” Khoo told The Logic. Private-sector vendors may try to keep the details of their technology a secret. For example, some forces withheld documents in response to Citizen Lab’s freedom-of-information records requests, citing commercial confidentiality. In a criminal justice situation, a lack of disclosure could hurt a defendant’s due process rights. 

Law-enforcement sources to whom the authors spoke said algorithmic policing technologies could help forces allocate their resources more efficiently, according to Khoo. “The idea is [to] pinpoint with more accuracy where crime is more likely to happen, send officers to that particular [area], and deter the alleged future crime before it happens,” she noted, although the tools themselves can be expensive. But she said any such savings do not justify the human rights issues the systems raise.

Algorithmic policing technologies may infringe on residents’ privacy and Charter rights such as freedom of expression, peaceful assembly, equality and liberty, according to the report. It recommends that forces never use them as the sole basis for arrest or detention, and get court authorization before deploying automated surveillance tools in public places or online. 

Existing legislation and case law may be insufficient to assess algorithmic policing technologies. For example, the privacy regime that allows forces to collect mugshots wasn’t designed for facial recognition software. “We don’t have a specific legal safeguard that says whether or not it’s appropriate and sufficiently protective of privacy rights to repurpose [those] databases in this way,” said Khoo. 

In other cases, pre-existing laws may apply, but they have yet to be tested before the courts. That’s why the report calls for governments to impose moratoriums on using or training algorithms on historical police datasets until a national judicial inquiry can examine the technology and its rights implications.

Systems that forecast where crime may occur or whether a person is likely to commit one typically use historical police data. But the report notes numerous studies and court decisions have determined that Black and Indigenous people are overrepresented in carding, street-check, arrest, sentencing and incarceration data because of “biased criminal justice practices or systemic discrimination.” Khoo added that homeless and low-income residents are also more likely to show up in government registries because they’ve accessed services.

Crime-forecasting systems that draw on those biased datasets are likely to disproportionately target those same groups. While “algorithmic policing might seem futuristic,” said Khoo, it risks simply maintaining longstanding discrimination and violence against marginalized groups, “but wearing the emperor’s new technological, scientific clothing.” (She said the VPD system uses only the location, type, date and time of incidents, not suspect information, and that officers are trained not to detain someone on the basis of the prediction alone.)

The report’s 20 recommendations include requiring police forces to disclose when and how they’re employing these systems, to make the software’s source code available for expert study and in court cases, and to conduct algorithmic impact assessments. The federal government requires the last of those for all new automated decision-making deployments, although it exempts national security programs, and the Citizen Lab report argues the federal policy isn’t rigorous enough. The authors also call for policymakers and law enforcement agencies to consult with historically marginalized communities on any such programs, and to monitor them for bias.