Exclusive

Ottawa’s AI ethics test didn’t consider Palantir’s controversial work with U.S. law enforcement

Alex Karp, co-founder and CEO of Palantir Technologies. Francois Mori/AP Photo

The data-mining firm Palantir Technologies has faced criticism from civil rights groups for its work with U.S. immigration authorities and police forces. But officials in Ottawa weren’t able to consider the company’s most controversial work when evaluating whether it should be allowed to bid on some Canadian government artificial intelligence projects, The Logic has learned.

Companies wanting to qualify as bidders had to outline their qualifications, provide examples of their past work and show how they “address ethical considerations when delivering AI.” But while Palantir submitted a 52-page response filled with case studies, resumes and business principles, the document does not appear to mention its contracts with U.S. law enforcement.

“There’s a huge elephant in the room here that they’re not acknowledging,” said Daniel Munro, a senior fellow at the University of Toronto’s Munk School of Global Affairs & Public Policy.


Talking Point

Canada’s government has positioned itself as a world leader on ethical artificial intelligence. However, federal procurement rules meant that officials in Ottawa did not consider data-mining company Palantir Technologies’ controversial work with U.S. law enforcement agencies when it greenlit the firm to bid on government AI projects, The Logic has learned.

Palantir established an Ottawa office in 2013, but only won its first publicly disclosed federal contract in March, a million-dollar deal for software to be used by the Canadian Special Operations Forces Command. In August, it hired David MacNaughton, the outgoing Canadian ambassador to the U.S., as president of its Canadian operations. 

But the firm has long been controversial. In May, Latinx advocacy organization Mijente released documents it said showed U.S. Immigration and Customs Enforcement (ICE) used Palantir’s software in a 2017 operation that arrested 443 people by targeting relatives of unaccompanied children who crossed into the U.S. via the Mexican border; Palantir has denied its technology is used in this way. Earlier this year, the Los Angeles Police Department ended a program that used the company’s software to identify people deemed likely to do something illegal, which a civil rights group said created a “racist feedback loop.” 

Palantir did not respond to multiple requests for comment for this story.

The Liberal government, meanwhile, has positioned itself as a world leader in the ethical use of AI. It has partnered with France to create an international panel working to ensure factors like human rights are considered as the technology develops. At home, it wants to use AI to improve how it provides services to citizens and how the public service makes decisions, but has tried to make sure departments employing automated decision-making are equipped to gauge the risks.

“The benefits from AI must not come at the expense of the rights of Canadians,” said Jane Philpott, then-minister of digital government, at a March event. She also congratulated the firms approved to bid on government AI projects: “The list is going to give … federal departments and agencies access to world-renowned companies that they can trust [and] world-renowned talent that can get the job done.”

In September 2018, Ottawa invited companies to go through a screening process to become approved vendors for AI projects. As part of that process, companies were asked to “provide examples of how [they address] ethical practices when delivering AI,” including “testing for outcomes and biases and fair, comprehensive and inclusive data collection practices.” 

Palantir’s submission, a partially redacted version of which The Logic obtained via access-to-information request, shows the company did not mention its work with U.S. police forces or with ICE. Federal procurement rules require officials to evaluate companies based only on what’s included in their submissions, an attempt to ensure fairness so that bidders neither benefit nor suffer from officials weighing outside information that may not apply to the situation at hand. As a result, the government team managing the screening process had to evaluate Palantir’s ethics without being able to consider its most contentious work.

The company’s history was nonetheless a “big point of discussion” for officials, according to a source with knowledge of the process whom The Logic has agreed not to name because they are not authorized to speak about the matter publicly. “It makes it difficult when doing a process like a procurement that you’re only supposed to evaluate what’s been submitted,” the source said.

The officials screening the AI companies’ ethics responses focused on whether they had adequately answered the questions asked of them in the process, rather than testing their claims. If a company showed it had established an ethics committee, for example, officials did not attempt to evaluate that committee’s effectiveness.

“In order to qualify on the AI Source List, each supplier’s proposal was evaluated against the criteria stated in the Invitation To Qualify,” said Stefanie Hamel, a spokesperson for Public Services and Procurement Canada (PSPC). The 78 companies that have so far qualified for the source list, including Palantir, are not guaranteed work, but are allowed to bid on government projects. “Each future opportunity stemming from the [list] will have its own set of requirements and evaluation criteria … established by the contracting department or agency,” Hamel said.

The department did not answer questions about whether it was aware of Palantir’s work with U.S. law enforcement, or whether it considered any information other than that provided by the company during its assessment. It also declined to release the scorecard for Palantir’s submission.    

In its submission, Palantir said it treats “the societal implications of our work as a first-order concern, on par with the challenges and importance of building world-class technologies.” It said it employs a team of privacy and civil liberties experts that reports to CEO Alex Karp and works with the engineering and development departments to ensure the “ethical design” of its products. 

Palantir also laid out the principles it uses to evaluate potential projects, including assessing and addressing bias in the data it uses to train its systems; judging how results might be unfair to vulnerable groups; and considering whether AI should even be used for the project.

While the company’s submission isn’t necessarily misleading or inaccurate, “I would want to see a transparent acknowledgement of those tough cases, and an attempt by Palantir to set out its rationale [for them],” said Munro, who reviewed the documents obtained by The Logic.

He said the government’s criteria are partly to blame: they focus on the effectiveness of the technology and on ways to reduce bias, rather than asking companies to grapple with whether AI should have been used in projects like Palantir’s law-enforcement work at all.


Groups like the Institute of Electrical and Electronics Engineers and the International Organization for Standardization have issued advice on the right way to use the technology. “But AI is so many different things that even trying to navigate all of that is difficult,” said Ashley Casovan, executive director of AI Global, which is creating a certification program for AI it hopes will be similar to the certification Ocean Wise bestows on sustainable seafood.

Munro worries Palantir’s involvement could diminish public confidence in the government’s use of AI, even if it’s working on defensible projects.

“Given that this is a new technology and there are some serious concerns about how it’s being used—in particular by governments—I would think that they would want to keep the threshold pretty high for trustworthy, transparent and ethical companies,” he said. “And companies that are perceived to be ethical, as well.”