Police in London, Ont., are implementing policies around the use of artificial intelligence, joining the growing ranks of Ontario law enforcement agencies doing so.
The London Police Services Board approved new rules around AI at its meeting on Thursday in a bid to ensure use of the technology aligns with legislative obligations, privacy protection and public trust.
“AI technologies are becoming increasingly embedded in policing,” said board chair Ryan Guass during the meeting.
“While they offer opportunities for efficiency, they also introduce risks related to privacy, bias and public confidence.”
Establishing governance expectations around the use of AI in policing is currently a police board’s responsibility, given there is no provincial framework; London’s policies draw on the measures taken by police services boards in York, Peel and Toronto.
Under London’s policy, AI “must remain subject to meaningful human oversight,” and its use “must be justified, proportionate, and consistent with legal and ethical standards,” a report sent to the committee reads.
Furthermore, an “AI Technology Compliance and Risk Report” will be presented to the board annually.
Portions of that report may engage operational, legal or security sensitivities, so a public-facing summary will likely be required, the report added.
“AI Technologies shall be used only in a manner that complies with the applicable laws, including the Canadian Charter of Rights and Freedoms, human rights legislation, privacy laws, and policing legislation,” the framework reads.
“The use of AI technology must be shown to further the purpose of law enforcement in a manner that outweighs identified risks.”
© 2026 Global News, a division of Corus Entertainment Inc.

