Controversial AI tech deployed alongside record-setting 5G network

On Saturday last week, more than 20 million viewers across the UK tuned in to watch the coronation of King Charles III, making it the country’s most-watched TV event of the year. Roughly two million more took to the streets of London, under the close watch of AI.

In the lead-up to the coronation, the Metropolitan Police confirmed that it would deploy live facial recognition technology — which scans faces and matches them against a list of people wanted for alleged crimes — across central London to identify potentially dangerous individuals mingling in the crowds. 

During the event, the software scanned footage from the almost one million CCTV cameras in central London and ran it through an AI algorithm to flag faces that might match those on the Met’s watchlist. The sheer scale of the deployment made it the largest use of live facial recognition technology in public spaces in British history.
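At its core, this kind of matching typically works by converting each detected face into a numerical "embedding" and comparing it against stored embeddings of people on the watchlist. The sketch below is a simplified illustration of that idea, not the Met's actual system; the function names, the cosine-similarity metric, the 0.8 threshold, and the toy three-dimensional vectors are all assumptions for demonstration (real systems use embeddings with 128 or more dimensions).

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(face_embedding, watchlist, threshold=0.8):
    """Return the best watchlist match scoring above the threshold, or None.

    `watchlist` maps a person identifier to their stored embedding.
    """
    best_id, best_score = None, threshold
    for person_id, stored_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, stored_embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical watchlist with toy 3-dimensional embeddings.
watchlist = {
    "suspect_a": [0.9, 0.1, 0.4],
    "suspect_b": [0.1, 0.8, 0.6],
}
print(match_against_watchlist([0.88, 0.12, 0.41], watchlist))  # → suspect_a
```

The threshold is the key operational dial: set it too low and innocent faces are flagged; set it too high and genuine matches slip through.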

Live facial recognition technology has been a topic of controversy in the UK in recent years due to concerns about privacy, civil liberties, and the potential for the technology to be misused.

One of the main issues is the lack of clear legal regulation around its use. “Live facial recognition is not referenced in a single UK law, has never been debated in parliament, and is one of the most privacy-intrusive technologies ever used in British policing,” said Madeleine Stone, legal and policy officer at British civil liberties campaign group Big Brother Watch.

Critics argue that the use of live facial recognition could lead to false positives, where innocent people are wrongly identified as suspects. There are also concerns that the technology may disproportionately impact certain groups, such as people of colour or those with disabilities, due to the potential for bias in the algorithms used to analyse the images.
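The false-positive concern is partly a matter of arithmetic: when genuine watchlist members are rare in a crowd, even a small error rate produces far more false alarms than real matches. The figures below are purely hypothetical, chosen to illustrate this base-rate effect, and are not Met Police statistics.

```python
# Hypothetical illustrative numbers only, not real Met Police figures.
crowd_size = 2_000_000
watchlist_prevalence = 1e-4   # assume 1 in 10,000 faces is actually on the list
false_positive_rate = 1e-3    # assume the system wrongly flags 0.1% of innocent faces
true_positive_rate = 0.9      # assume it catches 90% of genuine matches

actual_matches = crowd_size * watchlist_prevalence                   # 200 people
true_alerts = actual_matches * true_positive_rate                    # 180 alerts
false_alerts = (crowd_size - actual_matches) * false_positive_rate   # ~2,000 alerts

# Precision: what fraction of alerts point at a real watchlist match.
precision = true_alerts / (true_alerts + false_alerts)
print(f"{precision:.1%} of alerts point at a real watchlist match")
```

Under these assumed numbers, more than nine out of ten alerts would concern innocent people, which is why critics press for transparency about the error rates of deployed systems.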