

Following her article on Facial Recognition Technology (FRT) used within the Russia-Ukraine conflict, Natalia explores one of the first instances in which a prominent news outlet, the BBC, publicly disclosed the use of FRT for autonomous investigations into events related to the ongoing Israel-Hamas conflict.

These events represent one of the first instances in which a prominent news outlet has publicly disclosed the use of facial recognition technology for autonomous investigations into such sensitive matters. It is important to note that this does not necessarily imply that other news outlets or actors have not used similar methods in the past; such practices may simply remain undisclosed or unknown to the public at large.

Drawing on this case, the author emphasises the urgent need to establish clear rules for the use of FRT in different contexts. She also highlights our vulnerability to similar dynamics, given the insufficient deterrent effect of current regulations, shortcomings in monitoring, and jurisdictional constraints on adjudication and enforcement:

“When used by private actors attempting to collaborate in ‘solving’ serious crimes or complex and controversial issues, as in the case at hand, there are risks associated with ‘methodological and technological blind spots’, including the technical immaturity of the systems used and possible interferences given by racial, gender, or automation biases,” she writes.
