The UN’s human rights chief has called for a moratorium on the use of some artificial intelligence tech, such as bulk facial recognition, until there are “adequate safeguards” against its potentially “catastrophic” impact.
In a statement on Wednesday, UN High Commissioner for Human Rights Michelle Bachelet stressed the need for an outright ban on AI applications that do not comply with international human rights law, while also urging a pause on sales of certain technologies of concern.
Noting that AI and machine-learning algorithms now reach “into almost every corner of our physical and mental lives and even emotional states,” Bachelet said the tech has the potential to be “a force for good,” but could also have “negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”
AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.
Technologies like facial recognition are increasingly used to identify people in real time and from a distance. We call for a moratorium on their use in public spaces, at least until robust international #HumanRights safeguards are in place. Learn more: https://t.co/VmmR75slYd pic.twitter.com/mslH79ccFK
Bachelet’s warning came as the UN Human Rights Office released a report that analyzed the impact of AI systems – such as profiling, automated decision-making and other machine-learning technologies – on various fundamental rights, including privacy, health, education, freedom of expression and movement.
The report highlights a number of worrying developments, including a “sprawling ecosystem of largely non-transparent personal data collection and exchanges,” as well as how AI systems have affected “government approaches to policing,” the “administration of justice” and “accessibility of public services.”
AI-driven decision-making could also be "discriminatory" if it relies on out-of-date or irrelevant data, the report added, also underscoring that the technology could be used to dictate what people see and share on the web.
However, the report noted that the most urgent need is “human rights guidance” with respect to biometric technologies – which measure and record unique bodily features and are able to recognize specific human faces – as they are “becoming increasingly a go-to solution” for governments, international bodies and tech firms for a variety of tasks.
In particular, the report warns about the increasing use of tools that attempt to “deduce people’s emotional and mental state” by analyzing facial expressions and other “predictive biometrics” to decide whether a person is a security threat. Technologies that seek to glean “insights into patterns of human behaviour” and make predictions on that basis also raise “serious questions,” the human rights body said.
Noting that such tech lacked a “solid scientific basis” and was susceptible to bias, the report cautioned that the use of “emotion recognition systems” by authorities – for instance, during police stops, arrests and interrogations – undermined a person’s rights to privacy, liberty and a fair trial.
“The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real,” Bachelet said, adding that the world could not “afford to continue playing catch-up” with rapidly developing AI technology.