We recently learned that last year Israeli intelligence agents smuggled parts of a robot into Iran, assembled it there, and deployed it against Mohsen Fakhrizadeh, a top Iranian nuclear scientist and a human being with rights. The robot killed Fakhrizadeh with a machine gun, guided by AI facial recognition technology. Another human, an Israeli agent, fired the machine gun remotely via satellite link, assisted in targeting by artificial intelligence.
Catherine Thorbecke at ABC News reports that the United Nations Human Rights chief recently “called for a moratorium on the sale and use of artificial intelligence technology that poses human rights risks — including the state use of facial recognition software — until adequate safeguards are put in place.” This is because AI is developing more rapidly than the protocols, or even the deliberative space, we need to discuss and construct shared norms for its use. Even as concerned people call out the racist and otherwise unjust construction and application of the technology, profit-driven private-sector developers and rights-insensitive governments continue to drive its accelerated development. The U.N. would like countries to put “a moratorium on the use of remote biometric recognition technologies in public spaces — at least until authorities can demonstrate compliance with privacy and data protection standards and the absence of discriminatory or accuracy issues.” The Washington Post reports that law enforcement technologies are especially worrisome, since “AI systems can mine criminal arrest records, crime statistics, social media posts and travel records to profile people,” according to the human rights report the U.N. also cites. That report is the first comprehensive call for moratoria on AI development grounded in systemic human rights concerns.
The report cites the “rush” to “incorporate AI applications,” a rush that bypasses due diligence among corporations and governments alike. Indeed, after years of concerns about the flaws of facial recognition, municipal governments continue to insist on using the technology. In large part, this is because police and prosecutors are driven by arrest and prosecution counts rather than actual equity or even changes in crime rates. Facial recognition technology presents the paradigmatic case study of the impact of AI on human rights. Critics say the systems “violate the right to privacy and threaten the rights to freedom of peaceful assembly and expression.” Last year’s unwarranted (literally) police assault on NYC activist Derrick “Dwreck” Ingram’s apartment — which ended only because Ingram started livestreaming it — was abetted by facial recognition tech based on Ingram’s public activism. The cops had also illegally placed “wanted posters” in his neighborhood, based on images they had taken of him at a political rally and matched using AI. The implications are concerning to those who work in political engagement and organizing of any kind. Technology is great when it allows democracy-minded organizers and activists to reach one another — for example, filling data gaps using appending services like those offered by my client Accurate Append — but advanced technology can quickly be turned to more nefarious purposes, as when the NYPD used cutting-edge technology to persecute an organizer whose stances it disagreed with.
What is the human rights framework for this? There may be several, but here’s how human rights are folded into international and domestic law: states hold conferences and conventions and draw up agreements (treaties, covenants, protocols, etc.). When states sign and ratify those treaties, they become bound to implement them domestically. They assume the obligations enumerated in the agreements; unless a treaty is “self-executing,” those obligations are usually carried out through domestic means, such as changing national law or creating regulations, to best reflect the aspirations of the agreements. Depending on the language and intent of the agreements, states assume obligations to “respect, to protect and to fulfil human rights.” To respect is to refrain from interfering with a right. To protect is to prevent others from interfering. To fulfill is to lay out and implement positive obligations and actions, providing resources and power in the advancement of the right. And even if every country hasn’t signed and ratified a particular treaty, protocol or covenant, if enough nations do, the rights it enumerates often harden into a sort of customary law or norm. States that refuse to abide by those norms tend to become pariahs, continually shamed, and sometimes sanctioned, by other states.
In the case of AI technology, the threat of one particular “rogue state” seems minimal, given both the parallel development of technology across wealthier nations and the transfer of technology between nations, both legal and illegal. So if a particular advancement in AI tech threatens human rights in the United States, it’s probably going to be deployed in other countries as well, with little delay. This is why what the U.N. is doing makes sense. Rather than mentioning specific countries, or even specific brands or corporations, the U.N. is seeking to build a general set of protocols and calling for all nations to put a hold on AI’s rapid development until such protocols can be agreed upon.
So will this call be heard? It’s difficult to say, but there are reasons to be skeptical. Once such research starts, it almost takes on a life of its own. Trade secrets and company opacity make it difficult to determine when such research and development is happening. And some of the nations at the forefront of AI development are politically skeptical of the U.N.’s human rights regime; we can include the United States in that category.
Even if the call for moratoria is not totally effective, it will incentivize both states and corporations not only to explore ad hoc solutions like “debiasing,” but also to tackle some of the root causes of hierarchy and discrimination that new technologies will otherwise augment. In other words, by confronting the ways AI reveals what is evil in ourselves, people may address that evil and so become able to live with the goods of artificial intelligence. That’s the hope, at least.