Emerging Tech Has a Front-Row Seat at India-Hosted UN Counterterrorism Meeting. What About Human Rights?


A special meeting of the U.N. Security Council Counter-Terrorism Committee in India this week marks a disturbing new twist on an already stained record of global security initiatives. Civil society organizations worldwide – especially in the Middle East/North Africa – as well as various U.N. human rights mechanisms and even Secretary-General António Guterres have decried the long years of harm done in the name of countering terrorism and ensuring security more broadly. Abuses accelerated in the past few years, especially as governments used the cover of responding to the COVID-19 pandemic to impose a gamut of emergency measures and repressive regulations.

Now, the U.N. and member States are increasingly focused on the role of emerging technology such as artificial intelligence (AI), biometric systems, and information and communications technology (ICT) in facilitating terrorism. Moreover, they are using assumptions about such threats to justify calls for broad and unrestricted counterterrorism responses, including the use of the very technologies that are ripe for abuse by those same authorities.

So-called AI systems have become today’s shiny new toys for preventing terrorist attacks. Uses range from tracking individuals and predicting their actions, to moderating terrorism-related content online. In other words, a person’s movements, payments, behaviors, and social networks may be monitored in hopes of predicting future terrorist activity.

The premise is that terrorists exploit technology such as new payment mechanisms and fundraising methods, for example, and must therefore be stopped at all costs. Yet the U.N. and member States provide little evidence for how terrorists are using technology in practice, and a nuanced understanding of the threat is still lacking.

The U.N. Security Council has played a significant role in accelerating the use of technology for counterterrorism purposes. In Security Council Resolution 2396 (2017), member States were instructed to collect biometric data and encouraged to develop and use biometric systems. Subsequent resolutions further emphasized the need for increased focus and collaboration on preventing the misuse of technologies, including emerging ones, for terrorist acts. Of note is the recent and unanimously approved Security Council Resolution 2617 (2021), which extended the mandate of the Counter-Terrorism Committee Executive Directorate (CTED) and highlighted financing and information and communications technologies. In June 2021, member States also flagged technology as a key concern when reviewing the U.N. Global Counter-Terrorism Strategy, based on U.N. General Assembly Resolution 60/288 (2006).

Efficacy vs. Harm

This week’s meeting in Mumbai and New Delhi is focused on precisely this issue of new and emerging technologies in terrorism and counterterrorism. This is certainly a cause for alarm, not least given the ongoing attack on human rights defenders and civil society organizations in India in the name of countering terrorism. While there is little to no evidence that emerging technologies are effective at preventing terrorism, there is ample evidence of harm to the rights and democratic freedoms of individuals and groups.

Research to be published soon and undertaken by my organization, the European Center for Not-for-Profit Law (ECNL), in partnership with seven civil society organizations based in the Global South illustrates how governments use technology under the guise of counterterrorism to suppress legitimate dissent and infringe upon activists’ freedoms of speech, assembly, and privacy. This is especially true in countries where individuals, organizations, and civil liberties such as freedom of expression and association are already under attack.

Official justifications for the use of biometric technologies, for instance, rely on tech developers’ claims that they can identify perpetrators of terrorist offences. In 2021, Privacy International investigated such use in Iraq, Afghanistan, and Israel/Palestine and concluded that many claims of effectiveness were not substantiated. Moreover, it documented harms and restrictions on civil liberties and human rights resulting from the use of such technologies. ECNL’s new research shows that one of the most concerning trends for civic freedoms is the use of biometrics to surveil protesters and dissidents. The partners we collaborated with exposed such use in countries including India, Mexico, Turkey, Uganda, and Thailand. Beyond direct human rights violations, the mere existence of surveillance technology can have a chilling effect on political expression and civic engagement, as individuals may self-censor and refrain from organizing for fear of being identified.

Other alarming impacts on freedom of expression and assembly stem from over-broad content moderation efforts by social media companies. This is further exacerbated by the short deadlines policymakers impose for removing terrorist content, which may not give platforms enough time to discern whether content violates the law or their internal policies. This can inadvertently result in the suppression of legitimate content, especially content shared by members of marginalized and vulnerable groups, such as Muslim and Arab users. This is partly due to social media companies’ lack of contextual understanding and investment when moderating content in these regions and languages, as well as the challenge of adequately enforcing policies at scale. Content exposing human rights abuses or criticizing powerful actors can be erroneously flagged as violative and thus removed, as seen in the recent human rights impact assessment of Meta’s activities in Israel/Palestine.

Furthermore, as data is increasingly collected and processed by private companies, issues arise when they disclose this content to law enforcement. Users are often left in the dark about how such disclosures occur, and hardly ever have a say in the matter. In Mexico, for example, mobile phone companies were required by law to collect biometric information from phone users as a tool to combat organized crime. Given the severe risks to privacy, the Mexican Supreme Court struck this policy down in August 2022, declaring it unconstitutional.

Importantly, even when governments do not intend to use technology maliciously, there is little to no evidence that the technology is effective at achieving even a broadly defined goal. And yet, these technologies are often designed and deployed without robust safeguards or consultation with affected communities.

Proportionate Approaches and Meaningful Engagement

Lessons learned from counterterrorism-related abuses in the past unequivocally show the importance of proportionate approaches and meaningful engagement with civil society prior to any use of technology in combating terrorism. After 20 years of applying a preventive (and pre-emptive) counterterrorism agenda – and finally admitting the harm it caused to freedoms worldwide – the U.N. and member States can no longer justify taking hasty, immediate action without considering the potentially severe damage to human rights. When technology is thrown into the mix, the risk is exacerbated by a common “move fast, break things” approach championed by reckless technology companies, as they race for innovation while disregarding their impact on human rights.

More scrutiny and better safeguards begin with better understanding the limits of the technologies themselves and, through evidence-based research, assessing whether they are indeed fit for purpose and can prevent terrorism in practice. A corollary is the need to investigate how technologies introduced in the name of security and counterterrorism, including countering the financing of terrorism, respond to the actual threats and how they will impact human rights and civic freedoms.

To be proportionate, tech-based responses to terrorism must be based on a full risk assessment of their impact on human rights and civic space, and deployed in a way that mitigates the identified risks. Only when grounded in a multistakeholder approach can such assessments and actions be sufficiently informed, legitimate, and effective. This begins – and ends – with meaningfully engaging various sectors, including relevant national authorities, companies, civil society, and academia. Members of historically marginalized and vulnerable groups must have a privileged seat at the table, because they are often the most immediately and severely impacted. This especially includes racialized persons, women and gender non-binary persons, LGBTQIA+ persons, religious minorities, migrants and refugees, disabled persons, children, the elderly, and those of lower socio-economic status. Voices and representatives from the Global South must not only be included, but also elevated throughout the process.

Once human rights risks and impacts are properly assessed, policymakers at the U.N. and in member States should consider existing guidance and resolutions related to counterterrorism and financing. How do current instruments and responses address the threat? Are more regulation and action really needed? Importantly, what has been done to address the unintended consequences of counterterrorism measures to date, and what lessons can be learned in crafting future responses?

In the meantime, a global coalition of civil society organizations has been pushing for a ban on biometric surveillance technologies. And U.N. special rapporteurs called for a moratorium on the sale of surveillance technology in August 2021, given the severe risk to human rights.

Any future policy intervention must be risk-based, targeted, and in full compliance with the wider international human rights framework. This includes not only binding instruments, but also the U.N. Guiding Principles on Business and Human Rights and the U.N. system-wide guidance on human rights due diligence in developing, deploying, and using new technologies. Recent Human Rights Council resolutions are also relevant, such as Resolution 41/11 on new and emerging technologies (2019) and the new draft resolution 51/L.25 on the use of technology in the military domain. Restrictions on human rights, including civic freedoms, must always meet the three-part test of legality, legitimate aim, and proportionality and necessity. No blanket exemption for the use of technology for counterterrorism or national security could ever meet that test.

As the U.N. and member States engage during the meeting in India, they must pause, listen, and take this opportunity to scrutinize the use and impact of emerging technologies, with wide and meaningful consultation of civil society.

IMAGE: A sign on Queen Street in the city center of Cardiff, United Kingdom, on August 25, 2022, warns that South Wales Police are using facial recognition. (Photo by Matthew Horwood/Getty Images)
