The EU AI Act must protect people on the move

The European Union Artificial Intelligence Act (AI Act) will regulate the development and use of ‘high-risk’ AI, and aims to promote the uptake of ‘trustworthy AI’ whilst protecting the rights of people affected by AI systems.

However, as originally proposed, the EU AI Act does not adequately address or prevent the harms stemming from the use of AI in the migration context. Whilst states and institutions often promote AI in terms of its benefits for wider society, for marginalised communities and people on the move (namely migrants, asylum seekers and refugees) AI technologies fit into wider systems of over-surveillance, criminalisation, structural discrimination and violence.

It is critical that the EU AI Act protects all people from harmful uses of AI systems, regardless of their migration status. We, the undersigned organisations and individuals, call on the European Parliament, the European Commission, the Council of the European Union, and EU Member States to ensure the EU Artificial Intelligence Act protects the rights of all people, including people on the move. We recommend the following amendments to the AI Act:

1. Prohibit unacceptable uses of AI systems in the context of migration

Some AI systems pose an ‘unacceptable risk’ to our fundamental rights, a risk that can never be remedied by technical means or procedural safeguards. Whilst the proposed AI Act prohibits some uses of AI, it does not prevent some of the most harmful uses of AI in migration and border control, despite their potential for irreversible harm.

The AI Act must be amended to include the following as ‘prohibited practices’:

  • Predictive analytic systems used to interdict, curtail and prevent migration. These systems generate predictions as to where there is a risk of “irregular migration” and can be used to facilitate preventative responses that forbid or halt movement, often carried out by third countries enlisted as gatekeepers of Europe’s borders. These systems risk being used for punitive and abusive border control policies that prevent people from seeking asylum, expose them to a risk of refoulement, violate their right to free movement and threaten the rights to life, liberty and security of the person.
  • Automated risk assessment and profiling systems. These systems involve the use of AI to assess whether people on the move present a ‘risk’ of unlawful activity or security threats. Such systems are inherently discriminatory, pre-judging people on the basis of factors outside their control or of discriminatory inferences drawn from their personal characteristics. Such practices therefore violate the right to equality and non-discrimination, the presumption of innocence and human dignity. They can also lead to unfair infringements of the rights to work, liberty (through unlawful detention), a fair trial, social protection and health.
  • Emotion recognition and biometric categorisation systems. Systems such as AI ‘lie detectors’ rely on pseudo-scientific technology that claims to infer emotions from biometric data, while behavioural analytics are used to flag ‘suspicious’ individuals on the basis of how they look. Their use reinforces racialised suspicion towards people on the move and can automate discriminatory assumptions.
  • Remote Biometric Identification (RBI) at the borders and in and around detention facilities. A ban on remote biometric identification (such as the use of facial recognition) is required to prevent a dystopian scenario in which these technologies are used to scan border areas as a deterrent and as part of a wider interdiction regime, preventing people from seeking asylum and undermining Member States’ obligations under international law, in particular the right to non-refoulement.

2. Expand the list of high-risk systems used in migration

While the proposal already lists certain uses of AI in migration and border control as ‘high-risk’ in Annex III, it fails to capture all AI-based systems that affect people’s rights and that should be subject to oversight and transparency measures.

To ensure all AI systems used in migration are regulated, Annex III must be amended to include the following as ‘high-risk’:

  • Biometric identification systems. Biometric identification systems (such as mobile fingerprint scanners) are increasingly used to perform identity checks, both at and within EU borders. These systems facilitate and increase the unlawful and harmful practice of racial profiling, with race, ethnicity or skin colour serving as a proxy for an individual’s migration status. Due to the severe risks of discrimination that come with the use of these systems, lawmakers must ensure the EU AI Act regulates their use.
  • AI systems for border monitoring and surveillance. In the absence of safe and regular pathways to the EU territory, people will cross European borders via irregular means. Authorities increasingly use AI systems for generalised and indiscriminate surveillance at borders, such as scanning drones or thermal cameras. The use of these technologies can exacerbate violence at the borders and facilitate collective expulsions or illegal pushbacks. Given the elevated risks and broader structural injustices, lawmakers should include all AI systems used for border surveillance within the scope of the AI Act.
  • Predictive analytic systems used in migration, asylum and border control. Systems that generate predictions about migration flows can have vast consequences for fundamental rights and for access to international protection procedures. These systems often influence how resources are assessed and allocated in the migration control and international protection contexts. Incorrect assessments of migration trends and reception needs have significant consequences not only for the preparedness of Member States, but also for individuals’ ability to access international protection and numerous other fundamental rights. As such, predictive systems should be considered ‘high-risk’ when deployed in the context of migration.

3. Ensure the AI Act applies to all high-risk systems in migration, including those in use as part of EU IT systems

Article 83 of the AI Act lays out the rules for AI systems already on the market at the time of the legislation’s entry into force. It includes a carve-out for AI systems that form part of the EU’s large-scale IT systems used in migration, such as Eurodac, the Schengen Information System and ETIAS. All of these large-scale IT systems – which foresee a capacity of over 300 million records – involve the automated processing of personal and sensitive data, automated risk assessment systems or the use of technology for biometric identification. For example, the EU plans to subject all visa and ‘travel authorisation’ applicants to automated risk profiling technologies in the next few years. Further, EU institutions are currently considering an update to Eurodac that would add the processing of facial images of asylum applicants to its databases.

Excluding these databases would mean the safeguards in the EU AI Act do not apply to them. This blanket exemption will only serve to decrease accountability, transparency and oversight of AI systems used in EU migration control, and to lessen protection for people impacted by AI systems that form part of the EU’s large-scale IT systems. By exempting these systems from regulatory scrutiny, the EU AI Act would create a double standard in the protection of fundamental rights, depending on a person’s migration status.

The EU AI Act should be amended to ensure that Article 83 applies the same compliance rules to all high-risk systems and protects the fundamental rights of every person, regardless of their migration status.

4. Ensure transparency and oversight measures apply

People affected by high-risk AI systems need to be able to understand and challenge those systems, and to seek remedies when they violate their rights. In the context of migration, this is both urgent and necessary given the overwhelming imbalance of power between those deploying AI systems and those subject to them.

The EU AI Act must prevent harm from AI systems used in migration and border control, guarantee public transparency and empower people to seek justice. To that end, it must be amended to:

  • Include an obligation on users of high-risk AI systems to conduct and publish a fundamental rights impact assessment (FRIA) before deploying any high-risk AI system, as well as throughout its lifecycle.
  • Require authorities to register all high-risk – and all public – uses of AI for migration, asylum and border management in the EU database. Public transparency is essential for effective oversight, particularly in the high-risk area of migration, where a number of fundamental rights are at stake. It is crucial that the AI Act does not allow carve-outs from transparency measures for law enforcement and migration.
  • Include rights and redress mechanisms that enable people and groups to understand AI systems, seek explanations, complain and obtain remedies when those systems violate their rights. The AI Act must provide effective avenues for affected people, or public interest organisations acting on their behalf, to challenge AI systems within its scope that are non-compliant or that violate fundamental rights.

The list of signatory organisations is available online.