The European Union is leading the way in the regulation of Artificial Intelligence. In the words of the European Parliament, the future “AI Act” will set up an “analysis and classification system of AI tools according to the risk these systems pose to users. The different risk levels will mean more or less regulation”. This Act brings forward important and unprecedented safeguards on the use of Artificial Intelligence.
A welcome move? Absolutely. And yet, the Act may still fail to address some concerns. As the Act enters the final phase of negotiations (between the EU Commission, the EU Council and the EU Parliament), EuroMed Rights and its partners believe some dangerous protection gaps remain.
As EuroMed Rights and other human rights organisations have previously warned, the use of Artificial Intelligence in the fields of policing, security and migration amplifies structural discrimination against communities that are already marginalised and over-surveilled.
Member States are increasingly pushing to include blanket regulatory exemptions for law enforcement and migration. This contributes to creating double standards of protection, leaving people on the move and racialised communities exposed to dangerous technologies.
Take technologies such as risk assessment and profiling systems. These technologies claim to make border checks smoother and more efficient by distinguishing between bona fide and “illegitimate” travellers. However, it has been proven that these systems may reproduce and reinforce racial bias and discrimination based on personal and sensitive data. For another risky technology, look no further than so-called “predictive analytics”. This tool is expected to forecast migration movements based on a diverse array of data. The dangers of these predictions can be severe: they might lead to an assumption that some groups of people pose a risk of “irregular” migration. The tool could then be used to justify repressive migration control policies such as pushbacks, pull-backs or other forms of migration deterrence, which are in direct contravention of the right to asylum, the principle of non-refoulement and the right to leave one’s country.
Border areas have seen an increasing number of new technologies deployed over the last two decades. From biometric databases to surveillance infrastructure, this technology forms the backbone of the European security obsession with migration. As shown by EuroMed Rights and Statewatch, the trends for the future point in the same direction. Decades of new technology deployment at borders keep showing that military, security and defence tools do not stop migration. They only make it more dangerous and lethal. In spite of this, the EU seems committed to exploring unregulated new tools, from AI to surveillance to biometrics, making these tech-driven borders even more violent.
As a human rights organisation, we believe that all people, regardless of their legal status, are to be treated on an equal footing according to the fundamental values that form the basis of the European Union. As such, we reject these blanket exemptions. Instead, we demand that these policy areas be regulated even more tightly: the power imbalance between the authorities and those surveilled poses a greater risk of violations of fundamental rights and the rule of law.
When it comes to the rights of people on the move, some technologies should be banned under the AI Act. These include remote biometric identification in all publicly accessible places, including border areas; all forms of predictive and profiling systems, used for instance to make individual risk assessments and profiles based on personal and sensitive data, and predictive analytics systems used to interdict, curtail and prevent migration; and facial recognition and emotion recognition systems. Other technologies should have specific safeguards attached to their use, as their indiscriminate use would expose individuals to high risks. These include, for instance, biometric identification systems such as fingerprint scanners, and AI used in border management or in forecasting migration movements.
The AI Act offers a crucial opportunity to introduce safeguards against some of the risks posed by Artificial Intelligence. But regulation must be balanced and equitable for all, or else it will reproduce the structural discrimination and double standards that go against the EU’s stated values. For the legislation to be fair, we need more protection – not surveillance – for people on the move and racialised communities.