
“It’s almost impossible for a machine to convey the same message that a professional interpreter with awareness about the country of origin can do, including cultural context.”
That was the concern expressed by Uma Mirkhail, a volunteer with Respond Crisis Translation, after seeing the effects of using AI-powered translation tools to vet applications from Afghan asylum seekers in the US. Mirkhail was quoted in an article published by the Guardian in September 2023, ‘Lost in AI Translation’, which found that some applications were being jeopardised because the language tools were misinterpreting applicants’ statements.
Interpreters and translators are linguists, which means they understand language on a deeper level than a non-linguist and far more comprehensively than an AI translation tool. Languages carry subtleties and cultural references that only a human can acquire and use at a level beyond the literal text. Machine translation, however, is increasingly popular because of its cost-effectiveness and speed.
Delegating high-stakes decisions to AI translation tools such as Google Translate and Microsoft Translator also raises ethical questions, because these tools cannot be relied on to meet the standards set by international law and human rights legislation. This is particularly relevant to asylum applications. The Guardian article reported that in the US, the Department of Homeland Security (DHS) had signed contracts with several machine translation firms, including Lionbridge and TransPerfect Translations International Inc, while officials at Immigration and Customs Enforcement (ICE) had been instructed to use Google Translate to vet refugee applications. Customs and Border Protection had even developed its own app, CBP Translate, to help communicate with migrants.
The real-life implications of delegating translation and interpretation to AI
When quality is not a top priority, AI can be a good choice for translation work: it is quick and cheap. But it lacks a human translator’s linguistic training and understanding of cultural nuance, and in the context of asylum applications its shortcomings can have irreparable consequences. From border stations to detention centres to immigration courts, potentially life-changing errors creep in.
The CBP One app is a portal used by asylum seekers in the US to schedule an appointment with Customs and Border Protection (CBP). Its use was mandated by the Biden administration, yet it works in only a handful of languages, and even those translations have contained mistakes. The Guardian article ‘Lost in AI Translation’ gives examples of asylum applications being denied because of mistranslations: in one case a translation tool rendered an ‘I’ in a refugee’s statement as ‘we’, making it appear to be an application for more than one person. In another, a woman wanted to flee her country of origin because of domestic abuse by her father. She described him colloquially as ‘mi jefe’, which the CBP One app translated as ‘my boss’, and this ultimately led to her application being denied.
This raises an ethical question about how applications are accepted or denied: how can the US government demand flawless asylum applications while putting imperfect tools at applicants’ disposal?
Ariel Koren, the founder of Respond Crisis Translation and former Google Translate employee, said:
“Not only do the asylum applications have to be translated, but the government will frequently weaponise small language technicalities to justify deporting someone … the application needs to be absolutely perfect.”
The cultural aspects of language
It is widely known among linguists that AI-powered translation tools are built around the English language. English is used as a pivot, meaning that text is first translated into English and then into the desired language. But not all languages follow the same linguistic patterns as English, and elements of meaning in the source text can be lost along the way. For example, many languages do not have an equivalent to the generic English word ‘rice’; they use different words depending on whether the rice is cooked, uncooked or brown. Translating ‘this rice is tasty’ into Swahili through Google Translate therefore produced the equivalent of ‘this uncooked rice is tasty’.
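The pivot problem can be sketched in the abstract. The following toy model is purely illustrative (real translation systems do not work from word lists, and the dictionaries below are invented for the example): because English collapses Swahili’s distinction between cooked and uncooked rice into the single word ‘rice’, no target language downstream of the English pivot can recover it.

```
# Toy illustration of pivot translation; hypothetical word lists, not a real MT system.
# Swahili distinguishes "wali" (cooked rice) from "mchele" (uncooked rice),
# but both collapse to the single English word "rice" at the pivot stage.

SOURCE_TO_ENGLISH = {
    "wali": "rice",    # cooked rice
    "mchele": "rice",  # uncooked rice
}

ENGLISH_TO_TARGET = {
    "rice": "arroz",   # Spanish, like English, has one generic word
}

def pivot_translate(word: str) -> str:
    """Translate source -> English -> target, as a pivot system would."""
    english = SOURCE_TO_ENGLISH[word]
    return ENGLISH_TO_TARGET[english]

# Two distinct source words produce an identical output:
# the cooked/uncooked distinction is lost before the target language is reached.
assert pivot_translate("wali") == pivot_translate("mchele") == "arroz"
```

Whatever the target language, the information discarded at the English pivot cannot be reconstructed afterwards; only a translator who knows the source language and its context can restore it.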
The Guardian article said: “Language is more than a series of words and their meanings; it’s a means to express cultural identity, and it’s how many communities make sense of the world. Without cultural context, machine translation systems will continue to prioritise a western worldview, making it nearly impossible to properly interpret the nuances of most non-English languages.”
AI systems are dependent on the data they are fed. Like other AI tools, machine translation services reflect and perpetuate existing biases in society, along with global power and economic imbalances.
‘Linguistic imperialism’ is a term popularised by Robert Phillipson in his influential 1992 book of the same name, in which he criticises the dominance of English and its imposition on other languages and cultures. AI translation tools amplify this process: because of the UK’s colonial history, English is among the most documented languages in the world, meaning there are ample digital resources available to AI. Languages with fewer digital resources tend to be left on the sidelines. Swahili, for example, although spoken by some 80 million people across Africa, has comparatively few digital sources.
There are also subjective considerations, for example when translating asylum seekers’ descriptions of trauma. An Afghan researcher at Respond Crisis Translation, quoted in the Guardian article, said: “We are dealing with people who are traumatised, and our approach is trauma informed. As an interpreter, you cannot under-do or overdo [the translation], but at the same time, you should have empathy to convey their emotions and feelings, and that is only possible with a human being.”
At the end of the day, data cannot replace a human, and the use of technology requires even more supervision and regulation in order to comply with human rights obligations and international law.
Bias and institutionalised racism in AI
In the world of translation and beyond, AI has been found to produce biased and racist information. When it comes to immigration processes in the US, this has reinforced certain stereotypes and exacerbated pre-existing institutionalised racism.
The principles of equality and non-discrimination are codified in international law: the International Covenant on Civil and Political Rights, for instance, stipulates that the rights it sets out are recognised ‘without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status’. In the case of AI, however, there are no counterweights to verify that these stipulations are respected.
AI and compliance with international law
Opacity and unpredictability have long been among the central ethical and legal challenges facing asylum seekers, and the use of AI has made them more acute. There are valid fears of unlawful discrimination, and of the inability to challenge decisions because they were automated.
In ‘Why Worry about Decision-Making by Machine?’, in Yeung, K and Lodge, M (eds), the authors break down the potential legal risks attributed to AI into two categories: ‘outcome-based’ and ‘process-based’. The ‘outcome-based’ category covers risks that must be considered before implementing AI tools in the asylum process, such as malicious use, accuracy problems and discriminatory outcomes. ‘Process-based’ risks are issues that arise after AI translation tools have been implemented, such as bias, unfairness or unlawful discrimination, difficulty ensuring protection from data breaches, and failure to meet the quality of processing required by international organisations such as the UN.
The European Union has safeguards in place to ensure a system of checks and balances remains. For example, one directive states that the examination of applications for international protection must be carried out ‘individually, objectively and impartially’.
However, legal protections of this kind are not binding in the United States. The US is prohibited from returning individuals to a place where they could face persecution and is obliged to process asylum claims according to criteria defined by the 1951 Refugee Convention, but AI did not exist when that convention was drafted, so it offers no answer to these ethical problems.
In other words, AI complies with US law simply because there is not yet legislation protecting individuals from the potential and existing risks of AI in asylum applications. According to ‘Refugee Protection in the Artificial Intelligence Era: A test case for rights’ (Chatham House), profiling and predictive tools, which have been used in asylum applications in the US, ‘rely on past patterns of observed behaviour among groups of people to make decisions about individuals – including, in some cases, about their anticipated behaviour in the future’.
These tools have been used to predict whether an individual could cause harm to the community, when deciding whether or not detention is needed. The UN High Commissioner for Human Rights has cautioned that ‘predictive tools carry an inherent risk of perpetuating or even enhancing discrimination’ because the past data used to make predictions will often ‘reflect racial and ethnic bias’ or ‘carry harmful assumptions and stereotypes’.
A flawed tool for making life-or-death decisions
AI’s dependence on data that may be incorrect or of poor quality has a huge impact on its performance in assessing asylum applications in the US. Moreover, the lack of regulation of these new translation tools has led to violations of international law, as well as exacerbating bias and discrimination. Asylum seekers at the US border are being denied their human right to protection because of poor translation software and negligent US government officials who use these flaws to justify the rejection of legitimate applicants.
Mistakes in asylum applications, and biased decisions about refugee status, can have life-or-death consequences. The existing, human-operated systems were already flawed; AI is just adding another layer of opacity and insensitivity.