Racial bias in immigration algorithms

"The Law Society described the decision announced by the Home Office as timely and has warned of the risk of discrimination."

The government is suspending its use of algorithms in visa and immigration pre-decision processes while it reviews them. It has faced growing pressure over the risks and implications of their use, as well as legal action from a technology justice campaign group which described the algorithm as "speedy boarding for white people". The Home Office has denied this claim.

The Law Society described the decision announced by the Home Office as timely and has warned of the risk of discrimination.

What’s the issue?

The Home Office is increasingly relying on technology to decide visa applications, but there are issues with transparency and bias. When, for example, an applicant applies for settled status, they input their name, National Insurance number and date of birth into an app. The app checks the inputted information against government databases, and an algorithm then processes the individual's data to determine how long the applicant has resided in the UK.

But critics say this discriminates against women because the algorithm does not check records of working tax credit, child tax credit or child benefit, even though it checks other benefits. Since these benefits are claimed disproportionately by women, women's time in the UK is more likely to go under-recorded.
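To make the criticism concrete, here is a minimal, entirely hypothetical sketch in Python of how a residence check that consults only some data sources can under-count a person's presence in the UK. The dataset names, record format and function are invented for illustration and are not the Home Office's actual system.

# Hypothetical, simplified sketch of the criticism: if the residence check
# only consults some data sources, people whose footprint lies mainly in
# the omitted sources appear to have lived in the UK for less time.
# All dataset names and records here are invented for illustration.

CHECKED_SOURCES = {"paye", "self_assessment", "jobseekers_allowance"}
OMITTED_SOURCES = {"working_tax_credit", "child_tax_credit", "child_benefit"}

def months_of_residence(records, sources):
    """Count distinct months evidenced by records in the given sources."""
    return len({month for source, month in records if source in sources})

# An applicant whose only official footprint is five years of child benefit:
applicant = [("child_benefit", m) for m in range(60)]

print(months_of_residence(applicant, CHECKED_SOURCES))                    # 0
print(months_of_residence(applicant, CHECKED_SOURCES | OMITTED_SOURCES))  # 60

On these invented inputs, the narrower check finds no evidence of residence at all, while the fuller check finds five years: the gap falls entirely on applicants whose records sit in the omitted datasets.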

It has also been described as racist. The algorithm is said to issue a red flag for applicants of certain nationalities, including some African countries, while speeding up applications from white people in other countries. Even the EU has sought assurances from the UK government that EU nationals will not face automatic deportation after the Brexit transition period ends.
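The following short Python sketch, using invented country names and weights rather than anything from the actual tool, shows why using nationality as an input can amount to direct discrimination: two otherwise identical applications receive different ratings purely because of a nationality-based risk weight.

# Hypothetical sketch (not the Home Office's actual code) of nationality-based
# streaming: identical applications diverge solely on nationality.
# All weights, thresholds and country lists are invented.

HIGH_RISK_NATIONALITIES = {"Country A", "Country B"}  # placeholder names

def stream_application(nationality, prior_refusals):
    score = prior_refusals * 10
    if nationality in HIGH_RISK_NATIONALITIES:
        score += 50  # nationality alone is enough to push the rating to 'red'
    return "red" if score >= 50 else "green"

print(stream_application("Country A", prior_refusals=0))  # red
print(stream_application("Country C", prior_refusals=0))  # green

This is the structural risk the critics describe: once nationality carries its own weight, no individual conduct is needed for an applicant to be flagged.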

The Home Office says an interim replacement system (intended to be in place by the end of October) will not take nationality into account. It has also committed to undertaking equality and data protection impact assessments for the interim system, looking at issues such as unconscious bias and discrimination.

Law Society president Simon Davis said the Society believes the system is unsafe and that it has been “raising the alarm for some time” about the systemic risks of using algorithms to filter immigration applications. He said the process “may embed bias against certain groups of people based on generic markers such as nationality, country of origin, age or whether they have travelled before”.

Davis said a legal framework is urgently needed for the use of algorithms by government and all other public bodies. He warned of “a real risk of unlawful deployment, or of discrimination or bias that may be unwittingly built into an algorithm or introduced by an operator”.

“Greater transparency, oversight and accountability around the use of algorithms in the justice system would protect the rule of law and help maintain trust in the justice system.”

These concerns are not new: the chief inspector of borders’ 2016-17 report, published in July 2017, raised concerns about the risk of ‘confirmation bias’ in the use of algorithms in immigration decision-making.

The suspension of the ‘biased’ algorithm pending review is an important victory for individuals and groups seeking to purge the system of bias and increase transparency.

However, it does not detract from the reality that the government has consistently struggled to deal efficiently with the stream of immigration applications against the backdrop of complex immigration laws.

The volume of applications is undoubtedly a key driving force behind the increasing reliance on technology – and it’s not hard to see how the risks of bias can unwittingly arise in the use of algorithms.


Written by Nicola Laver, a non-practising solicitor and qualified journalist. She is also editor of Solicitors Journal.

Posted on 10.08.20