Immigration NZ, by and large, has been good at transparency in the past: you may think some of their policies are inhumane or arbitrary, but you can easily find out what those policies are; even their operational manual is available online. That’s a pleasant contrast to the other place I’ve lived as an immigrant. So, when you hear in this morning’s Radio NZ story “Immigration NZ using data system to predict likely troublemakers”, you might want to give them the benefit of the doubt and assume they are just taking more steps to make their decision procedures explicit.
But then you get to the quotes:
“We will model the data sets we have available to us and look at who or what’s the demographic here that we’re looking at around people who are likely to commit harm in the immigration system or to New Zealand,” he said.
“Things like who’s incurring all the hospital debt or the debt to this country in health care, they’re not entitled to free healthcare, they’re not paying for it.
“So then we might take that demographic and load that into our harm model and say even though person A is doing this, is there any likelihood that someone else that is coming through the system is going to behave in the same way, and then we’ll move to deport that person at the first available opportunity so they don’t have a chance to do that type of harm.”
At the very least, they are saying that you can have two people with the same record of what they’ve done in New Zealand, in the same circumstances, and one of them will be deported and the other not deported based on, say, country of origin or age. It’s true that to be deported you have to have done something that gives them a justification — but “at the first available opportunity” is fairly broad when you’re Immigration NZ. And if they’re talking about people who are “not entitled to free health care”, then “immigrants” is the wrong term. [update: Radio NZ have now changed the first word of the story from “Immigrants” to “Overstayers”. Apart from that issue of terminology, the same comments still apply]
So, how does this differ from, say, the IRD using statistical models to target people with higher probability of having committed tax fraud for auditing? There are two important differences in principle. The first is that the IRD is interested in auditing people who have already committed tax fraud, not people who might do so in the future. The second is that the consequences of being caught don’t depend on the predicted probability. Immigration NZ, on the other hand, seems to be interested in treating people differently based on things they haven’t done but might do in the future.
Now, Immigration NZ has to deport some people. It has to make decisions about who to let into the country in the first place, and who to give extensions of visas, or grant residency. That’s what it’s for. These decisions will have serious impacts on the lives of would-be immigrants — ranging from those who have an application for residency denied to those who don’t even bother applying because there’s no hope.
Since Immigration NZ does make these sorts of decisions, do we want them to do it based on a statistical model? That’s actually a serious question. It depends. There are at least three issues with the model: the ‘transparency’ issue, the ‘audit’ issue and the ‘allowable information’ issue. All of these are also problems with decisions made by humans.
The ‘allowable information’ issue is ‘racial profiling’. As a society, we’ve decided that some information just should not be used to make certain types of decisions — regardless of whether it’s genuinely predictive. For anyone other than Immigration NZ, country of origin would be in that category. Invoking a statistical model — essentially, writing it down in a flowchart — wouldn’t be a justification. To some extent Immigration NZ is required to treat prospective immigrants differently based on their country of origin; the question is how far they can go. The Human Rights Commission is likely to have an opinion here, and it’s quite possible they’ll say Immigration NZ has gone too far.
The ‘transparency’ issue is that the model should be public. Voters should be able to find out their government’s policy on deportations; people trying to immigrate should know their chances. The tax office have an argument for keeping their model secret; they don’t want people to be able to tweak their accounts to escape detection. The immigration office don’t.
The ‘audit’ issue is related but more complicated. Immigration NZ need to know (and should have independent verification, and should tell us) how accurate the model is, what inputs it’s sensitive to, and how reliable the data are. How many of the deported people does the model say would have committed serious crimes? How much unnecessary government expenditure does it predict they would require? How well do these predictions match up to reality? Are there relevant groups of people for whom the model is importantly less accurate — people from particular countries, people with or without family in NZ, etc — so that the costs of automated decision making aren’t justified by the benefits? And to what extent do the inputs to the model suffer from self-reinforcing bias?
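To make the accuracy part of that audit concrete, here’s a minimal sketch, in Python with entirely made-up data, of the most basic check an independent auditor could run: compare predicted risk against observed outcomes, overall and within subgroups. Everything here, from the column names to the subgroup effect, is hypothetical; the point is only that a model can look roughly calibrated in aggregate while being badly wrong for one group.

```python
# A minimal calibration audit on made-up data: compare mean predicted
# risk with the observed outcome rate, overall and within subgroups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical audit extract: the model's predicted probability of
# "harm" and a grouping variable (all names and values invented).
df = pd.DataFrame({
    "predicted": rng.uniform(0.01, 0.99, n),
    "group": rng.choice(["A", "B", "C"], n),
})
# Simulated truth: well calibrated for groups A and B, but the model
# over-predicts risk for group C by a factor of two.
true_p = np.where(df["group"] == "C", df["predicted"] / 2, df["predicted"])
df["outcome"] = rng.binomial(1, true_p)

def calibration_table(d: pd.DataFrame) -> pd.DataFrame:
    """Mean predicted risk vs observed rate within deciles of prediction."""
    deciles = pd.qcut(d["predicted"], 10)
    return d.groupby(deciles, observed=True).agg(
        mean_predicted=("predicted", "mean"),
        observed_rate=("outcome", "mean"),
        n=("outcome", "size"),
    )

print(calibration_table(df))        # overall: only mildly off
for name, d in df.groupby("group"):
    print(name)
    print(calibration_table(d))     # group C: predictions ~2x too high
```

Checks like this, together with sensitivity of the actual decisions to each input, are cheap to run and easy to publish, which is why they’re a reasonable minimum to ask for.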
The classic problem of self-reinforcing bias comes from a different context: predicting future offences by convicted criminals. We don’t have data on who commits crimes, only on who is arrested, charged, or convicted. To the extent that people from particular demographic groups are more likely to attract the notice of the justice system, it will look as if they are more likely to commit crimes, and that will lead to more targeted enforcement. And so on, round and round.
In the immigration setting, we’d be concerned about any of the criteria that can be affected by current immigration enforcement practice — if people are currently more likely to be deported, or more likely to have applications refused, partly on subjective judgements about country of origin, this will tend to show up in the new models. Healthcare costs, on the other hand, aren’t directly affected by Immigration NZ decisions and so don’t have the same self-reinforcing vicious circle — though failing to pay the bills might.
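That feedback loop is easy to demonstrate with a toy simulation (invented numbers, nothing to do with the real system): two groups with identical true rates of ‘harm’, where each year’s enforcement attention is allocated in proportion to the previous year’s detected rates. The groups never differ in behaviour, only in how closely they’re watched.

```python
# Toy simulation of self-reinforcing enforcement bias: two groups with
# the SAME true rate of "harm", but offences are only observed if
# noticed, and scrutiny is reallocated each year in proportion to the
# detected rates. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

pop = np.array([10_000, 10_000])   # group sizes
true_rate = 0.05                   # identical true rate for both groups
scrutiny = np.array([0.6, 0.4])    # historical practice happened to
                                   # focus more attention on group 0

for year in range(1, 6):
    # The chance an offence is noticed scales with the group's share
    # of enforcement attention.
    detected = rng.binomial(pop, true_rate * scrutiny)
    detected_rate = detected / pop
    # Next year's attention follows this year's detected rates --
    # the step that closes the loop.
    scrutiny = detected_rate / detected_rate.sum()
    print(f"year {year}: detected rates {detected_rate.round(4)}, "
          f"next scrutiny shares {scrutiny.round(3)}")
```

Group 0’s detected rate stays about 50% higher than group 1’s in every year, and the allocation rule locks that gap in. A model trained on the detection records from this process would conclude that group 0 is riskier, even though the true rates are equal by construction; that’s the sense in which criteria affected by current enforcement practice can’t be taken at face value.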
Having a statistical model isn’t necessarily a bad thing, just like having a formal flowchart or points system isn’t necessarily a bad thing. The model can have various sorts of bias, but so can actual human immigration officers. In contrast to some of the social policy models, this model isn’t being used to make new distinctions in a setting where everyone used to be treated uniformly — the immigration system has always made individual decisions about visas and deportations.
In principle, a model could be developed with care to include only the right sorts of inputs, to predict outputs that aren’t subject to vicious circles, to have clear and reliably estimated costs and benefits associated with decisions, and to be open to independent audit. Such a model would be more accountable to the Minister, Parliament, and the nation than the decisions of individual immigration officers.
The fact that we, and the incoming Minister, only found out about the system this morning doesn’t suggest we’ve got that sort of model. Neither does the disappearance of data from their website, where they’ve just discovered privacy problems (without all that much effect, since the data are still up at archive.org). Nor the explicit use of country of origin. Nor the spokesperson’s complete lack of reference to safeguards in the modelling process, or his argument that they can’t be doing racial profiling because they also use gender, age and type of visa in the model.