Some of the best, or at least sharpest, minds on the planet are devoted to guessing what we might buy next, and showing us advertisements for it. Often the results are ludicrously inaccurate; sometimes they are creepily precise. Would we trust the same kind of technology to predict what crimes we might next commit? That is the question raised by the
latest report from the campaign group Liberty on the implications of the police's use of big data and machine learning, the technologies usually referred to as artificial intelligence. When they are used to sell us things, they are relatively harmless. When they are used to sell opinions, they can corrupt democracy. When they determine the course of the criminal justice system, they could do immense damage. Because machine learning can only detect patterns in the data it is given, any bias in the original sample will be reproduced and amplified. So if past practice has been to
discriminate against women or minorities, any algorithm fed on previous experience will continue this pattern, but this time with the apparent authority of science behind it. And because modern machine learning techniques are opaque, even to their programmers, a computer cannot easily be made to testify about its own reasoning in the way that police officers can – in theory – be questioned by judges or politicians.
Society does have a vital interest in being able to predict who is most likely to offend or to reoffend, and to help steer them away from temptation. But the idea that algorithms could substitute for probation officers or the traditional human intelligence of police officers is absurd and wrong. Of course such human judgments are fallible and sometimes biased. But training an algorithm on the results of previous mistakes merely means that those mistakes can be repeated without human intervention in the future. The strongest single predictor of whether a young man will end up in jail is whether his father did so. People live down to society's expectations, and we all lose as a result.