
Mahmood will unleash two-tier terror with her plan for AI policing

ON MONDAY afternoon Home Secretary Shabana Mahmood told the Commons her police reforms will include the ‘largest-ever rollout of facial recognition’. This includes spending £115million for police forces to roll out Artificial Intelligence systems, overseen by a new organisation to be called ‘Police.AI’.

The Times’s report details her plans to reduce the 43 constabularies to 12, each with data analysts and the AI software Tasper to predict crime. This is a terrible idea.

Predictive AI never predicts as promised. It always reinforces institutional biases. If police are systemically biased against conservative commentators, white recruits, and ‘openly Jewish’ protesters, predictive AI is only going to reinforce their prejudices. It will also intensify two-tier policing.

Sellers of predictive AI claim that a human is always in the loop, checking the validity of the automated predictions. But this claim is contradicted by their other claim: automation saves personnel. The personnel left to supervise the AI usually don’t know how to validate its predictions, can’t be bothered, or challenge only the predictions they dislike. Algorithms, we are told, can forecast crime hotspots, identify likely offenders, select recruits, and allocate resources more efficiently than humans could. But humans are the ones who set up the AI. What is difficult for humans to predict is also difficult for humans to program into AI.

Predictive AI isn’t any more intelligent than a human. It’s not really intelligent at all. Predictive AI is actually machine learning from historical data to forecast future behaviours. It’s just a pattern whisperer.

Predictive AI’s advantage is financial, not operational. It’s quicker and cheaper at finding patterns, but not better. It finds patterns that a disinterested human would recognise as spurious. For instance, AI that was hyped as detecting covid in chest X-rays had merely learned to distinguish adults from children: in its training data, all the covid-free patients were children.
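For the technically minded, here is a toy sketch in Python of how that happens. The numbers and the ‘lung opacity’ feature are invented for illustration, and this is not the actual X-ray study; it simply shows a classifier leaning on a confound (age) rather than the disease, then collapsing once the confound no longer lines up with the diagnosis.

```python
# Toy sketch: a classifier learns a confound (age) instead of the disease signal.
# All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Training data: covid-positive adults vs covid-free children.
opacity_pos = rng.normal(0.7, 0.3, n)   # noisy disease signal
opacity_neg = rng.normal(0.5, 0.3, n)   # overlaps heavily with the positives
age_pos = rng.normal(50, 10, n)         # adults
age_neg = rng.normal(10, 3, n)          # children
X_train = np.column_stack([np.r_[opacity_pos, opacity_neg],
                           np.r_[age_pos, age_neg]])
y_train = np.r_[np.ones(n), np.zeros(n)]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test data: adults only, half with covid, half without.
opacity_test = np.r_[rng.normal(0.7, 0.3, n), rng.normal(0.5, 0.3, n)]
age_test = rng.normal(50, 10, 2 * n)
X_test = np.column_stack([opacity_test, age_test])
y_test = np.r_[np.ones(n), np.zeros(n)]

print("train accuracy:", model.score(X_train, y_train))  # near-perfect, via age
print("test accuracy: ", model.score(X_test, y_test))    # collapses towards chance
```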

Some experts predict that predictive AI will never improve, even as generative AI continues to advance.

AI’s failure to predict is inherent in the data from which it ‘learns’, or on which it is ‘trained’, in the absence of any theory.

Historical data are inherently poor predictors of other periods. This is most obvious with adaptive or contingent behaviour. For instance, if you had used historical data from before the 2000s to predict terrorism in the 2000s, you would not have predicted 9/11. Machine learning would have predicted hijackings ending in negotiation, not suicidal hijackers flying planes into buildings.

Likewise, samples are inherently poor predictors outside the sampled demographics or geography. For instance, police across America use the Ohio Risk Assessment System, which was trained on just 452 defendants, all in Ohio, all in 2010. Think how unrepresentative that sample is.

There is a temptation to think that the sample must merely be bigger. Well, the Public Safety Assessment did just that. It was trained on 1.5million subjects across 300 American jurisdictions. But Cook County, Illinois, found that it ‘predicted’ ten times more defendants escalating to violent crimes than actually escalated. Thousands of defendants were jailed unnecessarily. The PSA, despite its sample size, did not represent propensities in Cook County.

Training data reflect self-selection and selection biases, which AI will only reinforce. For instance, if a hiring algorithm is fed CVs from a male-dominated industry, it will inadvertently associate men with success, and select male candidates – thereby exacerbating the gender bias. This isn’t hypothetical: it’s the real-life story of a recruiting tool scrapped by Amazon in 2018.
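A toy illustration in Python of that mechanism, with made-up CVs and a deliberately crude word-scoring ‘model’ (this is not Amazon’s actual system): trained only on past outcomes from a male-dominated applicant pool, it ends up penalising the word ‘women’s’ on an otherwise identical CV.

```python
# Toy illustration of hiring bias leaking from training data; CVs are invented.
from collections import Counter

past_cvs = [
    ("male",   "rugby club, python, finance",         "hired"),
    ("male",   "cricket captain, sql, finance",       "hired"),
    ("male",   "python, chess club, consulting",      "hired"),
    ("female", "women's chess club, python, finance", "rejected"),
    ("female", "netball captain, sql, consulting",    "rejected"),
]

# 'Train': score each word by (times on hired CVs) minus (times on rejected CVs).
weights = Counter()
for _, text, outcome in past_cvs:
    for word in text.replace(",", " ").split():
        weights[word] += 1 if outcome == "hired" else -1

def score(cv_text):
    return sum(weights[w] for w in cv_text.replace(",", " ").split())

# Two equally qualified new applicants; only the gendered word differs.
print(score("python, finance, chess club"))          # higher score
print(score("python, finance, women's chess club"))  # penalised for "women's"
```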

Now think how West Yorkshire Police, which already discourages applications from whites, could automate such racism. Similarly, predictive policing software PredPol (now Geolitica) was criticised for directing patrols to neighbourhoods which already receive more policing, regardless of changes in actual crime.

When AI is trained on policing data, it does not ‘learn’ criminal patterns, it learns policing patterns. AI becomes part of a self-reinforcing feedback loop, reinforcing the illusion that crime is concentrated where AI says it is. The algorithm is predictive in only the self-fulfilling sense.
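A minimal simulation in Python, with invented numbers, of that feedback loop: two areas have identical underlying offence rates, but one starts with a marginally higher recorded count, so it is patrolled more, so more of its offences are recorded, so it keeps being patrolled more.

```python
# Minimal feedback-loop sketch with invented numbers: both areas commit the SAME
# number of offences every week, but area A's historical record is slightly higher.
import random

random.seed(1)
true_offences = 100                      # identical in both areas, every week
last_recorded = {"A": 12, "B": 10}       # history: A marginally over-recorded
total_recorded = {"A": 0, "B": 0}

for week in range(20):
    # 'Predictive' allocation: most patrols go to last week's recorded hotspot.
    hotspot = max(last_recorded, key=last_recorded.get)
    patrols = {area: (8 if area == hotspot else 2) for area in ("A", "B")}
    for area in ("A", "B"):
        detection_rate = 0.05 * patrols[area]      # more patrols, more detections
        seen = sum(random.random() < detection_rate for _ in range(true_offences))
        last_recorded[area] = seen
        total_recorded[area] += seen

print(total_recorded)  # A's recorded crime ends up roughly 4x B's,
                       # although the true offence rates were identical
```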

Now think of how this feedback loop rationalises police biases. For instance, West Midlands Police have been exposed cracking down on anti-asylum protesters while letting armed Muslim gangs take over Birmingham city centre, and banning Israeli football fans while kow-towing to immoderate Muslim ‘community leaders’. How did West Midlands Police generate the intelligence to rationalise its ban on Israeli fans? Partly with AI, which reported Israeli violence at a football match that never happened.

Predictive AI is also inherently invasive of privacy. Machine learning depends on personal data: online browsing, purchasing, travel, socialising, voting, health, demographics and so on. Social media companies, such as Facebook (now Meta), use such data to predict everything from political leanings to mental health, often without explicit consent. Now imagine the police predicting your crime of racial hatred because you browsed news of a protest outside an asylum hotel, or your crime of Islamophobia because you browsed the facts of Muslim ‘grooming gangs’.

Predictive AI reinforces stereotyping. The Cambridge Analytica scandal of 2018 revealed how predictive AI could reduce millions of users to a few political stereotypes, based on Facebook data, and then feed those stereotypes with targeted political messaging. Netflix claims to recommend what its viewers want to watch, but really it homogenises most viewers around a minority of options. Imagine a school and thence a local authority that stereotypes you as a far-right extremist because you showed students of politics some videos made by Trump supporters. That happened in 2025. AI could help police to identify people who watched such videos on Facebook and thence flag them as far-right extremists.

Predictive AI is used to justify policing that is invasive and repressive, even while it stops short of fighting any actual crime. Policing has shifted from responding to crimes to intimidating supposed pre-criminals (euphemistically: ‘managing risk’). The presumption of innocence has shifted to a presumption of propensity. Probable cause has shifted from preparations for crime to conformity with a pattern. Predictive policing might suppress crime, but it also represses lawful activity and erodes trust. Without trust, policing becomes less effective – at least the intelligence-led kind.

AI undermines accountability. Most predictive systems are proprietary. Thence, the public, police, and even the supplier do not understand how predictions are generated. When an officer acts on an algorithmic recommendation, who is responsible for the outcome? The officer? The constabulary? The vendor? This diffusion of responsibility weakens accountability.

AI undermines due process. Unlike a human, an algorithm cannot be cross-examined or criminalised. The user and the programmer can each claim ignorance or disclaim responsibility. Errors are hard to detect, let alone correct. Even good-faith investigations can collapse into ‘computer says so’. Apologists argue that humans are biased too, so algorithmic bias is merely a lesser evil. This is a false choice. Human bias is more contestable and corrigible. Algorithms give bias the appearance of objectivity, and embed it in systems that operate at scale and are difficult to investigate and turn around.

Predictive AI elevates machine automation over human autonomy. This is ‘automation bias’. For instance, IBM’s Watson Health promised to predict patient outcomes but fell short. Nevertheless, doctors were more likely to defer to its judgments than to colleagues’ judgments, or even their own.

This ‘automation bias’ has been documented in aviation, where pilots are more likely to shut down a healthy engine (a potentially deadly decision) when AI falsely warns of a problem than when a human falsely warns of a problem.

Efficiencies are over-estimated and over-valued. Acting on bad predictions is expensive, such as when Cook County jailed thousands of defendants unnecessarily. Correcting for bad predictions is expensive, such as when Kent Police paid £20,000 to Julian Foulkes, a 71-year-old retired special constable, for wrongfully invading and searching his home and arresting him over a satirical tweet that the police misread as inciting the ‘storming [of] Heathrow’. In any case, financial efficiencies are over-valued at the expense of operational effectiveness.

An AI that cannot predict outside the historical period, demographics or geography on which it is trained; that reinforces self-selection and selection biases; that invades privacy, feeds stereotyping and encourages invasive and repressive policing; that undermines accountability and due process; and that automates human judgment is ineffective, even if it is efficient.
