W.I.R.E.

Simple truth inside the black box. Interview with Spencer Chainey

By Hannes Grassegger

 

Hollywood movies like “Minority Report” imply that the police will one day be able to predict crimes and arrest perpetrators before they have committed the crimes they have planned. Spencer Chainey, a leading researcher in the field of predictive policing, does not believe in these sorts of visions. Even in the future, police work will continue to be based on our ability to interpret facts, he maintains. And computers do not have that ability.


Spencer, you have done a lot of research about predictive policing. It has been such a hot topic in recent years. Hollywood movies like “Minority Report” have depicted predictive policing as some form of futuristic government mind control. From the outside it looks as if you are working on techniques of predicting who might commit what crime in the future. Tell me, are you really able to foretell crimes?

I think predictive policing is a bit of a misnomer, as it still relies on recorded information. All predictive policing tools that are currently in use and marketed by various software companies such as Predpol rely upon historical data. Like hotspot analysis. The label often attached to predictive policing is that it is forward-looking, whilst hotspot analysis – my original focus – looks backward.

 

Hotspot analysis is a method of mapping crime via computerised Geographic Information Systems (GIS). What is its connection to predictive policing?

We are developing ways that help predict future hotspots, meaning identifying areas of high risk for certain types of crime. In a way what we are doing is more predictive policing. It’s just that in reality predictive is about places, not persons.

 

How exactly does it work?

We rely on three types of data from police agencies. The primary source is recorded crime: burglary, vehicle theft, mobile phone theft. Second, incidents of calls for service – things that are reported to the police. These don’t necessarily have to be crimes, but in most cases such recorded incidents turn out to be crimes. Let me give you an example. I see a huge fight breaking out in front of a fast food restaurant in my home town at 1am. I call the police, but by the time they arrive, everyone has disappeared. The third type of information is stop-and-search records, when police have searched somebody. Whilst these are the main sources, it is really important to understand that this data cannot be analysed in its most sterile form but has to be embedded in a bigger picture resulting from police observations, on-site checks, getting a feel for the environment. What do people look like, how do they treat each other... soft information. Then the police enter all the information in a crime recording system which is linked to a geographical system that visualises the information as a kind of map. Think of Google Maps.

 

This sounds like some 1940s movie with police officers pinpointing a crime on a paper map. Except it’s digitised. Where does prediction come in?

One thing we have known for an eternity is that where a crime has happened recently there is a high probability that the next crime will happen there, too. So if you look backwards for where mobile phone theft, for instance, has occurred during the last three months you will have a pretty good indicator of where it is likely to occur in the near future.
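The backward-looking logic described here can be sketched in a few lines: count recent incidents per map grid cell and treat the busiest cells as the likeliest locations of the next crime. This is an illustrative simplification, not the software Chainey describes; the coordinates and the theft data are hypothetical.

```python
from collections import Counter

def hotspot_cells(incidents, precision=2, top_n=3):
    """Rank rough grid cells by recent incident count.

    incidents: (lat, lon) pairs from, say, the last three months.
    Rounding coordinates to `precision` decimal places bins them
    into roughly kilometre-scale cells; the cells with the most
    recent crime are flagged as the highest-risk areas.
    """
    counts = Counter(
        (round(lat, precision), round(lon, precision))
        for lat, lon in incidents
    )
    return counts.most_common(top_n)

# Hypothetical mobile-phone-theft reports as (lat, lon):
thefts = [(51.52, -0.13), (51.52, -0.13), (51.53, -0.13),
          (51.52, -0.13), (51.60, -0.20)]
print(hotspot_cells(thefts, top_n=2))
```

Real systems use finer grids and weight recent incidents more heavily, but the principle is the same: yesterday's crime locations are the forecast.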

 

What makes crime more likely in places where there has been an incident just a short while ago?

Boost Account Theory tells us that future victimisation is boosted by the initial incident, i.e. the offender got away with it, so why not do it again? And at the same location, because he knows how to get in, the layout of the property, and what he left behind last time.

 

So what is predictive policing about?

It is about some new ways in which information is analysed and packaged to be presented to the police officer. There has always been an element of prediction to crime mapping. It’s just in the last two years that this new term has been used to describe it: predictive policing.

 

So it’s a fad? Is there really nothing fundamentally new about it?

Personally I don’t think there is anything fundamentally new about it. It’s just a new label. The only place where there has been real change is that until recently, to produce a hotspot map, police agencies had to ask their intelligence analyst to produce that map for them. These days specialised software delivers such results as computer-generated content. So there is less need for intelligence analysts to produce crime maps. For me the major change has been in the packaging.

 

What about predicting complex crimes, e.g. financial crimes?

Currently the majority of tools that software companies provide tend to focus on the issues police patrols are facing: burglary, street robbery, shoplifting, vehicle theft. I am not aware of any software for other sorts of crime like financial crime.

 

So it’s nothing new, something really simple. But at least it’s personal! According to “Minority Report”, predictive policing is based on the ability to understand the future behaviour of individuals.

Focusing on individuals is not what predictive policing is about, either. It is rather about identifying high risk areas. Much further down the line, if the police decide to target a certain issue, that information could be connected to known facts about certain individuals that are specific to that particular risk. And the police officers could decide to keep an eye on these individuals while they are patrolling.

Currently predictive policing is about producing a square on a map that says “we need to devote more resources here”. It is not strong when it comes to bringing in more psychological elements. It doesn’t help us find out what kind of resources should be allocated in that square. Let alone tell us what specific actions to take to prevent crimes in that area.

 

Soon there will probably be a lot more data available from all sorts of public electronic sensors, mobile phones and government statistics. What does this mean to crime mapping and prediction?

I haven’t seen any evidence that when it comes to crime mapping and predictive policing more data is better. Having more data can actually be misleading. Let me explain this. There are some software companies that throw 50 to 100 different variables into their algorithms, their black boxes they call predictive policing tools. What they typically fail to realise is that half of these variables are doing exactly the same job in terms of predicting crime, because they are highly correlated to each other. Therefore there is redundancy in these variables, which can make your whole model inaccurate.

 

In what way could redundant data spoil the results?

If you aggregate two variables in a prediction model that are basically telling the same story, and sum their contributions to produce a predicted risk, you can end up overestimating the level of risk.
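The double-counting effect can be shown with two invented predictors that carry nearly the same signal. The variable names, values and equal weights below are hypothetical, chosen only to make the redundancy visible:

```python
# Two hypothetical predictors that measure almost the same thing,
# e.g. an area's unemployment rate and its share of households
# on benefits. The numbers are made up for illustration.
unemployment = [0.10, 0.20, 0.30]
benefits     = [0.11, 0.19, 0.31]   # near-duplicate of the first signal

weight = 1.0  # assume the model naively gives each variable equal weight

# Summing both variables' contributions per area...
naive_risk = [weight * u + weight * b for u, b in zip(unemployment, benefits)]

# ...yields roughly double the contribution of the single
# underlying factor they both measure:
single_risk = [weight * u for u in unemployment]
print(naive_risk)   # each entry is about twice the corresponding single_risk
```

In effect, one underlying factor is counted twice, so the predicted risk in those areas is inflated relative to areas described by independent variables.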

 

So it’s all about the quality of the algorithm?

Even before the algorithm it’s all about the theoretical underpinnings of what variables you are going to include in your model. What we have found out in our research over many years here at the university is that the best predictor of crime is actually knowing where crime has happened before. For instance, we have conducted studies on burglary. There have been studies like these from New Zealand to the Netherlands, the US and South Africa. And the findings are all consistent: when you are a victim of a burglary, you and your close neighbours have a high probability of becoming a victim of the very same crime within the next few days. Then within two or three weeks that level of risk declines to the average level in your area. And the level of risk for the neighbours of the incident declines with the geographical distance. This pattern applies to many forms of crime, from mobile phone theft to other more serious crimes. That is the main model we are using in our department to calculate future risks.
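The decay pattern described here – risk highest right after an incident, fading over two to three weeks and with distance from it – can be sketched as a simple scoring function. This is not the department's actual model, which the interview does not specify; the exponential form and the two scale parameters are assumptions chosen to match the qualitative description:

```python
import math

def near_repeat_risk(days_since, metres_away,
                     time_scale=14.0, dist_scale=200.0):
    """Illustrative near-repeat risk score, between 0 and 1.

    Risk is highest at the victimised address just after the incident
    and decays exponentially: time_scale (days) makes it fade over
    roughly two to three weeks, dist_scale (metres) makes it fade for
    more distant neighbours. Both scales are hypothetical.
    """
    return (math.exp(-days_since / time_scale)
            * math.exp(-metres_away / dist_scale))

# The burgled house the next day vs. a neighbour three weeks later:
print(near_repeat_risk(1, 0))      # near 1: high short-term repeat risk
print(near_repeat_risk(21, 100))   # much lower: risk has decayed
```

A real system would calibrate the decay curves against recorded incident data rather than fix them by hand.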

 

So what if a policeman using such a predictive policing model comes to a different conclusion from the machine?

If I were a serving police officer, the two most powerful tools I would rely upon would be, first, a theoretically robust algorithm built into software that identifies the places currently seen as high risk or predicted to be high risk in the near future, and second, additional contextual information, either from my experience or from my intelligence unit. This will help me decide exactly what course of action to follow. Should I go in with very robust equipment, or use soft power – speaking, building up trust and a relationship? That will always be so. A computer can’t give you that kind of advice.

 

But if it is all about human logic, shouldn’t predictive policing focus even more on analysing individuals?

No. I think there have been many mistakes made in the recent past with this approach. In the 1990s officers were seduced into believing ideas of behavioural profiling. They were hoping to find a code that would tell them “according to this pattern it was Mr John Smith who committed the crime”. I don’t think this is how it works.

 

Why doesn’t algorithmic behavioural prediction work?

It’s about context and interpretation. There always has to be an element of human interpretation, whether it is about understanding what exactly is going on or how we should react. Whether it is about predicting a high risk person or a particular area. Predictive policing is simply a helpful tool that doesn’t replace good police work.

 

 

Spencer Chainey is Principal Research Associate at University College London’s Department of Security and Crime Science. His particular research interests are in developing geographical crime analysis and crime mapping, but he carries out most of his day-to-day work on developing the use of data, information sharing and analysis to aid intelligence development and decision-making by police forces, community safety partnerships, and national crime reduction and policing agencies. His work has influenced UK policy, and has contributed to policing and crime reduction developments in the USA, China, Germany and South Africa amongst others. Spencer has had first-hand experience: prior to joining UCL, he spent several years working in the private sector and in local government. Spencer has published several books on the subject of geographical information science.

 

 

© 2024 W.I.R.E. - Web for Interdisciplinary Research and Expertise