Harvey Redgrave, Chief Executive Officer
Friday 25 June 2021
The government has unveiled plans to improve the way police, prosecutors and the courts deal with rape offences in England and Wales. One of the measures involves the use of performance scorecards - to monitor agencies’ progress and publish the results every six months. Our Chief Executive Officer, Harvey Redgrave, has written before about the hidden hazards of performance indicators. Here he considers whether there’s merit in the latest proposal:
Government ministers have just done something rare in British politics in recent years: apologised for failure. The subject of their remorse was record low conviction rates in rape cases – the latest figures show just 1.6 per cent of rapes recorded by police resulted in someone being charged. Justice Secretary Robert Buckland said he was “deeply ashamed” and promised to do “a lot better”. Boris Johnson told MPs he was “sorry” for the “trauma” and “frustration” victims go through because of the “inadequacies” of the criminal justice system.
Of course, such admissions of failure raise the follow-up question: what do you plan to do about it? The government’s long-awaited ‘rape review’ set out a range of measures intended to reduce the time victims are without their phones and spare them the ordeal of courtroom cross-examination. But perhaps the most significant recommendation was a target to return the volume of rape cases going to court to “at least 2016 levels”, along with a commitment to publish regular “scorecards” to monitor progress.
Too simplistic for a complex offence?
The idea behind scorecards is simple: to provide a mechanism for holding an organisation to account by publishing a set of performance indicators, which can be used to measure its effectiveness in comparison with other bodies. In a public services context, scorecards grew to prominence in the United States, where a number of cities used them to compare the performance of schools. More recently, they have been used as a means of driving improved transparency, for example in the NHS.
Historically, though, there has been nervousness about the use of scorecards in relation to criminal justice. Firstly, it has been argued that they are too crude for complex ‘human activity’ systems, which are subject to many influences and cannot be measured by a simplistic numerical snapshot. It is easy to see how that might be the case with a complex offence like rape. Many factors lie behind a prosecutor’s decision about whether or not to bring a charge, or a victim’s decision to stay engaged in the process; some would say these cannot be captured in a single standardised output.
Gaming the system
Secondly, scorecards risk causing dysfunctional behaviour, such as ‘gaming’. It has long been claimed that pressure to hit numerical targets, particularly in policing, can create perverse incentives leading to unintended behavioural consequences. For example, a target to increase the volume of rape cases being referred by police to the Crown Prosecution Service (CPS) will be, at best, meaningless, and at worst, counterproductive, if it leads to an increase in dropped prosecutions (resulting from poor file quality), or an under-recording of such offences to begin with. Similarly, targets based on rates (such as conviction rates) can be gamed by charging fewer suspects. Indeed, there have already been suggestions of a secret conviction rate target for rape, set by the CPS, which, according to campaigners, led prosecutors to drop weaker or more challenging cases.
A piece of the puzzle, not the final answer
Numerical snapshots don’t tell the whole story - but we should not be too quick to write off scorecards. Sometimes it is important for the government to signal a clear ambition, and scorecards could potentially galvanise effort around a common goal. The most important thing is to ensure a scorecard is intelligently designed, rather than a crude simplification. That means ensuring it is neither excessively input-focused - for instance, measuring only the number of charging decisions, which risks driving perverse behaviours - nor too widely drawn, which tends to let organisations off the hook. Instead, it should measure things the police and the CPS have the power to control: the standard of case files; the time it takes to investigate allegations and make decisions; and whether victims are kept informed and stay engaged in the process. Ideally, scorecards would also take local circumstances and issues into account. At Crest, we’ve done a lot of work charting performance in the criminal justice system and have developed an interactive tool to compare areas.
Clearly, numerical scorecards - on their own - are not a panacea. They need to be complemented by a more rounded (and ideally qualitative) assessment which recognises the full range of criminal justice activity, from recording and detecting offences to managing vulnerable victims and reducing their exposure to unnecessary trauma. Moreover, scorecards cannot be a substitute for proper investment - there is little point in tougher accountability if the agencies responsible lack the resources to respond to demand and meet victims’ needs. Nonetheless, there is a role for intelligently designed scorecards to hold different parts of the system to account and bring more rapists to justice. If they can be made to work here, there may be a case for wider application across other parts of the criminal justice system, which for too long has escaped adequate scrutiny and accountability.