One of the many issues that arise from expert testimony is the obvious influence of experts on the decision maker – whether a judge or a jury. In a criminal trial in the UK, for example, the jury is asked to determine whether the accused is guilty ‘beyond a reasonable doubt’. The Crown Court Bench Book provides that ‘being sure is the same as entertaining no reasonable doubt’. This is an inherently vague standard, and one whose application can be influenced by the expert in a number of ways.
The first issue is that different professions speak very different languages. A doctor may enter the courtroom, clothed in professional respectability, and give an opinion – a diagnosis – of what caused a particular injury. However, when a doctor announces that they are ‘sure’ of a diagnosis, just how sure is ‘medical sure’?
I carried out an on-the-spot survey of delegates at Bond Solon’s annual conference last week. All those who responded set their level of certainty of a ‘diagnosis’ significantly lower than beyond reasonable doubt – 75 per cent for a diagnosis in one instance, as compared to 99 per cent sure for a conviction. Unless the distinction is spelled out – and too often in the courtroom it is not – the jury may understand the doctor’s conviction in their diagnosis to be the equivalent of a criminal conviction, simply because people are speaking different languages.
Level of certainty
Of greater concern, perhaps, is how ‘sure’ any of us have to be before deciding that something is true ‘beyond a reasonable doubt’. Troublingly, the Cambridge Institute of Criminology has stated: ‘Beyond reasonable doubt (BRD) is the standard of proof used to convict defendants charged with crimes in the English criminal justice system. If the decision maker perceives that the probability the defendant committed the crime as charged (based on the evidence) is equal [to] or greater than their interpretation of BRD, th[e]n he/she will decide to convict. Otherwise, the decision maker will acquit the defendant. It is generally agreed that BRD should be interpreted as a .91 probability.’
I am not sure who ‘generally agrees’ this – I certainly don’t – but it is an extraordinary suggestion. Estimates put the number of those in prisons or jails in the US at around two million (2,220,300 in 2013, according to the US Bureau of Justice Statistics) – with many more caught up in the system but not currently incarcerated. If we convicted each of these people at a 91 per cent level of certainty, we would expect to be wrong in roughly 200,000 cases – nine per cent of 2.2 million – sending an extraordinary number of innocent people to prison. Further, we don’t need to turn to Robin Hood to recognise that when we aim low, we tend to miss.
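The arithmetic here is simple enough to sketch in a few lines of Python. This is a back-of-the-envelope illustration using the BJS figure quoted above, not a model of how convictions actually work: it simply assumes every conviction is made at exactly the stated level of certainty, so the expected error rate is one minus that level.

```python
# Back-of-the-envelope sketch: if every conviction were made at exactly a
# given level of certainty, the expected error rate is one minus that level.
US_PRISON_POPULATION = 2_220_300  # US Bureau of Justice Statistics, 2013


def expected_wrongful_convictions(certainty, population=US_PRISON_POPULATION):
    """Expected wrongful convictions if each conviction is correct
    with probability `certainty`."""
    return round((1 - certainty) * population)


print(expected_wrongful_convictions(0.91))  # -> 199827, i.e. roughly 200,000
```

The same function reproduces the other figures discussed in this article when applied to a round two-million prison population: five per cent wrong gives 100,000, and 17 per cent wrong gives 340,000.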
Unlike some purveyors of forensic ‘science’, I do not pretend that my study was an entirely scientific one, since there were only 158 self-selected respondents out of an audience of almost 400, and I suspect my strident criticism of the system provoked them into setting the bar higher than they otherwise might have. However, in defining how ‘sure’ they would have to be before convicting someone, the Bond Solon experts responded as follows:
Certainty required for BRD | Bond Solon 2016 respondents (n = 158)
100% 17 (10.8%)
99% 43 (27.2%)
95% 45 (28.5%)
90% 19 (12.0%)
80% 9 (5.7%)
75% 10 (6.3%)
60% 1 (0.6%)
50% or 51% 12 (7.6%)
Can’t put percentage 1 (0.6%)
Depends on circumstances of case 1 (0.6%)
The Bond Solon respondents tend to require a higher standard of proof than the Cambridge Institute suggests. Various points of interest arise from the responses. First, while the aspiration is commendable, it may be impossible ever to attain 100 per cent certainty about anything in a courtroom, so I am sceptical of the 10.8 per cent who said they would require such proof.
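The summary statistics discussed below can be checked directly from the table. A quick sketch, with two caveats of my own: the two non-numeric answers are excluded, and the ‘50% or 51%’ row is treated as 50.

```python
import statistics

# Survey counts from the table above ("50% or 51%" treated as 50;
# the two non-numeric answers are excluded).
responses = {100: 17, 99: 43, 95: 45, 90: 19, 80: 9, 75: 10, 60: 1, 50: 12}
data = [level for level, count in responses.items() for _ in range(count)]

print(len(data))                # 156 numeric responses
print(statistics.median(data))  # 95.0 – the median certainty level
```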
95 per cent certain
In defining how ‘sure’ they would have to be before convicting someone, the median response of the experts surveyed was a 95 per cent level of certainty – experts who would aim to have ‘only’ 100,000 innocent prisoners in US prisons. Interestingly, this is the level conventionally used to declare a study ‘significant’ – roughly two standard deviations from the norm. However, one of the problems with the proliferation of ‘studies’ that appear in the nation’s newspapers on any given day is that a surprising result, headlined in the Daily Telegraph, is quite likely to be a false positive: if 20 true null hypotheses are tested at the 95 per cent level, we would expect (very roughly) one to come up ‘significant’ by chance alone – and it is the surprising chance results that make headlines. It is one thing for my mother to be terrified by an erroneous study concerning the growth of bacteria in her marmalade; it is another when we send someone to prison, or to death row, on the same standard. That the Cambridge Institute set the bar significantly lower is very worrying.
That fully one-third (33.5 per cent) of expert witnesses set the level of reasonable doubt below 91 per cent reflects a worrying truth – particularly since the audience was one well versed in the judicial process. Lay jurors are likely to set the level of proof even lower – and often do, in other rather unscientific studies I have conducted over the years.
In my experience, judges tend to side with the one respondent who said that reasonable doubt is not subject to statistical estimation. However, they do this with the claim, made in the Bench Book, that there are always going to be ‘problems encountered… when the judge endeavour[s] to explain reasonable doubt to the jury’. In other words, they would rather the jurors wallow in their own misperceptions than risk discovering that the Emperor wears no clothes. Furthermore, the one time a group of judges agreed to estimate their assessment of BRD, the assembled luminaries averaged 83 per cent – in other words, at 17 per cent of a two-million prison population, they were aiming to put 340,000 innocent Americans behind bars.
In truth, the use of statistics can be misleading, albeit in more significant ways than most judges claim. The group of people who actually go to trial in criminal cases is not a random selection, but an assembly of people who maintain their innocence even though the police and prosecutors think them guilty beyond a reasonable doubt. Those who are patently culpable have generally pleaded guilty, leaving a residue of people whose accounts sound improbable but which, precisely because of that selection, are far more likely to be true.
Consider the archetypal shaken baby syndrome (SBS) case, at least as far as sceptics are concerned: a child who accidentally falls a short distance and sustains a fatal head injury with all the ‘hallmark signs’ of SBS. We know it happens, because Dr John Plunkett has a video, taken by a relative, in which precisely this happened. It may well be that such a result is very unlikely – let us assign, for the sake of argument, a probability of one in 100,000 that a short fall will result in such a death. Very few people would believe the accused when a doctor says we should be 99.999 per cent sure it did not happen that way, for this is far beyond most people’s notion of a reasonable doubt. But in a country like the US, with 319 million people and perhaps a million short-distance falls among children every day, it is simply inevitable that such accidents will happen. It is equally inevitable that if we equate 0.00001 with zero (the zero theory), we are going to disbelieve the parent, and commit a tragic mistake.
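On these illustrative figures – a one-in-100,000 chance that a short fall proves fatal, and a million short falls a day – the expected number of such accidents is easy to compute. A sketch; both inputs are the hypothetical numbers from the argument above, not measured rates:

```python
# Illustrative figures from the text above – assumptions, not measured rates.
P_FATAL_SHORT_FALL = 1 / 100_000   # assumed chance a short fall is fatal
SHORT_FALLS_PER_DAY = 1_000_000    # assumed short falls among US children daily

expected_per_day = SHORT_FALLS_PER_DAY * P_FATAL_SHORT_FALL
print(round(expected_per_day))        # about 10 such accidents a day
print(round(expected_per_day * 365))  # thousands a year
```

Even if the true rates are orders of magnitude lower, the point stands: multiplying a tiny per-event probability by a very large number of events yields a non-trivial number of genuine accidents, so treating 0.00001 as zero guarantees that some innocent parents will be disbelieved.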
Thus, such a group of criminal defendants are going, inexorably, to fall foul of the judicial process. It is vital, then, that we should define ‘beyond a reasonable doubt’ in a vastly more rigorous way than we see with the Cambridge Institute, the Crown Court Bench Book, or even the majority of the Bond Solon experts.
Clive Stafford Smith is the founder and director of Reprieve