21 Mar 2016
(by Chris Hanretty)

Rankings, ratings and reviews are common in life.
They claim to tell us which are the best films, the best albums, even the best universities.
Ratings are particularly useful for credence goods — goods whose quality we poor consumers can’t judge.
Law is a good example of a credence good. I might hire a lawyer to represent me in court. I might even attend the court hearing. But I’d have no way of telling whether the lawyer’s arguments were good or bad. If I knew which arguments were good or bad, I could probably have saved some money and represented myself.
It’s therefore no surprise to see that there are lots of rankings for lawyers in the UK. One company is particularly known for ranking barristers — the kind of lawyers who earn their crust standing up and arguing cases in court.
Does this mean that you should always try and get the best-ranked barrister to represent you? There are many reasons why you might. You might be set on the best-ranked barrister because you think that getting the best will show other people that you mean business. (This is similar to the way owners of football clubs compete for the services of superstar managers — the Ancelottis and Guardiolas of this world). Or you might think that if you plump for the best, no one will be able to criticise you afterwards should you lose.
Most people, I think, would choose the best-ranked lawyer because they think it will help them in their case — that, other things equal, a better-ranked lawyer should be more likely to win a case than a lower-ranked one.
No one really checks to see whether this is the case. Legal practice is not like football. No one keeps a public record of your win/loss ratio, and indeed I’m sure such a thing would be considered unutterably vulgar by the Bar.
There are two good reasons given for this. The first is that in some cases it’s very difficult to talk about winning and losing. In a criminal case where a defendant enters a guilty plea, there is no chance of “winning” the case, but there is a chance of securing a lenient sentence for one’s client. A second reason — which applies to cases that can be divided more readily into “wins” and “losses” — is that “good lawyers sometimes lose cases, bad lawyers sometimes win them”. There are lots of other things that affect the outcome of a case, from the legally relevant (did the facts support the claim?) to the legally irrelevant (was the judge hearing the case ill-disposed towards one of the lawyers?). But we might think that all these things would come out in the wash — that, on average, and across lots of different cases, better-ranked lawyers would more often defeat worse-ranked lawyers.
It turns out that this is not the case — better-ranked lawyers aren’t more likely to win. The table below (taken from my article) shows the results of selected tax appeals in the UK between 1996 and 2010. Tax cases are nice and simple because they’re all appeals from decisions of the Inland Revenue. As such, it’s easy to keep score — either the appeal succeeds, or it fails.
                               Appellant wins     Appellant loses
                                 n    Percent       n    Percent
Appellant counsel better        58     34.9        108     65.1
Appellant counsel not better   123     41.6        173     58.5
As you can see, when the appellant counsel is/are better, in the sense of being ranked higher, the appellants are more likely to lose than when appellant counsel are not better (65.1% of losses compared to 58.5%). This simple table is a sign that if better-ranked lawyers do make a difference, something must be pushing back in the other direction. The obvious suspect is case selection: better-ranked barristers may simply be instructed in harder cases.
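To make the arithmetic concrete, here is a minimal sketch using the counts from the table above. It reproduces the win rates and runs a standard chi-square test of independence; the choice of test is mine, not necessarily the one used in the article.

```python
# Win rates and a chi-square test for the 2x2 table above.
# Counts are from the table; the test choice is illustrative.
from scipy.stats import chi2_contingency

# rows: appellant counsel better / not better
# cols: appellant wins / appellant loses
table = [[58, 108],
         [123, 173]]

for label, (wins, losses) in zip(["better", "not better"], table):
    total = wins + losses
    print(f"counsel {label}: {wins}/{total} wins = {100 * wins / total:.1f}%")

chi2, p, _, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```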
Because of this, I went on to conduct a sensitivity analysis — in essence, making the cases received by the better-ranked barristers one increment harder to win, then two increments harder, then three, and so on. By gradually — and artificially — stipulating that better-ranked barristers receive more difficult cases, we can find the bingo point: the point at which better-ranked barristers start making a significant positive contribution to their side’s chances of winning.
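As a rough illustration of the idea (on simulated data, not the model or dataset used in the article), one can refit a logistic regression while stipulating, through an offset, that cases with better-ranked counsel were harder by ever larger amounts, and watch for the point at which the counsel coefficient turns significantly positive:

```python
# Sketch of the sensitivity-analysis idea on simulated data.
# "Difficulty" is stipulated via an offset on the log-odds scale;
# the increment at which the counsel coefficient becomes
# significantly positive is the "bingo point".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 462  # roughly the number of appeals in the table above
better = rng.integers(0, 2, n)  # 1 = appellant counsel better ranked
# simulate outcomes in which better-ranked counsel appear *less* successful
win = rng.binomial(1, np.where(better == 1, 0.35, 0.42))

X = sm.add_constant(better.astype(float))
for increment in np.arange(0.0, 2.01, 0.1):
    # pretend cases with better counsel were `increment` log-odds harder
    fit = sm.Logit(win, X, offset=-increment * better).fit(disp=0)
    coef, pval = fit.params[1], fit.pvalues[1]
    if coef > 0 and pval < 0.05:
        print(f"bingo point: stipulated difficulty = {increment:.1f} log-odds")
        break
```

In the article the stipulated difficulty is expressed in terms of variance explained rather than log-odds; the sketch is only meant to convey the mechanics of stipulating extra difficulty and re-estimating.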
There’s no obvious unit for the “difficulty” of cases. In the article, I measured the extra difficulty of cases against how much we can explain about legal outcomes using the other things we know about a case (its complexity, how far it got in the court system, and so on). It turns out that the bingo point is reached when the extra difficulty of the cases received by better-ranked counsel explains 3.2% of the variance in the outcome. That might not sound like a lot — but it’s more than we can explain using any other single observable feature of the case.
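Because the outcome is binary, “variance explained” has to be proxied by something like a pseudo-R-squared. As a loose illustration of the yardstick (my own construction, on simulated data, and not necessarily the measure used in the article), here is how one might gauge how much a single observable feature explains about outcomes:

```python
# Illustration of the yardstick: how much of a binary outcome a single
# observable feature explains, measured here with McFadden's pseudo-R^2.
# Simulated data; not the article's dataset or exact measure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 462
complexity = rng.normal(size=n)  # stand-in for an observable case feature
p_win = 1 / (1 + np.exp(-(-0.3 + 0.4 * complexity)))
win = rng.binomial(1, p_win)

fit = sm.Logit(win, sm.add_constant(complexity)).fit(disp=0)
print(f"pseudo-R^2 from 'complexity' alone: {fit.prsquared:.3f}")
```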
Thus, the answer to the question “how much harder must the cases received by better-ranked counsel be if we are to conclude that they do have a positive effect on the outcome?” is “very much harder — at least when set against what we can explain about the likely success of a case”.
This sets up a problem for rankings of lawyers. We can continue to say that lawyer rankings identify better lawyers if we’re happy to say that those better-ranked lawyers tend to get cases that are more difficult. But if better-ranked lawyers systematically get more difficult cases, that implies consumers are already pretty legally sophisticated: they know how difficult their case is, and seek out better barristers on that basis.
How is this related to competition policy? The 2001 OFT report on Competition in Professions identified the requirement that clients instruct barristers only through a solicitor as a restriction on the supply of legal services. Following that report, a public access scheme was launched in 2004, allowing members of the public to instruct (some) barristers directly. These consumers may be less legally sophisticated than the solicitors who previously had the exclusive right to instruct barristers, and so might have most need of rankings of lawyers to help them choose. My findings suggest that these consumers, if they do not know how difficult their case is, should not place weight on lawyer rankings.