Stephen J. Choi, G. Mitu Gulati and Eric A. Posner (New York University - School of Law, Duke University - School of Law and University of Chicago - Law School) have posted Which States Have the Best (and Worst) High Courts? on SSRN. Here is the abstract:
This paper ranks the high courts of the fifty states, based on their performance during the years 1998-2000, along three dimensions: opinion quality (or influence as measured by out-of-state citations), independence (or non-partisanship), and productivity (opinions written). We also discuss ways of aggregating these measures. California and Delaware had the most influential courts; Georgia and Mississippi had the most productive courts; and Rhode Island and New York had the most independent courts. If equal weight is given to each measure, then the top five states were: California, Arkansas, North Dakota, Montana, and Ohio. We compare our approach and results with those of other scholars and the U.S. Chamber of Commerce, whose influential rankings are based on surveys of lawyers at big corporations.
A bit more from the paper:
In earlier work, we showed that the relationship between institutional design and judicial quality is complex, and does not lend much support to the conventional wisdom. Appointed judges write more frequently-cited opinions than elected judges do, but elected judges are more productive, while there seems to be no difference between their levels of independence.18 Judicial pay has little effect on judicial quality, except among elected judges, who are more productive when paid more.19 It is not the purpose of this Article to review or reproduce these findings. Our goal instead is to generate a ranking of the state high courts on the basis of the data that we collected for the earlier studies. We will argue that our ranking overcomes many of the defects of the U.S. Chamber of Commerce study, as well as those of earlier academic work.
Rankings make people uneasy. They seem to trivialize activities that are of public importance, and they may stimulate the ranked agents or institutions to engage in destructive competition or demoralize those that have no ability to escape from the bottom. The most serious objection to rankings is that they unavoidably rely on measures that neglect hard-to-observe but important aspects of performance. If nonetheless those who achieve a high ranking are rewarded with resources or public esteem, institutions will distort their missions so as to do well on whatever measures are used.20
We address this objection by making our rankings as transparent and flexible as possible. Readers might disagree about how to weight the different measures that we use, and we show how such disagreements may lead to different rankings of the state courts. The alternative to rankings is, as a practical matter, virtually no information, and public institutions that are not carefully monitored and evaluated will rarely have strong incentives to perform well. Rankings, however imperfect, serve an important information-forcing function.21 Institutions that do poorly on rankings should have the burden of coming forth with an explanation for their performance; but if the explanation is plausible, then the ranking should be discounted. Better still, if the stakes are high enough – and the amounts of money spent by institutions like the U.S. Chamber of Commerce on commissioning rankings suggest there are22 – competitor rankings should emerge that improve upon the prior ones.
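The authors' point about transparency and flexible weighting can be made concrete with a small sketch. The scores and state names below are invented for illustration only (they are not the paper's data), but the mechanics show why readers who weight the three measures differently can arrive at different rankings:

```python
# Hypothetical sketch of weighted rank aggregation across the paper's three
# measures (influence, independence, productivity). All scores are invented
# illustrative values on a 0-1 scale, NOT the authors' data.

def composite_scores(scores, weights):
    """Weighted average of the per-measure scores for each court."""
    total = sum(weights.values())
    return {
        state: sum(weights[m] * s[m] for m in weights) / total
        for state, s in scores.items()
    }

def ranking(scores, weights):
    """Courts ordered best-first by composite score."""
    comp = composite_scores(scores, weights)
    return sorted(comp, key=comp.get, reverse=True)

# Invented example: three courts with different strengths.
scores = {
    "State A": {"influence": 0.9, "independence": 0.4, "productivity": 0.6},
    "State B": {"influence": 0.5, "independence": 0.9, "productivity": 0.6},
    "State C": {"influence": 0.4, "independence": 0.5, "productivity": 0.9},
}

equal = {"influence": 1, "independence": 1, "productivity": 1}
influence_heavy = {"influence": 3, "independence": 1, "productivity": 1}

print(ranking(scores, equal))            # ['State B', 'State A', 'State C']
print(ranking(scores, influence_heavy))  # ['State A', 'State B', 'State C']
```

With equal weights the well-rounded State B leads, but a reader who cares mostly about out-of-state citations (influence) would rank State A first. Making the weights explicit is what lets the disagreement surface, which is exactly the transparency the authors argue for.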
A must-read. And even if these rankings are imperfect, they are certainly a huge improvement over the status quo. Bravo!