Admittedly, I am not a fan of Transparency International’s Corruption Perception Index (CPI) (and similar perceptions-based indices), the limitations of which are discussed at length in our book, A Users’ Guide to Measuring Corruption. I was going to rant about a very public abuse of the CPI last week, then got distracted by actual work… but another blog post referencing the same flimsy comparisons between the CPI and the price of oil has gotten me all into a lather again.
First was a Bloomberg piece last week that quoted a Russian economist arguing that, “Russian corruption may decline this year as the slumping price of oil reduces the amount of ‘easy’ money available to pay bribes.” The story observed how,
…corruption rises and falls with the price of oil. Russia had its best-ever score on Transparency International’s corruption perceptions index in 2004 when the price of Urals crude, Russia’s chief export blend, averaged $34.18 a barrel. The country received its worst score last year since 2001 when Urals surged to a record $142.94 a barrel and averaged $95.10.
Then I ran across this post on the Center for International Private Enterprise’s Development Blog [full disclosure: CIPE is a funder of Global Integrity and a good friend]. In it, CIPE’s Eric Hontz points readers to the same Bloomberg piece, noting how, “The petrodollars flooding the country were hiding a dirty little non-secret – endemic corruption…It is difficult to think of an adjective strong enough to describe the level of corruption in Russia – estimated by one source at 50% of GDP!”
Now for my rant.
In the view of most experts (including me), the CPI is not an appropriate tool for tracking change over time. The various third-party opinion polls that feed into the CPI change from year to year: in 2001, 10 surveys were used to generate Russia’s score on the CPI; in 2004, there were 15. More troubling still, the data sources behind a given year’s CPI score aren’t necessarily gathered in that year. Russia’s “2004” CPI score, for instance, is an amalgam of disparate surveys and polls carried out in 2002, 2003, and 2004, so even the label “2004” is questionable. You don’t need a PhD in statistics to understand why comparing Russia’s 2001 CPI score to its 2004 score is like comparing apples to oranges. To TI’s credit, they have long warned users against interpreting change over time too precisely:
The index primarily provides a snapshot of the views of business people and country analysts for the current or recent years, with less of a focus on year-to-year trends. If comparisons with previous years are made, they should only be based on a country’s score, not its rank…
Year-to-year changes in a country’s score can either result from a changed perception of a country’s performance or from a change in the CPI’s sample and methodology. The only reliable way to compare a country’s score over time is to go back to individual survey sources, each of which can reflect a change in assessment.
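To make the sampling point concrete, here is a toy sketch with entirely made-up numbers (nothing below is real CPI data or TI’s actual aggregation method, which rescales and combines sources in a more involved way). It simply shows that when the set of underlying surveys changes between years, the aggregate score can move even if every original source stays put:

```python
# Toy illustration with hypothetical numbers: a simple average over a
# changing sample of surveys shifts, even though none of the original
# ten sources changed its assessment at all.
from statistics import mean

# Hypothetical survey scores on a 0-10 scale (10 = least corrupt).
surveys_2001 = [2.1, 2.4, 2.0, 2.6, 2.3, 2.2, 2.5, 2.1, 2.4, 2.2]  # 10 sources
# "2004" keeps all ten sources unchanged but adds five new ones that
# happen to rate the country slightly higher.
surveys_2004 = surveys_2001 + [2.9, 3.0, 2.8, 3.1, 2.7]            # 15 sources

score_2001 = round(mean(surveys_2001), 1)  # 2.3
score_2004 = round(mean(surveys_2004), 1)  # 2.5
print(score_2001, score_2004)  # the "improvement" is pure sample composition
```

The apparent two-tenths “improvement” reflects nothing but a change in which surveys were averaged, which is exactly why TI tells users to go back to the individual sources before claiming a trend.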
If the people publishing the data tell you upfront that they have reservations about year-to-year comparisons, maybe we should pay attention (call me crazy). Unfortunately, these nuances are lost on most reporters and even on savvier experts.
— Nathaniel Heller