If a country's score on a Washington analyst's spreadsheet nudges from -0.79 to -0.75, should anyone care? Maybe not. In fact, here's a case where the appearance of insight provided by functionally hollow numbers is crowding out space that could be used to discuss actual governance issues.
Call it governance theater. It’s as staged as a kabuki dance, and even more predictable: Player one is the respected international institution. Player two is the time-pressed journalist, who wants to write about corruption, but preferably without having to risk her neck or make a lot of phone calls.
Player one, the researchers, use methods so dense and seemingly mysterious that it's hard to find a journalist who can even vaguely describe the source data or the techniques used to generate the results. Player two is just happy it all sounds official, and really doesn't want to know much more about the methodology than that.
The real example here is the brand-new 2008 Worldwide Governance Indicators (WGI), which we discussed here. The WGI are the most widely used source of governance data. In the role of player two in this production is GMA News, a Filipino news outlet.
In a story published yesterday, a GMA reporter dissects the WGI results for the Philippines at length — some 600 words. The inevitable comparisons are there: wow, the Philippines scored the same as [insert really poor country]. Like this:
This year’s ranking was a marginal improvement from negative 0.79 in 2007. The same score was received by the Union of the Comoros, which is located off the eastern coast of Africa on the northern end of the Mozambique Channel. About half of its 798,000 population live on less than $1.25 a day.
The Philippines’ neighbors fared better in their anti-corruption measures with Thailand, -0.38; Malaysia, 0.14; Indonesia, -0.64, and Singapore, 2.34. Only Vietnam got a worse score than the Philippines at -0.76.
Notice anything missing? How about anti-corruption policy? Any anti-corruption policy? Or a politician involved in corruption? Or an anti-corruption reform platform? Or political movements working on the issue? Or the sense at any point that the Philippines' scores were based on something specific — a set of observations that may or may not inform decisions about anti-corruption policy.
The WGI press release admittedly offers little in the way of specifics to our intrepid and time-pressed reporter:
“When governance is improved by one standard deviation, infant mortality declines by two-thirds and incomes rise about three-fold in the long run.”
Not exactly in-the-weeds stuff for Filipino readers. There’s no sense in the story of what, if anything, would improve those scores in the Philippines, or even why they are the way they are.
There’s got to be a better way…
I’m reluctant to put the blame for this mess on the WGI authors: we get some crazy stuff written about our work too, and there’s nothing we can do about it.
But the data itself, presented as science rather than the imperfect art it is, contributes to this. We need more than numbers, or we're going to keep getting these vapid, meaningless stories. Sure, that -0.75 country score comes from somewhere, and all the supporting documentation and spreadsheets are on file if you dig deep enough. But in practice, it's so abstract that few people outside of Western aid agencies and think tanks know where it comes from, much less what it implies for specific policy choices.
This makes me crazy, because it's a lost opportunity to actually educate people about their options and empower them to plot a way forward.
Global Integrity has tried to lay out a different way — a way that links policy choices and ongoing debates directly to the information being produced. For the theory of this, you can read A Users’ Guide to Measuring Corruption. The issues of actionability and abstraction are the core themes of the book.
In practice, we've got our Global Integrity Report and our new Local Integrity Initiatives. They aren't perfect by any means, but we're trying: every source datapoint is right there on the website, with narrative, references, and dissenting opinions. And we publish journalistic, no-numbers qualitative reporting alongside all of that data on the very same issues. The work itself is a shopping list of potential points of intervention. And sometimes, we get really great insights from reporters using that data.
But we still put out an index, with a top to bottom ranking, and it drives an awful lot of press for us. It’s the same with the Corruption Perceptions Index for Transparency International: it’s a devastatingly effective PR tool (note the index-release traffic spikes). But I’m starting to think I just can’t do it anymore. I can’t keep pretending these country rankings are worth talking about.
— Jonathan Eyler-Werve