The Marketplace of Ideas: From “external assessments” to country-level learning

By Alan Hudson — May 4, 2015.


Many organizations invest a great deal of time and effort in conducting external assessments of developing countries: the policies they have in place and the progress being made in various sectors. For many organizations, such external assessments are central to what they do and, even more so, to what they are known for. Think Transparency International and their Corruption Perceptions Index. The International Budget Partnership and their Open Budget Index. Freedom House and their Freedom in the World Index. UNDP's Human Development Index. Oh, and Global Integrity and our Global Integrity Report and Global Integrity Index (halted in 2011).

So, I really welcome the recent report by AidData on "the marketplace of ideas", which examines in great detail the question of which external assessments leaders in developing countries pay attention to, and why. Exploring how such assessments are used ought to help those who invest in external assessments to design them in ways that leaders in developing countries will pay attention to. This idea of linking up those who use the data with those who produce it is one of the ideas behind the Governance Data Alliance, an initiative which Nathaniel Heller kicked off when he was at the helm of Global Integrity. So far, so good. Kind of.

But what struck me when reading the AidData report – OK, the executive summary – was the largely unexamined assumption that leaders in developing countries should pay attention to external assessments. This is something that I know is on AidData’s radar for future reports. And to be fair, there are hints of what I’m talking about in the “intended and unintended effects of external assessments” section of their report. But I’d like to see more unpacking of the assumption that external assessments should be listened to by leaders in developing countries. Which external assessments, of what, by whom, for what purpose and when?

  • Should such assessments be listened to on all issues? Just on technical issues where asking “what works” and identifying and promoting best-practice solutions makes sense? Or on complex, political and context-dependent issues such as governance and corruption too, where check-list assessments of the institutions that developing countries have in place may be of limited value?
  • Should external assessments be listened to more than the views of citizens in developing countries? Or would internal assessment and feedback do more to enhance the performance and legitimacy of leaders and their governments? And is there a risk that external assessments might crowd out the space for internal assessments and learning?
  • What is it about an assessment that defines it as “external”, particularly from the perspective of leaders in developing countries? Is it the source of the funding, the location or nationality of the researchers, or the place where the organization conducting the assessment is registered? If data is collected by local researchers – as is the case with Global Integrity’s assessments and those of many other organizations – are the assessments still “external”?
  • And how should and do leaders deal with external assessments which proffer conflicting policy advice? (Hint: As AidData acknowledge, leaders in developing countries, as elsewhere, might make “strategic” use of various external assessments, listening to what makes sense for their contexts. Or, more cynically, citing the advice that supports what they wanted to do anyway).

It might seem odd for the Executive Director of Global Integrity to question the value of external assessments. Turkeys voting for Christmas perhaps. But questioning the way in which things are done – innovating and iterating – is the reason why Global Integrity has for 10 years been at the forefront of efforts to measure and assess aspects of governance in meaningful ways. And such questioning remains at the heart of our thinking (see, for instance, my posts on the “Good Governance” mantra and on “measuring governance: what’s the point?”).

Our new strategy will see us continue to conduct cross-country comparative research to generate data and stories about how various issues are playing out in different contexts. This is our core competence and something to build on. It's also something that the AidData report suggests can be useful (see the final paragraph of p.8 of the executive summary). But our approach is evolving: from conducting external assessments of whether countries have in place the things that our assessment frameworks hold to be important, to exploring how issues are playing out in different contexts, with the data and stories generated by cross-country research used to support and facilitate country-level learning about policy options. This, for instance, is the thinking behind our research on how the Open Government Partnership is playing out in five different countries.

There is no doubt that assessments, external and otherwise, can play a valuable role, generating data, stories and insights that can be shared across contexts. Assessments, done right, can be an entry-point for learning and reflection. Indeed, the best assessments might review whether suitable mechanisms are in place to enable learning, reflection and adaptive development; the World Resources Institute’s Environmental Democracy Index is worth a look here. But there is a risk that external assessments can crowd out the space for local contextual learning that takes account of the political dynamics, constraints and opportunities.

We look forward to playing our part in ensuring that all assessments – including our own – are designed with a focus on generating data, insights and stories that support country-level learning. And maybe we’ll join with AidData and others – with partners in developing countries center-stage – to learn about how complex context-dependent political issues can be measured in ways that support country-level learning.


Thanks to Owen Barder (Center for Global Development), Samantha Custer (AidData), Nathaniel Heller (Results for Development) and Global Integrity colleagues for comments on a draft of this blogpost.

Photo credit: Image courtesy of kansas.com – licensed for non-commercial re-use
