Recently I sat down to read an incisive article by AnnJanette Rosga and Margaret Satterthwaite critiquing the “turn to indicators” in the human rights field. Working for an organization whose raison d’être is anti-corruption/good governance indicators, I had an obvious professional interest in their research and its implications for Global Integrity’s assessment projects and my own independent work on technologies and epistemologies of governance. My reflections here do not rehearse or otherwise directly engage with their main lines of argument in depth. For interested readers, Rosga and Satterthwaite’s analysis eloquently speaks for itself. My aim, rather, is to take one of the insights in their concluding remarks as a starting point for revisiting debates over “participation” that preoccupied several social scientists and development practitioners a decade ago.
Participation was initially conceived as a bottom-up, citizen-driven approach to development projects, a response to more traditional top-down, technocratic planning that left most local people out of the decision-making process. But despite robust critiques of participation that eventually surfaced, these critical interventions failed to make a significant dent in mainstream development practices, as Bill Cooke and Uma Kothari (arguably the most polemical critics of participatory development) suggest (2001: 3). For the most part, this continues to be the case today. More recently, participation has spread from development to governance (Hickey and Mohan 2004). (Publicity for a recent World Bank-sponsored talk by the editor of From Political Won’t to Political Will breathlessly stated that “good governance is participatory governance.”) All this makes critical public debates among academics and practitioners all the more important. As Global Integrity increasingly gravitates to local “ownership” and “participation” in the design and deployment of anti-corruption and good governance indicators, then, in what ways can these debates reflexively inform our own work?
The Trust in Indicators
Rosga and Satterthwaite’s “The Trust in Indicators” carefully examines why and how indicators have become important tools to measure human rights progress, a trend attributable in part to the rise of new audit and standardization practices across diverse global governance regimes, including human rights. Their analytic focus is the use of indicators by the U.N. Office of the High Commissioner for Human Rights and U.N. treaty bodies to monitor states’ commitment to and compliance with international human rights treaties (and, more recently, to monitor states’ own monitoring activities). Linking their discussion to a wide and productive body of scholarship on governmentality, science and technology studies, audit cultures, and standardization, the authors argue that while “indicators may provide advocates with new opportunities to use the language of science and objectivity as a powerful tool to hold governments to account […they] threaten to close space for democratic accountability and purport to turn an exercise of judgment into one of technical measurement.” Although they suggest that problems in assessing how to prioritize reforms and locate accountability are structural ones related to international human rights law rather than failures inherent in indicators themselves, the quantitative nature of most indicators serves to mask these problems as technical issues (2009: 258).
The most relevant passages in their article, for our purposes, can be found in their concluding remarks. While acknowledging the limits of technocratically oriented indicators that depoliticize what is essentially a political process, the authors also suggest that (human rights) indicators can be repurposed to become much more participatory, democratic, and open to political negotiation. As mobilized by states and international organizations, indicators represent a clear example of a “technology of global governance”: measurement tools based on expert knowledge that target, classify, and rationalize citizens and populations (ibid.: 304-305). In the hands of human rights advocates, however, indicators are primarily “aimed at changing the conduct of governments toward those same populations […Hence] the power of quantitative indicators, when harnessed by human rights advocates, may be fruitfully turned on the State by those the State has harmed.” This opens up a space for indicators to be “designed to allow for the monitoring of governmental processes to ensure they are participatory and open to deliberation and debates” (ibid.: 310-311, emphasis in original). In the end, they propose designing indicator questions that query the degree to which citizens can participate in the institutional and policy design of human rights priorities/action plans. But just as important, they contend that participation should also “extend to the process of designing and implementing indicators themselves” (ibid.: 313-314).
There is no doubt that the incorporation of local knowledge into the construction of indicators is a salutary step toward investing the process with greater representation and legitimacy. But I would hazard a guess that, had it been the analytic focus of their essay, Rosga and Satterthwaite would acknowledge that participation is entangled with wider structures of power that necessarily raise questions about its limits. The remainder of this blog post, then, takes up and critically reflects on Rosga and Satterthwaite’s recommendation for the democratic co-design of indicators.
Participation: The New Tyranny?
The challenge to participatory development as a means of giving an agentive “voice” to native “beneficiaries,” while subject to critique for several years, reached a watershed with the publication of essays assembled in Participation: The New Tyranny? (Cooke and Kothari 2001). This edited volume provides a heterodox reading of normative participatory practices in development agencies and NGOs by addressing how they can mute extant local decision-making processes, become hijacked by powerful elites, and discount alternative participatory methods. One of the most pointed interventions, by David Mosse, questions whether participation discourses and practices actually defer to local peoples’ knowledge. Drawing on his work as a long-term consultant for a donor-funded project to assist rural farmers and their families in India, Mosse argues that “local knowledge” and “people’s planning” obscure the processes by which these categories are produced by outsiders (i.e., donor project priorities and indigenously powerful groups). He illustrates this by showing how villagers’ “needs” – represented as “rural people’s knowledge” – were shaped by the villagers’ perceptions of what they thought the agency could realistically deliver in the short term, even though these perceived needs may not have reflected their long-term needs for survival and livelihood. In other words, these villagers aligned their interests with those of external projects in the construction of problems and needs to serve their own ends. Another chapter, by Giles Mohan, deploys insights from post-colonial criticism to suggest that participation tends to reinscribe rather than overcome top-down hierarchical relationships. Participatory methods, he argues, often idealize local communities as harmonious, undifferentiated wholes that are ruled by consensus.
This not only homogenizes local (and global/Western) groups, but also glosses over the role of the state and larger structural forces in the production of power and inequality. Mohan then suggests an alternative hybrid model of participatory research that tries to move beyond these limitations, using the example of one NGO in West Africa, Village AiD (VA), to support his alternative analytic approach. Among other things, VA has sought innovative methods that ensure villagers, not outside agencies, define the terms of participatory agendas.
Tyranny’s intervention elicited diverse responses from a group of academics and practitioners seeking to defend participatory development against some of its most visible critics with the publication of Participation—From Tyranny to Transformation? (Hickey and Mohan 2004). While contributors to this later volume agree that proponents of participation have often been naïve about how power operates, they nevertheless argue that critics have not sufficiently recognized the ways that participatory practices have transformed over time (partly in response to the broader critiques), including linking participation to more populist and active notions of citizenship. For reasons of space, I will not elaborate further on this volume except to recommend that interested readers consult this useful supplement to Cooke and Kothari’s provocation. I juxtapose both volumes because the passionate debates demonstrate not so much the truth-value of one argument over the other as the fact that how one interprets participation depends on one’s position and reference point (Lewis and Mosse 2006: 8). This, in turn, suggests that all relevant actors involved in “participation” – including academics, practitioners, donors, and policy makers – need to become more reflective about “their own positionality and the fields of power within which their knowledge production becomes (or fails to become) authoritative” (ibid.), a point that I will return to later.
Research by other academics has problematized participation as it has expanded from development into democratic governance. For instance, Jessica Greenberg (2010) mines materials from her long-term ethnographic fieldwork in post-conflict Serbia to rethink key normative concepts in the democratization literature. Specifically, she contends that non-participation and apathy are not necessarily synonymous with democratic failure or deficits. Instead, non-participation in politics may signal an alternative political stance and mode of engagement. In the case of Serbia, where many citizens elected to sit out formal politics after the war, disinterest in participatory democracy was often a direct response to the perceived moralism, elitism, and self-serving interests of Western/international powers (the main proponents of “democratic participation”), as well as to the failures of democracy itself in a country where people continue to be hyper-aware of being constantly scrutinized by the international community. Greenberg urges us to be more sensitive to the social, cultural, and political economic contexts of participation and non-participation in democratization efforts. In doing so, she opens up a space for “scholars and policy-makers […] to find more meaningful ways to open up democratic possibilities than circulating and recirculating moralizing narratives of politics and progress, which may alienate more than inspire” (ibid.: 64). Her research thus suggests ways in which participation may, under certain circumstances, be a coercive, judgmental, and externally imposed governance practice rather than a genuinely democratic one.
The Paradox of Participation
In her research on a group of health care NGO activists in post-dictatorship Chile, the anthropologist Julia Paley (2001) writes about the “paradox of participation” she witnessed in the 1990s as the country transitioned to democracy. While participatory governance seemed to open up possibilities, Paley notes, it also foreclosed others. These civic activists were solicited by the government to help prevent the spread of cholera through a Ministry of Health campaign encouraging citizens to assume greater responsibility and self-governance by adopting hygienic practices in order to protect themselves, their families, and the nation. Another participatory initiative promoted by the government was the improvement of health care service delivery. Grassroots community groups were exhorted, for example, to pick up litter from garbage-strewn fields to help reduce government labor costs. The attempts by the state to stimulate greater civic involvement, however, emerged from a context in which Chile was transitioning to a neoliberal market democracy. With structural adjustment and the attendant devolution of centralized state power, civil society organizations were increasingly attractive to international donors because they were seen as filling the service delivery gap traditionally covered by the welfare state. Participatory forms of governance by civil society were thus widely promoted by international bodies. In turn, the language of local empowerment and civic involvement was taken up by the Chilean government, which, by making citizens invest in the system, sought to co-opt or placate civic groups that might protest the weakening of services normally provided by the state and raise questions about government accountability (ibid.: 143-147). This did not necessarily mean citizens were quiescent.
Several grassroots groups did resist the community partnership framework assigned to them by the state and offered their own interpretations of more meaningful participation, though with mixed results. In sum, Paley’s insights shed light on the multiple meanings of participation, which, as she convincingly demonstrates, different social groups and individuals can strategically use and appropriate for their own purposes. In other words, the important question to consider is how participation is exercised as a form of power, rather than whether citizens should or should not participate (ibid.: 181).
Although my treatment has been schematic, I have dwelled on some of the major lines of interrogation of participation in development and governance at some length to underscore the complexity of these issues. In doing so, I am in no way suggesting that we do away with participatory approaches (an option Cooke and Kothari have put on the table [2001: 15]). On the contrary, I am sympathetic to finding ways of democratizing the development/governance process through bottom-up, people-driven approaches, including the generation of indicators that have become so entrenched in the governance landscape. My goal here has simply been to raise several important (ongoing) critiques of unexamined participation in a public forum in order to further our understanding.
If these interventions carry the intellectual and critical heft that I think they do, how can they inform the work of Global Integrity (and other organizations) that produce indicators with the substantive input of citizens and civil society? Rather than formulate a response through broad recommendations or pronouncements about “best practices,” I have chosen in the closing paragraphs to provide a brief description of my organization’s venture into newer territory involving sector-level and sub-national governance indicators, so as to initiate a more reflexive and critical self-understanding of our work. (Reflecting on our own conditions of work does not necessarily have to slip into solipsism and narcissism, though this risk is worth bearing in mind.)
The Local Integrity Initiative
In 2007, Global Integrity inaugurated a new Local Integrity Initiative to assess the strengths and weaknesses of sub-national anti-corruption and good governance mechanisms in a variety of country contexts. Our pilot study focused on post-conflict Liberia, followed by assessments of sub-national governance units in three Latin American countries (Argentina, Ecuador, and Peru). Future work is planned in Guatemala and in countries in Southeast Asia and the Pacific. In addition, we are turning our attention to sector-level assessments, focusing on traditional sectors such as health and education as well as non-traditional ones like telephony.
Why go local? Here’s our answer to the question we ourselves posed, which I quote rather extensively from the website: “[S]takeholders around the world were nearly unanimous in telling us that existing tools weren’t giving them what they needed to inform serious, evidence-based policy choices. International rankings may be great for driving headlines, they told us, but the hard work of creating sustainable anti-corruption reforms requires something more specific, more relevant, more local. Along the way, we found that some of the most important work being done was happening far below the international media radar—small, local assessments that were creating real change despite being ad hoc and poorly funded. Reformers worldwide told us they wanted and needed those local assessments just as much as (or even more than) the well-known international indices…[T]he Local Integrity Initiative is our attempt at an answer. The work in progress requires Global Integrity to challenge many of our assumptions adopted during the last few years—forget international comparisons, because the challenges are too diverse. Forget national governments, because key issues—elections, access to information, administration—are often played out at the local level. Instead, partner even more closely with local stakeholders—advocates, journalists, governments—and let them set a unique direction for each project. Sub-national assessments aren’t easy…But Global Integrity is built for this kind of challenge: more than 95 percent of our staff is already working in-country, collaborating online. Our global network of anti-corruption experts and journalists is perfectly suited to designing new country-specific indicators, regional projects, and sector assessments, all rooted in the real problems being debated on the ground. 
And while each Local Integrity project is different, they are all built on Global Integrity’s tried-and-true formula: work with the best local experts you can find; help them develop actionable indicators for assessing the strengths and weaknesses of anti-corruption systems; and give them a technology platform that makes gathering the information as efficient as possible” (my emphasis). (You can read more on the Local Integrity website)
In the Liberia and Latin America projects, we collaborated with local civil society organizations (CSOs) in crafting de jure “in law” and de facto “in practice” indicators that were most germane to the context of each country. These local partner organizations were vetted for their expertise and independence through an extensive process of formal and informal consultation, both internally and with outside individuals and groups. Categories and indicators were then constructed, revised, and refined through an iterative process between the Global Integrity headquarters staff and the local partner over several months. For example, our partner at the Center for Transparency and Accountability in Liberia (CENTAL) thought it was important to include questions about customary justice systems to complement those that focused on formal justice institutions. The data – both quantitative and qualitative, including first-hand interviews and references to relevant literature (academic, media, policy reports) – were collected by local research teams and then submitted via our web-based technology platform. From there, the submitted information went through a minimum of two reviews by headquarters staff, who checked for inconsistencies, incomplete sourcing, and other quality control issues. Another very important step was the peer review process by outside experts, who verified the accuracy of the data. After the completion of this process, the data were published online for public consumption. The structure of this process will be replicated, more or less, in future local integrity projects.
This is a bare-bones description of the basic organizational structure and some of the processes involved in the Local Integrity Initiative. A deeper self-analysis, or auto-ethnography, of projects in this area may be worth undertaking in the near future. But some preliminary questions that might help pull apart a number of issues central to participation can already be raised on the basis of this preliminary sketch: What semantic categories of (sub-national) governance are accepted, resisted, vernacularized, and recontextualized, how, and why? How are indigenous “needs” and “contexts” identified and negotiated between local partners, Global Integrity, and donor agencies/funders? Are Global Integrity headquarters personnel neutral mediators, or do we play a role beyond coordinating field staff and data, providing logistical support, and conducting quality control? Might we also be cultural “brokers and translators” (Lewis and Mosse 2006), and, if so, how? In what ways do answers to these questions (among others) shape these indicators – their design and implementation – that are envisioned as “bottom-up” and “locally-driven”? For now, we hope that readers will feel inclined to (dare I say?) participate in this critical conversation by submitting additional questions and comments below.
— Raymond June
— Photo credit: Appreciative Consulting Services
Cooke, Bill and Uma Kothari, eds. 2001. Participation: The New Tyranny? (London: Zed Books).
Greenberg, Jessica. 2010. “’There’s Nothing Anyone Can Do About It’: Participation, Apathy, and ‘Successful’ Democratic Transition in Postsocialist Serbia,” Slavic Review, vol. 69, no. 1: 41-64.
Hickey, Samuel and Giles Mohan, eds. 2004. Participation—From Tyranny to Transformation? Exploring New Approaches to Participation in Development (London: Zed Books).
Lewis, David and David Mosse, eds. 2006. Development Brokers and Translators: The Ethnography of Aid and Agencies (Bloomfield: Kumarian Press).
Paley, Julia. 2001. Marketing Democracy: Power and Social Movements in Post-Dictatorship Chile (Berkeley: University of California Press).
Rosga, AnnJanette and Margaret L. Satterthwaite. 2009. “The Trust in Indicators: Measuring Human Rights,” Berkeley Journal of International Law, vol. 27, no. 2: 253-315.