By Nathaniel Heller — October 16, 2014.
With the recent launch of the UN Secretary General’s Independent Expert Advisory Group on the Data Revolution for Sustainable Development, development and data gurus have begun focusing in earnest on how precisely the increasing availability of data in the coming decades could or could not accelerate development efforts. Unsurprisingly, few oppose a data revolution. Most observers see it as an automatic public good that, regardless of its ultimate efficacy, can’t hurt in our efforts to aid the poor. It’s worth reading a new post from Development Initiatives calling for a “Data Manifesto,” as well as the Open Government Partnership’s call for tighter coordination between that initiative and the Data Revolution.
But there are worrying traps — swirling around privacy, freedom, and individual liberty — in any potential Data Revolution that need careful parsing. Many of those dangers center around whether a Data Revolution is framed as an agenda controlled by individuals, or instead one in which development agencies, Big Data firms, and internet companies invoke a benign dictatorship of data scientists to tell us what’s best for ourselves. Put another way: we need to avoid a Data Revolution that treats individuals (and their data) as raw materials with no ability to opt out.
The Worrying Big Brother Nudge
As I’ve thought more in the past several months about the promises and pitfalls of the Data Revolution, I’ve continually come back to freedom of choice as a nagging source of angst. Would the Data Revolution be forced down the throats of the poor even if they weren’t asking for it? Do development practitioners, who are often non-technical, know the right questions to ask or how seemingly innocuous assumptions about data production and data use might lead to perverse outcomes?
I was struggling to articulate those concerns until I read Jeremy Waldron’s critique of Cass Sunstein’s new books on how governmental and regulatory “nudging” can produce the public policy outcomes we seek. In “It’s All for Your Own Good,” Waldron writes powerfully about the risks of assuming that experts (powered by data!) can make the right choices on behalf of the rest of us.
“There are, first of all, people, ordinary individuals with their heuristics, their intuitions, and their rules of thumb, with their laziness, their impulses, and their myopia. They have choices to make for themselves and their loved ones, and they make some of them well and many of them badly.
Then there are those whom Sunstein refers to as “we.” We know this, we know that, and we know better about the way ordinary people make their choices. We are the law professors and the behavioral economists [and data scientists!] who (a) understand human choosing and its foibles much better than members of the first group and (b) are in a position to design and manipulate the architecture of the choices that face ordinary folk. In other words, the members of this second group are endowed with a happy combination of power and expertise.
Of course regulators are people too. And like the rest of us, they are fallible. In the original Nudge, Sunstein engagingly confessed to many of the decisional foibles that Thaler exposed. Worse, though, is the fact that regulators are apt to make mistakes in their regulatory behavior: “For every bias identified for individuals, there is an accompanying bias in the public sphere.” Sometimes governments blunder because they feel compelled to defer to the irrationalities of ordinary people. But we all know they are perfectly capable of screwing things up on their own, whether it’s the invasion of Iraq or the rollout of Obamacare.”
Waldron goes on to remind us that issues of dignity loom large as well.
The upshot for Data Revolution boosters: are we interested in a Data Revolution that we engineer for the benefit of others, or are we willing to allow them to have a say in whether and how the Data Revolution comes to pass?
Hands Off My Data
A related set of concerns loops back to issues of privacy and data ownership. Put simply, a Data Revolution works best at scale, with as much data as possible being vacuumed into the maws of the Data Revolution machine. This immediately raises challenges around participation and ownership. What if I (or a poor farmer) don’t want my personal data to be harvested on behalf of the public good? Would we force the poor farmer and hundreds like her to divulge their Facebook activity, however anonymously, to a public data repository in order to tease out trends in fertilizer usage that might allow for improved optimization of agricultural safety net programs? The intended outcomes are noble, but the means raise tricky concerns. The coming Internet of Things, in which low-cost, internet-connected sensors abound in our homes, cars, refrigerators, and thermostats, raises an even more worrying specter. As a thought experiment, would we be comfortable with aid donors paying a company like d.light to attach low-cost Arduino sensors to their remarkable personal solar lamps in order to analyze the sleeping habits of low-income communities in rural India?
The Data Revolution as a Threat to Democracy
A final reason to pause and think carefully about the Data Revolution: can it undermine social capital and the democratic experiment itself? Leading techno-utopianism critic Evgeny Morozov wrote eloquently on this recently in the MIT Technology Review:
“Yes, the commercial interests of technology companies and the policy interests of government agencies have converged: both are interested in the collection and rapid analysis of user data. Google and Facebook are compelled to collect ever more data to boost the effectiveness of the ads they sell. Government agencies need the same data—they can collect it either on their own or in coöperation with technology companies—to pursue their own programs.
Many of those programs deal with national security. But such data can be used in many other ways that also undermine privacy. The Italian government, for example, is using a tool called the redditometro, or income meter, which analyzes receipts and spending patterns to flag people who spend more than they claim in income as potential tax cheaters. Once mobile payments replace a large percentage of cash transactions—with Google and Facebook as intermediaries—the data collected by these companies will be indispensable to tax collectors. Likewise, legal academics are busy exploring how data mining can be used to craft contracts or wills tailored to the personalities, characteristics, and past behavior of individual citizens, boosting efficiency and reducing malpractice.
Even if we tie the hands of the NSA—by some combination of better oversight, stricter rules on data access, or stronger and friendlier encryption technologies—the data hunger of other state institutions would remain. They will justify it. On issues like obesity or climate change—where the policy makers are quick to add that we are facing a ticking-bomb scenario—they will say a little deficit of democracy can go a long way.
Here’s what that deficit would look like: the new digital infrastructure, thriving as it does on real-time data contributed by citizens, allows the technocrats to take politics, with all its noise, friction, and discontent, out of the political process. It replaces the messy stuff of coalition-building, bargaining, and deliberation with the cleanliness and efficiency of data-powered administration.
This phenomenon has a meme-friendly name: “algorithmic regulation,” as Silicon Valley publisher Tim O’Reilly calls it. In essence, information-rich democracies have reached a point where they want to try to solve public problems without having to explain or justify themselves to citizens. Instead, they can simply appeal to our own self-interest—and they know enough about us to engineer a perfect, highly personalized, irresistible nudge.”
So before jumping on the Data Revolution bandwagon, consider: are you OK with your data being the Soylent Green of the coming Revolution?
Photo Credit: Tim (Flickr – Creative Commons: Attribution-ShareAlike 2.0 Generic)