There is growing interest in ‘embedded experiments’, conducted by researchers and policymakers as a team. Aside from their potential scale, the main attraction of these experiments is that they seem to facilitate speedy translation of research into policy. Discussing a case study from Bihar, Jean Drèze argues that this approach carries a danger of distorting both policy and research.
Evidence-based policy is the rage, to the extent that even village folk in Jharkhand (where I live) sometimes hold forth about the importance of ‘ebhidens’, as they call it. No one, of course, would deny the value of bringing evidence to bear on public policy, as long as evidence is understood in a broad sense and does not become the sole arbiter of decision-making. However, evidence-based policy sometimes gets reduced to an odd method that consists of using randomised controlled trials (RCTs) to find out ‘what works’, and then ‘scaling up’ whatever works. That gives short shrift to the long bridge that separates evidence from policy. Sound policy requires not only evidence – broadly understood – but also a good understanding of the issues, considered value judgements, and inclusive deliberation (Drèze 2018a, 2020a).
Enormous energy has been spent on the quest for rigorous evidence, much less on the integrity of the process that leads from evidence to policy. As illustrated in an earlier contribution to Ideas for India (Drèze et al. 2020), it is not uncommon for the scientific findings of an RCT to be embellished in the process. This follow-up post presents another case study that may help to convey the problem. It also illustrates a related danger – casual jumps from evidence to policy advice. The risk of a short-circuit is particularly serious in ‘embedded experiments’, where the research team works ‘from within’ a partner government in direct collaboration with policymakers.
The case study pertains to an experiment conducted in Bihar in 2012-2013 and reported in Banerjee, Duflo, Imbert, Mathew and Pande (2020). This is a large-scale, influential experiment by some of the leading lights of the RCT movement – indeed, a formidable quartet of first-rate economists reinforced by one of India’s brightest civil servants, Santhosh Mathew. The high technical standards of the study are not in doubt, nor is the integrity of the authors. And yet, I would argue that something is amiss in their accounts of the findings and policy implications of this study.
Working as a product designer in media for the past five years, I’ve seen the topic of “design ethics” raised at industry conferences, presentations, and meetups. Yet I’ve noticed that in our discussions, designers rarely mention the economic context within which we design. We hold up examples like news feeds promoting fake news and financial apps encouraging users to trade the riskiest stocks, and we ask: how might we design better? Conventional discourse presents these unintended consequences of our work as technical problems: how might we design and code ethically, while maintaining profitability and growth? (Perhaps the best-known example of this framing is The Center for Humane Technology’s “The Social Dilemma,” which confuses correlation with causation by attributing negative mental health and political trends to technology, with no mention of technology’s place in capitalism.)
We will not solve problems of authoritarianism, racism and xenophobia, misinformation and addictive technology, mental health and public health, or climate change with design ethics. While designers should thoroughly consider the consequences of our work, the problems facing the design and technology industry are not ones of individual bad actors (though some exist). Rather, we must acknowledge that design decisions are economic decisions, and in our current economic system the economic interests of individuals often conflict with their social consequences. Technology firms are not cultural or ideological actors, but “economic actors within a capitalist mode of production…compelled to seek out profits in order to fend off competition” (Srnicek 2017, 3). If we truly want to design ethically, we must first consider how technology is embedded in capitalism. Our ability to make technology work better for society as a whole depends upon our willingness to reorder our priorities and redefine value as more than profit maximization.
“The ‘market’ is a bad master, but can be a good servant.”
– S. Chakravarty (1993: 420)
In the world today, more and more interpersonal interactions are being replaced by market transactions. The market system is both an economic and a cultural phenomenon, yet we seem hardly aware of the values that are bound up in it. This phenomenon is manifest at many levels: from the family, through the neighbourhood and the enterprise, to the nation and the globe. If there is such a thing as global ethics, I suggest, then it is – like it or not – the ethics of the market. My purpose here is to elaborate this claim, and to assess its implications. I shall distinguish between the market as a theoretical construct in economics, and the market as a social institution.
My main hypothesis can be briefly stated as follows: the most convincing ethical argument currently being made in favour of the market is its neutrality. Whether the market is in fact neutral may be disputed. But if one accepts this claim, it implies that the market is amoral, rather than immoral, and there remain, I suggest, two objections to allowing the market ethic to prevail. The first is that this is an abrogation of moral responsibility. It implies delegating decisions of major social and material significance to powers which are beyond our control, and whose outcome is uncertain. Second, the neutrality of the market comes at a cost in social and human terms; social relations between persons are replaced by contractual relations between economic agents.
Behavioral approaches to development economics and policy have gained momentum in recent years. A growing number of papers studying the behavior of people in poor countries have been published in top journals, accompanied by the rise of randomized controlled trials (RCTs). In 2015, the World Development Report was dedicated to behavioral and cognitive research and policy. Papers studying how to nudge farmers to use fertilizers or increase savings have become classics in the field. Much hope has been placed in social experiments and behavioral policies to fight global poverty.
Behavioral policies are of course not reserved for policy-making in poor countries. In fact, nudges became famous with a US-American savings plan. Many behavioral instruments have been discussed and tested in and for rich countries. But there has been an important difference as compared to the debates in development economics: when debating behavioral policies in rich countries, scholars have also devoted considerable time to normative and ethical concerns. For example, following Thaler and Sunstein’s exposition of Libertarian Paternalism (see also here), a debate unfolded on whether nudges could be anti-libertarian (here, here, here, or here). Implications of the use of nudges as a new form of government policy have been analyzed, for example, from a Foucauldian perspective, or with a focus on institutional change. Books have been written about ethical concerns. The debate has reached a great level of differentiation, for instance when authors argue that so-called social nudges (nudges that seek to stimulate voluntary cooperation in social dilemma situations) may be justified for different reasons than those targeting individual welfare. Overall, the debate has become highly sophisticated, and the autonomy, welfare, and dignity of citizens in rich countries, as well as the consequences of behavioral policies for these countries’ modes of government, have received careful scrutiny (recently again here).