Creating a culture of learning
I attended an interesting workshop in the Netherlands last week. It was one of those pre-proposal workshops that bring together researchers and practitioners. There were social scientists, natural scientists, consulting firms, decision makers, funders and activists. Over the course of the day, I had an epiphany about the nature of the research process and why we have such a hard time bridging the gap to policy. The workshop clearly had two types of people, who were talking past each other. The “thinkers” who look at the past and sift through evidence, and the “doers” who must look ahead to build, make or implement. It got me thinking about why this communication gap persists.
When I was doing my PhD, one of my faculty advisers, a qualitative social scientist, told me in no uncertain terms: “dissertation research is NOT consulting”. You cannot ask a prescriptive question, only a descriptive one. You have to look back at something that has already happened and ask a “what” or “why” question. So a question like “how do we design a policy to improve ABC” is not a valid research question.
This view, of course, is not universal. In engineering departments, designing a new product, tool or technique is common. For instance, in civil engineering, building an optimisation model to inform how dams should be operated is mainstream research. Indeed, the entire success of the Harvard Water Programme in the 1960s, at the peak of the dam-building era, rested on its claim that, through computer simulation models, researchers could inform the design and operation of large dams to “maximise output”. Likewise, economists have historically been comfortable making policy recommendations. In fact, public policy departments in many universities are composed primarily of economists.
Perhaps it is precisely this comfort with making prescriptions that has allowed these disciplines to dominate public discourse. Both groups have done this by building quantitative models that can predict the outcomes of policies and programmes, allowing policymakers to justify choices based on “objective” criteria.
Social scientists, correctly, argue that this disciplinary dominance is deeply problematic: these disciplines are able to be prescriptive only by being narrowly concerned with “efficiency”, leaving them blind to politics, trade-offs, winners and losers, and equity concerns. To be fair, these critiques have had an impact and led to a broadening of approaches. Instead of focusing solely on improving efficiency, modellers now examine trade-offs and engage in multi-criteria decision-making. They increasingly include stakeholders in setting goals through participatory modelling and “shared vision planning”.
The problem is that simulation models can only go so far. When it comes to the nitty-gritty of implementation, the details are typically left to management consultants or engineering companies. And this is precisely where many programmes, which look great on paper, end up failing.
On the one hand, there does not seem to be an easy way for the more critical disciplines, which actually study programmes on the ground, to engage in policy and practice. An ethnographer or sociologist who has spent years studying the impacts of an infrastructure project or policy on vulnerable communities would find it hard to inform future projects in a constructive way. The large consulting companies that design programmes and projects rarely reach out to them.
On the flip side, research cultures do not challenge researchers to be solution oriented. Many researchers feel comfortable highlighting what not to do, without articulating what to do and how to incorporate lessons of past failures to do better. Often, researchers remain content with making generic recommendations for more participation, inclusion, and transparency. Since few in the government know how to properly implement these ideals, it only results in more critiques and problematising and the cycle continues. Thus, the only engagement critical social scientists are able to have is dissent, which further deepens the rift between the thinkers and doers. Policymakers and practitioners end up feeling frustrated by researchers, who they feel will be critical no matter what. This makes it easy to justify ignoring the critiques altogether. The two worlds stay forever disparate.
So what needs to change?
The first step is accepting failure, or at least imperfection, as an inevitable part of the development cycle. As Rohini Nilekani pointed out in a recent article, “acceptance of failure is an essential part of innovation, which in turn is required for a successful outcome”.
Second, if we want to stop repeating the mistakes of the past, we need to create safe spaces to exchange and communicate, and to build pathways from research to design. We need to create “learning communities” that explicitly make the link between what we have understood by studying the past and how we design for a better future.
Third, we need to explicitly link the design phase to the learning phase by challenging researchers to articulate design principles. There are some examples in the emerging field of “implementation science” in medical research, whose goal is “to promote the systematic uptake of research findings into routine practice”. In other words, what is needed is a specific set of actions (and associated funding) that will help us go from “what caused policy x to fail and policy y to succeed” to “how can we use the evidence to design policy z better”.