Science for Steering (vs for Decision-Making)
I keep running into imaginary decision-makers. Some person, or an organization that behaves like a person, who will look at the evidence – the assessment – and then … do the right thing. Do the evidence-based thing for the societal good.
That’s just not how it works most of the time. Most of the time there is no particular individual or approximation of an individual who’s in charge.
Instead, there’s a social or political process, in which:
- many decision-makers,
- each with different motivations that may or may not include anything other than self-interest,
- negotiate, collaborate, compete, haggle – INTERACT
to eventually produce an aggregate outcome that nobody in particular really decided.
So the big question is: how can science – loosely, the production of facts – do more to “steer” the outcomes of these processes? The difference between “informing decisions” and “steering social processes” is subtle but important. I find myself explaining it all too often, in contexts ranging from digital public infrastructure to climate intervention and air quality action. So here’s an attempt to put some of those points in one place.
(Please accept for now the gross over- and under-simplification of calling science “the production of facts.” The main point is that each of us responds to what we “know,” and part of what we “know” is generated by others who combine observations with analysis, experiments, contextualization, etc. So, what can those “others” who seek to inject some reality into what we collectively decide do when confronted with a “process” rather than a “decision-maker”?)
First, aim to enable interactions, not just decisions. The whole key to changing the outcomes of the process is changing the range of interactions that are possible. For example: research that makes it easier to detect small amounts of pollution earlier and faster makes it possible to react faster to punish, deter, or mitigate it. That doesn’t mean anybody will actually punish or mitigate the pollution – but at least it’s more possible. Research that makes it easier to detect similarities and differences changes group dynamics and incentives. The alike can find each other, form groups, and potentially acquire more of a voice in the process. And the detection of differences can also motivate a race to the top, where those differences are rewarded. Think of firms seeking to differentiate their emission intensity from the average to reduce penalties under the Carbon Border Adjustment Mechanism and similar policies. Research that illustrates interconnections can motivate new collaborations. Think of the early days of the Convention on Long-Range Transboundary Air Pollution (LRTAP). The evidence linking emissions in one country to forest damage in another motivated (and enabled) new institutions.
This has implications for the choice of topics and the evaluation of the social returns on investment in particular projects. Will the findings enable new institutions, new interactions, new social processes? That’s an important return.
Second, aim to make the findings unavoidable as well as credible. “Strategic behavior” in social processes is partly about what an individual knows, but also about what they think others know, and what they think others think they know – plus more convolutions and tongue twisters. For example: it’s not always enough to ensure that a regulator knows that there’s pollution coming from a particular plant. He or she may choose to simply ignore it, and nobody would be the wiser. But if they know that others know they know, and that the information cannot just be dismissed as irrelevant, then avoiding action becomes more difficult. The IPCC does this well – it is designed to make the findings about climate change hard to dismiss. The way its reports are produced and compiled means they may not be the most perfect, comprehensive, accurate, or timely compilation of what is known about climate change causes and impacts – but they create a statement that is hard to avoid.
Now, making findings “unavoidable and credible” is easier said than done in today’s increasingly fact-free (or multi-fact) public discourse. A few thoughts. First, do the work to ensure that the evidence is admissible in relevant judicial, public-policy, or other mechanisms of influence. It’s one thing to grab public attention; it’s another – and it should be another – to stand up in court or a regulatory hearing. Second, if you’re trying to speak to a divided world, involve people from across those divisions in producing the evidence. It’s harder, then, to pretend that it’s all made up or manipulated.
On a related note – make sure that findings are robust to critical scrutiny by not over-claiming certainty or glossing over nuance. Anticipate attempts to discredit uncomfortable statements – and contain them. For example: in the air quality world, it’s common to say “air quality action is climate action.” It’s a simple, clear, nice message. But it’s vulnerable to the exception: reducing some kinds of air pollution (light-colored particles) does unmask warming. And this, in turn, casts doubt on the whole statement that “air quality action is climate action.” That could have been avoided, or contained, with a simple “action on air quality is almost always climate action.” For more on this – [air pollution]
Third, think about the infrastructure as well as the answer. The questions behind a particular paper, or even a larger assessment, are usually a limited subset of the questions that people participating in a social process might ask, want answered, or use to inform their votes, their negotiations, their use of time or effort. So, rather than just focusing on offering answers to particular questions, invest in building foundations that allow many questions to be asked and answered. Earth on AWS is an example: it doesn’t offer any particular readily packaged insights, but it does offer a set of readier-to-use building blocks for a wide variety of users with diverse questions. (Readier to use because users can bring their algorithms to the data and computing power, rather than get stuck behind a slow connection or limited compute.) Jed Sundwall describes the philosophy behind the choice of how to invest as creating a “mise en place” rather than a “coffee table book.”
There are probably items 4, 5, 6, and 7 that I haven’t thought of yet … comment below; stay tuned for a longer-form version.