[Ed. The views expressed are the author’s and do not necessarily represent the official views of the United States government or the government of the District of Columbia.]
There is a movement afoot to weave psychological science into the fabric of government. And by using the words “weave” and “fabric,” I mean to signal something unique: an attempt, now emerging from within government itself, to integrate the insights and experimental methods from the psychological sciences directly into day-to-day governance.
My work at the federal level, for example, has been as part of the White House’s recently created Social & Behavioral Sciences Team (SBST). The SBST is a multidisciplinary group of applied behavioral scientists, most of whom are drawn from academia or other research entities and serve a Fellowship tour of duty directly within government. (For instance, I was previously at the University of Arizona, studying in the psychology department and law school.) In its opening run of work, the SBST and partner agencies completed more than 15 randomized field experiments, designing and testing the impact of behaviorally informed interventions in domains spanning health, education, finance, and government operations (see the 2015 SBST report for details).

One of the projects examined the collection of a business fee (known as the industrial funding fee, or IFF) that relied on payers to self-report how much they owed. A psychological study by Shu et al. (2012) found that requiring people to sign a guarantee that the information on a form is correct before, rather than after, completing the form made accountability more salient and improved the accuracy of self-reported information. We applied and tested this insight by randomly assigning whether or not a signature box appeared at the top of the IFF online reporting form. The median self-reported sales amount was $445 higher (p < .05, 95% CI [$87, $803]) for those who signed beforehand than for those who did not sign at all. This subtle, virtually cost-free intervention resulted in an additional $1.6 million in collections in a single quarter.
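As a rough illustration of how an effect like this can be estimated, here is a sketch of a bootstrap confidence interval for a difference in medians. The data are simulated and the distributional choices are my own assumptions; this is not the SBST analysis or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration only: simulated self-reported amounts for a
# treatment group (signature box at top of form) and a control group.
control = rng.lognormal(mean=8.0, sigma=1.2, size=5000)
treatment = rng.lognormal(mean=8.05, sigma=1.2, size=5000)

def median_diff_ci(a, b, n_boot=2000, alpha=0.05, rng=rng):
    """Bootstrap CI for the difference in medians, median(a) - median(b)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (np.median(rng.choice(a, size=a.size, replace=True))
                    - np.median(rng.choice(b, size=b.size, replace=True)))
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return np.median(a) - np.median(b), (lo, hi)

point, (lo, hi) = median_diff_ci(treatment, control)
print(f"median difference: ${point:,.0f}, 95% CI [${lo:,.0f}, ${hi:,.0f}]")
```

The bootstrap is just one reasonable choice here; a preregistered plan would name the estimator and interval method before anyone saw the data.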
Psychological Scientists, Please Stand Up
On September 15, 2015, President Obama issued Executive Order 13707, “Using Behavioral Science Insights to Better Serve the American People.” The executive order emphasizes the applicability of psychological science to governance and directs agencies to “develop strategies for applying behavioral science insights to programs and, where possible, rigorously test and evaluate the impact of these insights.” Agencies are to “recruit behavioral science experts to join the Federal Government” and “strengthen agency relationships with the research community.”
We psychological scientists are now being explicitly invited to — or perhaps more accurately, we’ve elbowed our way into — a prime spot on the governance stage. How will we respond to this opportunity?
There is momentum already. Applied research and public advocacy are in the bones of the psychology profession, running at least from Hugo Münsterberg’s work at the turn of the 20th century through the recent Perspectives on Psychological Science special issue imagining a Council of Psychological Science Advisors. The reasons why psychologists should and must engage have been persuasively argued. The APS Presidential Column series alone contains many jewels of thinking and leadership related to how the profession should meet the opportunity:
- Douglas Medin highlighted the need for more applied field experiments in psychology and reflected on the mutually reinforcing overlaps between so-called “basic” and “applied” research.
- Elizabeth Phelps and Susan Fiske invited guests to reflect on applying psychological science in public policy, translating psychological science to law (and back), and bringing research on judgment and decision-making to public policy.
- Walter Mischel lamented “the toothbrush problem” — like a toothbrush, everyone feels they need their own unique theory rather than building off existing theories — and argued instead for a more cumulative science, which in turn would generate more robust and practical findings.
- John Cacioppo told young scientists about the rich careers to be had outside the hallways of tenured academia, and showcased psychology as a hub science that infuses diverse fields of knowledge and, by extension, is relevant to many different applied settings.
- John Darley challenged the profession to better anticipate future policy needs and adapt research programs accordingly.
I think psychological science has another unique advantage, one related to its leadership in developing and adopting open science practices. (See APS Executive Director Emeritus Alan Kraut’s December 2015 guest presidential column on open science efforts.) As it turns out, these practices might be the key to bypassing — or harnessing — one obstacle to the uptake of science into policy: politics.
Harnessing Open Science as Political Process
Let me spill some boring beans: Applied research, especially in government, involves politics. Shocking, right?
Importantly though, applied research must and should involve politics, in particular ways. For example, how big of an effect is needed to make an intervention worthwhile? How precise does our estimate of an effect need to be before we act on it? The answers depend on value judgments, such as deciding what counts as a cost or a benefit in cost–benefit analyses and balancing the inductive risks of accepting a false hypothesis or rejecting a true hypothesis. Government provides a process for coordinating and expressing such value-laden decisions. This is an imperfect process, to be sure, but there are textbook guideposts for how democratically elected or appointed officials decide (or delegate and supervise) the courses of government action, doing so in transparent ways that empower voters to hold them accountable.
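To make the first of those questions concrete, here is a toy break-even calculation for an intervention, of the kind a cost–benefit discussion might start from. Every number is invented for illustration.

```python
# Hypothetical worked example: the smallest per-payer effect an intervention
# must produce before scaling it pays for itself. All numbers are invented.
cost_per_payer = 0.02     # marginal cost of the intervention, in dollars
n_payers = 100_000        # population the scaled program would reach
collection_rate = 1.0     # fraction of the reported increase actually collected

total_cost = cost_per_payer * n_payers
# Minimum average increase in payment per payer needed to cover the cost:
break_even_effect = total_cost / (n_payers * collection_rate)
print(f"break-even effect: ${break_even_effect:.2f} per payer")
```

Whether to demand more than break-even, say a safety margin against acting on a false positive, is exactly the kind of value judgment the paragraph above describes; the arithmetic only frames the question.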
My pitch here is that the best practices from the open science movement, particularly developing and preregistering an analysis plan (before looking at the data), stand to do double duty as best practices for successfully engaging in evidence-informed governance. The key is not to remove politics from the process — neither possible, nor desirable — but rather to shift when and how the value judgments occur. If we front-load the discussion and use the analysis plan to build and lock in consensus about the methods, we may be able to de-politicize reactions to the result. The dialogue manages expectations and, most importantly, empowers buy-in for a method that can be controlled rather than a single hoped-for result, which might or might not materialize.
I’ve lived this dynamic with government partners. We tested an intervention with the Center for Program Integrity (CPI), for example, that failed to work as expected, yielding a null result. There is a risk, when evidence fails to support an idea, that people will hunker down and reactively defend the idea, pick apart why the evidence is irrelevant or inaccurate, or rejigger the analyses to spit out the preconceived result. But in this case, the research team included CPI employees who were closely involved in defining the problem and approving the methodological details. We discussed at length what the study might uncover, including the possibility of a null result and what exactly that would mean given the statistical power. We adjusted the intervention to fit within regulatory constraints. We agreed ahead of time how the data would be analyzed. It took a lot of legwork. But as a consequence, the null result pivoted naturally, not to defensiveness or dismissiveness, but to brainstorming about how to further innovate to solve the problem. That pivot was one of the possibilities anticipated from the outset. (New interventions are now being tested in the field.)
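One way to pin down, in advance, what a null result will mean is a minimum-detectable-effect calculation. Here is a sketch under the standard normal approximation for a two-sample comparison; the sample size and design parameters are invented, not those of the CPI study.

```python
from scipy.stats import norm

# Hypothetical pre-study agreement: at this alpha, power, and sample size,
# what is the smallest true effect the trial could reliably detect?
alpha, power = 0.05, 0.80
n_per_group = 2000        # invented: participants per arm
sd = 1.0                  # outcome SD, in standardized units

z_alpha = norm.ppf(1 - alpha / 2)
z_power = norm.ppf(power)
# Two-sample minimum detectable effect under a normal approximation:
mde = (z_alpha + z_power) * sd * (2 / n_per_group) ** 0.5
print(f"minimum detectable effect: {mde:.3f} SD")
```

Agreeing on this number beforehand lets everyone say, in advance, that a null result will license the claim that any true effect is probably smaller than the minimum detectable effect, rather than arguing about it after the data arrive.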
Imagine a political debate about whether to evaluate a particular program and, if so, what effect size (measured at what precision) would be required to fund the program at scale. Imagine town halls where stakeholders provided input on what outcomes should be measured or where they decided ahead of time what they would need to see in order to support or reject a proposal. There are many details to unpack here, but the arguments and methods behind the open science movement are ripe to be harnessed and adapted for purposes of driving evidence-informed government.
So What Now?
This is a call to action. Psychological scientists have a spot on the governance stage, and as a profession we need to mobilize to meet the opportunity, to fulfill the responsibility.
There are many things to be done — keep an ear open for more discussion at the 2016 APS Annual Convention — but I’ll end with a request that any psychological scientist can address right away: Think locally. National problems with federal government solutions receive a lot of attention, but the reality is that state and local governments have many more touch points with people. There is enormous opportunity for psychological science to improve governance at these citizen frontlines. You’ll also have easier access to more local government practitioners. So roll up your sleeves and attend a town hall meeting, or visit city hall, to start a dialogue about how psychological science can improve governance directly in your community.
P.S. To stay in the loop on SBST activities, visit the SBST website.
Shu, L. L., Mazar, N., Gino, F., Ariely, D., & Bazerman, M. H. (2012). Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end. Proceedings of the National Academy of Sciences of the United States of America, 109, 15197–15200. doi:10.1073/pnas.1209746109