Why the Stevens Op-Ed is Wrong

June 26, 2012

A rather lengthy response to Jacqueline Stevens’ op-ed, speaking to various points in turn.

the government — disproportionately — supports research that is amenable to statistical analyses and models even though everyone knows the clean equations mask messy realities that contrived data sets and assumptions don’t, and can’t, capture.

The claim that real politics is messier than the statistics are capable of capturing is obviously correct. But the implied corollary – that the government shouldn’t go out of its way to support such research – doesn’t follow. Jacqueline Stevens doesn’t do quantitative research. Nor, as it happens, do I. But good qualitative research equally has to deal with messy realities, and equally has to adopt a variety of methodological techniques to minimize bias, compensate for missing data and so on. Furthermore, it is extremely difficult to do at large scale – this is where the big projects that the NSF funds can be very valuable. I agree that it would be nice to have more qualitative research funded by the NSF – but I also suspect that qualitative scholars like myself are a substantial part of the problem (if we don’t propose projects, they aren’t going to get funded).

It’s an open secret in my discipline: in terms of accurate political predictions (the field’s benchmark for what counts as science), my colleagues have failed spectacularly and wasted colossal amounts of time and money.

The claim here – that “accurate political prediction” is the “field’s benchmark for what counts as science” – is quite wrong. There really isn’t much work at all by political scientists that aspires to predict what will happen in the future – off the top of my head, all that I can think of are election forecasting models (which, as John has noted, are more about figuring out good theories of what drives politics than about prediction as such) and some of the work of Bruce Bueno de Mesquita. It is reasonable to say that the majority position in political science is a kind of soft positivism, which focuses on the search for law-like generalizations. But that is neither a universal benchmark (I, for one, don’t buy into it) nor, indeed, the same thing as accurate prediction, except where strong covering laws (of the kind that few political scientists think are generically possible) can be found.

As best as I can decipher her position from her blog, and from a draft paper which she links to, Stevens’ underlying position is a quite extreme Popperianism, in which probabilistic generalizations (which are the only kind that social scientists aspire to find) don’t count as real science, since even one disconfirming instance is enough to refute a theory. Hence, Stevens argues in her paper that Fearon and Laitin’s account of civil wars has been falsified because a couple of specific cases have been interpreted in ways that disagree with Fearon and Laitin’s findings, and that, ergo, the entire literature is useless. I’m not going to get stuck into a debate which others on this blog and elsewhere are far better qualified to discuss than I am, but suffice it to say that the Popperian critique of probability-based social scientific models is far from a decisive refutation of the social scientific enterprise. Furthermore, Stevens’ proposed alternative – an attempted reconciliation of Popper, Hegel and Freud – seems to me unlikely in the extreme to provide a useful social-scientific research agenda.

What about proposals for research into questions that might favor Democratic politics and that political scientists seeking N.S.F. financing do not ask — perhaps, one colleague suggests, because N.S.F. program officers discourage them? Why are my colleagues kowtowing to Congress for research money that comes with ideological strings attached?

I’m not quite clear what the issue is here. What does Stevens mean by ‘Democratic politics’? If the claim is that the NSF should be funding social science that is intended to help the Democrats in their struggle with other political groupings (the usual meaning in the US of the word Democratic with a capital D), that’s not what the NSF is supposed to be doing. If it’s that the NSF doesn’t fund projects that support Stevens’ own ideal understanding of what democratic politics should be, then that’s unfortunate for her – but the onus is on her to demonstrate the broader social scientific benefits of the project (including to people who don’t share her particular brand of politics). More generally, the standard of evidence here is unclear. A colleague “suggests” that NSF program officers discourage certain kinds of proposals. Does this colleague have direct experience of this happening? Does this colleague have credible information from others that this has happened? Or is the colleague just letting off hot air? Frankly, my money is on the last of these, but I’d be happy to be corrected if wrong.

Many of today’s peer-reviewed studies offer trivial confirmations of the obvious and policy documents filled with egregious, dangerous errors. My colleagues now point to research by the political scientists and N.S.F. grant recipients James D. Fearon and David D. Laitin that claims that civil wars result from weak states, and are not caused by ethnic grievances. Numerous scholars have, however, convincingly criticized Professors Fearon and Laitin’s work. In 2011 Lars-Erik Cederman, Nils B. Weidmann and Kristian Skrede Gleditsch wrote in the American Political Science Review that “rejecting ‘messy’ factors, like grievances and inequalities,” which are hard to quantify, “may lead to more elegant models that can be more easily tested, but the fact remains that some of the most intractable and damaging conflict processes in the contemporary world, including Sudan and the former Yugoslavia, are largely about political and economic injustice,” an observation that policy makers could glean from a subscription to this newspaper and that nonetheless is more astute than the insights offered by Professors Fearon and Laitin.

It would certainly have been helpful if Stevens had made it clear that Cederman, Weidmann and Gleditsch were emphatically not arguing that quantitative approaches to civil war are wrong. Indeed, just the opposite – Cederman, Weidmann and Gleditsch are themselves heavily statistically oriented social scientists. The relationships that they find are not obvious ones that could be “gleaned” from a New York Times subscription – they are dependent on the employment of some highly sophisticated quantitative techniques. The “which are hard to quantify” bit that Stevens interpolates between the two segments of the quote is technically true but rather likely to mislead the casual reader. The contribution that Cederman, Weidmann and Gleditsch seek to make is precisely to quantify the relationship between inequality-driven grievances and civil war outcomes. As they describe their approach:

The G-Econ data allow deriving ethnic group–specific measures of wealth by overlaying polygons indicating group settlement areas with the cells in the Nordhaus data. Dividing the total sum of the economic production in the settlement area by the group’s population size enables us to derive group-specific measures of per capita economic production, which can be compared to either the nationwide per capita product or the per capita product of privileged groups.
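To make the quoted procedure concrete, here is a minimal sketch of that overlay computation in Python. It is an illustration under stated assumptions only, not the authors’ actual code: the file names, column labels (“group”, “population”, “production”) and the uniform-grid assumption are hypothetical stand-ins for the real G-Econ and settlement-area data.

```python
# A rough sketch of the group-level wealth calculation quoted above.
# All names here are hypothetical; this is not the authors' code or
# the real G-Econ schema.
import geopandas as gpd

# Hypothetical inputs, both in the same equal-area projection:
#   groups: one polygon per ethnic group's settlement area, with
#           "group" and "population" columns
#   cells:  G-Econ-style grid cells, each with a "production" column
#           giving the cell's total economic output
groups = gpd.read_file("settlement_areas.geojson")
cells = gpd.read_file("gecon_cells.geojson")

# Overlay the economic grid on the settlement polygons: each resulting
# piece is the part of one grid cell lying inside one group's area.
pieces = gpd.overlay(cells, groups, how="intersection")

# Apportion each cell's production by the share of its area falling
# inside the settlement polygon (simple areal weighting, which assumes
# production is spread evenly within a cell).
mean_cell_area = cells.geometry.area.mean()  # assumes a uniform grid
pieces["production_share"] = (
    pieces["production"] * pieces.geometry.area / mean_cell_area
)

# Group-specific per capita production: the summed production in the
# settlement area divided by the group's population.
by_group = pieces.groupby(["group", "population"], as_index=False)[
    "production_share"
].sum()
by_group["per_capita"] = by_group["production_share"] / by_group["population"]

# Compare each group to the nationwide per capita product (assuming,
# crudely, that the listed groups cover the whole population).
national = cells["production"].sum() / groups["population"].sum()
by_group["relative_wealth"] = by_group["per_capita"] / national
print(by_group[["group", "per_capita", "relative_wealth"]])
```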

This is emphatically not a debate showing that quantitative social science is wrong – it is a debate between two different groups of quantitative social scientists, with different sets of assumptions.

How do we know that these examples aren’t atypical cherries picked by a political theorist munching sour grapes? Because in the 1980s, the political psychologist Philip E. Tetlock began systematically quizzing 284 political experts — most of whom were political science Ph.D.’s — on dozens of basic questions, like whether a country would go to war, leave NATO or change its boundaries or a political leader would remain in office. … Professor Tetlock’s main finding? Chimps randomly throwing darts at the possible outcomes would have done almost as well as the experts.

Under the very kindest interpretation, this is sloppy. Quite obviously, one should not slide from criticisms of quantitative academic political scientists to criticisms of people with political science Ph.D.s without making it clear that these are not at all the same groups of people (lots more people have Ph.D.s in political science than are academic political scientists; there are lots more academic political scientists than quantitatively oriented academic political scientists). Rather worse: Stevens’ presentation of Tetlock’s research is highly inaccurate. As Tetlock himself describes his test subjects (p.40):

Participants were highly educated (the majority had doctorates) and almost all had postgraduate training in fields such as political science (in particular, international relations and various branches of area studies), economics, international law and diplomacy, business administration, public policy and journalism [HF: my emphasis].

In other words, where Stevens baldly tells us that “most of [Tetlock’s experts] were political science Ph.D.s,” Tetlock himself tells us that a majority (not most) of his experts had doctorates in one field or another, and that nearly all of them had postgraduate training in one of a variety of fields, six of which Tetlock names, and one of which was political science. Quite possibly, political science was the best represented of these fields – it’s the first that he thought to name – but that’s the most one can say without access to the de-anonymized data. This is very careless writing on Stevens’ part, and she really needs to retract her incorrect claim immediately. Since it is a lynchpin of her argument – in her own words, without it she could reasonably be accused of being a cherry-picking, sour-grape-munching political theorist – her whole piece is in trouble. Tetlock’s book simply doesn’t show what she wants and needs it to show for her argument to be more than impressionistic.

The rest of the piece rehashes the argument from Popper and proposes that NSF funding be distributed randomly through a lottery, so as to dethrone quantitative social science. Professor Stevens surely knows quite as well as I do that such a system would be politically impossible, so I can only imagine that this proposal, like the rest of her op-ed, is a potshot aimed at perceived enemies in a very specific intra-disciplinary feud. I have some real sympathy with the people on Stevens’ side in this argument – as Marty Finnemore and I have argued, knee-jerk quantificationism has a lot of associated problems. But the solutions to these problems (and to the parallel problems of qualitative research) mostly involve clearer thinking about the relationship between theory and evidence, rather than the abandonment of quantitative social science.

Written by Henry Farrell

Originally published on the political science blog The Monkey Cage.
