On Measuring Social Science Impact: An Excerpt and Responses

The following essay by Ziyad Marar is adapted from “On Measuring Social Science Impact,” published in the journal Organization Studies, where the full essay is available.

I went to a talk at a conference a few years ago (remember those?) where the speaker referenced the ongoing debate around a paper by Ken Gergen entitled “Social Psychology as History.” The paper was published in the Journal of Personality and Social Psychology (JPSP) in 1973! The fact that a paper published nearly 50 years ago is still being debated by social psychologists today should remind us that the time horizons over which social science work has an impact are, on the whole, much longer than in STEM disciplines. And relatedly, social science scholars, within distinct ecosystems of reward and recognition, typically grow their reputations and expertise over a longer time scale than their STEM counterparts.

Gergen’s paper is wonderfully self-referential in this regard, given its claim that social psychology is different from STEM and cannot issue the same kinds of law-like generalizations and predictions that one might find in physics and chemistry, not least because its very findings can alter the nature of the behavior it attempts to explain. What sociologists call reflexivity impacts the study of meaning in ways that are not so relevant to the study of molecules. And for this and other reasons, the impact of work in social science is often more diffuse over time rather than being about breakthrough Eureka-like discoveries.

“Social Psychology as History” was published in one of the most prestigious and impactful social psychology journals. JPSP is published by the American Psychological Association and has carried many of the most important papers in the field. Gergen’s article remains controversial, and despite his warnings most of American social psychology has wedded itself to the claim that the STEM research model applies to its domain too. This in turn has led to a replication crisis that arguably bears out Gergen’s original point.

So what does the fact that JPSP has a two-year impact factor of 7.673 actually tell you about the journal? The metric uses a two-year moving window, designed with STEM research in mind, which certainly does not capture a nearly 50-year-old article with more than 3,000 citations. One might learn more about the journal from a comment made by Malcolm Gladwell who, on being asked where he would like to be buried, replied, “I’d like to be buried in the current-periodicals room, maybe next to the unbound volumes of the Journal of Personality and Social Psychology (my favorite journal).”
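
To see why, recall how the standard two-year impact factor is defined (roughly, following Clarivate’s formula; the year is illustrative):

\[
\text{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{citable items published in years } Y-1 \text{ and } Y-2}
\]

Citations to an article from 1973 fall entirely outside that two-year numerator window, so under this measure they simply do not count, however many accumulate.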

We at SAGE, as publishers of social science, have often debated this question and our role in perpetuating the problem. We have attempted to shift the emphasis by announcing five-year impact factors for the last couple of years and have created an annual prize for the most highly cited articles we have published over a ten-year period. While counting citations over a longer period of time will always have a place in measuring quality and impact, scholarly impact and significance cannot be restricted to this kind of measure alone.


The Metric Tide, an independent review of the role of metrics in research assessment led by James Wilsdon, concluded, among other things, that narrow metrics are not a responsible way to assess research unless supplemented by qualitative judgments such as peer review. The report advocated a move to responsible metrics – distinct from broader moves towards responsible research more generally – which have the following characteristics:

  • Robustness: basing metrics on the best possible data in terms of accuracy and scope;
  • Humility: recognizing that quantitative evaluation should support – but not supplant – qualitative, expert assessment;
  • Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results;
  • Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system;
  • Reflexivity: recognizing and anticipating the systemic and potential effects of indicators, and updating them in response.

The argument, of course, is not to abandon metrics entirely: we all need ways to filter knowledge claims to identify relevance and excellence. However, despite this diagnosis of the problem, there is no consensus on the cure: on how social science impact can best be demonstrated. In the absence of good alternatives, we default to the simple and dominant mode.

It is fitting that 1973 was not only the year in which Ken Gergen published his article in JPSP but also the year that saw the launch of the Social Science Citation Index (SSCI). The Science Citation Index had been created nearly a decade before. The SSCI will be 50 years old next year. Surely, we can do better than that in creating a high-quality and impactful social science ecosystem.

Also from Ziyad Marar


What I Have Learned From Social Science

“Social science has a hard time breaking through because it tends not to offer up easy answers and solutions. But as one physicist pointed out, it is child’s play to understand theoretical physics compared to understanding child’s play.”

The Changing Imperative to Demonstrate Social Science Impact

“In the United States, claims that social science has no real impact have been used repeatedly in political attacks that attempt to cut social science funding – even as social science was being deployed effectively in questions of finance and national security that obsessed these same critics.”


Also of Interest on These Topics


The Journal Citation Reports 2022 Are Out. What Do They Mean for Sociology? | by Daniel Nehring

One way or another, the Journal Citation Reports today play an outsized role in determining whose careers thrive and whose careers wither, and which journals flourish or fade away.

Can We Use Altmetrics To Measure Societal Impact? | by Lutz Bornmann and Robin Haunschild

The authors examine the relationship of peer review, altmetrics, and bibliometric analyses with societal and academic impact. They argue altmetrics may provide evidence for wider non-academic debates, but correlate poorly with peer review assessments of societal impact.

Academe Just Doesn’t Talk Enough About Research Metrics | by Lai Ma

The active use of metrics in everyday research activities suggests academics have accepted them as standards of evaluation. Yet when asked, many academics profess concern about the limitations of evaluative metrics and the extent of their use. Why is there such a discrepancy between principle and practice?

We Developed a Tool to Make Responsible Research and Innovation Easier | by Stefan De Jong, Michael J. Bernstein and Ingeborg Meijer

The authors describe their work developing a tool that helps researchers and research funders to incorporate responsible research and innovation values into their work.

Getting A Handle On Both Societal And Scientific Impact | by Jorrit Smit and Laurens Hessels

Noting a common separation between evaluation focused on scientific impact and evaluation focused on societal impact, the authors suggest that bridging this divide may help produce research that has public value, rather than research that merely achieves particular metrics.


A number of scholars in the social and behavioral sciences are offering responses to this essay. Links to their essays appear below.


Impact: Who Decides?

Psychologist and autism researcher Sue Fletcher-Watson shares why current metrics fall short and what we can do about it.


Impact ≠ Impact ≠ Impact 

Management professor Anne-Wil Harzing defines impact in terms of progressing scientific knowledge, developing critical thinking, and addressing societal problems.


Enhancing the Role of Local Input for Measuring Value

Laura Rovelli, one of the advisory board members for the Declaration on Research Assessment, or DORA, discusses some of the components of impact beyond citation count and how we can harness those components.


The $20 Blender Dilemma: How Different Data Can Create the Perfect Mix  

Mike Taylor, head of data insights for Digital Science, argues that using multiple metrics and data points is the default in many of our daily decisions.


On Measuring What We Value 

When we refer to quality, impact, excellence, and relevance as important measures of scholarship, argues the team from HuMetricsHSS, we fail to adequately recognize that these values themselves are shaped and determined by the degree to which they are put into practice.

Unpacking Impact

Ron Kassimir, senior adviser to Columbia World Projects, asks where social science aims to have impact, what kind of science is impactful, and why we are worried about impact in the first place.
