
Parsing Fact and Perception in Ethnography

May 3, 2021
National Public Radio uses this four-quadrant card in training its journalists in fact-checking. Ethnographers should go ahead and use material that is “Controversial/Hard to Verify” so long as they clearly identify it. (Graphic: NPR)

The Annual Review of Sociology is a very big deal. Published since 1975, it was initiated by the American Sociological Association in collaboration with the nonprofit publisher Annual Reviews for the purpose of “synthesizing and integrating knowledge for the progress of science and the benefit of society.” With an impressive impact factor, the Annual Review of Sociology now covers “major theoretical and methodological developments as well as current research in the major subfields.” Because submissions are by invitation only, and then subject to peer review, placing a piece in the Annual Review is a major accomplishment for academic authors. It therefore came as a surprise to me – and really, a bit of an honor – to discover that an article in the forthcoming 2021 Annual Review of Sociology devotes almost an entire section to responding to my 2017 book, Interrogating Ethnography, in which I critiqued some common practices in ethnography and suggested how they could be made more rigorous.


The article is “Ethnography, Data Transparency, and the Information Age,” by Alexandra Murphy, Colin Jerolmack, and DeAnna Smith. Now available on the journal’s (paywalled) website, and kindly sent to me by one of the authors, the essay points to a “growing recognition that the advancement of social science hinges upon scholars being more transparent.” Ethnographers, they explain, are facing a “reckoning over the long-standing conventions they follow to gather, write about, and store their data.” The article is comprehensive and insightful, focusing on four important issues related to transparency: data collection, anonymization, verification, and data sharing. Congratulations to Professors Murphy (University of Michigan), Jerolmack (NYU), and Smith (University of Michigan) for a job well done.

The engagement with my work is mostly (although not entirely) in the section titled “Data Verification,” where the authors note my efforts to fact-check more than 50 urban ethnographies. “Lubet,” they say, “accuses ethnographers of treating subjects’ retellings of events as fact.” Although I would have preferred a different verb – perhaps “Lubet reports” or “Lubet relates” – it is fair enough to say that my research revealed factual problems in about half of the ethnographies I investigated. To put it gently, I found that ethnographies often resort to hearsay, folklore, and rumor, without distinguishing between first-hand observations and second-hand accounts.

The authors present three sets of responses to my work – all of them oddly negative, given that they pretty much end up agreeing with me – beginning with a rather silly assertion by the UCLA sociologist Stefan Timmermans. According to Timmermans, the authors explain, the only way to show that an ethnographer got something wrong is “to put in the work to make repeated first-hand observations in the setting under question.” It should be obvious that such a standard could almost never be met – meaning that nothing in an ethnography could ever be disproven – even in the unlikely circumstance that a subsequent researcher had the time and inclination to make multiple trips to an ethnographer’s research site.

Many ethnographies, including the most controversial ones, make a point of masking their research locales, thus making it impossible for a fact-checker to visit the “setting under question.” The authors note in a different section that site-masking can prevent direct comparisons and, to say the least, “makes it difficult to conduct ethnographic, archival, virtual, and quantitative reanalysis.” Nonetheless, they appear to endorse Timmermans’ unrealistic claim, at least partially, by noting that “officials and their records may not comport with what happens on the ground.” While that is undoubtedly true, it does not justify the wholesale rejection of all public records – as has been urged by some ethnographers – along with the uncritical acceptance of research subjects’ accounts.

The second response is that ethnographers do in fact “verify their data more thoroughly than Lubet suggests.” Maybe so. The authors cite examples of ethnographers’ self-verification, including a few taken directly from my book, but that does not mean that the practice is widespread, rigorous, or transparent, much less universal. Nor do the authors recognize the inconsistency of commending some ethnographers for cross-checking their work against public sources, after having discounted the value of such sources only one paragraph earlier. In any case, my broader critique has been that ethnography lacks a robust tradition of scholars fact-checking one another’s work.

This brings us to the third set of responses, which really gets to the heart of the matter. “Some ethnographers, however, reject the premise that fact checking – at least as commonly understood in journalism – is essential to the ethnographic enterprise.” The problem, according to the authors, is the risk of “privileging the ‘truth’ as constructed by records, experts, and other outsiders over our subjects’ perceptions and experiences.” The centrality of reporting “participants’ worldview,” according to the authors, provides “a convincing rebuttal to the Lubet-style fact-checking approach.”

As much as I appreciate the eponym, this appears to be a serious misunderstanding of the nature of evidence. There is no contradiction between locating “truth” – with or without scare quotes, and however determined – and valuing perceptions and experiences. Fact and perception are simply different categories, neither of which is necessarily more important than the other. The challenge for ethnographers – and the major shortcoming I identified – lies in making clear and careful distinctions between what they have actually seen and what they have only heard about.

Murphy, Jerolmack, and Smith attempt to take both sides. They disdain “the fetishization of journalistic fact-checking,” while still conceding “that there are many reasons why participants’ accounts should not be taken at face value.” Recognizing that “when it comes to verifying data, there is room for improvement,” the authors allow – rather tepidly, for a reckoning – that “some kind of verification of accounts seems warranted.”

An ethnographer embedded in the MAGA world would repeatedly hear that President Biden intends to restrict consumption of beef to one hamburger a month. The readiness in some quarters to accept such a claim could provide important insights in an ethnography, but only if it were clearly identified as reflecting the subjects’ perception or worldview. A responsible social scientist would never simply report the impending burger limitation as fact, especially without tracking down the source of the story (which turns out to have been a rumor spread by Donald Trump, Jr. and others). That isn’t fetishization; it’s research.

Steven Lubet is Williams Memorial Professor at the Northwestern University Pritzker School of Law and author of Interrogating Ethnography: Why Evidence Matters, as well as other books, including The “Colored Hero” of Harper’s Ferry: John Anthony Copeland and the War Against Slavery (2015) and Lawyers’ Poker: 52 Lessons That Lawyers Can Learn from Card Players. He is the director of the Fred Bartlit Center for Trial Advocacy. He has been living with ME/CFS since 2006.

