Made it!

August 17, 2012

Like a tired boxer at the Olympic Games, the reputation of psychological science has just taken another punch to the gut. After a series of fraud scandals in social psychology and a US survey that revealed the widespread use of questionable research practices, a paper published this month (pdf) finds that an unusually large number of psychology findings are reported as “just significant” in statistical terms.

The pattern of results could be indicative of dubious research practices, in which researchers nudge their results towards significance, for example by excluding troublesome outliers or adding new participants. Or it could reflect a selective publication bias in the discipline – an obsession with reporting results that have the magic stamp of statistical significance. Most likely it reflects a combination of both these influences. On a positive note, psychology, perhaps more than any other branch of science, is showing an admirable desire and ability to police itself and to raise its own standards.

E. J. Masicampo at Wake Forest University, USA, and David Lalande at Université du Québec à Chicoutimi, analysed 12 months of issues, July 2007 – August 2008, from three highly regarded psychology journals – the Journal of Experimental Psychology: General; Journal of Personality and Social Psychology; and Psychological Science.

In psychology, a common practice is to determine how probable (p) it is that the observed results in a study could have been obtained if the null hypothesis were true (the null hypothesis usually being that the treatment or intervention has no effect). The convention is to consider a probability of less than five per cent (p < .05) as an indication that the treatment or intervention really did have an influence; the null hypothesis can be rejected (this procedure is known as null hypothesis significance testing).
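To make the procedure concrete, here is a minimal sketch of null hypothesis significance testing via a permutation test on made-up treatment and control scores. The data, the function name, and the choice of a permutation test are illustrative assumptions, not drawn from the studies the paper surveyed:

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=42):
    """Estimate the p value for a difference in group means: the probability
    of observing a difference at least this large if the null hypothesis
    (no real group difference) were true."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    at_least_as_extreme = 0
    for _ in range(n_permutations):
        # Under the null, group labels are arbitrary: reshuffle and re-split.
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            at_least_as_extreme += 1
    return at_least_as_extreme / n_permutations

# Hypothetical treatment and control scores
treatment = [5.1, 6.0, 5.8, 6.3, 5.9, 6.1]
control = [4.8, 5.2, 5.0, 4.9, 5.3, 5.1]
p = permutation_p_value(treatment, control)
print(f"p = {p:.4f}; reject null at .05?", p < 0.05)
```

By convention, a p below .05 leads to rejecting the null hypothesis; the concern raised by the paper is precisely about results that land just under that threshold.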

From the 36 journal issues, Masicampo and Lalande identified 3,627 reported p values between .01 and .10, and their method was to see how evenly those p values were spread across that range (only studies that reported a precise figure were included). To avoid biasing their approach, they counted the number of p values falling into “buckets” of different widths – .01, .005, .0025, or .00125 – across the range.

The spread of p values between .01 and .10 followed an exponential curve – from .10 to .01 the number of p values increased gradually. But here’s the key finding – there was a glaring bump in the distribution between .045 and .050. The number of p values falling in this range was “much greater” than you’d expect based on the frequency of p values falling elsewhere in the distribution. In other words, an uncanny abundance of reported results just sneaked into the region of statistical significance.
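The bucket-counting idea can be sketched as follows, using synthetic p values with an artificial excess just below .05 standing in for the real dataset. All numbers and names here are illustrative, and for simplicity the background is uniform rather than the exponential decline the paper actually observed:

```python
import random
from collections import Counter

def bucket_counts(p_values, low=0.01, high=0.10, width=0.005):
    """Count p values falling into fixed-width buckets across [low, high)."""
    counts = Counter()
    for p in p_values:
        if low <= p < high:
            bucket = low + width * int((p - low) / width)
            counts[round(bucket, 5)] += 1
    return counts

# Synthetic data: a smooth background of p values, plus an artificial
# excess just below .05 mimicking the bump the paper reports.
rng = random.Random(0)
background = [rng.uniform(0.01, 0.10) for _ in range(3000)]
excess = [rng.uniform(0.045, 0.050) for _ in range(200)]
counts = bucket_counts(background + excess)

for bucket in sorted(counts):
    print(f"{bucket:.3f}-{bucket + 0.005:.3f}: {counts[bucket]}")
```

On this synthetic data, the .045–.050 bucket stands out against its neighbours in exactly the way the paper describes for the real literature: a count “much greater” than the surrounding distribution would predict.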

“Biases linked to achieving statistical significance appear to have a measurable impact on the research publication process,” the researchers said.

….

Read the rest of the article at Research Digest

Post written by Christian Jarrett for the BPS Research Digest
