International Debate

A Primer for the Public: 10 Tips for Interpreting Research

October 8, 2014

Have you ever tried to interpret some new research to work out what the study means in the grand scheme of things?

Well, maybe you’re smart and didn’t make any mistakes – but more likely you’re like most humans and accidentally made one of these 10 stuff-ups.

1. Wait! That’s just one study!

You wouldn’t judge all old men based on just Rolf Harris or Nelson Mandela. And so neither should you judge any topic based on just one study.


This article by Will J. Grant and Rod Lamberts originally appeared at The Conversation, a Social Science Space partner site, under the title “The 10 stuff-ups we all make when interpreting research”

If you do it deliberately, it’s cherry-picking. If you do it by accident, it’s an example of the exception fallacy.

The well-worn and thoroughly discredited case of the measles, mumps and rubella (MMR) vaccine causing autism serves as a great example of both of these.

People who blindly accepted Andrew Wakefield’s (now retracted) study – when all the other evidence was to the contrary – fell afoul of the exception fallacy. People who selectively used it to oppose vaccination were cherry-picking.

2. Significant doesn’t mean important
Some effects might well be statistically significant, but so tiny as to be useless in practice.

Associations (like correlations) are great for falling foul of this, especially when studies have huge numbers of participants. Basically, if you have large numbers of participants in a study, significant associations tend to be plentiful, but not necessarily meaningful.


UNDERSTANDING RESEARCH

What do we actually mean by research, and how does it help inform our understanding of things? Understanding what’s being said in any new research can be challenging, and there are some common mistakes that people make. This article is part of a series on Understanding Research appearing at The Conversation (with some pieces reposted at Social Science Space).

Further reading:
Why research beats anecdote in our search for knowledge
Clearing up confusion between correlation and causation
Where’s the proof in science? There is none
Positives in negative results: when finding ‘nothing’ means something
The risks of blowing your own trumpet too soon on research
How to find the knowns and unknowns in any research
How myths and tabloids feed on anomalies in science


One example can be seen in a study of 22,000 people that found a significant (p<0.00001) association between people taking aspirin and a reduction in heart attacks, but the size of the result was minuscule.

The difference in the likelihood of heart attacks between those taking aspirin every day and those who weren’t was less than 1 percent. At this effect size – and considering the possible costs associated with taking aspirin – it is dubious whether it is worth taking at all.
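The tension between statistical significance and practical importance can be seen in a few lines of arithmetic. The sketch below uses invented numbers merely in the spirit of the aspirin example above (roughly 22,000 participants, an absolute risk difference under 1 percent); the per-arm counts are hypothetical, not figures from the actual study.

```python
import math

# Illustrative numbers only, in the spirit of the aspirin example above:
# ~22,000 participants split into two arms, absolute risk difference < 1%.
n = 11_000                 # participants per arm (assumed split)
attacks_aspirin = 104      # hypothetical heart attacks, aspirin arm
attacks_placebo = 189      # hypothetical heart attacks, placebo arm

p1 = attacks_aspirin / n
p2 = attacks_placebo / n
risk_difference = p2 - p1  # absolute risk reduction: under 1 percent

# Two-proportion z-test with a pooled proportion
pooled = (attacks_aspirin + attacks_placebo) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = risk_difference / se
p_value = math.erfc(z / math.sqrt(2))  # two-sided p-value

print(f"absolute risk difference: {risk_difference:.4f}")
print(f"z = {z:.2f}, p = {p_value:.1e}")
```

With a sample this large, a difference of less than one percentage point still produces a p-value far below 0.00001 – "significant" in the statistical sense while tiny in the practical one.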

3. And effect size doesn’t mean useful
We might have a treatment that lowers our risk of a condition by 50 percent. But if the risk of having that condition was already vanishingly low (say a lifetime risk of 0.002 percent), then reducing that might be a little pointless.

We can flip this around and use what is called Number Needed to Treat (NNT).

Under normal conditions, if two random people out of 100,000 would get that condition during their lifetime, you’d need all 100,000 to take the treatment to reduce that number to one.

4. Are you judging the extremes by the majority?
Biology and medical research are great for reminding us that not all trends are linear.

We all know that people with very high salt intakes have a greater risk of cardio-vascular disease than people with a moderate salt intake.

But hey – people with a very low salt intake may also have a high risk of cardio-vascular disease too.

The graph is U shaped, not just a line going straight up. The people at each end of the graph are probably doing different things.
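A toy example (with invented numbers, not real salt-intake data) shows why a straight-line summary can miss a U-shaped relationship entirely:

```python
import math

# A toy U-shaped relationship: risk is high at both very low and very high
# intake, lowest in the middle. Values are invented for illustration.
intakes = list(range(11))                # 0..10, a symmetric range of intakes
risks = [(x - 5) ** 2 for x in intakes]  # U-shaped: high at both extremes

# Pearson correlation, computed by hand
n = len(intakes)
mx = sum(intakes) / n
my = sum(risks) / n
cov = sum((x - mx) * (y - my) for x, y in zip(intakes, risks))
sx = math.sqrt(sum((x - mx) ** 2 for x in intakes))
sy = math.sqrt(sum((y - my) ** 2 for y in risks))
r = cov / (sx * sy)

print(f"linear correlation: {r:.3f}")  # ~0: the line sees no trend at all
```

The linear correlation comes out at essentially zero even though intake clearly matters – the two arms of the U cancel each other out.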

5. Did you maybe even want to find that effect?
Even without trying, we notice and give more credence to information that agrees with views we already hold. We are attuned to seeing and accepting things that confirm what we already know, think and believe.

There are numerous examples of this confirmation bias, but studies such as this reveal how disturbing the effect can be.

In this case, the more educated people believed a person to be, the lighter they (incorrectly) remembered that person’s skin was.

6. Were you tricked by sciencey snake oil?

A classic – The Turbo Encabulator.

You won’t be surprised to hear that sciencey-sounding stuff is seductive. Hey, even the advertisers like to use our words!

But this is a real effect that clouds our ability to interpret research.

In one study, non-experts found even bad psychological explanations of behaviour more convincing when they were associated with irrelevant neuroscience information. And if you add in a nice-and-shiny fMRI scan, look out!

7. Qualities aren’t quantities and quantities aren’t qualities
For some reason, numbers feel more objective than adjectivally-laden descriptions of things. Numbers seem rational, words seem irrational. But sometimes numbers can confuse an issue.

For example, we know people don’t enjoy waiting in long queues at the bank. If we want to find out how to improve this, we could be tempted to measure waiting periods and then strive to reduce that time.

But in reality you can only reduce the wait time so far. And a purely quantitative approach may miss other possibilities.

If you asked people to describe how waiting made them feel, you might discover it’s less about how long it takes, and more about how uncomfortable they are.

8. Models by definition are not perfect representations of reality
A common battle-line between climate change deniers and people who actually understand evidence is the effectiveness and representativeness of climate models.

But we can use much simpler models to look at this. Just take the classic model of an atom. It’s frequently represented as a nice stable nucleus in the middle of a number of neatly orbiting electrons.

While this doesn’t reflect how an atom actually looks, it serves to explain fundamental aspects of the way atoms and their sub-elements work.

This doesn’t mean people haven’t had misconceptions about atoms based on this simplified model. But these can be modified with further teaching, study and experience.

9. Context matters
The US president Harry Truman once whinged that his economists would give advice, then immediately contradict it with an “on the other hand” qualification.

Individual scientists – and scientific disciplines – might be great at providing advice from just one frame. But for any complex social, political or personal issue there are often multiple disciplines and multiple points of view to take into account.

To ponder this we can look at bike helmet laws. It’s hard to deny that if someone has a bike accident and hits their head, they’ll be better off if they’re wearing a helmet.

But if we are interested in whole-of-society health benefits, there is research suggesting that a subset of the population will choose not to cycle at all if they are legally required to wear a helmet.

Balance this against the number of accidents where a helmet actually makes a difference to the health outcome, and now helmet use may in fact be negatively impacting overall public health.

Valid, reliable research can find that helmet laws are both good and bad for health.

10. And just because it’s peer reviewed that doesn’t make it right
Peer review is held up as a gold standard in science (and other) research at the highest levels.

But even if we assume that the reviewers made no mistakes, that there were no biases in the publication policies, and that there was no outright deceit, an article appearing in a peer-reviewed publication just means the research is ready to be put out to the community of relevant experts for challenging, testing and refining.

It does not mean it’s perfect, complete or correct. Peer review is the beginning of a study’s active public life, not the culmination.

And finally …
Research is a human endeavor and as such is subject to all the wonders and horrors of any human endeavor.

Just like in any other aspect of our lives, in the end we have to make our own decisions. And sorry: even appropriate use of the world’s best study does not relieve us of this wonderful and terrible responsibility.

There will always be ambiguities that we have to wade through. So, as in any other human domain, do the best you can on your own, but if you get stuck, get some guidance directly from, or at least originally via, useful experts.

***
Will J. Grant owns shares in a science communication consultancy. He has previously received funding from the Department of Industry. Rod Lamberts has received funding from the Australian Research Council in the past. He also holds shares in a science facilitation consultancy.


Will Grant is a talker, writer, thinker and reader, based primarily at the Australian National Centre for the Public Awareness of Science (CPAS) at the Australian National University. His talking / writing / thinking / reading has focused mostly on the intersection of science, politics and society, and how this is changing in response to new technologies. Co-host of KindaThinky.com. Rod Lamberts is deputy director of CPAS, a founding partner of the Ångstrom Group, and a former national president of the Australian Science Communicators. He has been providing science communication consultation and evaluation advice for more than 15 years to organisations including UNESCO, the CSIRO, and ANU science and research bodies.
