
Study: Researchers’ choices could result in different conclusions from the same data

If you give hundreds of researchers the same data and the same hypotheses to test, they will reach the same conclusions, right?

Wrong, according to a recent study published in the journal BMC Biology.

Two hundred forty-six researchers in the fields of ecology and evolutionary biology — including two from Clemson University — worked in 174 teams to answer two different research questions based on the same unpublished data sets.

They came up with a strikingly variable range of answers, including some that were direct opposites of each other.

“I expected everyone to come up with kind of the same general conclusion, but I did not expect the amount of variation,” said Jared Elmore, a research assistant professor in the Clemson University Department of Forestry and Environmental Conservation and science coordinator for the National Bobwhite and Grassland Initiative.

But the widely varied results do not mean that all scientific findings are suspect.

“It does give you pause, but that’s why we have peer review to make sure there’s some agreement about how the data should be processed and how they should be analyzed,” said Casey Youngflesh, an assistant professor in the Department of Biological Sciences and the other Clemson faculty member involved in the study.

The first question in the study asked how the growth of blue tit chicks is influenced by competition with siblings in the nest. The other question asked researchers to determine whether the amount of grass cover affected the success and survival of seedlings in an effort to restore various species of Eucalyptus on agricultural land.

For the bird question, 64 of the analysis teams found that chicks grew more slowly if they had more siblings, but with a wide range of estimated effect sizes and levels of uncertainty. Meanwhile, five of the teams reported mixed results, and another five found no relationship between brood size and chick size at all.

The leaders of the study say the results reflect a growing recognition among scientists that the many choices researchers must make, such as which statistical methods to apply, can lead to divergent conclusions even when the different options are all reasonable.

Major implications

The potential for such substantial variability has major implications for how ecologists and other scientists analyze data. 

The paper describes several data analysis practices that researchers could adopt in response to this variability. For instance, researchers might present several different analyses of the same data to assess the similarity of outcomes across statistical models, or they might attempt more ambitious ‘multiverse’ analyses in which they generate many hundreds or thousands of analyses to explore how different choices influence outcomes. These options join an ecosystem of other proposals to promote the reliability of scientific research, many of which focus on improving transparency. 
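For illustration, here is a minimal sketch of what a small multiverse-style analysis might look like in Python. The data, variable names (brood_size, chick_mass and so on) and model specifications are hypothetical stand-ins, not the study's actual methods; the point is only to show how a single question can be fit under several defensible specifications and the resulting effect estimates compared side by side.

```python
# Minimal multiverse-analysis sketch: fit one question (does brood size
# affect chick mass?) under several reasonable model specifications and
# collect the effect estimates. All data and choices here are hypothetical.

import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "brood_size": rng.integers(2, 12, n),
    "hatch_day": rng.integers(100, 140, n),
    "parent_mass": rng.normal(11.0, 1.0, n),
})
# Simulate a modest negative effect of brood size on chick mass.
df["chick_mass"] = (
    10.0 - 0.15 * df["brood_size"]
    + 0.3 * df["parent_mass"]
    + rng.normal(0.0, 1.0, n)
)

# Each "universe" is one defensible combination of analytical choices:
# which covariates to adjust for, and whether to log-transform the outcome.
covariate_sets = ["", " + hatch_day", " + parent_mass",
                  " + hatch_day + parent_mass"]
outcomes = ["chick_mass", "np.log(chick_mass)"]

results = []
for outcome, covs in itertools.product(outcomes, covariate_sets):
    formula = f"{outcome} ~ brood_size{covs}"
    fit = smf.ols(formula, data=df).fit()
    ci = fit.conf_int().loc["brood_size"]
    results.append({
        "formula": formula,
        "effect": fit.params["brood_size"],
        "ci_low": ci[0],
        "ci_high": ci[1],
    })

# Comparing estimates across all eight universes shows how much the
# conclusion depends on choices that each seem reasonable in isolation.
print(pd.DataFrame(results))
```

Even in this toy example, the estimated effect of brood size shifts across the eight specifications; a full multiverse analysis scales the same idea to hundreds or thousands of combinations of analytical choices.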

Elmore said one common misconception in science is that everything needs to be novel.

“I think science is meant to be replicated. If you look at the whole purpose of publishing these things in peer-reviewed journals and with the amount of detail that we often publish our methods, statistical models and data sets, it’s meant so other people can go replicate those studies and try to come up with similar results,” he said.

Youngflesh said he believes there is a higher burden of proof when findings contradict what is currently known about a system.

“If you’re pretty sure that the world works this way and you just have one single piece of evidence that maybe isn’t all that strong, we’re not going to overturn this paradigm based on this one little piece of knowledge,” he said.

Youngflesh continued, “We have to talk about how much confidence we have in the things we’re saying. I think we do a pretty good job of not speaking in absolutes. Generally, science is not a binary — has an effect, doesn’t have an effect. It’s more, what is the strength of this effect and how much uncertainty is there.”

The authors — corresponding author Tim Parker and primary co-authors Elliott Gould, Hannah Fraser and Shinichi Nakagawa — say their findings will encourage researchers, institutions, funding agencies and journals to support initiatives aimed at improving research rigor, ultimately strengthening the reliability of scientific knowledge.

Detailed findings were published in the journal BMC Biology in an article titled, “Same data, different analysts: variation in effect sizes due to analytical decisions in ecology and evolutionary biology.”

This article incorporates information from a press release provided by the primary authors of the study.
