Can psychology fix its reproducibility problem?

An attempt to repeat 100 experiments published in top psychology journals has exposed a big problem for the field's credibility.

A bronze cast of 'The Thinker' by Auguste Rodin, 1880, is seen outside the gateway to the Rodin Museum in Philadelphia in 2004.

Jacqueline Larma/AP

August 30, 2015

A large group of researchers set out to repeat 100 experiments published by leading psychology journals to see how often they would get the same results.

The answer: Less than half the time.

That doesn't mean all those unconfirmed studies were wrong. But it's a stark reminder that a single study rarely provides definitive answers, and of why scientists often greet new findings by saying, "More research is needed."

"Any one study is not going to be the last word," said Brian Nosek, a psychology professor at the University of Virginia.

"Each individual study has some evidence. It contributes some information toward a conclusion. But the real conclusion, when you can say confidently that something is true or false, is based on an accumulation of evidence over many studies," said Nosek, who led the project.

And yes, he said at a press conference, "even this project itself is not ... a definitive word about reproducibility."

The work was carried out by an international team of more than 300 people and released Thursday by the journal Science. The project focused on psychology because its organizers came from that field. Researchers worked with the authors of the original studies in setting up the replication attempts.

Only about 40 percent of those attempts produced the original results.


The effort focused on 100 experiments reported during 2008 in any of three major psychology journals: Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition.

None of these experiments tested any treatments. They focused on basic research into how people think, remember, perceive their world, and interact with others. One explored why people are reluctant to tempt fate, for example.

Studies with stronger statistical evidence for their conclusions were more likely to be replicated than others, as were those with findings that were judged to be less surprising.

When a study's results were not replicated, there could be several explanations, Nosek said. The original study could be wrong. Or it could be right, and the repeat study overlooked a real effect just by chance. Or both studies could be correct, with conflicting conclusions because of differences in how they were carried out.

Project workers tried to minimize such differences, but matching an original study could be tricky. E.J. Masicampo of Wake Forest University in Winston-Salem, North Carolina, a co-author of the new study, said one of his own experiments was not confirmed by the project.

The study asked participants to make decisions that required significant mental effort. To create that situation, researchers asked undergrads to choose between off-campus apartments. That task wasn't as mentally challenging when it was tried again at a different campus, which evidently threw off the results of the experiment, he said.

Duane Wegener, a psychologist at Ohio State University who was not among the new study's authors, said a similar problem in reproducing the psychological setting for an experiment apparently explains why the project could not confirm one of his results.

Wegener said that the message of the new project's results for psychological researchers is unclear because the reasons for the various failures to confirm are not known.

Copyright 2015 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.