"Publication practice has to change"

A deluge of scientific publications is pushing the system to its limits. Studies are questioning the reproducibility of results. In this interview, neuropsychologist Lutz Jäncke and systems biologist Lawrence Rajendran talk about the crisis in the publication process and new solutions such as the Matters of Reproducibility platform.

Interview: Stefan Stöcklin

"These aren't just problems for us in psychology. They also affect other fields. But in some disciplines of psychology they're accentuated because the measured effects are so small," says Lutz Jäncke. (Picture: Frank Brüderli)


More than 5,000 scientific publications are produced every day. But many results can’t be reproduced. What’s the situation in your area of research?

Lawrence Rajendran: It’s true that the vast majority of researchers polled by Nature believe we’re in crisis. You could describe it as a reproducibility crisis, but actually it’s about the entire publication and scientific system. It’s putting scientists under pressure to publish in prestigious, high-impact journals. The result is studies that can’t be reproduced in their entirety, or can’t be reproduced at all.

Lutz Jäncke: I think it’s primarily a question of quantity. The quality of publications per se is no worse than it used to be. In this debate psychology is somewhat in the spotlight. That has to do with the phenomena and processes we investigate. I agree: In the future we’re going to need new approaches to analyzing and verifying data, and more collaboration.

An Open Science Collaboration study in 2015, which repeated work reported in 100 published psychology papers, made huge waves. Ninety-seven percent of the original publications reported significant results, but these were confirmed in only 36 percent of the replication attempts. What’s your take on that?

Jäncke: These aren’t just problems for us in psychology. They also affect other fields. But in some disciplines of psychology they’re accentuated because the measured effects are so small. We’re talking levels of significance and p-values (see box) at the limits of interpretability. There’s another point you also have to consider: The human being is very variable. In psychology we’re not measuring physically stable units; we’re measuring human characteristics and traits that are constantly changing. This makes it harder to measure stable data. Another factor is that our thoughts and feelings depend on surrounding conditions, which makes physically precise experiments virtually impossible. It’s a similar story in biology.

Rajendran: I know this Open Science Collaboration study and its corresponding author, social psychologist Brian Nosek, very well – he’s on the advisory board of our publication platform Science Matters. The fundamental problem highlighted by the result is that while 97 percent of the original studies examined apparently showed significant effects, only 36 percent of the replication studies confirmed the significant results.

What does this mean? The researchers evidently adjusted their data to get significant results. The reason is obvious: Without significance it’s hardly possible to publish a study. You can’t “sell” negative results and unconfirmed hypotheses; without significant effects, scientists have nothing to show for their efforts, even if the results could actually be interesting.

This pressure to publish is a problem particularly for young researchers, who are expected to publish several papers while working on their PhD thesis, for example. This leads to over-interpretation and so-called p-hacking. The study illustrates these dilemmas in exemplary fashion, and in my view proves that the publication system needs an overhaul.
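The effect of p-hacking can be made concrete with a small simulation (not from the interview; a hypothetical sketch). If a study measures many outcomes that are all pure noise and reports only the one with the smallest p-value, the false-positive rate rises far above the nominal 5 percent:

```python
import random
from statistics import NormalDist, mean, stdev
from math import sqrt

random.seed(1)
norm = NormalDist()

def p_value(sample):
    """Two-sided p-value for the null 'true mean = 0' (normal approximation)."""
    n = len(sample)
    z = mean(sample) / (stdev(sample) / sqrt(n))
    return 2 * (1 - norm.cdf(abs(z)))

trials, n, k = 2000, 30, 10  # k outcome measures per study, all pure noise

# Honest practice: one pre-registered outcome per study.
honest = sum(p_value([random.gauss(0, 1) for _ in range(n)]) < 0.05
             for _ in range(trials))

# p-hacking: measure k outcomes, report only the smallest p-value.
hacked = sum(min(p_value([random.gauss(0, 1) for _ in range(n)])
                 for _ in range(k)) < 0.05
             for _ in range(trials))

print(f"false-positive rate, one test: {honest / trials:.2%}")
print(f"false-positive rate, best of {k}: {hacked / trials:.2%}")
```

With ten noise-only outcomes per study, the chance of at least one "significant" result is roughly 1 − 0.95¹⁰ ≈ 40 percent, even though no real effect exists.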

Are these problems characteristic of psychology, or is a lack of reproducibility an issue for science in general?

Jäncke: It’s a problem found in all empirical sciences, although the exact natural sciences are somewhat less affected because physical measurements are more objective and less subject to influence than psychological experiments. In psychology the lack of reproducibility varies widely depending on the discipline.

Rajendran: It’s an issue we also face in the life sciences, in other words biochemistry and molecular biology. Cells and their biochemical processes are also variable. In these disciplines, researchers often work with specific cell lines from a laboratory animal, for example in Alzheimer’s research. Strictly speaking the findings only apply to one cell type, but there’s a great temptation to transfer the results to human cells. This is a different kind of reproducibility problem, and it compounds the problem of significance.

Is science suffering from a fixation on the p-value?

Jäncke: I would say so: The publication system focuses too closely on significance, and added to this is the glorification of p-values. This is something you also see, for example, with the imaging process used to measure brain activity. It would be better to take a step back from p-values and report the effects descriptively to stimulate repeat experiments.

Rajendran: The issue of uncertain p-values is too closely tied up with scientists’ careers. Research funding is only granted to projects that promise significant results. This leads to a temptation to over-interpret.

"You could describe it as a reproducibility crisis, but actually it's about the entire publication and scientific system," says Lawrence Rajendran. (Picture: Frank Brüderli)


How could the situation be improved?

Jäncke: Most researchers are aware of the things we’re talking about here, and there’s a rethink taking place. More and more people at research funding institutions are also questioning the current system, which is based on publication in high-impact journals and levels of significance. I have high hopes of the open science movement, in other words the disclosure of all data. As a result of this transparency we’ll see more experiments being verified and repeated. Basically it’s the same thing my former professor taught me: He advised publishing results without inferential statistical analysis, in other words descriptively, and discussing them with colleagues. I still think this is the right approach.

Rajendran: It’s true: We’re at a turning point, and many researchers have realized that publication practice has to change. The open science movement exemplifies this transformation. But the scientific system is inert and contradictory; there are no incentives to go through with the change. On the one hand, young researchers are told they should pay attention to reproducibility and not try to publish their work in high-impact journals whatever the cost. On the other hand, academic appointments depend much too much on publications in these top journals. This means that in reality it’s difficult to put the new principles into practice.

Jäncke: I totally agree. I’m now 60 and have been operating in this system for many years – a system that works according to a principle that was drummed into me very early in my research career: Publish or perish! My generation was expected, if not compelled, to publish. But that’s precisely what would have to change.

Lawrence Rajendran, you initiated the Science Matters project to offer alternatives to standard publication practice. How does the model work?

Rajendran: On the platform, researchers can publish and discuss individual experiments and observations. Our idea is to place experimental evidence on a more secure footing and present it to the community before it’s developed into a complete story. Our motto is no more stories: We present observations and data so that people can verify and develop them.

Is Science Matters also a platform where people can publish negative results and experiments that didn’t turn out as planned?

Rajendran: Of course. We also publish repeats or reproductions of studies, something that’s more or less impossible anywhere else. But as the Open Science Collaboration has shown, it’s incredibly important to critically question published data. In my lab we always verify the work of other authors before using it as the basis for further research.

You already have plans for another new publication portal dedicated to the issue of reproducibility.

Rajendran: Yes, that’s correct. In March we’ll be launching our next online journal, Matters of Reproducibility, in collaboration with the Center for Open Science. It will be entirely devoted to the themes of reproduction, statistics, and observations. It will be the very first journal devoted to the matter of reproducibility. It’s intended as a complement to Science Matters, which will continue to serve as a platform for publishing both positive and negative results and observations.

Jäncke: I think this project makes a lot of sense, but from a psychologist’s point of view, I’d like to add that there are limits to reproducibility. You can’t lose sight of the individuality of human beings. I myself am repeatedly astonished at just how individual our brain and our behavior are. This places natural limits on reproducibility experiments.

Rajendran: I don’t see any contradiction. Our efforts to find connecting neuropsychological mechanisms can only benefit if researchers are looking for common factors. That might also mean that we don’t find any because they don’t exist. And this is precisely the sort of negative result that would be published on our platform.

Is the University of Zurich supporting your new publication initiatives?

Rajendran: UZH pays the publication costs for researchers who publish on Science Matters. That comes to around 50,000 francs a year. We also receive a substantial amount from the Velux Foundation. But that’s not enough, so I also invest money of my own. We could use more money to operate the platforms.

What has been the scientific community’s response to your platform?

Rajendran: Our peers are open, and most of them have responded very positively. We’re constantly registering new publications on our platform, but there’s still potential.


The interviewees:

Lutz Jäncke, professor of neuropsychology
Lawrence Rajendran, professor of systems and cellular biology of neurodegenerative diseases


In most cases, verifying scientific hypotheses involves calculating p-values (probabilities). Typically researchers work with a null hypothesis representing the opposite of what they are actually hypothesizing. The captured data are then used to calculate a statistical test value. The p-value is the probability of obtaining the observed result, or a more extreme one, if the null hypothesis is true. A low p-value suggests that the null hypothesis should be rejected in favor of the researcher’s hypothesis. A p-value of 0.05 has become the established cut-off.
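The calculation described in the box can be sketched in a few lines of Python. The sample below is invented for illustration; the test statistic uses a normal approximation rather than the exact t-distribution:

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

# Hypothetical sample: measured differences for 20 participants.
sample = [3.1, -1.2, 4.5, 2.0, 0.8, 5.2, -0.5, 3.3, 1.9, 2.7,
          4.1, 0.3, 2.2, 3.8, -1.0, 1.5, 2.9, 3.6, 0.9, 2.4]

# Null hypothesis: the true mean difference is 0.
m, s, n = mean(sample), stdev(sample), len(sample)
z = m / (s / sqrt(n))  # standardized test statistic

# Two-sided p-value: probability of a result at least this extreme
# if the null hypothesis is true.
p = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p:.4f}")
```

If p falls below the conventional 0.05 cut-off, the null hypothesis is rejected; the criticism voiced in the interview is that this single threshold has come to dominate what gets published.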

Stefan Stöcklin, editor UZH News. English translation by Michael Craig.
