
Groundbreaking or Definitive? Journals Need to Pick One

December 31, 2011

Do our top journals need to rethink their missions of publishing research that is both groundbreaking and definitive? And as a part of that, do they — and we scientists — need to reconsider how we engage with the press and the public?

2011 was bookended by two extraordinary events for social and personality psychologists — events that have produced a lot of uncomfortable scrutiny from both within and outside the field. The year began with a paper on parapsychology (Bem, 2011) that presented what many scientists think are impossible results. And it ended with a high-profile case of a researcher, Diederik Stapel, who fabricated data and published perhaps dozens of fraudulent articles in peer-reviewed journals. Both of these incidents have led to a great deal of reflection and reevaluation of how we do science. This ongoing conversation has been taking place both online (for example Crocker, 2011; Roberts, 2011; Yarkoni, 2011) and in traditional journals (Simmons, Nelson, & Simonsohn, 2011). It has been an important discussion, and many of the resulting proposals have focused on how individual researchers can improve their methods, or on how editors and reviewers can better distinguish good work from bad.

But it is also vital to go beyond the practices of individual scientists and look critically at how we have structured our institutions. The institution I am particularly concerned about is journals. Journal publishers control the single largest incentive for most academic researchers and the major avenue for disseminating research to other scientists and to the broader public. As a result, they play a substantial role in how science gets done and ought to be the subject of continuing and close scrutiny.[1]

My concern here is with what I see as the impossible missions of some of our top journals. High-profile journals like Science, Nature, and Psychological Science try to be both groundbreaking and definitive. For example, Science, a publication of the American Association for the Advancement of Science, describes its mission as publishing findings that are both “novel” and “significantly advance scientific understanding” (AAAS, 2011). Psychological Science, published by the Association for Psychological Science, describes itself as “cutting-edge” and as publishing articles with interdisciplinary relevance and “general theoretical interest” (APS, 2011). I think it is worth reflecting on whether groundbreaking and definitive are compatible goals.

Groundbreaking means original, novel, new. Just as a literal groundbreaking means putting your shovel into as-yet undisturbed soil, groundbreaking research is work that presents ideas or findings that nobody has presented before. From the investigator’s perspective, groundbreaking research is a first finding of its kind, something that is unprecedented. And from an audience’s perspective, it is research that makes you say “wow.”

But that is a long way from definitive. In fact, in some key ways groundbreaking is the opposite of definitive. There is a lot of hard work to be done between scooping that first shovelful of dirt and completing a stable foundation. And the same goes for science (with the crucial difference that in science, you’re much more likely to discover along the way that you’ve started digging on a site that’s impossible to build on). “Definitive” means that there is a sufficient body of evidence to accept some conclusion with a high degree of confidence. And by the time that body of evidence builds up, the idea is no longer groundbreaking.

How do we get from groundbreaking to definitive? The key phrase is independent replication across multiple methods. (I for one would be very happy if this phrase eclipsed “correlation does not imply causation” in popularity, but that’s an argument for another time.) It is worth taking that phrase apart. Replication means that the study has been run again to determine whether the findings can be reproduced. Independent means that the replication has been conducted by a different set of researchers, who are less likely to share the biases, incentives, or errors of the original researchers. Multiple methods means that researchers reach the same underlying conclusion when they test it in different ways. (For example, if the conclusion is “suppressing emotion-expressive behavior causes difficulties getting close to others,” it should be possible to obtain supporting evidence in both laboratory experiments and longitudinal field studies.)

So more and more, I have been coming to the view that groundbreaking and definitive are incompatible. Popular depictions of science in movies and journalism often conflate them into a “eureka” moment when the scientist (usually wearing a white lab coat and surrounded by test tubes) makes a big discovery that changes everything. In this stock narrative the only hard work after that first discovery moment — if there is any hard work at all — is to convince the world of the brilliant idea. In the real world, that first discovery is just the beginning (and more often than not it is the beginning of a road to a dead end). A single study reported in a single paper cannot be both the start and the finish of an idea. A journal could, in principle, publish a mix of both kinds of papers: some groundbreaking new findings, some conclusive reviews of bodies of evidence. But that’s not what is usually happening. The way the high-profile journals carry out their missions, they expect most articles to do both.

None of this is to say that we don’t need journals for brand-new, groundbreaking findings. Nor does it contradict the many good ideas that have been floated recently about how scientists and journals could improve rigor and reporting. But some part of the tension between groundbreaking and definitive is irreducible. As long as a journal pursues a strategy of publishing “wow” studies, it will inevitably contain more unreplicable findings and unsupportable conclusions than equally rigorous but more “boring” journals. Groundbreaking will always be higher-risk. And definitive will be the territory of journals that publish meta-analyses and reviews, like Psychological Bulletin, or to a lesser extent (because definitive is a matter of degree) long-form journals that publish multi-study investigations.
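
To put rough numbers on that higher risk, here is a minimal sketch using the familiar positive-predictive-value arithmetic. The priors, power, and alpha below are purely illustrative assumptions, not figures from any paper cited here; the point is only that a journal selecting for surprising results is sampling hypotheses with a low prior probability of being true, so a larger share of its significant findings will fail to replicate.

```python
# A minimal sketch, with made-up numbers, of the base-rate logic behind
# "groundbreaking will always be higher-risk". PPV is the probability that
# a statistically significant published finding reflects a true effect.

def positive_predictive_value(prior, power=0.80, alpha=0.05):
    """Bayes' rule: P(true effect | significant result)."""
    true_positives = prior * power          # true effects that reach significance
    false_positives = (1 - prior) * alpha   # null effects that reach significance anyway
    return true_positives / (true_positives + false_positives)

# Hypothetical pools of hypotheses: a "wow" journal draws from long shots
# (say 1 in 10 true), a "boring" journal from well-trodden ground (1 in 2 true).
for label, prior in [("groundbreaking", 0.10), ("boring", 0.50)]:
    print(f"{label}: PPV = {positive_predictive_value(prior):.2f}")

# Output:
# groundbreaking: PPV = 0.64
# boring: PPV = 0.94
```

On those illustrative inputs, roughly a third of the surprising journal's positive findings would turn out to be false even if every study were run perfectly, which is the irreducible part of the tension described above.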

So back to the question that I posed at the top: should our journals — and we scientists — stop telling the world about our newest discoveries?

At the very least, findings that are new and exciting to specialists should not yet be presented to scientists in other disciplines or the broader public as settled facts. Most conclusions, even those in peer-reviewed papers in rigorous journals, should be regarded as tentative at best; but press releases and other public communication rarely convey that. Some journalists are catching on and becoming more critical of science journals, but in the new media landscape we cannot count on a few skeptical science journalists to be gatekeepers. We as individual scientists need to remain skeptical of our journals, and to communicate that skepticism better. Our standard response to a paper in Science, Nature, or Psychological Science should be “wow, that’ll be really interesting if it replicates.” And in our teaching and our engagement with the press and public, we need to make clear why that is the most enthusiastic response we can justify.

But beyond individual efforts, our institutions, especially our journal publishers and the professional associations that sponsor them, have a lot of power to change the conversation: by clarifying their missions, by tempering their messages, and perhaps by bringing science to the press with less frequency but greater confidence. In the short term, the incentives work against them doing that. Who wants to be the publisher of the Journal of Things that Might Be True? But if 2011 has shown us anything, it is that in science, the facts eventually catch up with us. If we want to keep the trust of the public and of each other, we need to be mindful of that.

Note

[1] Along these lines, I have argued elsewhere that journals and electronic databases need to do a better job of removing retracted articles from circulation, and that after a journal publishes a study it should be required to publish and track direct replication attempts, both to encourage such studies and to hold journals more responsible for what they publish.

References

American Association for the Advancement of Science. (2011). General information for authors. Retrieved from http://www.sciencemag.org/site/feature/contribinfo/prep/gen_info.xhtml.

Association for Psychological Science. (2011). Submission guidelines. Retrieved from http://www.psychologicalscience.org/index.php/publications/journals/psychological_science/ps-submissions.

Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100, 407-425.

Crocker, J. (2011). The fraud among us, or within us? Personality and Social Psychology Connections. Retrieved from http://spsptalks.wordpress.com.

Roberts, B. W. (2011). Personality psychology has a serious problem (and so do many other areas of psychology). P: The Online Newsletter for Personality Science. Retrieved from http://www.personality-arp.org/newsletter06/problem.html.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.

Yarkoni, T. (2011). The psychology of parapsychology, or why good researchers publishing good articles in good journals can still get it totally wrong. [citation needed]. Retrieved from http://www.talyarkoni.org/blog/2011/01/10/the-psychology-of-parapsychology-or-why-good-researchers-publishing-good-articles-in-good-journals-can-still-get-it-totally-wrong/.
