Earliest proposal for a new registered report format?
Back in March of 2012, about a year before we launched registered replication reports at Perspectives on Psychological Science and Chris Chambers and crew launched registered reports at Cortex, Alex Holcombe and I had been discussing ways to increase the incentives for replication studies. Those discussions, which took place on the now-defunct Google+ platform, led to plans for a new journal that soon after morphed into the registered replication reports at Perspectives.
Yesterday, Alex uncovered what I think is my first public post about the idea, dated 14 March 2012. I've copied it in full below, editing only to remove now-broken Google+ tags/links.
The post describes the value of what are now called "registered reports," detailing what form such articles would take, how they would be reviewed, and how the new format would improve the publishing process and incentives (at least for replication studies). Although the post was specifically focused on replication studies, most of the elements it describes are now a standard part of the registered report model.
If you know of earlier proposals for adoption of registered reports as an article type, let me know. It would be nice to know the full history of this format.
Originally posted to Google+ on 14 March 2012
Outlets to publicize replication attempts
Yesterday, Alex Holcombe started a thread discussing how best to encourage people to post replication attempts to psychfiledrawer.org.
PsychFileDrawer.org is a great site where you can post the details of your successful and failed replication attempts of other studies. Alex's question: If you had a little bit of money to encourage people to post, how would you use it? Lots of interesting comments there.
After some discussions of this and related issues with faculty and students at Illinois, I've been wondering whether a new type of journal might be successful. Below is an idea I posted (in slightly modified form) to Alex Holcombe's thread. What do you think?
------
I know that most "null results" journals generally haven't been successful. I wonder, though, whether an open access journal that published both replication successes and failures might be. (Note: Journals like PLoS go part of the way toward what I'm thinking.)
Here's my idea: Researchers submit the intro (extremely short -- no need to review the literature) and method section for review, along with an analysis plan that specifies details like the sample size, assumed effect size, methods for eliminating outliers, a priori power, etc. They would not submit results. Only the intro, method, and analysis plan would undergo peer review. Once the replication plan passed peer review, the results would be published regardless of outcome. But, in order to be published, the method and results would have to follow the pre-approved plan exactly.
Here are the benefits of this approach:
1) It would increase the incentives for people to do replications -- it could result in an actual journal article, so it might be worthwhile as an undergraduate thesis project or a grad student project.
2) It would encounter less resistance from the authors of the original publication during the review process (a major problem when publishing failed replications) -- their goal would be to verify that the methods are acceptable to them given what they had done originally. If they thought the method and analysis plan acceptable in advance, they wouldn't have grounds to object if the result didn't support them (and they should be excited if it did).
3) It would encourage direct replications rather than conceptual replications that differ in both method and analysis from the original.
4) It would lead to more details from original authors about their methods during the review process, avoiding the inevitable complaints that surface when trying to publish a failed replication in a traditional journal (the incentive for the original authors in that case is to highlight any method difference and claim it to be the reason for the failure).
5) The submission and review process could be relatively quick as well, given that there wouldn't be lengthy reviews and the original authors could always be reviewers. I would favor all reviews being signed so that there can be no objections later.
6) It's possible that such a journal would publish a lot of replication attempts for the same paper, but that's okay -- we'd get a better cumulative effect size estimate that way.
7) The end result could be posted to sites like PsychFileDrawer as well, making meta-analysis of the size of an effect possible (and more accurate).
What do you think? Any academic society publishers out there think that this might be a viable model?