I will occasionally post information about Replication Reports both here and on my Google+ page. If you are interested in participating in such projects or just in learning more about them, check back often. Here are a few clarifications:
Sample size:
The sample size stated in the protocol is the minimum required. We strongly encourage the use of larger samples if at all possible. Although the reports will not focus on whether individual studies reject the null hypothesis (we're not tallying succeed/fail decisions based on p<.05), larger samples give a more precise estimate of the true underlying effect. The larger the sample, the narrower the confidence interval around your effect size estimate, and the more precise the meta-analytic effect size estimate across studies will be. So, please use as large a sample as is practical, and specify your proposed sample size in the Secondary Replication Proposal Form when you submit it.
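To make the precision point concrete, here is a minimal Python sketch (not part of any protocol) using the standard large-sample approximation for the variance of Cohen's d to show how the 95% confidence interval narrows as per-group sample size grows. The effect size of d = 0.4 and the sample sizes are purely illustrative values, not figures from any Replication Report.

```python
import math

def cohens_d_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d, using the common
    large-sample variance formula:
    var(d) = (n1 + n2)/(n1 * n2) + d**2 / (2 * (n1 + n2))."""
    var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    half_width = z * math.sqrt(var)
    return d - half_width, d + half_width

# Doubling the per-group n repeatedly: CI width shrinks roughly with 1/sqrt(n).
for n in (30, 60, 120, 240):
    lo, hi = cohens_d_ci(0.4, n, n)  # d = 0.4 is a hypothetical effect size
    print(f"n = {n:3d} per group: 95% CI = [{lo:.2f}, {hi:.2f}], width = {hi - lo:.2f}")
```

Quadrupling the sample roughly halves the interval, which is why estimation precision, rather than a tally of p<.05 outcomes, drives the sample-size recommendation.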
Necessary deviations from the protocol:
Please specify any necessary deviations from the protocol in your submission. The editors will review those deviations to make sure they do not substantively change the protocol. For example, several people have noted that the control task (naming states and capitals) might not work for subjects outside the USA. We have discussed this issue with Jonathan Schooler, and we have agreed that labs located outside the United States may use a countries/capitals alternative if necessary. Note that if a deviation from the protocol would mean that the study is not a direct replication, we will not be able to approve it. These must be direct replications, not extensions of the result or conceptual replications that differ in important respects from the original. Please do not justify deviations by noting that the study will show something new and different from the original; that is not the goal.
Could it be done better:
The goal of Registered Replication Reports is different from that of a traditional journal article in that we are focusing on direct replications of an effect. No study is perfect, and any study can be improved. We hope to choose studies for Replication Reports that do not have fundamentally flawed designs, even if they have quirks that might not optimally test a theoretical question. We might consider improvements to the measurement of the dependent variable, but not if they change the effect being measured. For example, we would consider computerized presentation for a study that was originally conducted using slides or paper, but only if the presentation did not change the nature of the dependent measure. More precise measurement of the same dependent measure (e.g., computerized timing rather than hand-timing) will generally be fine. Similarly, we would permit computerized presentation using E-Prime even if the original study was conducted using MATLAB. The guiding principle is whether the change fundamentally alters what is being measured. All studies in a Replication Report should be measuring the same outcome.