Supplementary material from "The Reliability of Replications: A Study in Computational Reproductions"
Posted on 2025-02-03 - 09:42
This study investigates researcher variability in computational reproduction, a setting where it is least expected. Eighty-five independent teams attempted to computationally reproduce the results of an original study of policy preferences and immigration. Replication teams were randomly assigned to a ‘transparent group’ that received the original study materials and code, or an ‘opaque group’ that received only a description of the methods and results, without code. The transparent group mostly verified the original results (95.7% reproduced the same sign and p-value cutoff), whereas the opaque group was less successful (89.3%). Exact numerical reproductions to the second decimal place were less common (76.9% and 48.1%, respectively). Qualitative investigation of the workflows revealed many sources of error, including mistakes and procedural variations. Even after curating out mistakes, we find that only the transparent group was reliably successful. Our findings imply a need for transparency, but also for more than that: institutional checks and a reduction in the subjective difficulty researchers face when “doing reproduction” would help, implying a need for better training. We also suggest increased awareness of complexity in the research process and ‘push button’ replications.
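To make the two success criteria concrete, the sketch below shows one way they might be operationalized. This is not the authors' code; the function names, inputs, and the 0.05 significance cutoff are illustrative assumptions, since the abstract reports only a "p-value cutoff" without specifying it.

```python
# Minimal sketch of the two reproduction criteria described above.
# Hypothetical inputs: an original estimate and a replicator's estimate,
# each given as a coefficient and its p-value. Not the authors' code.

ALPHA = 0.05  # assumed significance cutoff; the paper reports only "p-value cutoff"

def same_sign_and_significance(orig_coef, orig_p, rep_coef, rep_p, alpha=ALPHA):
    """Looser criterion: the reproduced estimate has the same sign and falls
    on the same side of the significance cutoff as the original."""
    same_sign = (orig_coef > 0) == (rep_coef > 0)
    same_sig = (orig_p < alpha) == (rep_p < alpha)
    return same_sign and same_sig

def exact_to_second_decimal(orig_coef, rep_coef):
    """Stricter criterion: the two estimates agree when rounded to two
    decimal places."""
    return round(orig_coef, 2) == round(rep_coef, 2)

if __name__ == "__main__":
    # Hypothetical example: original beta = 0.134 (p = 0.021),
    # reproduced beta = 0.121 (p = 0.030).
    print(same_sign_and_significance(0.134, 0.021, 0.121, 0.030))  # True
    print(exact_to_second_decimal(0.134, 0.121))                   # False (0.13 vs 0.12)
```

Under this reading, a reproduction can count as a success on the sign-and-significance criterion while still failing the stricter second-decimal match, which is consistent with the gap between the two reported rates.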
CITE THIS COLLECTION
Breznau, Nate; Wuttke, Alexander; Rinke, Eike Mark; Adem, Muna; Adriaans, Jule; Akdeniz, Esra; et al. (2025). Supplementary material from "The Reliability of Replications: A Study in Computational Reproductions". The Royal Society. Collection. https://doi.org/10.6084/m9.figshare.c.7655134.v1