The Royal Society

Supplementary material from "The Reliability of Replications: A Study in Computational Reproductions"

Posted on 2025-02-03 - 09:42
This study investigates researcher variability in computational reproduction, a setting where it is least expected. Eighty-five independent teams attempted computational replication of results from an original study of policy preferences and immigration. Replication teams were randomly assigned either to a ‘transparent group’ receiving the original study materials and code, or to an ‘opaque group’ receiving only a description of the methods and results, with no code. The transparent group mostly verified the original results (95.7% matched the sign and p-value cutoff), while the opaque group had less success (89.3%). Exact numerical reproduction to the second decimal place was less common (76.9% and 48.1%, respectively). Qualitative investigation of the workflows revealed many causes of error, including mistakes and procedural variations. Even after curating out mistakes, we still find that only the transparent group was reliably successful. Our findings imply a need for transparency, but also for more than that. Institutional checks and reducing the subjective difficulty researchers face when “doing reproduction” would help, implying a need for better training. We also suggest increased awareness of complexity in the research process and support for ‘push button’ replications.




Royal Society Open Science

AUTHORS (99)

Nate Breznau
Alexander Wuttke
Eike Mark Rinke
Muna Adem
Jule Adriaans
Esra Akdeniz
Amalia Alvarez-Benjumea
Henrik Kenneth Bent Axel Andersen
Daniel Auer
Flavio Azevedo
Oke Bahnsen
Ling Bai
Dave Balzer
Paul Cornelius Bauer
Gerrit Bauer
Markus Baumann
Sharon Baute
Verena Benoit
Julian Bernauer
Carl Berning