Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/42275
Title: Same data, different conclusions: Radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis
Authors: Schweinsberg, Martin
Feldman, Michael
Staub, Nicola
Van Den Akker, Olmo
Van Aert, Robbie
Van Assen, Marcel
Liu, Yang
Althoff, Tim
Heer, Jeffrey
Kale, Alex
Mohamed, Zainab
Amireh, Hashem
Venkatesh Prasad, Vaishali
Bernstein, Abraham
Robinson, Emily
Snellman, Kaisa
Sommer, S. Amy
Otner, Sarah
Robinson, David
Madan, Nikhil
Silberzahn, Raphael
Goldstein, Pavel
Tierney, Warren
Murase, Toshio
Mandl, Benjamin
Viganola, Domenico
Strobl, Carolin
Schaumans, Catherine
Kelchtermans, Stijn
Naseeb, Chan
Garrison, S. Mason
Yarkoni, Tal
Chan, C. Richard
Adie, Prestone
Alaburda, Paulius
Albers, Casper
Alspaugh, Sara
Alstott, Jeff
Nelson, Andrew
Ariño De La Rubia, Eduardo
Arzi, Adbi
Bahník, Štěpán
Baik, Jason
Winther Balling, Laura
Banker, Sachin
Baranger, David A. A.
Barr, Dale
Barros-Rivera, Brenda
Bauer, Matt
Blaise, Enuh
Boelen, Lisa
Bohle Carbonell, Katerina
Briers, Robert
Burkhard, Oliver
Canela, Miguel-Angel
Castrillo, Laura
Catlett, Timothy
Chen, Olivia
Clark, Michael
Cohn, Brent
Coppock, Alex
Cugueró-Escofet, Natàlia
Curran, Paul
Cyrus-Lai, Wilson
Dai, David
Valentino Dalla Riva, Giulio
Danielsson, Henrik
Russo, Rosaria
De Silva, Niko
Derungs, Curdin
Dondelinger, Frank
Duarte De Souza, Carolina
Tyson Dube, B
Dubova, Marina
Mark Dunn, Ben
Edelsbrunner, Peter Adriaan
Finley, Sara
Fox, Nick
Gnambs, Timo
Gong, Yuanyuan
Grand, Erin
Greenawalt, Brandon
Han, Dan
Hanel, Paul
Hong, Antony
Hood, David
Hsueh, Justin
Huang, Lilian
Hui, Kent
Hultman, Keith
Javaid, Azka
Ji Jiang, Lily
Jong, Jonathan
Kamdar, Jash
Kane, David
Kappler, Gregor
Kaszubowski, Erikson
Kavanagh, Christopher
Khabsa, Madian
Kleinberg, Bennett
Kouros, Jens
Krause, Heather
Krypotos, Angelos-Miltiadis
Lavbič, Dejan
Ling Lee, Rui
Leffel, Timothy
Yang Lim, Wei
Liverani, Silvia
Loh, Bianca
Lønsmann, Dorte
Wei Low, Jia
Lu, Alton
Macdonald, Kyle
Madan, Christopher
Hjorth Madsen, Lasse
Maimone, Christina
Mangold, Alexandra
Marshall, Adrienne
Ester Matskewich, Helena
Mavon, Kimia
McLain, Katherine
McNamara, Amelia
McNeill, Mhairi
Mertens, Ulf
Miller, David
Moore, Ben
Moore, Andrew
Nantz, Eric
Nasrullah, Ziauddin
Nejkovic, Valentina
Nell, Colleen
Arthur Nelson, Andrew
Nilsonne, Gustav
Nolan, Rory
O'Brien, Christopher
O'Neill, Patrick
O'Shea, Kieran
Olita, Toto
Otterbacher, Jahna
Palsetia, Diana
Pereira, Bianca
Pozdniakov, Ivan
Protzko, John
Reyt, Jean-Nicolas
Riddle, Travis
Ridhwan Omar Ali, Amal (Akmal)
Ropovik, Ivan
Rosenberg, Joshua
Rothen, Stephane
Schulte-Mecklenbeck, Michael
Sharma, Nirek
Shotwell, Gordon
Skarzynski, Martin
Stedden, William
Stodden, Victoria
Stoffel, Martin
Stoltzman, Scott
Subbaiah, Subashini
Tatman, Rachael
Thibodeau, Paul
Tomkins, Sabina
Valdivia, Ana
Druijff-Van De Woestijne, Gerrieke
Viana, Laura
Villesèche, Florence
Wadsworth, W. Duncan
Wanders, Florian
Watts, Krista
Wells, Jason
Whelpley, Christopher
Won, Andy
Wu, Lawrence
Yip, Arthur
Youngflesh, Casey
Yu, Ju-Chi
Zandian, Arash
Zhang, Leilei
Zibman, Chava
Uhlmann, Eric Luis
Issue Date: 2021
Publisher: ACADEMIC PRESS INC ELSEVIER SCIENCE
Source: ORGANIZATIONAL BEHAVIOR AND HUMAN DECISION PROCESSES, 165, p. 228-249
Abstract: In this crowdsourced initiative, independent analysts used the same dataset to test two hypotheses regarding the effects of scientists' gender and professional status on verbosity during group meetings. Not only the analytic approach but also the operationalizations of key variables were left unconstrained and up to individual analysts. For instance, analysts could choose to operationalize status as job title, institutional ranking, citation counts, or some combination. To maximize transparency regarding the process by which analytic choices are made, the analysts used a platform we developed called DataExplained to justify both preferred and rejected analytic paths in real time. Analyses lacking sufficient detail, reproducible code, or with statistical errors were excluded, resulting in 29 analyses in the final sample. Researchers reported radically different analyses and dispersed empirical outcomes, in a number of cases obtaining significant effects in opposite directions for the same research question. A Boba multiverse analysis demonstrates that decisions about how to operationalize variables explain variability in outcomes above and beyond statistical choices (e.g., covariates). Subjective researcher decisions play a critical role in driving the reported empirical results, underscoring the need for open data, systematic robustness checks, and transparency regarding both analytic paths taken and not taken. Implications for organizations and leaders, whose decision making relies in part on scientific findings, consulting reports, and internal analyses by data scientists, are discussed.
Keywords: Crowdsourcing data analysis; Scientific transparency; Research reliability; Scientific robustness; Researcher degrees of freedom; Analysis-contingent results
Document URI: http://hdl.handle.net/1942/42275
ISSN: 0749-5978
e-ISSN: 1095-9920
DOI: https://doi.org/10.1016/j.obhdp.2021.02.003
ISI #: WOS:000674429500016
Category: A1
Type: Journal Contribution
Appears in Collections:Research publications


Web of Science™ citations: 49 (checked on May 2, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.