• Friday, October 30th 2020, 16:00-17:00 UK time (Other timezones)
  • General participation info   |   Participate online
  • Phone dial-in (Access Code: 731-636-357):
    United States (Toll Free): 1 877 309 2073  |  United States: +1 (571) 317-3129  |  Australia (Toll Free): 1 800 193 385  |  Australia: +61 2 8355 1020  |  Austria (Toll Free): 0 800 202148  |  Belgium (Toll Free): 0 800 78884  |  Canada (Toll Free): 1 888 455 1389  |  Denmark (Toll Free): 8090 1924  |  France (Toll Free): 0 805 541 047  |  Germany (Toll Free): 0 800 184 4222  |  Greece (Toll Free): 00 800 4414 3838  |  Hungary (Toll Free): (06) 80 986 255  |  Iceland (Toll Free): 800 9869  |  Ireland (Toll Free): 1 800 946 538  |  Israel (Toll Free): 1 809 454 830  |  Italy (Toll Free): 800 793887  |  Japan (Toll Free): 0 120 663 800  |  Luxembourg (Toll Free): 800 22104  |  Netherlands (Toll Free): 0 800 020 0182  |  New Zealand (Toll Free): 0 800 47 0011  |  Norway (Toll Free): 800 69 046  |  Poland (Toll Free): 00 800 1213979  |  Portugal (Toll Free): 800 819 575  |  Spain (Toll Free): 800 900 582  |  Sweden (Toll Free): 0 200 330 905  |  Switzerland (Toll Free): 0 800 740 393  |  United Kingdom (Toll Free): 0 800 169 0432

Reproducibility, the ability to replicate scientific findings, is a prerequisite for scientific discovery and clinical utility. Troublingly, we are in the midst of a reproducibility crisis. A key to reproducibility is that multiple measurements of the same item (e.g., an experimental sample or a clinical participant) under fixed experimental constraints are relatively similar to one another. We demonstrate that existing reproducibility statistics, such as the intra-class correlation coefficient and fingerprinting, are not valid measures of reproducibility, in that they can provide unreasonably low or high results, even without model misspecification. We therefore propose a novel statistic, discriminability, which quantifies the degree to which an individual’s samples are relatively similar to one another, without restricting the data to be univariate, Gaussian, or even Euclidean. Using this statistic, we introduce the possibility of optimizing experimental design via increasing discriminability and prove that optimizing discriminability improves performance bounds in subsequent inference tasks. In extensive simulated and real datasets (focusing on brain imaging and demonstrating on genomics), only optimizing data discriminability improves performance on all subsequent inference tasks for each dataset. We therefore suggest that designing experiments and analyses to optimize discriminability may be a crucial step in solving the reproducibility crisis, and more generally, mitigating accidental measurement error.
Pre-print: https://www.biorxiv.org/content/10.1101/802629v6
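
For readers who want to try the idea before the talk, the sketch below illustrates the sample discriminability statistic as described in the abstract: the fraction of times two measurements of the same subject are closer to each other than to measurements of other subjects. This is a minimal NumPy illustration based only on that description, not the authors' reference implementation accompanying the pre-print; the function name, the Euclidean distance choice, and the strict (no half-credit) tie handling are simplifying assumptions of this sketch.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def sample_discriminability(X, subject_ids):
        """Fraction of times a within-subject distance is smaller than a
        between-subject distance, averaged over all within-subject pairs.

        X           : (N, p) array of N measurements (rows) with p features.
        subject_ids : length-N array of subject labels, one per row of X.
        """
        D = squareform(pdist(X))      # N x N Euclidean distance matrix (assumed metric)
        ids = np.asarray(subject_ids)
        N = len(ids)

        hits, total = 0, 0
        for i in range(N):
            same = ids == ids[i]
            same[i] = False           # exclude the self-distance
            diff = ids != ids[i]      # measurements from all other subjects
            for j in np.where(same)[0]:
                d_within = D[i, j]
                # count between-subject distances exceeding this within-subject one
                hits += np.sum(D[i, diff] > d_within)
                total += diff.sum()
        return hits / total

    # toy check: 10 subjects, 2 noisy repeat measurements each
    rng = np.random.default_rng(0)
    truth = rng.normal(size=(10, 5))
    X = np.repeat(truth, 2, axis=0) + 0.1 * rng.normal(size=(20, 5))
    ids = np.repeat(np.arange(10), 2)
    print(sample_discriminability(X, ids))   # near 1 when repeats are reliable

With low measurement noise relative to between-subject differences, the statistic approaches 1; as noise grows, it falls toward chance, which is the intuition behind optimizing experimental pipelines for discriminability.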

Joshua T. Vogelstein, PhD
Assistant Professor
Institute for Computational Medicine
Center for Imaging Science
Institute for Data Intensive Engineering and Sciences
Johns Hopkins University

Joshua Vogelstein – Eliminating accidental deviations to minimize generalization error: applications in connectomics and genomics