The Stanford Center for Reproducible Neuroscience was founded in 2015, with the goal of harnessing high-performance computing to make neuroscience research more reliable.
In recent years there has been increasing concern about the reproducibility of scientific results. Because scientific research represents a major public investment and is the basis for many decisions that we make in medicine and society, it is essential that we can trust the results. Our goal is to provide researchers with tools to do better science. Our starting point is in the field of neuroimaging, because that’s the domain where our expertise lies.
Our center has three overall aims:
- Provide researchers with easily accessible tools to analyze their data with a focus on the reproducibility of the results. To do this we will leverage high-performance computing resources, allowing researchers to understand how their results vary across different analysis approaches and different subsets of their data.
- Provide researchers with a way to easily share their data once the research is published. We have more than five years of experience sharing fMRI data through the OpenfMRI and Neurovault projects. We believe that data sharing is essential to scientific transparency and reproducibility, and the work of our center will extend the OpenfMRI project into a complete online data analysis and sharing platform.
- Provide researchers with a way to transparently share their analysis workflows. Experimental results cannot be interpreted without a deep understanding of what happened to the data, so we will expose the details of each analysis (pipelines, parameters, software versions, etc.) to make future replications easier.
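To make the first aim concrete, checking how a result varies across subsets of the data can be as simple as re-running the same analysis on many random subsamples and looking at the spread. The sketch below is purely illustrative (the `analysis` function, subset fraction, and names are assumptions, not the center's actual pipeline):

```python
# Hypothetical sketch: estimate how a simple analysis result varies
# across random subsets of the data. The "analysis" here is a stand-in;
# a real fMRI pipeline would go in its place.
import random
import statistics

def analysis(sample):
    """Stand-in for a real analysis: here, just the sample mean."""
    return statistics.mean(sample)

def subset_variability(data, n_subsets=100, frac=0.8, seed=0):
    """Re-run the analysis on random subsets and report the spread."""
    rng = random.Random(seed)
    k = int(len(data) * frac)
    results = [analysis(rng.sample(data, k)) for _ in range(n_subsets)]
    return statistics.mean(results), statistics.stdev(results)

# Toy dataset: 200 draws from a standard normal distribution.
rng_data = random.Random(42)
data = [rng_data.gauss(0, 1) for _ in range(200)]
mean_estimate, spread = subset_variability(data)
```

A small spread across subsets suggests the result is stable; a large one flags a finding that depends heavily on which data were included. High-performance computing simply lets this kind of check scale to many pipelines and many subsamples at once.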
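For the third aim, the details that make a replication possible can be captured in a machine-readable provenance record attached to each analysis. The field names below are illustrative assumptions, not a published schema:

```python
# Hypothetical sketch of a provenance record for one analysis run;
# the field names and values are illustrative only.
import json
import platform
import sys

record = {
    "pipeline": "preprocessing-v1",          # assumed pipeline name
    "parameters": {"smoothing_fwhm_mm": 6},  # assumed analysis parameter
    "software": {
        "python": sys.version.split()[0],    # interpreter version in use
        "os": platform.system(),             # operating system name
    },
}

# Serialize deterministically so records can be diffed and archived.
serialized = json.dumps(record, indent=2, sort_keys=True)
```

Storing such a record alongside the published results means a future researcher can see exactly which pipeline, parameters, and software versions produced them.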
In the coming year we will be developing a new analysis platform, which we hope to unveil in early 2016. If you would like to stay apprised of our progress you can join our mailing list or follow us on Twitter @openfmri.