Response to Next-Generation Data Science Challenges in Health and Biomedicine RFI

November 1, 2017

Text of the Request for Information call: https://grants.nih.gov/grants/guide/notice-files/NOT-LM-17-006.html

Recent years have proven that two components are essential for artificial intelligence breakthroughs: large, rich datasets and advanced algorithms. Advances in machine vision (especially object recognition) are a great example of this. The substantial increase in the accuracy of those systems was only possible due to the availability of […]

OpenNeuro App Highlights: MAGeT-Brain

October 2, 2017

This is the first episode in a series of blog posts highlighting image analysis apps available on the OpenNeuro.org platform. This piece was contributed by Gabriel A. Devenyi and Mallar Chakravarty. New to the OpenNeuro platform is the automatic structural segmentation pipeline MAGeT-Brain. The pipeline comes with 5 expertly segmented atlas/label combinations at 0.3 mm isotropic resolution, […]

Announcing the OpenNeuro platform – Open and Reproducible Science as a Service

June 26, 2017

We are pleased to announce that the OpenNeuro platform is now available to users at http://www.openneuro.org. This platform represents the culmination of more than two years of work by the members of our center, working closely with Squishymedia and supported by the Laura and John Arnold Foundation. Here we would like to provide an overview […]

Accepted projects for the 2nd CRN Coding Sprint

June 6, 2017

A few weeks ago we made an open call for applications to the 2nd CRN Coding Sprint. We received an overwhelming number of excellent applications. So many of the submitted projects exceeded our expectations that we decided to sponsor more participants than initially planned. Here is the final list of accepted projects: Automate creation of BIDS-MEG […]

2nd Annual CRN Coding Sprint – open call for applications

April 17, 2017

The Stanford Center for Reproducible Neuroscience is proud to announce an open call for applications to the 2nd Annual Coding Sprint. Following the success of last year’s sprint, this year we will focus on three topics: adding support for producing compatible outputs (BIDS Derivatives) to existing BIDS Apps for preprocessing; containerizing and adding support for […]

Should we rename OpenfMRI? Request for community input

August 17, 2016

Over the last year we have been busy building a new platform that will ultimately serve as the new basis for the OpenfMRI project, supporting both data analysis and sharing. One major change with the new platform is that we will support processing and sharing of datasets that do not include fMRI (such as structural […]

Report from the first CRN coding sprint

August 14, 2016

Two weeks ago (August 1st–4th, 2016) we hosted a coding sprint at Stanford aimed at making neuroimaging data processing and analysis tools more portable and accessible. We invited an international group representing many of the leading data processing pipelines (such as SPM, FSL, BROCCOLI, MRTrix, NIAK, C-PAC, Nipype, OPPNI aka NPAIRS, hyperalignment, nilearn, mindboggle […]

Coding sprint for a new neuroimaging data processing platform

April 6, 2016

The CRN mission is to make the best neuroimaging methods easily available to researchers and, at the same time, to incentivize them to share their data. To achieve this we have built an infrastructure that makes uploading data easy (thanks to the new BIDS standard). Once the data are uploaded, researchers can run preprocessing and analysis […]
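For context, BIDS (the Brain Imaging Data Structure) standardizes how a neuroimaging dataset is laid out on disk, which is what makes automated upload and processing tractable. Below is a minimal sketch of that layout, written as a small Python script; the subject ID, task name, and file set are illustrative only, and the BIDS specification remains the authoritative reference.

    # Sketch of a minimal BIDS-style dataset layout (illustrative only;
    # consult the BIDS specification for the authoritative rules).
    from pathlib import Path

    root = Path("my_bids_dataset")  # hypothetical dataset name
    files = [
        "dataset_description.json",                   # required dataset metadata
        "participants.tsv",                           # one row per subject
        "sub-01/anat/sub-01_T1w.nii.gz",              # structural scan
        "sub-01/func/sub-01_task-rest_bold.nii.gz",   # functional run
        "sub-01/func/sub-01_task-rest_bold.json",     # run-level metadata
    ]
    for name in files:
        path = root / name
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch()  # create empty placeholder files for the sketch

The point of the convention is that a tool can locate any subject's data from the names alone, without dataset-specific configuration.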

Big problems for common fMRI thresholding methods

December 8, 2015

A new preprint posted to arXiv has very important implications and should be required reading for all fMRI researchers. Anders Eklund, Tom Nichols, and Hans Knutsson applied task fMRI analyses to a large number of resting-state fMRI datasets in order to identify the empirical corrected “familywise” Type I error rates observed […]
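The logic of their test can be sketched briefly: resting-state data contain no task, so any "activation" a task analysis finds in them is a false positive, and the fraction of such null analyses producing at least one significant cluster is an empirical estimate of the familywise error rate. A minimal illustrative sketch in Python follows; the simulated 20% rate and all names are hypothetical, not figures from the preprint.

    # Empirical familywise Type I error: the fraction of null (resting)
    # analyses in which at least one cluster survives the corrected
    # threshold. In the actual study each outcome comes from a full
    # task-fMRI analysis run on resting-state data; here we simulate.
    import random

    random.seed(0)
    n_null_analyses = 1000

    # Hypothetical outcomes: True means "at least one significant
    # cluster was (falsely) detected". The 20% rate is made up to
    # illustrate inflation above the nominal 5%.
    any_significant_cluster = [random.random() < 0.20
                               for _ in range(n_null_analyses)]

    empirical_fwer = sum(any_significant_cluster) / n_null_analyses
    print(f"Empirical FWER: {empirical_fwer:.3f} (nominal: 0.05)")

If the thresholding method were well calibrated, the empirical rate would match the nominal 5%; any excess measures how badly the method's assumptions fail on real data.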

Reproducibility and the ease of fooling ourselves

August 28, 2015

The new study from the Reproducibility Project, published this week in Science, has been getting a great deal of attention. In short, out of 100 attempted well-powered replications of findings from three psychology journals, fewer than half were found to replicate. Ed Yong has a particularly nice piece in The Atlantic that discusses the results […]

How not to get lost in your data

July 7, 2015

Prof. Smith had a brilliant idea: without acquiring any new data, he would be able to test his new hypothesis. All he had to do was get his PhD student to reanalyze the data acquired by his postdoc two years earlier. Brilliant! And so cheap! Everything was rosy until he tried to put […]