Research Stories

Opening a black box: Interpretation of machine learning models in neuroimaging

Development of an evaluation system that can explain artificial intelligence models in the neuroimaging field

Biomedical Engineering
Prof. WOO, CHOONG-WAN
Researcher Lada Kohoutová


□ Neuroimaging allows us to gain huge amounts of data on brain structure and function, and these data need to be processed with sophisticated methods. Machine learning has gained great popularity in this field as a tool for creating computational models of brain function related to behaviour or cognitive stimuli. However, these models are complex and often an unreadable “black box” to humans. A computational model of the brain whose workings we cannot explain contributes little to neuroscientific knowledge or clinical practice. It is therefore necessary to develop methods for interpreting such models.


□ In the Cocoan lab (https://cocoanlab.github.io), led by Choong-Wan Woo (Center for Neuroscience Imaging Research, Institute for Basic Science; Department of Biomedical Engineering, Sungkyunkwan University, South Korea), Lada Kohoutová (PhD student) and colleagues developed a framework for model interpretation and, on its basis, assembled a protocol with which a model can be readily assessed and interpreted in terms of its behaviour, its significant features, and its biological context (Fig. 1). The individual steps of their analysis pipeline yield a number of component results that together form an interpretable picture of the model. They introduced the framework, along with the protocol, in an article published in the journal Nature Protocols.
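The full procedure is described in the Nature Protocols article. As a rough, hypothetical illustration of what one feature-level step of such an interpretation pipeline can look like, the sketch below bootstraps a simple linear predictive model on simulated data and flags features whose weights are stable across resamples. The simulated data, the choice of scikit-learn Ridge regression, and the z-score threshold are all assumptions made for this example, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulated data standing in for neuroimaging features (e.g., voxels) and a
# behavioural outcome; purely illustrative, not the authors' dataset or code.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 100, 500
X = rng.standard_normal((n_subjects, n_voxels))
true_w = np.zeros(n_voxels)
true_w[:20] = 0.5                     # only the first 20 features are informative
y = X @ true_w + rng.standard_normal(n_subjects)

# Feature-level assessment via bootstrap: refit the model on resampled subjects
# and ask which feature weights are reliable across bootstrap samples.
n_boot = 1000
boot_w = np.empty((n_boot, n_voxels))
for b in range(n_boot):
    idx = rng.integers(0, n_subjects, n_subjects)  # resample subjects with replacement
    boot_w[b] = Ridge(alpha=1.0).fit(X[idx], y[idx]).coef_

mean_w = boot_w.mean(axis=0)
se_w = boot_w.std(axis=0, ddof=1)
z = mean_w / se_w                     # bootstrap z-score for each feature weight
stable = np.abs(z) > 3.0              # crude threshold for "reliably contributing" features
print(f"{stable.sum()} of {n_voxels} features have stable weights")
```

In a real neuroimaging pipeline, such feature-level results would be only one component, combined with model-level checks (e.g., predictive performance and robustness) and biology-level interpretation (e.g., relating stable features to known brain systems).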


□ Prof. Choong-Wan Woo, who led the study, said, “Machine learning and the use of artificial intelligence are becoming more and more popular and common in various fields of neuroimaging, and thus the need for interpreting and explaining neuroimaging-based machine learning models is rapidly increasing,” and added, “This study will help develop neuroimaging-based predictive models that can be explained and trusted, and further promote a deeper understanding of brain mechanisms and brain disorders.”


□ Lada Kohoutová, the first author, also said, “Methods and procedures for interpreting machine-learning models in neuroimaging are not yet well established and unified. Our protocol aims to establish a basis for a unified approach to model interpretation, open this black box a little, and so contribute to a deeper understanding of the brain and its function.”


□ This work was supported mainly by the Institute for Basic Science (IBS-R015-D1), the National Research Foundation of Korea (2019R1C1C1004512), and the Ministry of Science and ICT (18-BR-03 and 2019-0-01367-BabyMind).


□ This work was published in Nature Protocols (IF 11.334) on March 18, 2020.


※ For video abstract, please visit https://youtu.be/kcDfEkoQa7Y



