A Data-Driven Information Theoretic Approach for Neural Network Connectivity Inference
Master of Science
A major challenge in neuroscience is to develop effective tools that infer circuit connectivity from large-scale recordings of neuronal activity, so that we can study how the structure of neural networks enables brain function. To tackle this challenge, we used context tree maximizing (CTM) to estimate directed information (DI), which measures causal influence among neural spike trains, in order to infer synaptic connections. In contrast to existing methods, our method is data-driven and can readily identify both linear and nonlinear relations between neurons. This CTM-DI method reliably identified the circuit structures underlying simulations of realistic conductance-based networks. It detected direct connections, eliminated indirect connections, quantified the amount of information flow, reliably distinguished synaptic excitation from inhibition, and inferred the time course of synaptic influence. From voltage-sensitive dye recordings of the buccal ganglion of Aplysia, our method detected many putative motifs and patterns. This method can be applied to other large-scale recordings as well. It offers a systematic tool to map network connectivity and to track changes in network structure, such as synaptic strengths and the degree of connectivity of individual neurons, which in turn could provide insights into how the modifications produced by learning are distributed in a neural network. Furthermore, this information-theoretic approach can be extended to the analysis of other recordings that can be modeled as point processes, such as Internet traffic, disease outbreaks, and seismic activity.
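To make the central quantity concrete, the sketch below gives a minimal plug-in estimator of the directed-information rate between two binary spike trains under a fixed order-k Markov assumption. This is a simplification for illustration only: the function name, the fixed context length, and the delayed-context convention are my assumptions, whereas the CTM algorithm described in the abstract selects variable-depth contexts from the data rather than fixing k.

```python
from collections import Counter
from math import log2

def directed_information(x, y, k=1):
    """Plug-in estimate of the directed-information rate from binary
    spike train x to y, assuming order-k Markov dependence.
    Illustrative sketch only: the CTM-DI method uses variable-depth
    contexts chosen by context tree maximizing, not a fixed k.
    Uses the delayed context x[t-k:t] (excluding x[t]), which suits
    spike trains where influence acts with a synaptic delay."""
    n = len(x)
    assert len(y) == n and n > k
    joint = Counter()   # counts of (x-context, y-context, y_t)
    ypast = Counter()   # counts of (y-context, y_t)
    for t in range(k, n):
        cx = tuple(x[t - k:t])
        cy = tuple(y[t - k:t])
        joint[(cx, cy, y[t])] += 1
        ypast[(cy, y[t])] += 1
    total = n - k
    ctx = Counter()     # counts of (x-context, y-context)
    ymarg = Counter()   # counts of y-context alone
    for (cx, cy, yt), c in joint.items():
        ctx[(cx, cy)] += c
    for (cy, yt), c in ypast.items():
        ymarg[cy] += c
    # Empirical conditional mutual information
    # I(X_{t-k}^{t-1}; Y_t | Y_{t-k}^{t-1}), in bits per time bin.
    di = 0.0
    for (cx, cy, yt), c in joint.items():
        p = c / total
        p_full = c / ctx[(cx, cy)]          # P(y_t | x-context, y-context)
        p_ypast = ypast[(cy, yt)] / ymarg[cy]  # P(y_t | y-context)
        di += p * log2(p_full / p_ypast)
    return di
```

On a pair of trains where y simply copies x with a one-bin delay, this estimate approaches the entropy rate of x (about 1 bit for fair coin flips), while the reverse direction stays near zero, which is the asymmetry the method exploits to assign direction to putative connections.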
Functional connectivity; directed information; context tree maximizing