Examining methods for estimating mutual information in spiking neural systems
Mutual information enjoys wide use in the computational neuroscience community for analyzing spiking neural systems. Its direct calculation is difficult because estimating the joint stimulus-response distribution requires a prohibitive amount of data. Consequently, several techniques have appeared for bounding mutual information that require less data. We examine two upper bound techniques and find that they are unreliable and can introduce strong assumptions about the neural code. We also examine two lower bounds, showing that they can be very loose and may bear little relation to the actual value of the mutual information.
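Not part of the original abstract, but as an illustration of why the direct route is data-hungry: the plug-in (direct) estimator builds the empirical joint stimulus-response distribution and applies the definition of mutual information, so every stimulus-response pair must be sampled well. The function name and interface below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def plugin_mutual_information(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples.

    Forms the empirical joint distribution p(s, r), then evaluates
    I = sum_{s,r} p(s,r) * log2( p(s,r) / (p(s) p(r)) ).
    """
    # Map observed symbols to contiguous indices.
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    # Empirical joint distribution from co-occurrence counts.
    joint = np.zeros((len(s_vals), len(r_vals)))
    for i, j in zip(s_idx, r_idx):
        joint[i, j] += 1.0
    joint /= joint.sum()
    # Marginals p(s) and p(r).
    ps = joint.sum(axis=1, keepdims=True)
    pr = joint.sum(axis=0, keepdims=True)
    # Sum only over nonzero joint entries (0 * log 0 = 0 by convention).
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps * pr)[nz])))
```

With few samples relative to the size of the response alphabet the empirical joint is sparse and this estimate is biased upward, which is the practical motivation for the bounding techniques the abstract discusses.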