14:00, Seminar Hall, 25-04-19

On Understanding Neural Circuit Computation

Venkatakrishnan Ramaswamy
NCBS, Bangalore

Abstract

In this talk, I will describe some of my work on building theory to understand neural circuit computation in the brain. I will cover two different lines of work. The talk will be self-contained: no Neuroscience background will be assumed.

Neuroscience is witnessing extraordinary progress in experimental techniques, especially at the neural circuit level. These advances are largely aimed at enabling us to understand how neural circuit computations mechanistically *cause* behavior. Here, using techniques from Computational Complexity Theory, we examine how many experiments are needed to obtain such an empirical understanding. We prove, mathematically, that establishing the most extensive notions of understanding requires exponentially many experiments in the number of neurons, in general, unless P = NP. Worse still, the feasible experimental regime is one where the number of experiments one can perform grows sub-linearly in the number of neurons, suggesting a fundamental impediment to such an understanding. Determining which notions of understanding are algorithmically tractable, and in what contexts, thus becomes an important new direction in Neuroscience.

The second line of work concerns building theory to understand how the structure of neuronal networks constrains computation in them. Neurons primarily communicate with each other using sequences of stereotypical action potentials, or spikes, called spike trains. Here, we ask which spike-train to spike-train mappings are rendered impossible by virtue of the structure of a network. For acyclic networks, we prove several results of this form. Our neurons are abstract mathematical objects satisfying a few axioms. We also exhibit a case (depth-two networks) where no further results of this form can be inferred unless additional axioms are assumed for our neurons.