Deep learning is an approach to artificial intelligence (AI) that loosely mimics how the brain works. By mimicking the brain’s operations, deep learning can rival, or even outperform, human beings at a number of tasks, such as image recognition [1], motor control [2], and speech recognition [3]. Moreover, deep networks can develop representations that match recordings from the neocortex of humans and non-human primates better than existing models in neuroscience do [4,5]. This suggests that deep learning captures something important about how our own brains work.

The key to “deep” learning (as opposed to “shallow” learning) is the use of multilayer networks that possess “hidden layers” between sensory inputs and behavioural outputs. (Shallow networks have no hidden layers.) But learning with multiple layers requires algorithms that can solve the credit assignment problem [6]. To train a network of neurons effectively, each neuron must receive “credit” for its contribution to behaviour. In shallow networks this is easy, since each neuron either drives behaviour directly or lies only one synaptic connection away from those that do. In deep networks with hidden layers, however, assigning credit is difficult, since the downstream effects of any individual neuron depend on the architecture of the rest of the network.
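
The difficulty can be seen in a small numerical sketch (written here in Python with NumPy purely for illustration; the network sizes and values are arbitrary): in a two-layer network, the effect that nudging a single hidden neuron has on the output error depends entirely on the downstream weights, which that neuron has no direct access to.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: input -> hidden -> output.
W1 = rng.normal(size=(3, 4))   # input-to-hidden weights
W2 = rng.normal(size=(4, 2))   # hidden-to-output weights

x = rng.normal(size=3)
target = np.array([1.0, -1.0])

def loss(h):
    """Squared output error produced by hidden pre-activations h."""
    y = np.tanh(h) @ W2
    return np.sum((y - target) ** 2)

h = x @ W1  # hidden pre-activations for this input

# Perturb each hidden neuron in turn and measure the effect on the
# loss. The size and sign of that effect -- the neuron's "credit" --
# are determined by the downstream weights W2.
eps = 1e-5
for i in range(4):
    dh = np.zeros(4)
    dh[i] = eps
    effect = (loss(h + dh) - loss(h)) / eps
    print(f"hidden unit {i}: effect on loss per unit of activity = {effect:.4f}")
```

The finite-difference "effect" computed here matches the analytic gradient that backpropagation would deliver, which is exactly why some backwards flow of information through the downstream weights is needed to assign credit.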

AI researchers typically solve the credit assignment problem with the backpropagation-of-error algorithm [7]. But backpropagation relies on a host of biologically unrealistic mechanisms [6]. Moreover, backpropagation requires large amounts of labelled data, something our own brains do not require. Thus, we know that the real brain doesn’t use backpropagation per se. Yet the success of deep learning, and its match to experimental data, suggests that even though our brains don’t use backpropagation, they likely do have a solution to the credit assignment problem [8].
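
To make the algorithm concrete, here is a minimal backpropagation sketch (Python/NumPy; the architecture and hyperparameters are illustrative choices, not taken from any of the cited studies) trained on XOR, a task that no shallow network can solve. The key lines are the backward pass, where the output error is propagated backwards through the output weights to compute each hidden neuron’s credit:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: a task no shallow (no-hidden-layer) network can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units (sizes and learning rate are arbitrary).
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)
lr = 1.0

for step in range(5000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)                   # hidden activity
    Y = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))   # sigmoid output

    # Backward pass: the output error is sent backwards through W2
    # to assign credit to each hidden neuron.
    dZ = (Y - T) / len(X)                      # error at the output
    dH = (dZ @ W2.T) * (1 - H ** 2)            # hidden credit via W2

    W2 -= lr * H.T @ dZ
    b2 -= lr * dZ.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

# After training, the network separates all four XOR cases.
H = np.tanh(X @ W1 + b1)
Y = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))
print(np.round(Y.ravel(), 2))
```

Note the biologically implausible step: computing `dH` requires each hidden neuron to "know" the downstream weights `W2.T`, the so-called weight transport problem, which is one of the unrealistic mechanisms referred to above.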

One of the questions that the LiNC Lab’s research explores is: how could the neocortex solve the credit assignment problem? We are examining whether the morphological and electrophysiological properties of different neuron types in the neocortex [9-11] provide a mechanism for simultaneously communicating sensorimotor information and credit assignment signals. In particular, we are examining whether the properties of pyramidal neuron dendrites provide the means for effective credit assignment in multilayer networks [12]. We are also exploring how different types of inhibitory interneurons may contribute to credit assignment calculations. To do this, we use a combination of computational modelling, patch clamping, optogenetics, and two-photon imaging. The ultimate goal of this work is to better understand the neurobiological basis of animal and human intelligence, and to provide new insights that can help guide AI development.
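
To give a flavour of the idea, here is a highly simplified sketch (Python/NumPy; the sizes, values, and learning rule are illustrative assumptions, not the actual model of [12]) of a hidden layer of two-compartment units: basal dendrites carry sensorimotor information forward, while segregated apical compartments receive a top-down credit signal. For simplicity the feedback weights here are the exact transpose of the output weights, which amounts to backpropagation; proposals like [12] replace them with fixed random feedback weights.

```python
import numpy as np

rng = np.random.default_rng(2)

# One hidden layer of two-compartment "pyramidal" units: basal
# dendrites integrate feedforward input and drive the somatic output,
# while a segregated apical compartment receives top-down feedback
# that serves only as a learning (credit) signal.
n_in, n_hid, n_out = 3, 5, 2
W_basal = rng.normal(size=(n_in, n_hid))  # feedforward (basal) weights
W_out = rng.normal(size=(n_hid, n_out))   # hidden-to-output weights

x = rng.normal(size=n_in)
target = np.array([1.0, 0.0])

# Forward pass: basal drive alone sets the somatic firing rates.
h = np.tanh(x @ W_basal)
y = np.tanh(h @ W_out)

# Feedback: the output error arrives at the apical compartments. For
# simplicity the feedback weights are the transpose W_out.T (exact
# backprop); biologically motivated proposals use fixed random
# feedback weights instead.
apical = ((target - y) * (1 - y ** 2)) @ W_out.T

# Plasticity: basal synapses change in proportion to the apical
# credit signal, so sensorimotor information (basal) and credit
# signals (apical) travel through separate compartments of the
# same neuron.
lr = 0.05
W_basal += lr * np.outer(x, apical * (1 - h ** 2))
```

The design choice worth noting is the segregation: the apical signal never contaminates the neuron’s forward output, yet it steers the plasticity of the basal synapses, allowing one neuron to carry both streams at once.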


  1. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. in Advances in Neural Information Processing Systems 1097–1105 (2012).
  2. Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015).
  3. Hinton, G. et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Process. Mag. 29, 82–97 (2012).
  4. Khaligh-Razavi, S.-M. & Kriegeskorte, N. Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation. PLoS Comput Biol 10, e1003915 (2014).
  5. Cadieu, C. F. et al. Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition. PLoS Comput Biol 10, e1003963 (2014).
  6. Bengio, Y., Lee, D.-H., Bornschein, J. & Lin, Z. Towards Biologically Plausible Deep Learning. arXiv Prepr. 1502.04156 (2015).
  7. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
  8. Marblestone, A. H., Wayne, G. & Kording, K. P. Toward an Integration of Deep Learning and Neuroscience. Front. Comput. Neurosci. 10 (2016).
  9. Larkum, M. E., Zhu, J. J. & Sakmann, B. A new cellular mechanism for coupling inputs arriving at different cortical layers. Nature 398, 338–341 (1999).
  10. Larkum, M. E., Nevian, T., Sandler, M., Polsky, A. & Schiller, J. Synaptic Integration in Tuft Dendrites of Layer 5 Pyramidal Neurons: A New Unifying Principle. Science 325, 756–760 (2009).
  11. Spruston, N. Pyramidal neurons: dendritic structure and synaptic integration. Nat Rev Neurosci 9, 206–221 (2008).
  12. Guerguiev, J., Lillicrap, T. P. & Richards, B. A. Towards deep learning with segregated dendrites. arXiv Prepr. 1610.00161 (2017).