Not all of our memories are stored in the same way. We have multiple memory systems that can provide different types of information [1,2]. Some of our memories are very detailed, giving us the ability to recall specific events and remember what it felt like to live through them [3]. Our recent memories are stored in this way, as are some of the more emotionally salient events from our lives. For example, you might be able to recall exactly what you had for breakfast yesterday or the song that was playing during your first kiss. In contrast, many of our memories are stored in an abstract manner that loses the details of what happened to us but retains information about patterns in our experience [4,5]. For example, you may not remember exactly what you had for breakfast two years ago, but you probably remember what sorts of things you used to eat for breakfast at that point in your life. Interestingly, our brains seem to actively forget detailed memories [6], while storing abstract memories for longer periods of time [7].

Why do we have these different types of memory? And why do we forget some types of memory but not others? Arguably, the goal of memory is not to store records of our lives, but to help us make decisions. In particular, throughout our lives, we learn from the positive or negative consequences of our actions, adjusting our behaviour to obtain more rewards and fewer punishments. In machine learning, this is referred to as reinforcement learning. Often, this type of learning is implicit, or subconscious. But perhaps we can use different types of memory to improve our ability to learn from positive or negative outcomes [8].
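To make the idea concrete, here is a minimal sketch of reinforcement learning in this sense: an agent that improves its choices purely from reward feedback, with no explicit record of past events. The two-action task, the reward probabilities, and the parameter values are invented for illustration and are not taken from any of the studies cited here.

```python
import random

# Toy reinforcement learning: an agent repeatedly chooses between two
# actions and learns their values from reward feedback alone.

ACTION_REWARD_PROB = [0.2, 0.8]  # hidden reward probability of each action
ALPHA = 0.1                      # learning rate
EPSILON = 0.1                    # exploration rate

values = [0.0, 0.0]  # the agent's running estimate of each action's value

for trial in range(1000):
    # Explore occasionally; otherwise pick the currently highest-valued action.
    if random.random() < EPSILON:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])

    # Positive or negative outcome of the chosen action.
    reward = 1.0 if random.random() < ACTION_REWARD_PROB[action] else 0.0

    # Incremental update: nudge the estimate toward the observed outcome.
    values[action] += ALPHA * (reward - values[action])

print(values)  # estimates approach the true reward probabilities
```

Note that the agent ends up preferring the better action without ever storing individual trials: its "memory" is just a compressed summary of past outcomes, which is part of what makes the link between memory systems and reinforcement learning interesting.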

Research in the LiNC Lab seeks to understand how the brain uses these different types of memory when engaged in reinforcement learning [9]. Specifically, we hypothesize that, in a changing world, detailed and abstract memories may be more or less useful depending on the current context. Moreover, we are interested in the specific representations used to store memories, and in whether they are optimized for reinforcement learning, e.g., by providing a predictive map of the environment [10]. Using behavioural testing paired with chemogenetics and optogenetics in mice, as well as artificial neural network models, we are exploring how detailed versus abstract memories can improve reinforcement learning. This work will help us better understand how, and why, human memory works the way it does. It also has the potential to inform new types of memory systems for artificial intelligence [11].
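To give a flavour of what a "predictive map" means in [10], here is a minimal sketch of the successor representation, which stores how often an agent expects to visit each future state. The five-state ring environment, the random policy, and the reward placement are all invented for illustration, not taken from the lab's experiments.

```python
import numpy as np

# Successor representation (SR) sketch: the "predictive map" idea of [10].
# Environment: an invented 5-state ring; a random policy steps left or
# right with equal probability.

n = 5
gamma = 0.9  # discount factor

# Transition matrix T[s, s'] under the random policy.
T = np.zeros((n, n))
for s in range(n):
    T[s, (s - 1) % n] = 0.5
    T[s, (s + 1) % n] = 0.5

# SR: M[s, s'] = expected discounted future occupancy of s' starting from s.
# For a fixed policy it has the closed form M = (I - gamma * T)^(-1).
M = np.linalg.inv(np.eye(n) - gamma * T)

# Given the map, state values for any reward layout are just a dot product.
R = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # invented reward at state 2
V = M @ R
print(np.round(V, 2))
```

Because the map M can be combined with any reward vector, an agent that has learned it can re-evaluate states quickly when rewards move, one proposed reason such representations would benefit reinforcement learning in a changing world.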

References

  1. White, N. M. & McDonald, R. J. Multiple Parallel Memory Systems in the Brain of the Rat. Neurobiol. Learn. Mem. 77, 125–184 (2002).
  2. Moscovitch, M. et al. Functional neuroanatomy of remote episodic, semantic and spatial memory: a unified account based on multiple trace theory. J. Anat. 207, 35–66 (2005).
  3. Clayton, N. S., Salwiczek, L. H. & Dickinson, A. Episodic memory. Curr. Biol. 17, R189–R191 (2007).
  4. Moscovitch, M., Nadel, L., Winocur, G., Gilboa, A. & Rosenbaum, R. S. The cognitive neuroscience of remote episodic, semantic and spatial memory. Curr. Opin. Neurobiol. 16, 179–190 (2006).
  5. Richards, B. A. et al. Patterns across multiple memories are identified over time. Nat. Neurosci. 17, 981–986 (2014).
  6. Richards, B. A. & Frankland, P. W. The Persistence and Transience of Memory. Neuron 94, 1071–1084 (2017).
  7. Frankland, P. W. & Bontempi, B. The organization of recent and remote memories. Nat. Rev. Neurosci. 6, 119–130 (2005).
  8. Gershman, S. J. & Daw, N. D. Reinforcement Learning and Episodic Memory in Humans and Animals: An Integrative Framework. Annu. Rev. Psychol. 68, 101–128 (2017).
  9. Santoro, A., Frankland, P. W. & Richards, B. A. Memory Transformation Enhances Reinforcement Learning in Dynamic Environments. J. Neurosci. 36, 12228 (2016).
  10. Stachenfeld, K. L., Botvinick, M. M. & Gershman, S. J. The hippocampus as a predictive map. bioRxiv (2016). doi:10.1101/097170
  11. Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D. & Lillicrap, T. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning 1842–1850 (2016).