Throughout our lives, we learn from the positive or negative consequences of our actions. In machine learning, this is referred to as reinforcement learning. Often, this type of learning is implicit, or subconscious. But sometimes we also have explicit memories of what happened to us, and we use these memories to guide our actions. Not all of our memories are stored by our brains in the same way, though. Some of our memories are very detailed, giving us the ability to recall specific events and remember what it felt like when we were living them. Our recent memories are stored in this way, as are some of the more emotionally salient events from our lives. For example, you might be able to recall exactly what you had for breakfast yesterday or the song that was playing during your first kiss. In contrast, many of our memories are stored in an abstract manner that loses the details of what happened to us but retains information about patterns in our experience. For example, you may not remember exactly what you had for breakfast two years ago, but you probably remember what sorts of things you used to eat for breakfast at that point in your life.

Our lab is working to understand how the brain pairs these two different types of memory with implicit learning when we learn from rewards or punishments. In particular, we hypothesize that, in a changing world, detailed versus abstract memories may be more or less useful depending on the current context. Using behavioural testing paired with chemogenetics and optogenetics in mice, as well as artificial neural network models, we are exploring how detailed versus abstract memories can improve reinforcement learning in dynamic environments.
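To make the intuition concrete, here is a minimal sketch (not the lab's actual models; the function names, window size, and reward schedule are illustrative assumptions) of why a detailed, episode-like memory can outperform an abstract, long-run summary when the world changes: an estimate built from only the most recent episodes tracks a reward reversal quickly, while an average over the whole history lags far behind.

```python
# Hypothetical illustration: two ways of summarizing the same reward
# history for a single action in a changing environment.

def running_mean_estimates(rewards):
    """Abstract memory: the mean reward over the entire history."""
    estimates, total = [], 0.0
    for i, r in enumerate(rewards, start=1):
        total += r
        estimates.append(total / i)
    return estimates

def episodic_estimates(rewards, window=5):
    """Detailed memory: the mean over only the last few episodes."""
    estimates = []
    for i in range(len(rewards)):
        recent = rewards[max(0, i - window + 1): i + 1]
        estimates.append(sum(recent) / len(recent))
    return estimates

# Environment: an action pays off for 50 trials, then the world
# changes and the same action pays nothing for 20 trials.
rewards = [1.0] * 50 + [0.0] * 20

abstract = running_mean_estimates(rewards)
episodic = episodic_estimates(rewards, window=5)

# After the change, the episodic estimate has fully adapted, while the
# whole-history average still rates the action as rewarding.
print(f"abstract estimate after change: {abstract[-1]:.2f}")
print(f"episodic estimate after change: {episodic[-1]:.2f}")
```

In this toy setting the recency-based estimate converges on the new contingency within a handful of trials, whereas the long-run average remains dominated by the old one; the interesting trade-off, and part of what dynamic environments probe, is that the abstract estimate would be the more robust of the two if the "change" were merely noise.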