Dynamic Persistent Memory

February - May 2019

[Figure: dpm_overview.png (project overview)]

This project was an opportunity for me to explore prior work and some of my own ideas about using dynamical systems for computation. In particular, I focused on the subset of dynamical systems that can be embodied by recurrent networks of neurons. Within that class, I wanted to understand how information can be represented in the time-varying dynamics of neural circuits, and which classes of transformations on those representations can be implemented within the constraints of neural dynamics.


I restricted my attention to the question of persistent memory storage in recurrent neural architectures. Much of my work went into carefully defining the exact conditions that need to be satisfied for this property to hold. After finding a simple class of networks that exhibits this property, I found that, in many cases, a feedforward network could learn a decoding function mapping the network's firing dynamics back to the invariant vector that had been stored in its memory by stimulation at the beginning of the trial.
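
As a concrete illustration, here is a minimal sketch of my own (not the actual networks or decoders from the project): a linear recurrent network whose recurrent weight matrix is orthogonal, so the activity injected by a brief stimulus at the start of a trial is rotated over time but never lost, which is one simple way to satisfy a persistent-memory condition. A linear least-squares readout then stands in for the feedforward decoder, mapping the late-time firing pattern back to the stored vector. All names and parameters (W, E, N, K, T) are illustrative choices, not quantities from the project.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100   # number of recurrent units (illustrative choice)
    K = 5     # dimensionality of the stored vector
    T = 50    # number of discrete time steps to run the dynamics

    # Orthogonal recurrent weights: ||W x|| = ||x||, so stored activity never decays.
    W = np.linalg.qr(rng.standard_normal((N, N)))[0]

    # Random encoding weights: the stimulus vector is injected at t = 0 only.
    E = rng.standard_normal((N, K)) / np.sqrt(K)

    def run_network(v, steps=T):
        """Stimulate the network with vector v at t = 0, then run it autonomously."""
        x = E @ v                      # initial state set by the brief stimulus
        for _ in range(steps):
            x = W @ x                  # purely recurrent, input-free dynamics
        return x                       # firing pattern after `steps` updates

    # Generate trials: random stored vectors and the resulting late-time activity.
    n_trials = 2000
    V = rng.standard_normal((n_trials, K))        # vectors to store
    X = np.stack([run_network(v) for v in V])     # activity at time T

    # Fit a linear decoder (least squares) from activity back to the stored vector.
    D, *_ = np.linalg.lstsq(X, V, rcond=None)

    # Evaluate on held-out trials.
    V_test = rng.standard_normal((200, K))
    X_test = np.stack([run_network(v) for v in V_test])
    err = np.linalg.norm(X_test @ D - V_test) / np.linalg.norm(V_test)
    print(f"relative decoding error: {err:.3e}")  # near machine precision in this linear toy

In this toy setting the decoding error is essentially zero because the dynamics are linear and norm-preserving, so the stimulus-encoded vector remains a fixed linear function of the network state at every time step; the interesting cases in the project concern what happens once the dynamics are constrained to be more neurally realistic.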

More details and figures coming soon!
