Human and animal brains learn remarkably fast from streams of data that arrive in a non-i.i.d. manner, and they do so without forgetting important information. Current deep neural networks, on the other hand, often struggle both to learn quickly from data streams and to avoid forgetting. My research uses ideas about how brains store and process information to improve learning in deep neural networks.
My undergraduate degrees are in philosophy and computational cognitive science. After finishing them, I studied philosophy and neuroscience at Georgia State as a master's student under a fellowship from their neuroscience institute. After several years of pondering the nature of consciousness, I redirected my research toward computational cognitive science and artificial intelligence at UC Irvine, where I recently received my PhD. My dissertation focused on bio-inspired learning algorithms and memory models for neural networks.
I now work at the start-up Zyphra, where I build memory-augmented language models. Although I love working on these machine learning projects, I still dabble in philosophy, which you can find on my blog.