Ben's headshot

Hi, I'm Ben Hoover

I'm an AI Researcher studying memory

Understanding AI foundation models from the perspective of large Associative Memories.

I am a Machine Learning PhD student at Georgia Tech advised by Polo Chau and an AI Research Engineer at IBM Research. My research focuses on building more interpretable and parameter-efficient AI by rethinking how we train and build deep models, drawing inspiration from Associative Memories and Hopfield Networks. I like to visualize what happens inside AI models.

News

Oct 2023
πŸŽ‰ Memory in Plain Sight and Energy Transformer accepted to AMHN Workshop at NeurIPS 2023!
Sep 2023
πŸ“œ Memory in Plain Sight released on ArXiv!
Sep 2023
πŸŽ‰ Energy Transformer accepted to NeurIPS'23!
Sep 2023
πŸ§‘β€πŸ« Gave a talk about Energy Transformer to the McMahon Lab.
Sep 2023
πŸŽ‰ ConceptEvo accepted to CIKM'23!
Aug 2023
πŸŽ‰ Diffusion Explainer accepted to VIS'23!
Aug 2023
πŸš€ Released an Associative Memory demo that runs in your browser.
Aug 2023
πŸ§‘β€πŸ« Selected as a panelist for the AMHN Workshop at NeurIPS'23.
Jun 2023
πŸš€ Released Molformer: a UI to explore AI-generated organic small molecules. See the blog at IBM Research and the paper in Science Advances!
May 2023
🐣 Became a dad πŸ€—πŸ‘Ά
Jan 2023
πŸ“£ Georgia Tech highlighted my research in its College of Computing News

Memory Research Highlights

Thumbnail for Memory in Plain Sight

Memory in Plain Sight

Ours is the first work to show that diffusion models perform memory retrieval through their denoising dynamics.
Thumbnail for Energy Transformer

Energy Transformer

We derive an Associative Memory inspired by the Transformer architecture, in which the forward pass through the model is memory retrieval by energy descent.
Thumbnail for HAMUX

HAMUX

We introduce a software abstraction around "synapses" and "neurons" for assembling the energy functions of complex Associative Memories, where memory retrieval is performed through autograd.
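To make the "memory retrieval as energy descent" idea concrete: here is a minimal NumPy sketch (not the HAMUX API, which builds energy functions differentiated by autograd) of the retrieval update of a modern softmax Hopfield network, where a noisy query relaxes to the nearest stored pattern. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
memories = rng.standard_normal((5, 64))  # 5 stored patterns of dimension 64

def retrieve(query, beta=4.0, steps=3):
    """Iterate x <- M^T softmax(beta * M x), the update that descends the
    energy of a modern (softmax) Hopfield network."""
    x = query.copy()
    for _ in range(steps):
        scores = beta * memories @ x
        weights = np.exp(scores - scores.max())  # stable softmax
        weights /= weights.sum()
        x = memories.T @ weights
    return x

# Corrupt a stored pattern with noise, then recover it by energy descent.
noisy = memories[2] + 0.3 * rng.standard_normal(64)
recovered = retrieve(noisy)
```

With well-separated random patterns, the softmax weights concentrate on the closest memory, so `recovered` lands essentially on `memories[2]`.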

Visualization Research Highlights

Thumbnail for Diffusion Explainer

Diffusion Explainer

Diffusion models are complicated. We break down Stable Diffusion and explain each component of the model visually.
Thumbnail for Shared Interest

Shared Interest

Do humans and AI models agree on what features are important for model prediction? How much do they differ?
Thumbnail for RXNMapper

RXNMapper

We discover that Transformers trained on chemical reactions learn, on their own, how atoms physically rearrange.