Ben's headshot

Hi, I'm Ben Hoover

I'm an AI Researcher studying memory

Understanding AI foundation models from the perspective of large Associative Memories.

I am a Machine Learning PhD student at Georgia Tech advised by Polo Chau and an AI Research Engineer with IBM Research. My research focuses on building more interpretable and parameter-efficient AI by rethinking the way we train and build deep models, taking inspiration from Associative Memories and Hopfield Networks. I like to visualize what happens inside AI models.

News

Jul 2024
πŸŽ‰ Transformer Explainer accepted as a VIS'24 Poster!
Jun 2024
πŸŽ‰ Diffusion Explainer accepted as a VIS'24 Short Paper!
May 2024
β˜•οΈ Invited to speak at Plectics Lab's Colloquium: "Hopfield Networks 2.0: Associative Memory for the Modern Era of AI" (recorded presentation here)
Apr 2024
πŸŽ‰ Diffusion Explainer accepted to the Demo Track at IJCAI 2024!
Feb 2024
🎬 NeurIPS'23 Associative Memory workshop recordings are openly available! See the recorded panel on software engineering and Hopfield Nets here (courtesy of SlidesLive).
Oct 2023
πŸŽ‰ Memory in Plain Sight and Energy Transformer accepted to AMHN Workshop at NeurIPS 2023!

Memory Research Highlights

Thumbnail for Memory in Plain Sight

Memory in Plain Sight

We are the first to show that diffusion models perform memory retrieval in their denoising dynamics.
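
The link can be stated in one line: the denoising network approximates the score of the data distribution, and a score is the negative gradient of an energy, so each denoising step is a step of energy descent, the same dynamics Hopfield-style memories use for retrieval. A hedged sketch in my own notation, not the paper's, omitting the sampler's noise term:

\[
  s_\theta(x_t, t) \;\approx\; \nabla_x \log p_t(x_t) \;=\; -\nabla_x E_t(x_t),
  \qquad
  x_{t-1} \;=\; x_t + \eta\, s_\theta(x_t, t) \;=\; x_t - \eta\, \nabla_x E_t(x_t).
\]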
Thumbnail for Energy Transformer

Energy Transformer

We derive an Associative Memory inspired by the famous Transformer architecture, where a forward pass through the model performs memory retrieval by descending an energy function.
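
As a toy illustration of the "forward pass = energy descent" idea, here is a minimal sketch using a standard log-sum-exp Hopfield energy rather than the Energy Transformer's actual energy; energy and retrieve are my own hypothetical helpers, not paper code:

# Minimal sketch: memory retrieval as gradient descent on an energy.
# The log-sum-exp energy below is a stand-in, not the paper's energy.
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def energy(x, memories, beta=4.0):
    # Low energy near stored rows of `memories`; the quadratic term bounds x.
    return -logsumexp(beta * (memories @ x)) / beta + 0.5 * (x @ x)

def retrieve(x0, memories, steps=100, lr=0.1):
    grad_E = jax.grad(energy)              # autograd supplies the dynamics
    x = x0
    for _ in range(steps):
        x = x - lr * grad_E(x, memories)   # the "forward pass" is descent
    return x

memories = jax.random.normal(jax.random.PRNGKey(0), (8, 16))   # 8 patterns
noisy = memories[0] + 0.3 * jax.random.normal(jax.random.PRNGKey(1), (16,))
restored = retrieve(noisy, memories)       # lands near memories[0]

Each step lowers the energy, and the minima sit near the stored patterns, so iterating the update completes a corrupted pattern.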
Thumbnail for HAMUX

HAMUX

We introduce a software abstraction built around "synapses" and "neurons" for assembling the energy functions of complex Associative Memories, with memory retrieval performed through autograd.
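
A hedged sketch of the core idea (the names below, like neuron_energy and synapse_energy, are illustrative, not HAMUX's real API): declare per-layer "neuron" energy terms and connecting "synapse" terms, sum them into one scalar energy, and let jax.grad derive the retrieval dynamics instead of hand-coding update rules.

# Toy illustration of the HAMUX idea, not its actual API: assemble an
# Associative Memory's energy from neuron and synapse terms, then let
# autograd turn that energy into memory-retrieval dynamics.
import jax
import jax.numpy as jnp

def neuron_energy(x):
    # Hypothetical neuron term: penalizes unbounded activity.
    return 0.5 * jnp.sum(x ** 2)

def synapse_energy(x, y, W):
    # Hypothetical synapse term: low energy when x and y align through W.
    return -(x @ W @ y)

def total_energy(state, W):
    x, y = state
    return neuron_energy(x) + neuron_energy(y) + synapse_energy(x, y, W)

def retrieve(state, W, steps=200, lr=0.05):
    grad_E = jax.grad(total_energy)        # gradients over the whole state
    for _ in range(steps):
        gx, gy = grad_E(state, W)
        state = (state[0] - lr * gx, state[1] - lr * gy)
    return state

key_w, key_x = jax.random.split(jax.random.PRNGKey(0))
W = 0.1 * jax.random.normal(key_w, (16, 8))   # small norm keeps E bounded below
state = (jax.random.normal(key_x, (16,)), jnp.zeros(8))
x_min, y_min = retrieve(state, W)              # settles at an energy minimum

Because the dynamics come from a single scalar energy via autograd, swapping in a different synapse or neuron only means editing an energy term.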

Visualization Research Highlights

Thumbnail for Diffusion Explainer

Diffusion Explainer

Diffusion models are complicated. We break down Stable Diffusion and explain each component of the model visually.
Thumbnail for Shared Interest

Shared Interest

Do humans and AI models agree on which features are important for a model's predictions? And how much do they differ?
Thumbnail for RXNMapper

RXNMapper

We discover that Transformers trained on chemical reactions learn, on their own, how atoms physically rearrange.
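
For a feel of what the resulting tool does, here is a minimal usage sketch following the pattern in the open-source rxnmapper package's README (treat the exact API as an assumption and check the repo if it has changed):

# Sketch: attention-guided atom mapping with the open-source rxnmapper
# package; API details follow its published README and may have changed.
from rxnmapper import RXNMapper

rxn_mapper = RXNMapper()
rxns = ["CC(C)S.CN(C)C=O.Fc1cccnc1F.O=C([O-])[O-].[K+].[K+]>>CC(C)Sc1ncccc1F"]
results = rxn_mapper.get_attention_guided_atom_maps(rxns)
print(results[0]["mapped_rxn"])    # reaction SMILES with atom-map numbers
print(results[0]["confidence"])    # the model's confidence in the mapping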