Satchel Grant

About Me

I'm currently a 4th-year PhD candidate at Stanford studying cognition in Jay McClelland's lab. I study mathematical cognition, abstract reasoning, and generalization in computational settings using interpretability methods in artificial neural networks. My interests tend to fall at the intersection of psych, neuro, and comp sci.

This site is mainly to showcase projects that are either published or promising/interesting but probably won't be published. These latter projects may find themselves here because they reached a reasonable state but I can't find the time to pursue them further, or because someone else beat me to the punch.

Note that the date near the beginning of each entry refers to the date that the linked writeup was pushed to GitHub or published. This is not necessarily the date that the blog entry was made!

CV
LinkedIn
Twitter/X
grantsrb at stanford.edu

Published Work

Emergent Symbol-like Number Variables in Artificial Neural Networks

Date Submitted: October 1, 2024
Journal/Venue: TMLR 2025 (under review)

Satchel Grant, Noah D. Goodman, James L. McClelland

Abstract: What types of numeric representations emerge in neural systems? What would a satisfying answer to this question look like? In this work, we interpret Neural Network (NN) solutions to sequence-based counting tasks through a variety of lenses. We seek to understand how well we can understand NNs through the lens of interpretable Symbolic Algorithms (SAs), where SAs are defined by precise, abstract, mutable variables used to perform computations. We use GRUs, LSTMs, and Transformers trained using Next Token Prediction (NTP) on numeric tasks where the solutions to the tasks depend on numeric information only latent in the task structure. We show through multiple causal and theoretical methods that we can interpret NNs' raw activity through the lens of simplified SAs when we frame the neural activity in terms of interpretable subspaces rather than individual neurons. Depending on the analysis, however, these interpretations can be graded, existing on a continuum, highlighting the philosophical question of what it means to "interpret" neural activity, and motivating us to introduce Alignment Functions to add flexibility to the existing Distributed Alignment Search (DAS) method. Through our specific analyses we show the importance of causal interventions for NN interpretability; we show that recurrent models develop graded, symbol-like number variables within their neural activity; we introduce a generalization of DAS to frame NN activity in terms of linear functions of interpretable variables; and we show that Transformers must use anti-Markovian solutions -- solutions that avoid using cumulative, Markovian hidden states -- in the absence of sufficient attention layers. We use our results to encourage interpreting NNs at the level of neural subspaces through the lens of SAs.
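
To make the flavor of these subspace-level interventions concrete, here is a minimal PyTorch sketch of a DAS-style intervention on a hidden state: an orthogonal rotation is learned, the first few rotated coordinates (the hypothesized variable's subspace) are swapped in from a counterfactual run, and everything is rotated back. The class, sizes, and names are illustrative assumptions, not the paper's code.

    import torch
    import torch.nn as nn

    class RotatedSubspaceSwap(nn.Module):
        """Toy DAS-style intervention: swap the first k dims of a learned rotated basis."""
        def __init__(self, hidden_size: int, k: int):
            super().__init__()
            # Orthogonal parametrization keeps the change of basis invertible.
            self.rot = nn.utils.parametrizations.orthogonal(
                nn.Linear(hidden_size, hidden_size, bias=False))
            self.k = k

        def forward(self, h_base: torch.Tensor, h_source: torch.Tensor) -> torch.Tensor:
            z_base, z_source = self.rot(h_base), self.rot(h_source)       # rotate into learned basis
            z = torch.cat([z_source[..., :self.k], z_base[..., self.k:]], dim=-1)
            return z @ self.rot.weight                                    # rotate back (inverse of an orthogonal map)

In training, a swap like this would be spliced into a chosen layer of the network and only the rotation would be optimized, with a loss asking the patched model to behave as if the target variable truly took the source run's value.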

Model Alignment Search

Date Submitted: January 10, 2025
Journal/Venue: ICLR ReAlign Workshop 2025

Satchel Grant

Abstract: When can we say that two neural systems are the same? The answer to this question is goal-dependent, and it is often addressed through correlative methods such as Representational Similarity Analysis (RSA) and Centered Kernel Alignment (CKA). We find ourselves chiefly interested in the relationship between representations and behavior, asking ourselves how we can isolate specific functional aspects of representational similarity to relate our measures to behavior -- avoiding cause vs. correlation pitfalls in the process. In this work, we introduce Model Alignment Search (MAS), a method for causally exploring distributed representational similarity as it relates to behavior. The method learns invertible linear transformations that find an aligned subspace between two distributed networks' representations where functional information can be isolated and manipulated. We first show that the method can be used to transfer values of specific causal variables -- such as the number of items in a counting task -- between networks with different training seeds and different architectures. We then explore open questions in number cognition by comparing different types of numeric representations in models trained on structurally different tasks, we explore differences between MAS and preexisting functional similarity methods, and lastly, we introduce a counterfactual latent auxiliary loss that helps shape functionally relevant alignments even in cases where we do not have causal access to one of the two models for training.
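
As a rough picture of what such an alignment can look like in code (a sketch under simplifying assumptions, not the MAS implementation): give each model its own invertible map into a shared space, treat the first k shared dimensions as the aligned variable, and swap them between models before mapping back. Here both hidden sizes are assumed equal and the maps are kept orthogonal so they invert cheaply.

    import torch
    import torch.nn as nn

    class AlignedSubspaceTransfer(nn.Module):
        """Toy sketch: per-model invertible maps into a shared space where a variable is swapped."""
        def __init__(self, hidden_size: int, k: int):
            super().__init__()
            ortho = nn.utils.parametrizations.orthogonal
            self.to_shared1 = ortho(nn.Linear(hidden_size, hidden_size, bias=False))
            self.to_shared2 = ortho(nn.Linear(hidden_size, hidden_size, bias=False))
            self.k = k

        def transfer_into_model1(self, h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
            z1, z2 = self.to_shared1(h1), self.to_shared2(h2)              # both states into the shared space
            z = torch.cat([z2[..., :self.k], z1[..., self.k:]], dim=-1)    # take the aligned variable from model 2
            return z @ self.to_shared1.weight                              # back to model 1's native hidden space

Both maps would then be trained so that model 1, run forward from the patched state, behaves as if it had received model 2's value of the variable (e.g., the transferred count).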

Interpreting the retinal neural code for natural scenes: From computations to neurons

Date Published: Sept. 6, 2023
Journal/Venue: Neuron

Niru Maheswaranathan*, Lane T McIntosh*, Hidenori Tanaka*, Satchel Grant*, David B Kastner, Joshua B Melander, Aran Nayebi, Luke E Brezovec, Julia H Wang, Surya Ganguli, Stephen A Baccus

Abstract: Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model’s internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.

This was a big collaboration over the course of many years. I love this work because it is a beautiful demonstration of how to establish an isomorphism between biological and artificial neural networks, and it shows how you can use that sort of model for interpreting the real biological neural code. I am a co-first author on this work for writing most of the project code, developing many architectural improvements, and developing much of the interneuron comparison analyses.
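
For a concrete picture of the model class involved, here is a loose sketch (with made-up layer sizes, not the published architecture): a three-layer CNN maps a short stimulus movie to ganglion cell firing rates, and the intermediate channels play the role of the candidate model interneurons that get compared to recorded interneurons.

    import torch
    import torch.nn as nn

    class RetinaCNNSketch(nn.Module):
        """Illustrative three-layer CNN: stimulus history -> nonnegative firing rates."""
        def __init__(self, n_frames: int = 40, n_cells: int = 5):
            super().__init__()
            self.layer1 = nn.Sequential(nn.Conv2d(n_frames, 8, kernel_size=15), nn.Softplus())
            self.layer2 = nn.Sequential(nn.Conv2d(8, 8, kernel_size=11), nn.Softplus())
            self.readout = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_cells), nn.Softplus())

        def forward(self, stim: torch.Tensor) -> torch.Tensor:  # stim: (batch, n_frames, H, W)
            a1 = self.layer1(stim)    # first-layer units: bipolar-cell-like candidates
            a2 = self.layer2(a1)      # second-layer units: amacrine/interneuron-like candidates
            return self.readout(a2)   # predicted ganglion cell firing rates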

A mechanistically interpretable model of the retinal neural code for natural scenes with multiscale adaptive dynamics

Date Published: March 4, 2022
Journal/Venue: Asilomar

Xuehao Ding, Dongsoo Lee, Satchel Grant, Heike Stein, Lane McIntosh, Niru Maheswaranathan, Stephen Baccus

Abstract: The visual system processes stimuli over a wide range of spatiotemporal scales, with individual neurons receiving input from tens of thousands of neurons whose dynamics range from milliseconds to tens of seconds. This poses a challenge to create models that both accurately capture visual computations and are mechanistically interpretable. Here we present a model of salamander retinal ganglion cell spiking responses recorded with a multielectrode array that captures natural scene responses and slow adaptive dynamics. The model consists of a three-layer convolutional neural network (CNN) modified to include local recurrent synaptic dynamics taken from a linear-nonlinear-kinetic (LNK) model. We presented alternating natural scenes and uniform field white noise stimuli designed to engage slow contrast adaptation. To overcome difficulties fitting slow and fast dynamics together, we first optimized all fast spatiotemporal parameters, then separately optimized recurrent slow synaptic parameters. The resulting full model reproduces a wide range of retinal computations and is mechanistically interpretable, having internal units that correspond to retinal interneurons with biophysically modeled synapses. This model allows us to study the contribution of model units to any retinal computation, and examine how long-term adaptation changes the retinal neural code for natural scenes through selective adaptation of retinal pathways.

This project was a good extension of the CNN retinal model that I listed earlier. In this work, we managed to give the CNN model recurrence and used previously fit kinetic constants to get the model to exhibit slow adaptation (something the previous work lacked).
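
The two-stage fitting procedure mentioned in the abstract can be pictured roughly like this (a hedged sketch: the parameter naming, optimizers, and loss are assumptions, not the project code): optimize everything except the slow kinetic parameters first, then freeze those fast parameters and fit only the slow synaptic dynamics.

    import torch
    from torch.nn import PoissonNLLLoss

    def fit_two_stage(model, fast_loader, slow_loader, slow_key: str = "kinetic"):
        """Sketch: fit fast spatiotemporal parameters, then freeze them and fit slow kinetics."""
        loss_fn = PoissonNLLLoss(log_input=False)  # spiking responses -> Poisson likelihood
        fast = [p for n, p in model.named_parameters() if slow_key not in n]
        slow = [p for n, p in model.named_parameters() if slow_key in n]

        opt = torch.optim.Adam(fast, lr=1e-3)      # stage 1: fast parameters only
        for stim, spikes in fast_loader:
            opt.zero_grad()
            loss_fn(model(stim), spikes).backward()
            opt.step()

        for p in fast:                             # freeze the fast parameters
            p.requires_grad_(False)
        opt = torch.optim.Adam(slow, lr=1e-4)      # stage 2: slow synaptic parameters only
        for stim, spikes in slow_loader:
            opt.zero_grad()
            loss_fn(model(stim), spikes).backward()
            opt.step()
        return model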

Unpublished Projects

Direct Manifold Capacity Optimization

Date: October 9, 2024

Satchel Grant, Chi-Ning Chou, Thomas Edward Yerxa, SueYeon Chung

Abstract: Manifold capacity is a tool for interpreting artificial and biological neural representations. Although the technique has shown utility in many analyses, an open question remains about whether the theory can also be used as a training objective for useful/robust neural representations. Previous work has made progress towards this goal in self-supervised learning settings by making assumptions about the shape of the manifolds. In this work, we use differentiable quadratic programming to maximize manifold capacity directly, without using simplifying assumptions. We show that our technique can match the overall performance of the pre-existing baselines with the ability to tune a hyperparameter to minimize the cumulative gradient steps or the total training samples. Our results show promise for exploring domains less suited to pre-existing simplifying assumptions, and our results add to the mounting evidence of manifold capacity as a powerful tool for characterizing neural representations.

This is ongoing work that will soon be submitted to ICML.
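
The "differentiable quadratic programming" ingredient can be demonstrated in isolation. The toy below (one possible way to do it, using cvxpylayers; it is not the project's capacity objective) solves a small geometric QP, the closest point to the origin in the convex hull of a manifold's features, inside a PyTorch graph so that gradients flow through the solver back into the features.

    import cvxpy as cp
    import torch
    from cvxpylayers.torch import CvxpyLayer

    m, d = 10, 4                       # points per manifold, feature dimension
    P = cp.Parameter((m, d))           # the manifold's feature points (rows)
    alpha = cp.Variable(m)             # convex-combination weights
    y = cp.Variable(d)                 # a point inside the hull
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y)),
                      [y == P.T @ alpha, alpha >= 0, cp.sum(alpha) == 1])
    qp_layer = CvxpyLayer(prob, parameters=[P], variables=[y, alpha])

    features = (torch.randn(m, d) + 1.0).requires_grad_(True)  # stand-in for encoder outputs
    y_star, _ = qp_layer(features)
    loss = -(y_star ** 2).sum()        # e.g., push the hull away from the origin
    loss.backward()                    # gradients flow through the QP solution into `features`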

Bidirectional Influences of Grounded Quantification and Language in Acquiring Numerical Cognitive Abilities

Date Released: Dec 6, 2023

Abstract: We explore the role of language in cognition within the domain of number, revisiting a debate on the role of exact count words in numeric matching tasks. To address these issues, we introduce a virtual environment to simulate exact equivalence tasks like those used to study the numerical abilities of members of the Pirahã tribe, who lack exact number words, in previous works. We use recurrent neural networks to model visuospatially grounded counting behavior with and without the influence of exact number words. We find that it is possible for networks to learn to perform exact numeric matching tasks correctly up to non-trivial quantities with and without the use of exact number words. Importantly, however, networks with limited counting experience with and without language capture the approximate behavior exhibited by adult members of the Pirahã and young children learning to count in cultures with number words. Our networks also exhibit aspects of human numerical cognition purely through learning to solve the tasks: a flat coefficient of variation and a compressed mental number representation. We explore the causal influences of language and actions, showing that number words decrease the amount of experience needed to learn the numeric matching tasks, and learning the task actions reduces experience needed to learn number words. We use these results as a proof of principle for expanding our understanding of number cognition, and we suggest refinement to our collective understanding of the interactions between language and thought.

This is ongoing work that will soon be submitted to Cognition.

Leveraging Large Language Models for Context Compression

Date Released: May 31, 2023

Abstract: Large Language Models (LLMs) have demonstrated remarkable performance on a wide range of language modeling tasks. LLMs have also demonstrated an ability to learn new tasks from clever prompt sequences, without the need for gradient updates. The length of an LLM's context window, however, has quadratic computational complexity, making large context windows prohibitively expensive. Furthermore, a problem with LLMs as models of cognition is their perfect memory for tokens within their context window, and their non-existent memory for things outside of their context window in the absence of weight updates. To address the challenges of large context windows, we introduce a technique that uses pretrained LLMs to create compressed representations of sub-sequences within the context. We introduce a new token type that can be trained to compress a history of tokens at inference without additional gradient updates after training. These tokens serve to increase the context size while taking a step toward aligning LLMs with human stimulus abstraction. We use this technique to augment the open source Bloom models, and we show that the compressed representations can recover ~80% of the performance of the LLMs using the full context.
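
A cartoon version of the compression idea (a sketch with a generic encoder standing in for the pretrained LLM; the token counts, sizes, and class names here are all illustrative): append a trainable compression token to a chunk of context, keep its output embedding as the chunk's summary, and let later tokens attend to that single summary instead of the full chunk.

    import torch
    import torch.nn as nn

    class ContextCompressorSketch(nn.Module):
        """Toy illustration: one trainable token summarizes a chunk of prior context."""
        def __init__(self, vocab: int = 1000, d_model: int = 256, n_layers: int = 2):
            super().__init__()
            self.embed = nn.Embedding(vocab, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.lm = nn.TransformerEncoder(layer, n_layers)            # stand-in for a pretrained LM
            self.cmp_token = nn.Parameter(torch.randn(1, 1, d_model))   # the new compression token

        def compress(self, chunk_ids: torch.Tensor) -> torch.Tensor:    # chunk_ids: (batch, chunk_len)
            x = self.embed(chunk_ids)
            x = torch.cat([x, self.cmp_token.expand(x.size(0), -1, -1)], dim=1)
            return self.lm(x)[:, -1:, :]                                # (batch, 1, d_model) summary

        def forward(self, chunk_ids: torch.Tensor, new_ids: torch.Tensor) -> torch.Tensor:
            summary = self.compress(chunk_ids)                          # many positions collapse into one
            return self.lm(torch.cat([summary, self.embed(new_ids)], dim=1))

The point of the real setup is that, once the compression machinery is trained, this summarization happens at inference time with no further gradient updates.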

I never submitted this to any conferences because it ended up being very similar to Jesse Mu's work, Learning to Compress Prompts with Gist Tokens. Then Alexis Chevalier et al. published Adapting Language Models to Compress Contexts, which pursues the exact same idea. Chevalier et al. managed to scale things up very nicely, with seemingly good results.

Spontaneous Decomposition from Grouped Network Pathways

Date Submitted: Dec 2022 (hosted online Jul 1, 2022)

Abstract: There have been many recent breakthroughs in self-supervised learning (SSL), i.e. unsupervised techniques used to obtain general purpose image features for downstream tasks. However, these methods often require large amounts of computational resources, and much is still unknown about how architectural choices affect the quality of self-supervised learned representations. There is still a lack of understanding of why compositional features spontaneously arise in previous self-supervised publications. In this work, we propose a class of models that is reminiscent of an ensemble. We show how this class of models can greatly reduce the number of parameters needed for learning robust representations in a self-supervised setting. Additionally, we show that sparsely connected network pathways spontaneously create decomposed representations.

In this work, we imposed network pathway grouping on a simple CNN architecture and found that different isolated subpathways would spontaneously learn distinct features of the training data. We also showed that this grouped pathway architecture had performance benefits over vanilla variants when holding parameter counts constant. We also made a poster here. I think this work was great, but we struggled with our message and audience. It was a project for Stanford's Computer Vision course, so we attempted to frame the project as an architectural contribution, validating the representations on a performance benchmark (CIFAR10 image classification). I think the project is more interesting, however, for its qualitative findings about learned representations. I still think this work has promise, but I'm not familiar with the broader literature. And since we completed this project, I think there has been some good theory work that could potentially explain our findings in terms of a Neural Race Reduction.
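
The grouping itself is easy to reproduce with grouped convolutions; the following is a rough sketch in that spirit (the layer sizes and the input-replication trick are assumptions for illustration, not necessarily what we did in the project).

    import torch
    import torch.nn as nn

    class GroupedPathwayCNN(nn.Module):
        """Rough sketch: `groups=` in every conv keeps the pathways disjoint, so each group of
        channels is an isolated sub-network that only mixes with the others at the readout."""
        def __init__(self, n_groups: int = 4, width: int = 16, n_classes: int = 10):
            super().__init__()
            self.n_groups = n_groups
            c = n_groups * width
            self.pathways = nn.Sequential(
                nn.Conv2d(3 * n_groups, c, 3, padding=1, groups=n_groups), nn.ReLU(),
                nn.Conv2d(c, c, 3, stride=2, padding=1, groups=n_groups), nn.ReLU(),
                nn.Conv2d(c, c, 3, stride=2, padding=1, groups=n_groups), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.readout = nn.Linear(c, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, 3, H, W), e.g. CIFAR10
            x = x.repeat(1, self.n_groups, 1, 1)              # give every pathway its own copy of the image
            return self.readout(self.pathways(x))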

Improving Chain of Thought with Chain of Shortcuts

Date Released: Dec 6, 2023

Abstract: Large Language Models (LLMs) have demonstrated remarkable language modeling and sequence modeling capabilities with capabilities like In-Context Learning (ICL) and Chain of Thought (CoT) reasoning, akin to human working memory and reasoning. Drawing inspiration from dual process theory in human cognition, we propose a novel training technique called Chain of Shortcuts (CoS) that bridges the gap between LLMs’ System 1 (automatic) and System 2 (deliberate) modes. CoS enables LLMs to compress reasoning trajectories, encouraging associations between earlier and later steps in problem-solving, resulting in shorter, more flexible solutions. We demonstrate that CoS-trained language models maintain or outperform baseline models while generating distilled problem solutions, enhancing stability during training, and excelling in high-temperature environments. CoS’s effectiveness increases with the number of transformer layers until saturation. Our work not only contributes to mathematical transformers but also offers insights into human dual process theory, paving the way for more efficient and robust AI systems.
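
To give a concrete sense of what "compressing reasoning trajectories" could mean in practice, here is a hedged sketch of one way to build shortcut training examples by dropping intermediate steps of a worked solution; the actual CoS recipe may differ in its details.

    import random

    def chain_of_shortcuts(question: str, steps: list[str], max_skip: int = 2) -> str:
        """Sketch: randomly drop intermediate reasoning steps so the model learns to
        jump from earlier states directly to later ones."""
        kept, i = [], 0
        while i < len(steps):
            kept.append(steps[i])
            i += 1 + random.randint(0, max_skip)   # skip up to `max_skip` following steps
        if kept[-1] != steps[-1]:
            kept.append(steps[-1])                 # always keep the final answer
        return question + "\n" + "\n".join(kept)

    # One full chain of thought and one shortcutted training variant of it:
    steps = ["12 * 3 = 36", "36 + 4 = 40", "40 / 5 = 8", "Answer: 8"]
    print(chain_of_shortcuts("Compute (12 * 3 + 4) / 5.", steps))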

Overall, this project's direction no longer seems promising as a long-term focus 😢 Another paper called GPT Can Solve Mathematical Problems Without a Calculator came out in September and essentially does what we were moving toward in terms of a Computer Science contribution. And the cognitive focus of this work is probably too abstract to be much of a contribution. This writeup was intended to be a NeurIPS workshop submission, but due to the reasons mentioned above, combined with a misinterpretation of the workshop deadline (12AM vs 12PM 😅), it was never submitted (and probably never will be).