Satchel Grant

About Me

I'm currently a 4th year PhD candidate at Stanford studying cognition in Jay McClelland's lab. I study mathematical cognition, abstract reasoning, and generalization in computational settings, mainly through the use of mechanistic interpretability methods in artificial neural networks. My interests tend to fall at the intersection of psych, neuro, and comp sci.

This site is mainly to showcase projects that are either published or promising/interesting but probably won't be published. These latter projects may find themselves listed here because they're in a reasonable state but I can't find the time to pursue them further, or because someone else beat me to the punch.

Note that the date near the beginning of each entry refers to the date that the linked writeup was pushed to GitHub or published. This is not necessarily the date that the blog entry was made!

CV.pdf

Published Work

Emergent Symbol-like Number Variables in Artificial Neural Networks

Date Submitted: October 1, 2024
Journal/Venue: ICLR 2025 (under review)

Satchel Grant, Noah D. Goodman, James L. McClelland

Abstract: Symbolic programs, defined by discrete variables with explicit rules and relations, often have the benefit of interpretability, ease of communication, and generalization. This is contrasted against neural systems, consisting of distributed representations with rules and relations defined by learned parameters, which often have opaque inner mechanisms. There is an interest in finding unity between these two types of systems for cognitive and computer scientists alike. There is no guarantee, however, that these two types of systems are reconcilable. To what degree do neural networks induce abstract, mutable, slot-like variables in order to achieve next-token prediction (NTP) goals? Can neural functions be thought of analogously to a computer program? In this work, we train neural systems using NTP on numeric cognitive tasks and then seek to understand them at the level of symbolic programs. We use a combination of causal interventions and visualization methods in pursuit of this goal. We find that models of sufficient dimensionality do indeed develop strong analogs of symbolic algorithms purely from the NTP objective. We then ask how variations on the tasks and model architectures affect the models' learned solutions to find that numeric symbols are not formed for every variant of the task, and transformers solve the problem in a different fashion than their recurrent counterparts. Lastly, we show that in all cases, some degree of gradience exists in the neural symbols, highlighting the difficulty of finding simple, interpretable symbolic stories of how neural networks perform their tasks. Taken together, our results are consistent with the view that neural networks can approximate interpretable symbolic programs of number cognition, but the particular program they approximate and the extent to which they approximate it can vary widely, depending on the network architecture, training data, extent of training, and network size.

This is the continuation of an ICLR Re-Align 2024 Workshop paper linked below. This work provides a good demonstration of what types of distributed representations can form as a function of next-token prediction, and how these representations can change depending on architectural choices such as model size or attention vs. recurrence. This is hopefully the first of many of my projects that use causal analyses to address claims about mechanism.
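
To give a flavor of what I mean by causal interventions: the basic move is an interchange (activation-patching) intervention, where you run the model on a "base" input, transplant part of the hidden state from a run on a "source" input, and check whether the output changes the way a symbolic variable would predict. Here's a minimal toy sketch in PyTorch (the model, sizes, and task are illustrative, not the paper's actual setup):

    # Minimal sketch (not the paper's code) of an interchange intervention:
    # run the model on a "base" input, overwrite part of its hidden state with
    # activations from a "source" input, and see whether the prediction shifts
    # the way a symbolic "count" variable would predict.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    class TinyCounter(nn.Module):
        """Toy GRU that reads a token sequence and predicts the next token."""
        def __init__(self, vocab=12, hidden=32):
            super().__init__()
            self.emb = nn.Embedding(vocab, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, tokens, h_swap=None, swap_step=None, dims=None):
            h = torch.zeros(1, tokens.size(0), self.rnn.hidden_size)
            states = []
            for t in range(tokens.size(1)):
                x = self.emb(tokens[:, t:t+1])
                _, h = self.rnn(x, h)
                if swap_step is not None and t == swap_step:
                    # interchange intervention: transplant a subspace of the
                    # source run's hidden state into the base run
                    h = h.clone()
                    h[0, :, dims] = h_swap[0, :, dims]
                states.append(h)
            return self.out(h[0]), states

    model = TinyCounter()
    base = torch.randint(0, 12, (1, 8))    # hypothetical "base" sequence
    source = torch.randint(0, 12, (1, 8))  # hypothetical "source" sequence

    # 1) cache the source run's hidden state at the intervention step
    _, src_states = model(source)
    # 2) patch it into the base run and compare predictions
    logits_base, _ = model(base)
    logits_patched, _ = model(base, h_swap=src_states[4], swap_step=4,
                              dims=torch.arange(8))
    print(logits_base.argmax(-1), logits_patched.argmax(-1))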

Symbolic Variables in Distributed Networks that Count

Date Published: March 2, 2024
Journal/Venue: ICLR Re-Align Workshop and CogSci 2024

Satchel Grant, Zhengxuan Wu, James L. McClelland, Noah D. Goodman

Abstract: The discrete entities of symbolic systems and their explicit relations make symbolic systems more transparent and easier to communicate. This is in contrast to neural systems, which are often opaque. It is understandable that psychologists often pursue interpretations of human cognition using symbolic characterizations, and it is clear that the ability to find symbolic variables within neural systems would be beneficial for interpreting and controlling Artificial Neural Networks (ANNs). Symbolic interpretations can, however, oversimplify non-symbolic systems. This has been demonstrated in findings from research on children's performance on tasks thought to depend on a concept of exact number, where recent findings suggest a gradience of counting ability in children's learning trajectories. In this work, we take inspiration from these findings to explore the emergence of symbolic representations in ANNs. We demonstrate how to align recurrent neural representations with high-level, symbolic representations of number by causally intervening on the neural system. We find that consistent, discrete representations of numbers do emerge in ANNs. We use this to inform the discussion on how neural systems represent quantity. The symbol-like representations in the network, however, evolve with learning, and can continue to vary after the neural network consistently solves the task, demonstrating the graded nature of symbolic variables in distributed systems.

This is an early iteration of ongoing work. The general direction is interesting because DAS (Distributed Alignment Search) effectively allows us to do precise cognitive science directly on distributed networks. This is useful for understanding how humans come to do the seemingly symbolic processing that they do, and it's also promising for better understanding how artificial neural networks perform abstract reasoning and generalization.
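
For readers unfamiliar with DAS: the core operation learns a rotation of the hidden space and performs interchange interventions in the rotated basis, training the rotation so that the patched network behaves the way the hypothesized symbolic variable would predict. A minimal sketch of that rotate-swap-rotate-back step (sizes and names are illustrative, not from the paper):

    # Minimal sketch of the core DAS operation: learn a rotation of the hidden
    # space, swap the first k rotated dimensions (the hypothesized "number"
    # variable) between a source and a base run, and rotate back. The rotation
    # is trained so the patched model behaves as the symbolic counterfactual
    # predicts. Names and sizes here are illustrative, not the paper's.
    import torch
    import torch.nn as nn
    import torch.nn.utils.parametrizations as P

    hidden, k = 64, 8                      # hidden size, size of aligned subspace
    rotation = P.orthogonal(nn.Linear(hidden, hidden, bias=False))

    def interchange(h_base, h_source, k):
        """Swap the first k dims of the rotated representations."""
        r_base, r_src = rotation(h_base), rotation(h_source)
        patched = torch.cat([r_src[..., :k], r_base[..., k:]], dim=-1)
        return patched @ rotation.weight   # rotate back (the weight is orthogonal)

    h_base = torch.randn(4, hidden)        # hidden states from a "base" run
    h_source = torch.randn(4, hidden)      # hidden states from a "source" run
    h_patched = interchange(h_base, h_source, k)

    # In DAS training, h_patched is fed back into the rest of the network and the
    # rotation is optimized so the final output matches the counterfactual label
    # (e.g., the count that the source run "should" have injected).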

Interpreting the retinal neural code for natural scenes: From computations to neurons

Date Published: Sept. 6, 2023
Journal/Venue: Neuron

Niru Maheswaranathan*, Lane T McIntosh*, Hidenori Tanaka*, Satchel Grant*, David B Kastner, Joshua B Melander, Aran Nayebi, Luke E Brezovec, Julia H Wang, Surya Ganguli, Stephen A Baccus

Abstract: Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model’s internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.

This was a big collaboration over the course of many years. I love this work because it is a beautiful demonstration of how to establish an isomorphism between biological and artificial neural networks, and it shows how you can use that sort of model to interpret the real biological neural code. I am a co-first author on this work for writing most of the project code, developing many of the architectural improvements, and building much of the interneuron comparison analysis.
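
For a rough sense of the kind of model involved: the network is a small stimulus-to-firing-rate CNN with convolutional "interneuron" layers and a fully connected ganglion-cell readout. Here's an illustrative PyTorch sketch (filter counts, kernel sizes, and input shape are made up, not the paper's exact configuration):

    # Rough sketch of a three-layer retina-style CNN: two convolutional
    # "interneuron" layers followed by a fully connected readout onto ganglion
    # cells, with a softplus nonlinearity to keep firing rates nonnegative.
    import torch
    import torch.nn as nn

    class RetinaCNN(nn.Module):
        def __init__(self, n_cells=5, history=40):
            super().__init__()
            # input: (batch, history, H, W) movie clip, with time as channels
            self.layer1 = nn.Sequential(nn.Conv2d(history, 8, kernel_size=15),
                                        nn.BatchNorm2d(8), nn.ReLU())
            self.layer2 = nn.Sequential(nn.Conv2d(8, 8, kernel_size=11),
                                        nn.BatchNorm2d(8), nn.ReLU())
            self.readout = nn.Sequential(nn.Flatten(),
                                         nn.LazyLinear(n_cells),
                                         nn.Softplus())

        def forward(self, x):
            return self.readout(self.layer2(self.layer1(x)))

    model = RetinaCNN()
    clip = torch.randn(2, 40, 50, 50)   # 2 clips, 40 frames, 50x50 pixels
    rates = model(clip)                 # predicted firing rates, shape (2, 5)
    print(rates.shape)
    # Training minimizes a spiking loss (e.g. Poisson) against recorded responses,
    # and the hidden conv units are the "model interneurons" compared to recordings.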

A mechanistically interpretable model of the retinal neural code for natural scenes with multiscale adaptive dynamics

Date Published: March 4, 2022
Journal/Venue: Asilomar

Xuehao Ding, Dongsoo Lee, Satchel Grant, Heike Stein, Lane McIntosh, Niru Maheswaranathan, Stephen Baccus

Abstract: The visual system processes stimuli over a wide range of spatiotemporal scales, with individual neurons receiving input from tens of thousands of neurons whose dynamics range from milliseconds to tens of seconds. This poses a challenge to create models that both accurately capture visual computations and are mechanistically interpretable. Here we present a model of salamander retinal ganglion cell spiking responses recorded with a multielectrode array that captures natural scene responses and slow adaptive dynamics. The model consists of a three-layer convolutional neural network (CNN) modified to include local recurrent synaptic dynamics taken from a linear-nonlinear-kinetic (LNK) model. We presented alternating natural scenes and uniform field white noise stimuli designed to engage slow contrast adaptation. To overcome difficulties fitting slow and fast dynamics together, we first optimized all fast spatiotemporal parameters, then separately optimized recurrent slow synaptic parameters. The resulting full model reproduces a wide range of retinal computations and is mechanistically interpretable, having internal units that correspond to retinal interneurons with biophysically modeled synapses. This model allows us to study the contribution of model units to any retinal computation, and examine how long-term adaptation changes the retinal neural code for natural scenes through selective adaptation of retinal pathways.

This project was a good extension of the CNN retinal model that I listed earlier. In this work, we managed to give the CNN model recurrence, and we used previously fit kinetic constants to get the model to exhibit slow adaptation (something that was lacking from the previous work).
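
The two-stage fitting procedure from the abstract is the part that's easiest to illustrate: fit the fast spatiotemporal parameters first, then freeze them and fit only the slow synaptic/kinetic parameters. A toy sketch of that idea (the model here is a stand-in, not the project's actual architecture):

    # Hedged sketch of the two-stage fitting idea: first fit the fast
    # spatiotemporal parameters, then freeze them and fit only the slow
    # "kinetic" parameters. The model below is a toy stand-in.
    import torch
    import torch.nn as nn

    class ToyAdaptiveModel(nn.Module):
        """Stand-in model: a fast linear filter plus a slow adaptive gain."""
        def __init__(self, n_in=100, n_out=5):
            super().__init__()
            self.fast = nn.Linear(n_in, n_out)                 # fast spatiotemporal filter
            self.slow_gain = nn.Parameter(torch.ones(n_out))   # slow adaptive parameter

        def forward(self, stim):
            return torch.nn.functional.softplus(self.fast(stim) * self.slow_gain)

    def fit(params, batches, model, loss_fn):
        opt = torch.optim.Adam(params, lr=1e-3)
        for stim, spikes in batches:
            opt.zero_grad()
            loss_fn(model(stim), spikes).backward()
            opt.step()

    model = ToyAdaptiveModel()
    batches = [(torch.randn(8, 100), torch.rand(8, 5)) for _ in range(100)]
    loss = nn.PoissonNLLLoss(log_input=False)

    # Stage 1: fit the fast parameters only.
    fit(list(model.fast.parameters()), batches, model, loss)

    # Stage 2: freeze the fast parameters and fit the slow ones.
    for p in model.fast.parameters():
        p.requires_grad_(False)
    fit([model.slow_gain], batches, model, loss)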

Unpublished Projects

Direct Manifold Capacity Optimization

Date: October 9, 2024

Satchel Grant, Chi-Ning Chou, Thomas Edward Yerxa, SueYeon Chung

Abstract: Manifold capacity is a tool for interpreting artificial and biological neural representations. Although the technique has shown utility in many analyses, an open question remains about whether the theory can also be used as a training objective for useful/robust neural representations. Previous work has made progress towards this goal in self-supervised learning settings by making assumptions about the shape of the manifolds. In this work, we use differentiable quadratic programming to maximize manifold capacity directly, without using simplifying assumptions. We show that our technique can match the overall performance of the pre-existing baselines with the ability to tune a hyperparameter to minimize the cumulative gradient steps or the total training samples. Our results show promise for exploring domains less suited to pre-existing simplifying assumptions, and our results add to the mounting evidence of manifold capacity as a powerful tool for characterizing neural representations.

This is ongoing work that will soon be submitted to ICML.
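
To illustrate the underlying idea (not the actual method): you can make a margin quadratic program differentiable by unrolling its solver and then backpropagate through its solution into the representation itself. Below is a toy stand-in using a soft-margin objective for a single dichotomy; the real capacity objective averages over many random dichotomies and uses a proper manifold formulation.

    # Toy illustration (not the paper's method): solve a soft-margin quadratic
    # program by unrolled gradient descent, keeping the inner updates on the
    # autograd graph so the solution is differentiable with respect to the
    # representation, then train the encoder to make separation easier.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    x = torch.randn(40, 20)                          # two "manifolds" of 20 points each
    y = torch.cat([torch.ones(20), -torch.ones(20)]) # one random dichotomy

    def unrolled_svm(feats, y, steps=50, lr=0.1, c=1.0):
        """Soft-margin SVM primal solved by unrolled gradient descent."""
        w = torch.zeros(feats.size(1), requires_grad=True)
        b = torch.zeros(1, requires_grad=True)
        for _ in range(steps):
            margins = y * (feats @ w + b)
            obj = 0.5 * (w ** 2).sum() + c * torch.clamp(1 - margins, min=0).sum()
            gw, gb = torch.autograd.grad(obj, (w, b), create_graph=True)
            w = w - lr * gw
            b = b - lr * gb
        margins = y * (feats @ w + b)
        return 0.5 * (w ** 2).sum() + c * torch.clamp(1 - margins, min=0).sum()

    for step in range(20):
        opt.zero_grad()
        loss = unrolled_svm(encoder(x), y)   # lower objective => easier, wider-margin separation
        loss.backward()
        opt.step()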

Bidirectional Influences of Grounded Quantification and Language in Acquiring Numerical Cognitive Abilities

Date Released: Dec 6, 2023

Abstract: We explore the role of language in cognition within the domain of number, revisiting a debate on the role of exact count words in numeric matching tasks. To address these issues, we introduce a virtual environment to simulate exact equivalence tasks like those used to study the numerical abilities of members of the Pirahã tribe, who lack exact number words, in previous works. We use recurrent neural networks to model visuospatially grounded counting behavior with and without the influence of exact number words. We find that it is possible for networks to learn to perform exact numeric matching tasks correctly up to non-trivial quantities with and without the use of exact number words. Importantly, however, networks with limited counting experience with and without language capture the approximate behavior exhibited by adult members of the Pirahã and young children learning to count in cultures with number words. Our networks also exhibit aspects of human numerical cognition purely through learning to solve the tasks: a flat coefficient of variation and a compressed mental number representation. We explore the causal influences of language and actions, showing that number words decrease the amount of experience needed to learn the numeric matching tasks, and learning the task actions reduces experience needed to learn number words. We use these results as a proof of principle for expanding our understanding of number cognition, and we suggest refinement to our collective understanding of the interactions between language and thought.

This is ongoing work that will soon be submitted to Cognition.

Leveraging Large Language Models for Context Compression

Date Released: May 31, 2023

Abstract: Large Language Models (LLMs) have demonstrated remarkable performance on a wide range of language modeling tasks. LLMs have also demonstrated an ability to learn new tasks from clever prompt sequences, without the need for gradient updates. The length of an LLM's context window, however, has quadratic computational complexity, making large context windows prohibitively expensive. Furthermore, a problem with LLMs as models of cognition is their perfect memory for tokens within their context window, and their non-existent memory for things outside of their context window in the absence of weight updates. To address the challenges of large context windows, we introduce a technique that uses pretrained LLMs to create compressed representations of sub-sequences within the context. We introduce a new token type that can be trained to compress a history of tokens at inference without additional gradient updates after training. These tokens serve to increase the context size while taking a step toward aligning LLMs with human stimulus abstraction. We use this technique to augment the open source Bloom models, and we show that the compressed representations can recover ~80% of the performance of the LLMs using the full context.

I never submitted this to any conferences because it ended up being very similar to Jesse Mu's work, Learning to Compress Prompts with Gist Tokens. Then Alexis Chevalier et al. published Adapting Language Models to Compress Contexts, which pursues essentially the same idea. Chevalier et al. managed to scale things up very nicely, with seemingly good results.
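
For the curious, here's a minimal sketch of the compression-token idea (the tiny transformer, sizes, and names are illustrative, not the actual models used): append a few learned compression embeddings to a chunk of context, keep only the hidden states at those positions, and reuse them as a short soft prefix in place of the full chunk.

    # Minimal sketch of the compression-token idea with an illustrative tiny
    # transformer: append learned "compression" embeddings to a chunk of
    # context, keep only the hidden states at those positions, and reuse them
    # as a short soft prefix when processing later text.
    import torch
    import torch.nn as nn

    d_model, n_cmpr, vocab = 64, 4, 1000
    embed = nn.Embedding(vocab, d_model)
    cmpr_tokens = nn.Parameter(torch.randn(n_cmpr, d_model) * 0.02)
    layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)

    def causal_mask(n):
        return torch.triu(torch.full((n, n), float("-inf")), diagonal=1)

    def compress(chunk_ids):
        """Return n_cmpr vectors summarizing a chunk of token ids."""
        x = torch.cat([embed(chunk_ids),
                       cmpr_tokens.expand(chunk_ids.size(0), -1, -1)], dim=1)
        h = encoder(x, mask=causal_mask(x.size(1)))
        return h[:, -n_cmpr:]                 # keep only the compression slots

    chunk1 = torch.randint(0, vocab, (1, 32)) # earlier context to compress
    chunk2 = torch.randint(0, vocab, (1, 16)) # current context

    prefix = compress(chunk1)                 # 4 vectors stand in for 32 tokens
    x = torch.cat([prefix, embed(chunk2)], dim=1)
    h = encoder(x, mask=causal_mask(x.size(1)))
    print(h.shape)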

Spontaneous Decomposition from Grouped Network Pathways

Date Submitted: Dec 2022 (hosted online Jul 1, 2022)

Abstract: There have been many recent breakthroughs in self-supervised learning (SSL), i.e. unsupervised techniques used to obtain general purpose image features for downstream tasks. However, these methods often require large amounts of computational resources, and much is still unknown about how architectural choices affect the quality of self-supervised learned representations. There is still a lack of understanding of why compositional features spontaneously arise in previous self-supervised publications. In this work, we propose a class of models that is reminiscent of an ensemble. We show how this class of models can greatly reduce the number of parameters needed for learning robust representations in a self-supervised setting. Additionally, we show that sparsely connected network pathways spontaneously create decomposed representations.

In this work, we imposed network pathway grouping on a simple CNN architecture and found that different isolated subpathways would spontaneously learn distinct features of the training data. We also showed that this grouped pathway architecture had performance benefits over vanilla variants when holding parameter counts constant. We also made a poster here. I think this work was great, but we struggled with our message and audience. It was a project for Stanford's Computer Vision course, so we attempted to frame the project as an architectural contribution, validating the representations on a performance benchmark (CIFAR10 image classification). I think the project is more interesting, however, for its qualitative findings about learned representations. I still think this work has promise, but I'm not familiar with the greater literature. And since we completed this project, I think there has been some good theory work that could potentially explain our findings in terms of a Neural Race Reduction.
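
The pathway grouping itself is easy to sketch with PyTorch's groups argument: with groups set to 4, each convolution splits into four isolated pathways that only mix at the final readout. Illustrative sizes below, not the exact architecture from the project:

    # Hedged sketch of the grouped-pathway idea using PyTorch's `groups` argument:
    # with groups=4, each conv layer splits into 4 isolated pathways that never
    # exchange information until the final readout.
    import torch
    import torch.nn as nn

    class GroupedPathwayCNN(nn.Module):
        def __init__(self, n_paths=4, width=16, n_classes=10):
            super().__init__()
            c = n_paths * width
            self.stem = nn.Conv2d(3, c, kernel_size=3, padding=1)  # fan out to all paths
            self.paths = nn.Sequential(
                nn.ReLU(),
                nn.Conv2d(c, c, 3, padding=1, groups=n_paths), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(c, c, 3, padding=1, groups=n_paths), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.readout = nn.Linear(c, n_classes)  # the only place pathways mix

        def forward(self, x):
            return self.readout(self.paths(self.stem(x)))

    model = GroupedPathwayCNN()
    logits = model(torch.randn(8, 3, 32, 32))  # e.g. a CIFAR10-sized batch
    print(logits.shape)                        # (8, 10)

Holding the total width fixed, each grouped convolution uses roughly 1/n_paths as many parameters as its dense counterpart, which is where the parameter savings come from.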

Improving Chain of Thought with Chain of Shortcuts

Date Released: Dec 6, 2023

Abstract: Large Language Models (LLMs) have demonstrated remarkable language modeling and sequence modeling capabilities, including In-Context Learning (ICL) and Chain of Thought (CoT) reasoning, akin to human working memory and reasoning. Drawing inspiration from dual process theory in human cognition, we propose a novel training technique called Chain of Shortcuts (CoS) that bridges the gap between LLMs’ System 1 (automatic) and System 2 (deliberate) modes. CoS enables LLMs to compress reasoning trajectories, encouraging associations between earlier and later steps in problem-solving, resulting in shorter, more flexible solutions. We demonstrate that CoS-trained language models maintain or outperform baseline models while generating distilled problem solutions, enhancing stability during training, and excelling in high-temperature environments. CoS’s effectiveness increases with the number of transformer layers until saturation. Our work not only contributes to mathematical transformers but also offers insights into human dual process theory, paving the way for more efficient and robust AI systems.

Overall, this project's direction no longer seems promising as a long-term focus 😢 Another paper called GPT Can Solve Mathematical Problems Without a Calculator came out in September that essentially does what we were moving towards in terms of a computer science contribution. And the cognitive focus of this work is probably too abstract to be much of a contribution. This writeup was intended to be a NeurIPS workshop submission, but due to the reasons mentioned above, combined with a misinterpretation of the workshop deadline (12AM vs 12PM 😅), it was never submitted (and probably never will be).
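
For a sense of what shortcut training data could look like, here's one plausible (hypothetical, not our exact format) way shortcut examples might be built from a full step-by-step solution: alongside the complete trace, add examples that jump directly from an earlier step to a later one, so the model learns to compress intermediate reasoning.

    # Hypothetical sketch of building Chain-of-Shortcuts-style training pairs
    # from a full reasoning trace: keep the full chain of thought, and add
    # examples that skip directly from an earlier step to a later one.
    import random

    def make_shortcut_examples(question, steps, n_shortcuts=3, seed=0):
        """steps: ordered list of reasoning steps ending in the final answer."""
        rng = random.Random(seed)
        examples = [(question, " ".join(steps))]      # the full chain of thought
        for _ in range(n_shortcuts):
            i = rng.randrange(len(steps) - 1)
            j = rng.randrange(i + 1, len(steps))
            prompt = question + " " + " ".join(steps[:i + 1])
            examples.append((prompt, steps[j]))       # shortcut: skip steps i+1..j-1
        return examples

    steps = ["23 * 14 = 23 * 10 + 23 * 4",
             "23 * 10 = 230",
             "23 * 4 = 92",
             "230 + 92 = 322",
             "answer: 322"]
    for prompt, target in make_shortcut_examples("What is 23 * 14?", steps):
        print(prompt, "->", target)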