Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems. https://betterwithout.ai/AI-as-engineering

This episode has a lot of links! Here they are.

Michael Nielsen's "The role of 'explanation' in AI". https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI
Subbarao Kambhampati's "Changing the Nature of AI Research". https://dl.acm.org/doi/pdf/10.1145/3546954
Chris Olah and his collaborators: "Thread: Circuits". distill.pub/2020/circuits/
"An Overview of Early Vision in InceptionV1". distill.pub/2020/circuits/early-vision/
Dai et al., "Knowledge Neurons in Pretrained Transformers". https://arxiv.org/pdf/2104.08696.pdf
Meng et al., "Locating and Editing Factual Associations in GPT". rome.baulab.info
Meng et al., "Mass-Editing Memory in a Transformer". https://arxiv.org/pdf/2210.07229.pdf
François Chollet on image generators putting the wrong number of legs on horses: twitter.com/fchollet/status/1573879858203340800
Neel Nanda's "Longlist of Theories of Impact for Interpretability". https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability
Zachary C. Lipton's "The Mythos of Model Interpretability". https://arxiv.org/abs/1606.03490
Meng et al., "Locating and Editing Factual Associations in GPT". https://arxiv.org/pdf/2202.05262.pdf
Belrose et al., "Eliciting Latent Predictions from Transformers with the Tuned Lens". https://arxiv.org/abs/2303.08112
"Progress measures for grokking via mechanistic interpretability". https://arxiv.org/abs/2301.05217
Conmy et al., "Towards Automated Circuit Discovery for Mechanistic Interpretability". https://arxiv.org/abs/2304.14997
Elhage et al., "Softmax Linear Units". transformer-circuits.pub/2022/solu/index.html
Filan et al., "Clusterability in Neural Networks". https://arxiv.org/pdf/2103.03386.pdf
Cammarata et al., "Curve circuits". distill.pub/2020/circuits/curve-circuits/

You can support the podcast and get episodes a week early by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
Neel Nanda, a senior research scientist at Google DeepMind, leads their mechanistic interpretability team. In this extensive interview, he discusses his work trying to understand how neural networks function internally. At just 25 years old, Nanda has quickly become a prominent voice in AI research after completing his pure mathematics degree at Cambridge in 2020.

Nanda reckons that machine learning is unique because we create neural networks that can perform impressive tasks (like complex reasoning and software engineering) without understanding how they work internally. He compares this to having computer programs that can do things no human programmer knows how to write. His work focuses on "mechanistic interpretability" - attempting to uncover and understand the internal structures and algorithms that emerge within these networks.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focussed on ARC and AGI. They just acquired MindsAI - the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/
***

SHOWNOTES, TRANSCRIPT, ALL REFERENCES (DON'T MISS!): https://www.dropbox.com/scl/fi/36dvtfl3v3p56hbi30im7/NeelShow.pdf?rlkey=pq8t7lyv2z60knlifyy17jdtx&st=kiutudhc&dl=0

We riff on:
* How neural networks develop meaningful internal representations beyond simple pattern matching
* The effectiveness of chain-of-thought prompting and why it improves model performance
* The importance of hands-on coding over extensive paper reading for new researchers
* His journey from Cambridge to working with Chris Olah at Anthropic and eventually Google DeepMind
* The role of mechanistic interpretability in AI safety

NEEL NANDA:
https://www.neelnanda.io/
https://scholar.google.com/citations?user=GLnX3MkAAAAJ&hl=en
https://x.com/NeelNanda5

Interviewer - Tim Scarfe

TOC:
1. Part 1: Introduction
[00:00:00] 1.1 Introduction and Core Concepts Overview
2. Part 2: Outside Interview
[00:06:45] 2.1 Mechanistic Interpretability Foundations
3. Part 3: Main Interview
[00:32:52] 3.1 Mechanistic Interpretability
4. Neural Architecture and Circuits
[01:00:31] 4.1 Biological Evolution Parallels
[01:04:03] 4.2 Universal Circuit Patterns and Induction Heads
[01:11:07] 4.3 Entity Detection and Knowledge Boundaries
[01:14:26] 4.4 Mechanistic Interpretability and Activation Patching
5. Model Behavior Analysis
[01:30:00] 5.1 Golden Gate Claude Experiment and Feature Amplification
[01:33:27] 5.2 Model Personas and RLHF Behavior Modification
[01:36:28] 5.3 Steering Vectors and Linear Representations
[01:40:00] 5.4 Hallucinations and Model Uncertainty
6. Sparse Autoencoder Architecture
[01:44:54] 6.1 Architecture and Mathematical Foundations
[02:22:03] 6.2 Core Challenges and Solutions
[02:32:04] 6.3 Advanced Activation Functions and Top-k Implementations
[02:34:41] 6.4 Research Applications in Transformer Circuit Analysis
7. Feature Learning and Scaling
[02:48:02] 7.1 Autoencoder Feature Learning and Width Parameters
[03:02:46] 7.2 Scaling Laws and Training Stability
[03:11:00] 7.3 Feature Identification and Bias Correction
[03:19:52] 7.4 Training Dynamics Analysis Methods
8. Engineering Implementation
[03:23:48] 8.1 Scale and Infrastructure Requirements
[03:25:20] 8.2 Computational Requirements and Storage
[03:35:22] 8.3 Chain-of-Thought Reasoning Implementation
[03:37:15] 8.4 Latent Structure Inference in Language Models
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Showing SAE Latents Are Not Atomic Using Meta-SAEs, published by Bart Bussmann on August 24, 2024 on The AI Alignment Forum. Bart, Michael and Patrick are joint first authors. Research conducted as part of MATS 6.0 in Lee Sharkey and Neel Nanda's streams. Thanks to Mckenna Fitzgerald and Robert Krzyzanowski for their feedback!

TL;DR: Sparse Autoencoder (SAE) latents have been shown to typically be monosemantic (i.e. correspond to an interpretable property of the input). It is sometimes implicitly assumed that they are therefore atomic, i.e. simple, irreducible units that make up the model's computation. We provide evidence against this assumption by finding sparse, interpretable decompositions of SAE decoder directions into seemingly more atomic latents, e.g. Einstein -> science + famous + German + astronomy + energy + starts with E. We do this by training meta-SAEs: SAEs trained to reconstruct the decoder directions of a normal SAE. We argue that, conceptually, there's no reason to expect SAE latents to be atomic - when the model is thinking about Albert Einstein, it likely also thinks about Germanness, physicists, etc. Because Einstein always entails those things, the sparsest solution is to have the Albert Einstein latent also boost them.

Key results:
* SAE latents can be decomposed into more atomic, interpretable meta-latents.
* We show that when latents in a larger SAE have split out from latents in a smaller SAE, a meta-SAE trained on the larger SAE often recovers this structure.
* We demonstrate that meta-latents allow for more precise causal interventions on model behavior than SAE latents on a targeted knowledge editing task.
* We believe that the alternate, interpretable decomposition using meta-SAEs casts doubt on the implicit assumption that SAE latents are atomic.
* We show preliminary results that meta-SAE latents have significant overlap with latents in a normal SAE of the same size, but may relate differently to the larger SAEs used in meta-SAE training.
* We made a dashboard that lets you explore meta-SAE latents.

Terminology: Throughout this post we use "latents" to describe the concrete components of the SAE's dictionary, whereas "feature" refers to the abstract concepts, following Lieberum et al.

Introduction

Mechanistic interpretability (mech interp) attempts to understand neural networks by breaking down their computation into interpretable components. One of the key challenges of this line of research is the polysemanticity of neurons, meaning they respond to seemingly unrelated inputs. Sparse autoencoders (SAEs) have been proposed as a method for decomposing model activations into sparse linear sums of latents. Ideally, these latents should be monosemantic, i.e. respond to inputs that clearly share a similar meaning (implicitly, from the perspective of a human interpreter). That is, a human should be able to reason about the latents both in relation to the features to which they are associated, and also use the latents to better understand the model's overall behavior. There is a popular notion, both implicitly in related work on SAEs within mech interp and explicitly through the use of the term "atom" in sparse dictionary learning as a whole, that SAE features are atomic or can be "true features". However, monosemanticity does not imply atomicity.
Consider the example of shapes of different colors - the set of shapes is [circle, triangle, square], and the set of colors is [white, red, green, black], each of which is represented with a linear direction. 'Red triangle' represents a monosemantic feature, but not an atomic feature, as it can be decomposed into red and triangle. It has been shown that sufficiently wide SAEs on toy models will learn 'red triangle', rather than representing 'red' and 'triangle' with separate latents. Furthermore, whilst one may naively re...
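To make the meta-SAE idea above concrete, here is a minimal sketch of the training setup it describes: a second SAE whose training data is the first SAE's (normalised) decoder directions. The VanillaSAE class, all dimensions, the sparsity coefficient, and the training loop details are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaSAE(nn.Module):
    """Standard ReLU SAE with an L1 sparsity penalty (assumed architecture)."""
    def __init__(self, d_in: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_in, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_in) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_in))

    def forward(self, x):
        acts = F.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        recon = acts @ self.W_dec + self.b_dec
        return recon, acts

# base_sae: an SAE already trained on model activations (hypothetical d_model=768, 24576 latents)
base_sae = VanillaSAE(d_in=768, d_sae=24576)

# The meta-SAE's training data is the base SAE's decoder directions, one row per latent,
# normalised to unit norm so the meta-SAE reconstructs directions rather than magnitudes.
decoder_dirs = base_sae.W_dec.detach()
decoder_dirs = decoder_dirs / decoder_dirs.norm(dim=-1, keepdim=True)   # [24576, 768]

meta_sae = VanillaSAE(d_in=768, d_sae=2304)      # a much narrower meta-dictionary
opt = torch.optim.Adam(meta_sae.parameters(), lr=3e-4)

for step in range(1000):                          # toy training loop
    batch = decoder_dirs[torch.randint(0, decoder_dirs.shape[0], (4096,))]
    recon, acts = meta_sae(batch)
    loss = F.mse_loss(recon, batch) + 1e-3 * acts.abs().sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```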
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Extracting SAE task features for ICL, published by Dmitrii Kharlapenko on August 12, 2024 on The AI Alignment Forum.

TL;DR: We try to study task vectors in the SAE basis. This is challenging because there is no canonical way to convert an arbitrary vector in the residual stream to a linear combination of SAE features - you can't just pass an arbitrary vector through the encoder without going off distribution. We explored the algorithm of gradient pursuit suggested in Smith et al., but it didn't work for us without modifications. Our approach is to apply the SAE encoder to the task vector, and then apply a gradient-based cleanup. This exploits the fact that task vectors have a differentiable objective. We find that this gives a sparser and cleaner reconstruction, which is also highly interpretable, and also serves as a better task vector due to directly optimizing for log likelihood. This takes us from ~100 active features to ~10. Using our algorithm, we find two classes of SAE features involved in ICL. One of them recognizes the exact tasks or output formats from the examples, and another one encodes the tasks for execution by the model later on. We show that steering with these features has causal effects similar to task vectors. This work was produced as part of the ML Alignment & Theory Scholars Program - Summer 24 Cohort, under mentorship from Neel Nanda and Arthur Conmy.

Prior work

Task or function vectors are internal representations of some task that LLMs form while processing an ICL prompt. They can be extracted from a model running on a few-shot prompt and then be used to make it complete the same task without having any prior context or task description. Several papers (Function vectors in large language models, In-Context Learning Creates Task Vectors) have proposed different ways to extract those task vectors. They all center on feeding ICL examples to the model in the form "input output, … " and averaging the residuals on the "separator" token over a batch. This approach can reconstruct some part of the ICL performance but does not admit a straightforward conversion to the SAE basis. ITO with gradient pursuit can be used to do a sparse coding of a residual vector using SAE features. The post suggests using this algorithm for steering vector SAE decomposition. Since task vectors can be thought of as steering vectors, ITO may provide some insight into the ways they operate.

Initial Phi-3 experiments

Direct SAE task vector reconstruction

In our study we trained a set of gated SAEs for Phi-3 Mini 3.8B using a model-generated synthetic instruction dataset. While offering a sparse dictionary decomposition of residuals, SAEs tend to introduce a reconstruction error that impacts the performance of the model. They also have no guarantee of being able to decompose out-of-distribution vectors, and task vectors, being the product of averaging activations across prompts and tokens, may well be such vectors. Thus, we first studied the performance of SAE reconstructions of task vectors in transferring the definition of two tasks: 1) antonym generation and 2) English to Spanish word translation. These and other tasks used to study task vectors were taken from the ICL task vectors paper's GitHub repository.
These charts show the NLL loss of the model on the evaluation set of zero-shot prompts for both of the tasks, depending on the layer of extraction/insertion. TV stands for the original task vector performance; Recon of TV stands for using the SAE reconstruction of the task vector instead of the task vector; TV on recon stands for first doing an SAE reconstruction of the residuals and then collecting a task vector on them; ITO stands for the ITO algorithm with a target L0 of 40. It can be seen from the charts that SAE reconstruction significantly decrea...
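The episode above describes extracting a task vector by averaging residuals on the separator token and then cleaning up its SAE decomposition by gradient descent on a differentiable objective. The sketch below illustrates that general recipe only; `sae.encode`, `sae.W_dec`, `sae.b_dec` and the `nll_of_zero_shot_prompts` objective are hypothetical placeholders, and the authors' actual algorithm differs in its details.

```python
import torch

def extract_task_vector(resid: torch.Tensor, sep_positions: torch.Tensor) -> torch.Tensor:
    """Average the residual stream at the separator token over a batch of few-shot ICL prompts.
    resid: [batch, seq, d_model]; sep_positions: [batch] index of the separator token per prompt."""
    batch_idx = torch.arange(resid.shape[0])
    return resid[batch_idx, sep_positions].mean(dim=0)            # [d_model]

def cleanup(task_vec, sae, nll_of_zero_shot_prompts, steps=100, keep=10, lr=1e-2, l1=1e-3):
    """Start from the SAE encoding of the task vector, then tune the sparse coefficients
    directly to reduce the zero-shot NLL of the task (plus a small L1 penalty)."""
    coeffs = sae.encode(task_vec).detach().clone().requires_grad_(True)   # [d_sae]
    opt = torch.optim.Adam([coeffs], lr=lr)
    for _ in range(steps):
        steering_vec = coeffs @ sae.W_dec + sae.b_dec                     # back into the residual stream
        loss = nll_of_zero_shot_prompts(steering_vec) + l1 * coeffs.abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    # keep only the largest coefficients for a sparse, interpretable decomposition
    topk = coeffs.detach().abs().topk(keep).indices
    sparse = torch.zeros_like(coeffs.detach())
    sparse[topk] = coeffs.detach()[topk]
    return sparse
```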
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Self-explaining SAE features, published by Dmitrii Kharlapenko on August 5, 2024 on The AI Alignment Forum.

TL;DR: We apply the method of SelfIE/Patchscopes to explain SAE features - we give the model a prompt like "What does X mean?", replace the residual stream on X with the decoder direction times some scale, and have it generate an explanation. We call this self-explanation. The natural alternative is auto-interp, using a larger LLM to spot patterns in max activating examples. We show that our method is effective, and comparable with Neuronpedia's auto-interp labels (with the caveat that Neuronpedia's auto-interp used the comparatively weak GPT-3.5, so this is not a fully fair comparison). We aren't confident you should use our method over auto-interp, but we think in some situations it has advantages: no max activating dataset examples are needed, and it's cheaper, as you just run the model being studied (eg Gemma 2B), not a larger model like GPT-4. Further, it has different errors to auto-interp, so finding and reading both may be valuable for researchers in practice. We provide advice for using self-explanation in practice, in particular for the challenge of automatically choosing the right scale, which significantly affects explanation quality. We also release a tool for you to work with self-explanation. We hope the technique is useful to the community as is, but expect there are many optimizations and improvements on top of what is in this post.

Introduction

This work was produced as part of the ML Alignment & Theory Scholars Program - Summer 24 Cohort, under mentorship from Neel Nanda and Arthur Conmy. SAE features promise a flexible and extensive framework for interpretation of LLM internals. Recent work (like Scaling Monosemanticity) has shown that they are capable of capturing even high-level abstract concepts inside the model. Compared to MLP neurons, they can capture many more interesting concepts. Unfortunately, in order to learn things with SAE features and interpret what the SAE tells us, one needs to first interpret these features on their own. The current mainstream method for their interpretation requires storing the feature's activations on millions of tokens, filtering for the prompts that activate it the most, and looking for a pattern connecting them. This is typically done by a human, or sometimes somewhat automated with the use of larger LLMs like ChatGPT, aka auto-interp. Auto-interp is a useful and somewhat effective method, but it requires an extensive amount of data and expensive closed-source language model API calls (for researchers outside scaling labs). Recent papers like SelfIE or Patchscopes have proposed a mechanistic method of directly utilizing the model in question to explain its own internal activations in natural language. The approach replaces an activation during the forward pass (e.g. some of the token embeddings in the prompt) with a new activation and then makes the model generate explanations using this modified prompt. It's a variant of activation patching, with the notable differences that it generates a many-token output (rather than a single token), and that the patched-in activation may not be the same type as the activation it's overriding (and is just an arbitrary vector of the same dimension).
We study how this approach can be applied to SAE feature interpretation, since it is potentially cheaper (no large closed-model inference is required), and can be viewed as more faithful to the source, since it uses the SAE feature vectors directly to generate explanations instead of looking at the max activating examples.

How to use

Basic method: We ask the model to explain the meaning of a residual stream direction as if it literally was a word or phrase: Prompt 1 (/ replaced according to model inp...
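A rough sketch of the self-explanation recipe described above, using TransformerLens-style hooks: overwrite the residual stream at a placeholder token with a scaled SAE decoder direction and let the model generate. The layer, prompt, placeholder token and scale below are illustrative assumptions rather than the post's exact settings.

```python
import torch

def self_explain(model, sae_dir: torch.Tensor, layer: int, scale: float = 8.0,
                 prompt: str = "The meaning of the word X is", max_new_tokens: int = 30) -> str:
    """model: a TransformerLens HookedTransformer; sae_dir: one SAE decoder direction [d_model]."""
    tokens = model.to_tokens(prompt)
    # position of the placeholder " X" token whose residual stream we overwrite
    x_pos = (tokens[0] == model.to_single_token(" X")).nonzero()[0].item()

    def patch_resid(resid, hook):
        # only patch the full-prompt forward pass; later generation steps use the KV cache
        # and only pass the newest token, so the placeholder position is not present
        if resid.shape[1] > x_pos:
            resid[:, x_pos, :] = scale * sae_dir / sae_dir.norm()
        return resid

    with model.hooks(fwd_hooks=[(f"blocks.{layer}.hook_resid_post", patch_resid)]):
        out = model.generate(tokens, max_new_tokens=max_new_tokens, do_sample=False)
    return model.to_string(out[0])
```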
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding Positional Features in Layer 0 SAEs, published by bilalchughtai on July 30, 2024 on LessWrong. This is an informal research note. It is the result of a few-day exploration into positional SAE features conducted as part of Neel Nanda's training phase of the ML Alignment & Theory Scholars Program - Summer 2024 cohort. Thanks to Andy Arditi, Arthur Conmy and Stefan Heimersheim for helpful feedback. Thanks to Joseph Bloom for training this SAE.

Summary

We investigate positional SAE features learned by layer 0 residual stream SAEs trained on gpt2-small. In particular, we study the activation blocks.0.hook_resid_pre, which is the sum of the token embeddings and positional embeddings. Importantly, gpt2-small uses absolute, learned positional embeddings - that is, the positional embeddings are a trainable parameter (learned) and are injected into the residual stream (absolute). We find that this SAE learns a set of positional features. We investigate some of the properties of these features, finding:
* Positional and semantic features are entirely disjoint at layer 0. Note that we do not expect this to continue holding in later layers, as attention mixes semantic and positional information. In layer 0, we should expect the SAE to disentangle positional and semantic features, as there is a natural notion of ground truth positional and semantic features that interact purely additively.
* Generically, each positional feature spans a range of positions, except for the first few positions, which each get dedicated (and sometimes several) features.
* We can attribute degradation of SAE performance beyond the SAE training context length to the lack of these positional features, and to the absolute nature of the positional embeddings used by this model.

Set Up

We study pretrained gpt2-small SAEs trained on blocks.0.hook_resid_pre. This is particularly clean, as we can generate the entire input distribution to the SAE by summing each of the d_vocab token embeddings with each of the n_ctx positional embeddings, obtaining a tensor all_resid_pres: Float[Tensor, "d_vocab n_ctx d_model"]. By passing this tensor through the SAE, we can grab all of the pre/post activation function feature activations all_feature_acts: Float[Tensor, "d_vocab n_ctx d_sae"]. In this post, d_model = 768 and d_sae = 24576. Importantly, the SAE we study in this post has context_size=128. The SAE context size is the maximal length of input sequence used to generate activations for training the SAE.

Finding features

The activation space under study can be thought of as the direct sum of the token embedding space and the positional embedding space. As such, we hypothesize that semantic and positional features learned by the SAE should be distinct. That is, we hypothesize that the feature activations for each feature i can be written as the sum of a token-dependent term gi and a position-dependent term hi, where for each i, either gi=0 or hi=0 identically for all inputs in their domain (each term being a function of a d_model-dimensional vector). To investigate this we hold tok or pos fixed in all_feature_acts and vary the other input. We first restrict to pos < sae.cfg.context_size.

Positional features

We first replicate Figure 1f of Gurnee et al. (2024), which finds instances of sinusoidal positional neurons in MLP layers. To do so, we assign each feature a positional score.
We first compute the mean activation of each feature at each position by averaging over all possible input tokens. The position score is then the max value of this over all positions, i.e. score_i = max_pos(mean_tok fi(tok, pos)), where fi(tok, pos) is the feature activation for feature i on the given input. We find positional scores drop off rapidly. There seem to only be ~50 positional features (of 24k total features) in this SAE. Inspecting the features, we find 1. Many positional features, each with small standard deviation over input tokens (shown in lower opacit...
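For readers who want to reproduce the positional-score computation described above, here is a rough sketch under stated assumptions: it uses TransformerLens to build the layer-0 inputs (every token embedding plus every positional embedding) and assumes an `sae` object exposing an `encode` method; chunking over the vocabulary is only there to keep memory manageable.

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
W_E, W_pos = model.W_E, model.W_pos        # [d_vocab, d_model], [n_ctx_model, d_model]
n_ctx = 128                                # the SAE's training context length, per the post

@torch.no_grad()
def positional_scores(sae, chunk_size: int = 1024) -> torch.Tensor:
    """Mean activation over tokens at each position, then max over positions, per SAE latent.
    `sae.encode` is a placeholder for whatever encoder API the trained SAE exposes; we chunk
    over the vocab because the full [d_vocab, n_ctx, d_model] tensor is ~20 GB in fp32."""
    n_tokens = W_E.shape[0]
    sum_acts = None
    for start in range(0, n_tokens, chunk_size):
        resid = W_E[start:start + chunk_size, None, :] + W_pos[None, :n_ctx, :]
        acts = sae.encode(resid)            # [chunk, n_ctx, d_sae]
        chunk_sum = acts.sum(dim=0)         # [n_ctx, d_sae]
        sum_acts = chunk_sum if sum_acts is None else sum_acts + chunk_sum
    mean_acts = sum_acts / n_tokens
    return mean_acts.max(dim=0).values      # [d_sae] positional score per latent
```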
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: BatchTopK: A Simple Improvement for TopK-SAEs, published by Bart Bussmann on July 20, 2024 on The AI Alignment Forum. Work done in Neel Nanda's stream of MATS 6.0. Epistemic status: Tried this on a single sweep and it seems to work well, but it might well be a fluke of something particular to our implementation or experimental set-up. As there are also some theoretical reasons to expect this technique to work (adaptive sparsity), it seems probable that for many TopK SAE set-ups it could be a good idea to also try BatchTopK. As we're not planning to investigate this much further and it might be useful to others, we're just sharing what we've found so far.

TL;DR: Instead of taking the TopK feature activations per token during training, taking the Top(K*batch_size) for every batch seems to improve SAE performance. During inference, this activation function can be replaced with a single global threshold for all features.

Introduction

Sparse autoencoders (SAEs) have emerged as a promising tool for interpreting the internal representations of large language models. By learning to reconstruct activations using only a small number of features, SAEs can extract monosemantic concepts from the representations inside transformer models. Recently, OpenAI published a paper exploring the use of TopK activation functions in SAEs. This approach directly enforces sparsity by only keeping the K largest activations per sample. While effective, TopK forces every token to use exactly K features, which is likely suboptimal. We came up with a simple modification that solves this and seems to improve its performance.

BatchTopK

Standard TopK SAEs apply the TopK operation independently to each sample in a batch. For a target sparsity of K, this means exactly K features are activated for every sample. BatchTopK instead applies the TopK operation across the entire flattened batch:
1. Flatten all feature activations across the batch
2. Take the top (K * batch_size) activations
3. Reshape back to the original batch shape
This allows more flexibility in how many features activate per sample, while still maintaining an average of K active features across the batch.

Experimental Set-Up

For both the TopK and the BatchTopK SAEs we train a sweep with the following hyperparameters:
Model: gpt2-small
Site: layer 8 resid_pre
Batch size: 4096
Optimizer: Adam (lr=3e-4, beta1=0.9, beta2=0.99)
Number of tokens: 1e9
Expansion factor: [4, 8, 16, 32]
Target L0 (K): [16, 32, 64]
As in the OpenAI paper, the input gets normalized before feeding it into the SAE and calculating the reconstruction loss. We also use the same auxiliary loss function for dead features (features that didn't activate for 5 batches), which calculates the loss on the residual using the top 512 dead features per sample and gets multiplied by a factor of 1/32.

Results

For a fixed number of active features (L0=32) the BatchTopK SAE has a lower normalized MSE than the TopK SAE and less downstream loss degradation across different dictionary sizes. Similarly, for a fixed dictionary size (12288) BatchTopK outperforms TopK for different values of K. Our main hypothesis for the improved performance is adaptive sparsity: some samples contain more highly activating features than others. Let's have a look at the distribution of the number of active features per sample for the BatchTopK model.
The BatchTopK model indeed makes use of the flexibility to use different sparsities for different inputs. We suspect that the weird peak on the left side corresponds to the feature activations on BOS tokens, given that its frequency is very close to 1 in 128, which is the sequence length. This serves as a great example of why BatchTopK might outperform TopK. At the BOS token, a sequence has very little information yet, but the TopK SAE still activates 32 features. The BatchTopK model "saves" th...
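A minimal sketch of the two activation rules compared in this episode, on dummy activations. It only shows the masking step (flatten, take the top K*batch_size, reshape); the input normalisation and the auxiliary dead-feature loss described above are omitted.

```python
import torch

def topk_per_sample(acts: torch.Tensor, k: int) -> torch.Tensor:
    """Standard TopK: keep the k largest activations independently for every sample."""
    vals, idx = acts.topk(k, dim=-1)
    out = torch.zeros_like(acts)
    return out.scatter_(-1, idx, vals)

def batch_topk(acts: torch.Tensor, k: int) -> torch.Tensor:
    """BatchTopK: keep the (k * batch_size) largest activations across the whole flattened batch,
    so individual samples can use more or fewer than k features (the average is still k)."""
    batch_size = acts.shape[0]
    flat = acts.flatten()
    vals, idx = flat.topk(k * batch_size)
    out = torch.zeros_like(flat)
    out.scatter_(0, idx, vals)
    return out.reshape(acts.shape)

# Toy usage: a batch of 4 samples, 8 features, target sparsity k=2.
acts = torch.relu(torch.randn(4, 8))
print(topk_per_sample(acts, k=2).count_nonzero(dim=-1))   # exactly 2 per sample
print(batch_topk(acts, k=2).count_nonzero(dim=-1))        # varies per sample, ~2 on average
```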
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: JumpReLU SAEs + Early Access to Gemma 2 SAEs, published by Neel Nanda on July 19, 2024 on The AI Alignment Forum. New paper from the Google DeepMind mechanistic interpretability team, led by Sen Rajamanoharan! We introduce JumpReLU SAEs, a new SAE architecture that replaces the standard ReLUs with discontinuous JumpReLU activations, and seems to be (narrowly) state of the art over existing methods like TopK and Gated SAEs for achieving high reconstruction at a given sparsity level, without a hit to interpretability. We train through the discontinuity with straight-through estimators, which also let us directly optimise the L0. To accompany this, we will release the weights of hundreds of JumpReLU SAEs on every layer and sublayer of Gemma 2 2B and 9B in a few weeks. Apply now for early access to the 9B ones! We're keen to get feedback from the community, and to get these into the hands of researchers as fast as possible. There are a lot of great projects that we hope will be much easier with open SAEs on capable models! Gated SAEs already reduce to JumpReLU activations after weight tying, so this can be thought of as Gated SAEs++, but less computationally intensive to train and better performing. They should be runnable in existing Gated implementations.

Abstract: Sparse autoencoders (SAEs) are a promising unsupervised approach for identifying causally relevant and interpretable linear features in a language model's (LM) activations. To be useful for downstream tasks, SAEs need to decompose LM activations faithfully; yet to be interpretable the decomposition must be sparse - two objectives that are in tension. In this paper, we introduce JumpReLU SAEs, which achieve state-of-the-art reconstruction fidelity at a given sparsity level on Gemma 2 9B activations, compared to other recent advances such as Gated and TopK SAEs. We also show that this improvement does not come at the cost of interpretability through manual and automated interpretability studies. JumpReLU SAEs are a simple modification of vanilla (ReLU) SAEs - where we replace the ReLU with a discontinuous JumpReLU activation function - and are similarly efficient to train and run. By utilising straight-through estimators (STEs) in a principled manner, we show how it is possible to train JumpReLU SAEs effectively despite the discontinuous JumpReLU function introduced in the SAE's forward pass. Similarly, we use STEs to directly train L0 to be sparse, instead of training on proxies such as L1, avoiding problems like shrinkage.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
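For concreteness, here is a minimal sketch of a JumpReLU activation as described above: the pre-activation passes through unchanged where it exceeds a learned per-feature threshold and is zeroed otherwise. The straight-through estimators the paper uses to train the threshold and the L0 objective are omitted, and the parameterisation and initial value here are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class JumpReLU(nn.Module):
    """JumpReLU: z * Heaviside(z - threshold), with a learned per-feature threshold.
    Without straight-through estimators (omitted here), gradients do not reach the threshold."""
    def __init__(self, d_sae: int, init_threshold: float = 0.01):
        super().__init__()
        # keep the threshold positive by learning its log; the initial value is an illustrative choice
        self.log_threshold = nn.Parameter(torch.full((d_sae,), math.log(init_threshold)))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        threshold = self.log_threshold.exp()
        return z * (z > threshold)

# Usage inside an SAE encoder: acts = JumpReLU(d_sae)(pre_acts) instead of torch.relu(pre_acts).
```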
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stitching SAEs of different sizes, published by Bart Bussmann on July 13, 2024 on The AI Alignment Forum. Work done in Neel Nanda's stream of MATS 6.0, equal contribution by Bart Bussmann and Patrick Leask; Patrick Leask is concurrently a PhD candidate at Durham University.

TL;DR: When you scale up an SAE, the features in the larger SAE can be categorized into two groups: 1) "novel features" with new information not in the small SAE and 2) "reconstruction features" that sparsify information that already exists in the small SAE. You can stitch SAEs by adding the novel features to the smaller SAE.

Introduction

Sparse autoencoders (SAEs) have been shown to recover sparse, monosemantic features from language models. However, there has been limited research into how those features vary with dictionary size, that is, when you take the same activation in the same model and train a wider dictionary on it, what changes? And how do the features learned vary? We show that features in larger SAEs cluster into two kinds of features: those that capture similar information to the smaller SAE (either identical features, or split features; about 65%), and those which capture novel features absent in the smaller model (the remaining 35%). We validate this by showing that inserting the novel features from the larger SAE into the smaller SAE boosts the reconstruction performance, while inserting the similar features makes performance worse. Building on this insight, we show how features from multiple SAEs of different sizes can be combined to create a "Frankenstein" model that outperforms SAEs with an equal number of features, though it tends to lead to a higher L0, making a fair comparison difficult. Our work provides new understanding of how SAE dictionary size impacts the learned feature space, and how to reason about whether to train a wider SAE. We hope that this method may also lead to a practically useful way of training high-performance SAEs with less feature splitting and a wider range of learned novel features.

Larger SAEs learn both similar and entirely novel features

Set-up

We use sparse autoencoders as in Towards Monosemanticity and Sparse Autoencoders Find Highly Interpretable Directions. In our setup, the feature activations are computed as a ReLU of an affine map of the input activation, and based on these feature activations, the input is then reconstructed as a linear combination of decoder directions plus a bias. The encoder and decoder matrices and biases are trained with a loss function that combines an L2 penalty on the reconstruction error and an L1 penalty on the feature activations. In our experiments, we train a range of sparse autoencoders (SAEs) with varying widths across residual streams in GPT-2 and Pythia-410m. The width of an SAE is determined by the number of features (F) in the sparse autoencoder. Our smallest SAE on GPT-2 consists of only 768 features, while the largest one has nearly 100,000 features.
Here is the full list of SAEs used in this research:

Name | Model | Site | Dictionary size | L0 | MSE | CE loss recovered (zero ablation) | CE loss recovered (mean ablation)
GPT2-768 | gpt2-small | layer 8 of 12, resid_pre | 768 | 35.2 | 2.72 | 0.915 | 0.876
GPT2-1536 | gpt2-small | layer 8 of 12, resid_pre | 1536 | 39.5 | 2.22 | 0.942 | 0.915
GPT2-3072 | gpt2-small | layer 8 of 12, resid_pre | 3072 | 42.4 | 1.89 | 0.955 | 0.937
GPT2-6144 | gpt2-small | layer 8 of 12, resid_pre | 6144 | 43.8 | 1.631 | 0.965 | 0.949
GPT2-12288 | gpt2-small | layer 8 of 12, resid_pre | 12288 | 43.9 | 1.456 | 0.971 | 0.958
GPT2-24576 | gpt2-small | layer 8 of 12, resid_pre | 24576 | 42.9 | 1.331 | 0.975 | 0.963
GPT2-49152 | gpt2-small | layer 8 of 12, resid_pre | 49152 | 42.4 | 1.210 | 0.978 | 0.967
GPT2-98304 | gpt2-small | layer 8 of 12, resid_pre | 98304 | 43.9 | 1.144 | 0.980 | 0.970
Pythia-8192 | Pythia-410M-deduped | layer 3 of 24, resid_pre | 8192 | 51.0 | 0.030 | 0.977 | 0.972
Pythia-16384 | Pythia-410M-deduped | layer 3 of 24, resid_pre | 16384 | 43.2 | 0.024 | 0.983 | 0.979

The base language models used are those included in Transform...
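A rough sketch of the stitching idea described in this episode: append the larger SAE's "novel" latents to the smaller SAE's encoder and decoder. The parameter shapes follow the standard SAE setup mentioned above; the cosine-similarity criterion for deciding which latents count as novel is an illustrative assumption, not necessarily the authors' method.

```python
import torch

def novel_latents(small_W_dec: torch.Tensor, large_W_dec: torch.Tensor, threshold: float = 0.7):
    """Flag large-SAE latents whose decoder direction has low cosine similarity to every
    small-SAE latent (one simple, assumed notion of 'novel'). W_dec shapes: [d_sae, d_model]."""
    small_dirs = small_W_dec / small_W_dec.norm(dim=-1, keepdim=True)
    large_dirs = large_W_dec / large_W_dec.norm(dim=-1, keepdim=True)
    max_sim = (large_dirs @ small_dirs.T).max(dim=-1).values      # [d_sae_large]
    return (max_sim < threshold).nonzero().squeeze(-1)

def stitch(small, large, novel_idx: torch.Tensor):
    """Build a 'Frankenstein' dictionary: the small SAE plus the large SAE's novel latents.
    Assumes W_enc: [d_model, d_sae], b_enc: [d_sae], W_dec: [d_sae, d_model], shared b_dec."""
    W_enc = torch.cat([small.W_enc, large.W_enc[:, novel_idx]], dim=1)
    b_enc = torch.cat([small.b_enc, large.b_enc[novel_idx]], dim=0)
    W_dec = torch.cat([small.W_dec, large.W_dec[novel_idx]], dim=0)
    return W_enc, b_enc, W_dec
```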
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Alignment Research Engineer Accelerator (ARENA): Call for applicants v4.0, published by James Fox on July 7, 2024 on LessWrong. TL;DR We are excited to announce the fourth iteration of ARENA (Alignment Research Engineer Accelerator), a 4-5 week ML bootcamp with a focus on AI safety! ARENA's mission is to provide talented individuals with the skills, tools, and environment necessary for upskilling in ML engineering, for the purpose of contributing directly to AI alignment in technical roles. ARENA will be running in-person from LISA from 2nd September - 4th October (the first week is an optional review of the fundamentals of neural networks). Apply here before 23:59 July 20th anywhere on Earth! Summary ARENA has been successfully run three times, with alumni going on to become MATS scholars and LASR participants; AI safety engineers at Apollo Research, Anthropic, METR, and OpenAI; and even starting their own AI safety organisations! This iteration will run from 2nd September - 4th October (the first week is an optional review of the fundamentals of neural networks) at the London Initiative for Safe AI (LISA) in Old Street, London. LISA houses small organisations (e.g., Apollo Research, BlueDot Impact), several other AI safety researcher development programmes (e.g., LASR Labs, MATS extension, PIBBS, Pivotal), and many individual researchers (independent and externally affiliated). Being situated at LISA, therefore, brings several benefits, e.g. facilitating productive discussions about AI safety & different agendas, allowing participants to form a better picture of what working on AI safety can look like in practice, and offering chances for research collaborations post-ARENA. The main goals of ARENA are to: Help participants skill up in ML relevant for AI alignment. Produce researchers and engineers who want to work in alignment and help them make concrete next career steps. Help participants develop inside views about AI safety and the paths to impact of different agendas. The programme's structure will remain broadly the same as ARENA 3.0 (see below); however, we are also adding an additional week on evaluations. For more information, see our website. Also, note that we have a Slack group designed to support the independent study of the material (join link here). Outline of Content The 4-5 week program will be structured as follows: Chapter 0 - Fundamentals Before getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them. We will also cover some subjects we expect to be useful going forward, e.g. using GPT-3 and 4 to streamline your learning, good coding practices, and version control. Note: Participants can optionally skip the program this week and join us at the start of Chapter 1 if they'd prefer this option and if we're confident that they are already comfortable with the material in this chapter. Topics include: PyTorch basics CNNs, Residual Neural Networks Optimization (SGD, Adam, etc) Backpropagation Hyperparameter search with Weights and Biases GANs & VAEs Chapter 1 - Transformers & Interpretability In this chapter, you will learn all about transformers and build and train your own. 
You'll also study LLM interpretability, a field which has been advanced by Anthropic's Transformer Circuits sequence, and open-source work by Neel Nanda. This chapter will also branch into areas more accurately classed as "model internals" than interpretability, e.g. recent work on steering vectors. Topics include: GPT models (building your own GPT-2) Training and sampling from transformers TransformerLens In-context Learning and Induction Heads Indirect Object Identification Superposition Steering Vectors Chapter 2 - Reinforcement Learning In this chapter, you w...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers v2, published by Neel Nanda on July 7, 2024 on The AI Alignment Forum. This post represents my personal hot takes, not the opinions of my team or employer. This is a massively updated version of a similar list I made two years ago.

There are a lot of mechanistic interpretability papers, and more come out all the time. This can be pretty intimidating if you're new to the field! To try helping out, here's a reading list of my favourite mech interp papers: papers which I think are important to be aware of, often worth skimming, and sometimes worth reading deeply (time permitting). I've annotated these with my key takeaways, what I like about each paper, which bits to deeply engage with vs skim, etc. I wrote a similar post 2 years ago, but a lot has changed since then, thus v2! Note that this is not trying to be a comprehensive literature review - this is my answer to "if you have limited time and want to get up to speed on the field as fast as you can, what should you do". I'm deliberately not following academic norms like necessarily citing the first paper introducing something, or all papers doing some work, and am massively biased towards recent work that is more relevant to the cutting edge. I also shamelessly recommend a bunch of my own work here, sorry!

How to read this post: I've bolded the most important papers to read, which I recommend prioritising. All of the papers are annotated with my interpretation and key takeaways, and tbh I think reading that may be comparably good to skimming the paper. And there are far too many papers to read all of them deeply unless you want to make that a significant priority. I recommend reading all my summaries, noting the papers and areas that excite you, and then trying to dive deeply into those.

Foundational Work

A Mathematical Framework for Transformer Circuits (Nelson Elhage et al, Anthropic) - absolute classic, foundational ideas for how to think about transformers (see my blog post for what to skip). See my YouTube tutorial (I hear this is best watched after reading the paper, and adds additional clarity).
Deeply engage with all the ideas in the overview section, especially:
* Understanding the residual stream and why it's fundamental.
* The notion of interpreting paths between interpretable bits (eg input tokens and output logits), where the path is a composition of matrices, and how this is different from interpreting every intermediate activation.
* Understanding attention heads: what a QK and OV matrix is, how attention heads are independent and additive, and how attention and OV are semi-independent.
* Skip Trigrams & Skip Trigram bugs, esp understanding why these are a really easy thing to do with attention, and how the bugs are inherent to attention heads separating where to attend to (QK) and what to do once you attend somewhere (OV).
* Induction heads, esp why this is K-Composition (and how that's different from Q & V composition), how the circuit works mechanistically, and why this is too hard to do in a 1L model.
Skim or skip: eigenvalues and tensor products. They have the worst effort per unit insight of the paper and aren't very important.

Superposition

Superposition is a core principle/problem in model internals.
For any given activation (eg the output of MLP13), we believe that there's a massive dictionary of concepts/features the model knows of. Each feature has a corresponding vector, and model activations are a sparse linear combination of these meaningful feature vectors. Further, there are more features in the dictionary than activation dimensions, so they are compressed together and interfere with each other, essentially causing cascading errors. This phenomenon of compression is called superposition. Toy models of superpositio...
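A tiny numerical illustration of the superposition picture described above: more feature directions than dimensions, activations as sparse linear combinations, and interference when reading features back off with dot products. All numbers are made up for illustration.

```python
import torch

# Toy illustration: 6 "features" compressed into a 4-dimensional activation space.
torch.manual_seed(0)
d_model, n_features = 4, 6
feature_dirs = torch.randn(n_features, d_model)
feature_dirs = feature_dirs / feature_dirs.norm(dim=-1, keepdim=True)   # the model's "dictionary"

# A sparse combination: only features 1 and 4 are active on this hypothetical input.
coeffs = torch.zeros(n_features)
coeffs[1], coeffs[4] = 2.0, 1.5
activation = coeffs @ feature_dirs                                       # what the residual stream holds

# Because n_features > d_model, the directions cannot all be orthogonal, so reading a feature
# off with a dot product picks up interference from the other active features.
readouts = feature_dirs @ activation
print(readouts)   # entries 1 and 4 are large, but the inactive features are not exactly zero
```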
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How ARENA course material gets made, published by CallumMcDougall on July 3, 2024 on LessWrong.

TL;DR: In this post, I describe my methodology for building new material for ARENA. I'll mostly be referring to the exercises on IOI, Superposition and Function Vectors as case studies. I expect this to be useful for people who are interested in designing material for ARENA or ARENA-like courses, as well as people who are interested in pedagogy or ML paper replications. The process has 3 steps:
1. Start with something concrete
2. First pass: replicate, and understand
3. Second pass: exercise-ify

Summary

I'm mostly basing this on the following 3 sets of exercises:
* Indirect Object Identification - these exercises focus on the IOI paper (from Conmy et al). The goal is to have people understand what exploratory analysis of transformers looks like, and introduce the key ideas of the circuits agenda.
* Superposition & SAEs - these exercises focus on understanding superposition and the agenda of dictionary learning (specifically sparse autoencoders). Most of the exercises explore Anthropic's Toy Models of Superposition paper, except for the last 2 sections, which explore sparse autoencoders (firstly by applying them to the toy model setup, secondly by exploring a sparse autoencoder trained on a language model).
* Function Vectors - these exercises focus on the Function Vectors paper by David Bau et al, although they also make connections with related work such as Alex Turner's GPT2-XL steering vector work. These exercises were interesting because they also had the secondary goal of being an introduction to the nnsight library, in much the same way that the intro to mech interp exercises were also an introduction to TransformerLens.

The steps I go through are listed below. I'm indexing from zero because I'm a software engineer so of course I am. The steps assume you already have an idea of what exercises you want to create; in Appendix (1) you can read some thoughts on what makes for a good exercise set.

1. Start with something concrete

When creating material, you don't want to be starting from scratch. It's useful to have source code available to browse - bonus points if that takes the form of a Colab or something which is self-contained and has easily visible output.
* IOI - this was Neel's "Exploratory Analysis Demo" exercises. The rest of the exercises came from replicating the paper directly.
* Superposition - this was Anthropic's Colab notebook (although the final version went quite far beyond this). The very last section (SAEs on transformers) was based on Neel Nanda's demo Colab.
* Function Vectors - I started with the NDIF demo notebook, to show how some basic nnsight syntax worked. As for replicating the actual function vectors paper, unlike the other 2 examples I was mostly just working from the paper directly. It helped that I was collaborating with some of this paper's authors, so I was able to ask them some questions to clarify aspects of the paper.

2. First pass: replicate, and understand

The first thing I did in each of these cases was to go through the material I started with and make sure I understood what was going on.
Paper replication is a deep enough topic for its own series of blog posts (many already exist), although I'll emphasise that I'm not usually talking about full paper replication here, because ideally you'll be starting from something a bit further along, be that a Colab, a different tutorial, or something else. And even when you are just working directly from a paper, you shouldn't make the replication any harder for yourself than you need to. If there's code you can take from somewhere else, then do. My replication usually takes the form of working through a notebook in VSCode. I'll either start from scratch, or from a downloaded Colab if I'm using one as a ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OthelloGPT learned a bag of heuristics, published by jylin04 on July 2, 2024 on The AI Alignment Forum. Work performed as a part of Neel Nanda's MATS 6.0 (Summer 2024) training program. TLDR This is an interim report on reverse-engineering Othello-GPT, an 8-layer transformer trained to take sequences of Othello moves and predict legal moves. We find evidence that Othello-GPT learns to compute the board state using many independent decision rules that are localized to small parts of the board. Though we cannot rule out that it also learns a single succinct algorithm in addition to these rules, our best guess is that Othello-GPT's learned algorithm is just a bag of independent heuristics. Board state reconstruction 1. Direct attribution to linear probes indicates that the internal board representation is frequently up- and down-weighted during a forward pass. 2. Case study of a decision rule: 1. MLP Neuron L1N421 represents the decision rule: If the move A4 was just played AND B4 is occupied AND C4 is occupied, update B4+C4+D4 to "theirs". This rule does not generalize to translations across the board. 2. Another neuron L0377 participates in the implementation of this rule by checking if B4 is occupied, and inhibiting the activation of L1N421 if it is not. Legal move prediction 1. A subset of neurons in mid to late MLP layers classify board configurations that are sufficient to make a certain move legal with an F1-score above 0.99. These neurons have high direct attribution to the logit for that move, and are causally relevant for legal move prediction. 2. Logit lens suggests that legal move predictions gradually solidify during a forward pass. 3. Some MLP neurons systematically activate at certain times in the game, regardless of the moves played so far. We hypothesize that these neurons encode heuristics about moves that are more probable in specific phases (early/mid/late) of the game. Review of Othello-GPT Othello-GPT is a transformer with 25M parameters trained on sequences of random legal moves in the board game Othello as inputs[1] to predict legal moves[2]. How it does this is a black box that we don't understand. Its claim to fame is that it supposedly 1. Learns an internal representation of the board state; 2. Uses it to predict legal moves, which, if true, resolves the black box into two[3]. The evidence for the first claim is that linear probes work. Namely, for each square of the ground-truth game board, if we train a linear classifier to take the model's activations at layer 6 as input and predict logits for whether that square is blank, "mine" (i.e. belonging to the player whose move it currently is) or "yours", the probes work with high accuracy on games not seen in training. The evidence for the second claim is that if we edit the residual stream until the probe's outputs change, the model's own output at the end of layer 7 becomes consistent with legal moves that are accessible from the new board state. However, we don't yet understand what's going on in the remaining black boxes. In particular, although it would be interesting if Othello-GPT emergently learned to implement them via algorithms with relatively short description lengths, the evidence so far doesn't rule out the possibility that they could be implemented via a bag of heuristics instead.
Project goal Our goal in this project was simply to figure out what's going on in the remaining black boxes. 1. What's going on in box #1 - how does the model compute the board representation? 1. How does the model decide if a cell is blank or not blank? 2. How does the model decide if a cell is "mine" or "yours"? 2. What's going on in box #2 - how does the model use the board representation to pick legal moves? Results on box #1: Board reconstruction A circuit for how the model computes if a cell is blank or not blan...
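The probing setup described in the excerpt above is straightforward to sketch in code. Below is a minimal, hypothetical version of training such a board-state probe in PyTorch: a single linear layer mapping a residual-stream activation to blank/mine/yours logits for each of the 64 squares. Dimensions, learning rate, and variable names are illustrative assumptions, not the authors' actual code.

import torch
import torch.nn as nn

d_model, n_squares, n_classes = 512, 64, 3   # 3 classes per square: blank / "mine" / "yours"

probe = nn.Linear(d_model, n_squares * n_classes)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def probe_step(activations, board_labels):
    # activations: (batch, d_model) residual stream at some layer/move position
    # board_labels: (batch, n_squares) integers in {0, 1, 2} for blank/mine/yours
    logits = probe(activations).view(-1, n_squares, n_classes)
    loss = loss_fn(logits.reshape(-1, n_classes), board_labels.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

The same probe, evaluated on held-out games, is what grounds the "linear probes work" claim above; the decision-rule analysis then asks how the network computes the quantity the probe reads off.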
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: So you want to work on technical AI safety, published by gw on June 24, 2024 on LessWrong. I've been to two EAGx events and one EAG, and the vast majority of my one-on-ones with junior people end up covering some subset of these questions. I'm happy to have such conversations, but hopefully this is more efficient and wide-reaching (and more than I could fit into a 30-minute conversation). I am specifically aiming to cover advice on getting a job in empirically-leaning technical research (interp, evals, red-teaming, oversight, etc.) for new or aspiring researchers without being overly specific about the field of research - I'll try to be more agnostic than something like Neel Nanda's mechinterp quickstart guide but more specific than the wealth of career advice that already exists but that applies to ~any career. This also has some overlap with this excellent list of tips from Ethan Perez but is aimed a bit earlier in the funnel. This advice is of course only from my perspective and background, which is that I did a PhD in combinatorics, worked as a software engineer at startups for a couple of years, did the AI Futures Fellowship, and now work at Timaeus as the research lead for our language model track. In particular, my experience is limited to smaller organizations, so "researcher" means some blend of research engineer and research scientist rather than strictly one or the other. Views are my own and don't represent Timaeus and so on. Requisite skills What kind of general research skills do I need? There's a lot of tacit knowledge here, so most of what I can offer is more about the research process. Items on this list aren't necessarily things you're expected to just have all of or otherwise pick up immediately, but they're much easier to describe than e.g. research taste. These items are in no particular order: Theory of change at all levels. Yes, yes, theories of change, they're great. But theories of change are most often explicitly spoken of at the highest levels: how is research agenda X going to fix all our problems? Really, it's theories of change all the way down. The experiment you're running today should have some theory of change for how you understand the project you're working on. Maybe it's really answering some question about a sub-problem that's blocking you. Your broader project should have some theory of change for your research agenda, even though it probably isn't solving it outright. If you can't trace up the stack why the thing you're doing day to day matters for your ultimate research ambitions, it's a warning flag that you're just spinning your wheels. Be ok with being stuck. At a coarse resolution, being stuck is a very common steady state to be in. This can be incredibly frustrating, especially if you feel external pressure from feeling that you're not meeting whatever expectations you think others have or if your time or money is running out (see also below, on managing burnout). Things that might help for a new researcher are to have a mentor (if you don't have access to a human, frontier LLMs are (un)surprisingly good!) that can reassure you that your rate of progress is fine and to be more fine-grained about what progress means. If your experiment failed but you learned something new, that's progress! Quickly prune bad ideas.
Always look for cheap, fast ways to de-risk investing time (and compute) into ideas. If the thing you're doing is really involved, look for additional intermediates as you go that can disqualify it as a direction. Communication. If you're collaborating with others, they should have some idea of what you're doing and why you're doing it, and your results should be clearly and quickly communicated. Good communication habits are kind of talked about to death, so I won't get into them too much here. Write a lot. Wri...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Building intuition with spaced repetition systems, published by Jacob G-W on May 14, 2024 on LessWrong. Do you ever go to a lecture, follow it thinking it makes total sense, then look back at your notes later and realize it makes no sense? This used to happen to me, but I've learned how to use spaced repetition to fully avoid this if I want. I'm going to try to convey this method in this post. Much of my understanding of how to create flashcards comes from "Using spaced repetition systems to see through a piece of mathematics" by Michael Nielsen and "How to write good prompts: using spaced repetition to create understanding" by Andy Matuschak, but I think my method falls in between both, in terms of abstraction. Finally, I want to credit Quantum Country for being an amazing example of flashcards created to develop intuition in users. My method is more abstract than Michael Nielsen's approach, since it applies not only to mathematics but to any subject. Yet it is less abstract than Andy Matuschak's approach because I specifically use it for 'academic subjects' that require deep intuition of (causal or other) relationships between concepts. Many of Matuschak's principles in his essay apply here (I want to make sure to give him credit), but I'm looking at it through the 'how can we develop deep intuition in an academic subject in the fastest possible time?' lens. Minimize Inferential Distance on Flashcards A rule that I like to repeat to myself while making flashcards, and which I haven't seen in other places, is that each flashcard should only have one inferential step on it. I'm using 'inferential step' here to mean a step such as remembering a fact, making a logical deduction, visualizing something, or anything that requires thinking. It's necessary that a flashcard only have a single inferential step on it. Anki trains the mind to do these steps. If you learn all the inferential steps, you will be able to fully re-create any mathematical deduction, historical story, or scientific argument. Knowing (and continually remembering) the full story with spaced repetition builds intuition. I'm going to illustrate this point by sharing some flashcards that I made while trying to understand how Transformers (GPT-2) worked. I made these flashcards while implementing a transformer based on Neel Nanda's tutorials and these two blog posts. Understanding Attention The first step in my method is to learn or read enough so that you have part of the whole loaded into your head. For me, this looked like picking the attention step of a transformer and then reading about it in the two blog posts and watching the section of the video on it. It's really important to learn about something from multiple perspectives. Even when I'm making flashcards from a lecture, I have my web browser open and I'm looking up things that I thought were confusing while making flashcards. My next step is to understand that intuition is fake! Really good resources make you feel like you understand something, but to actually understand something, you need to engage with it. This engagement can take many forms. For technical topics, it usually looks like solving problems or coding, and this is good! I did this for transformers! But I also wanted to not forget it long term, so I used spaced repetition to cement my intuition.
Enough talk, here are some flashcards about attention in a transformer. For each flashcard, I'll explain why I made it. Feel free to scroll through. Examples I start with a distillation of the key points of the article. I wanted to make sure that I knew what the attention operation was actually doing, as the blog posts emphasized this. When building intuition, I find it helpful to know "the shape" or constraints about something so that I can build a more accurate mental model. In this case, th...
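For readers who want a concrete anchor for the "attention step" being flashcarded above, here is a minimal single-head self-attention computation in PyTorch. It is a simplification (one head, no layer norm, no biases, no batching) rather than GPT-2's exact implementation, and the weight names are just placeholders.

import torch

def attention(x, W_Q, W_K, W_V, W_O):
    # x: (seq, d_model); W_Q/W_K/W_V: (d_model, d_head); W_O: (d_head, d_model)
    q, k, v = x @ W_Q, x @ W_K, x @ W_V                 # project into query/key/value spaces
    scores = q @ k.T / k.shape[-1] ** 0.5               # how much each position attends to each other
    mask = torch.tril(torch.ones_like(scores)).bool()
    scores = scores.masked_fill(~mask, float("-inf"))   # causal mask: no looking ahead
    pattern = scores.softmax(dim=-1)                    # each row sums to 1
    z = pattern @ v                                     # weighted sum of value vectors
    return z @ W_O                                      # project back into the residual stream

Each line here is roughly one "inferential step" of the kind the flashcards above try to isolate.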
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mechanistic Interpretability Workshop Happening at ICML 2024!, published by Neel Nanda on May 3, 2024 on The AI Alignment Forum. Announcing the first academic Mechanistic Interpretability workshop, held at ICML 2024! We'd love to get papers submitted if any of you have relevant projects! Deadline May 29, max 4 or max 8 pages. We welcome anything that brings us closer to a principled understanding of model internals, even if it's not "traditional" mech interp. Check out our website for example topics! There's $1750 in best paper prizes. We also welcome less standard submissions, like open source software, models or datasets, negative results, distillations, or position pieces. And if anyone is attending ICML, you'd be very welcome at the workshop! We have a great speaker line-up: Chris Olah, Jacob Steinhardt, David Bau and Asma Ghandeharioun. And a panel discussion, hands-on tutorial, and social. I'm excited to meet more people into mech interp! And if you know anyone who might be interested in attending/submitting, please pass this on. Twitter thread, Website Thanks to my great co-organisers: Fazl Barez, Lawrence Chan, Kayo Yin, Mor Geva, Atticus Geiger and Max Tegmark Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transcoders enable fine-grained interpretable circuit analysis for language models, published by Jacob Dunefsky on April 30, 2024 on The AI Alignment Forum. Summary We present a method for performing circuit analysis on language models using "transcoders," an occasionally-discussed variant of SAEs that provides an interpretable approximation to MLP sublayers' computations. Transcoders are exciting because they allow us not only to interpret the output of MLP sublayers but also to decompose the MLPs themselves into interpretable computations. In contrast, SAEs only allow us to interpret the output of MLP sublayers and not how they were computed. We demonstrate that transcoders achieve similar performance to SAEs (when measured via fidelity/sparsity metrics) and that the features learned by transcoders are interpretable. One of the strong points of transcoders is that they decompose the function of an MLP layer into sparse, independently-varying, and meaningful units (like neurons were originally intended to be before superposition was discovered). This significantly simplifies circuit analysis, and so for the first time, we present a method for using transcoders in circuit analysis in this way. We performed a set of case studies on GPT2-small that demonstrate that transcoders can be used to decompose circuits into monosemantic, interpretable units of computation. We provide code for training/running/evaluating transcoders and performing circuit analysis with transcoders, and code for the aforementioned case studies carried out using these tools. We also provide a suite of 12 trained transcoders, one for each layer of GPT2-small. All of the code can be found at https://github.com/jacobdunefsky/transcoder_circuits, and the transcoders can be found at https://huggingface.co/pchlenski/gpt2-transcoders. Work performed as a part of Neel Nanda's MATS 5.0 (Winter 2024) stream and MATS 5.1 extension. Jacob Dunefsky is currently receiving funding from the Long-Term Future Fund for this work. Background and motivation Mechanistic interpretability is fundamentally concerned with reverse-engineering models' computations into human-understandable parts. Much early mechanistic interpretability work (e.g. indirect object identification) has dealt with decomposing model computations into circuits involving small numbers of model components like attention heads or MLP sublayers. But these component-level circuits operate at too coarse a granularity: due to the relatively small number of components in a model, each individual component will inevitably be important to all sorts of computations, oftentimes playing different roles. In other words, components are polysemantic. Therefore, if we want a more faithful and more detailed understanding of the model, we should aim to find fine-grained circuits that decompose the model's computation down to the level of individual feature vectors. As a hypothetical example of the utility that feature-level circuits might provide in the very near term: if we have a feature vector that seems to induce gender bias in the model, then understanding which circuits this feature vector partakes in (including which earlier-layer features cause it to activate and which later-layer features it activates) would better allow us to understand the side-effects of debiasing methods.
More ambitiously, we hope that similar reasoning might apply to a feature that would seem to mediate deception in a future unaligned AI: a fuller understanding of feature-level circuits could help us understand whether this deception feature actually is responsible for the entirety of deception in a model, or help us understand the extent to which alignment methods remove the harmful behavior. Some of the earliest work on SAEs aimed to use them to find such feature-level circuits (e.g. Cunn...
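As a rough illustration of the distinction the summary above draws, here is a hedged sketch of a transcoder in PyTorch: unlike an SAE, which reconstructs its own input, a transcoder maps the MLP sublayer's input to an approximation of the MLP sublayer's output through a sparse feature layer. Dimensions and the plain L1 penalty are illustrative assumptions, not the authors' actual training setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Transcoder(nn.Module):
    def __init__(self, d_model=768, d_features=24576):
        super().__init__()
        self.enc = nn.Linear(d_model, d_features)   # sparse feature activations
        self.dec = nn.Linear(d_features, d_model)   # maps features to a predicted MLP output

    def forward(self, mlp_in):
        f = F.relu(self.enc(mlp_in))                # hopefully interpretable, sparsely firing features
        return self.dec(f), f

def transcoder_loss(model, mlp_in, mlp_out, l1_coeff=1e-3):
    # Fit the MLP's input-output map, not a reconstruction of the input.
    pred, f = model(mlp_in)
    return F.mse_loss(pred, mlp_out) + l1_coeff * f.abs().sum(-1).mean()

The design choice that matters is the training target: swapping the reconstruction target from mlp_in to mlp_out is what lets the sparse features stand in for the MLP's own computation during circuit analysis.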
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Refusal in LLMs is mediated by a single direction, published by Andy Arditi on April 27, 2024 on The AI Alignment Forum. This work was produced as part of Neel Nanda's stream in the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort, with co-supervision from Wes Gurnee. This post is a preview for our upcoming paper, which will provide more detail into our current understanding of refusal. We thank Nina Rimsky and Daniel Paleka for the helpful conversations and review. Executive summary Modern LLMs are typically fine-tuned for instruction-following and safety. Of particular interest is that they are trained to refuse harmful requests, e.g. answering "How can I make a bomb?" with "Sorry, I cannot help you." We find that refusal is mediated by a single direction in the residual stream: preventing the model from representing this direction hinders its ability to refuse requests, and artificially adding in this direction causes the model to refuse harmless requests. We find that this phenomenon holds across open-source model families and model scales. This observation naturally gives rise to a simple modification of the model weights, which effectively jailbreaks the model without requiring any fine-tuning or inference-time interventions. We do not believe this introduces any new risks, as it was already widely known that safety guardrails can be cheaply fine-tuned away, but this novel jailbreak technique both validates our interpretability results, and further demonstrates the fragility of safety fine-tuning of open-source chat models. See this Colab notebook for a simple demo of our methodology. Introduction Chat models that have undergone safety fine-tuning exhibit refusal behavior: when prompted with a harmful or inappropriate instruction, the model will refuse to comply, rather than providing a helpful answer. Our work seeks to understand how refusal is implemented mechanistically in chat models. Initially, we set out to do circuit-style mechanistic interpretability, and to find the "refusal circuit." We applied standard methods such as activation patching, path patching, and attribution patching to identify model components (e.g. individual neurons or attention heads) that contribute significantly to refusal. Though we were able to use this approach to find the rough outlines of a circuit, we struggled to use this to gain significant insight into refusal. We instead shifted to investigate things at a higher level of abstraction - at the level of features, rather than model components.[1] Thinking in terms of features As a rough mental model, we can think of a transformer's residual stream as an evolution of features. At the first layer, representations are simple, on the level of individual token embeddings. As we progress through intermediate layers, representations are enriched by computing higher level features (see Nanda et al. 2023). At later layers, the enriched representations are transformed into unembedding space, and converted to the appropriate output tokens. Our hypothesis is that, across a wide range of harmful prompts, there is a single intermediate feature which is instrumental in the model's refusal. 
In other words, many particular instances of harmful instructions lead to the expression of this "refusal feature," and once it is expressed in the residual stream, the model outputs text in a sort of "should refuse" mode.[2] If this hypothesis is true, then we would expect to see two phenomena: Erasing this feature from the model would block refusal. Injecting this feature into the model would induce refusal. Our work serves as evidence for this sort of conceptualization. For various different models, we are able to find a direction in activation space, which we can think of as a "feature," that satisfies the above two properties. Methodolog...
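The two interventions described above (erasing the direction to block refusal, injecting it to induce refusal) can be sketched in a few lines of PyTorch, assuming a unit-norm refusal_dir vector of shape (d_model,) has already been found. This is only an illustration of what the operations mean, not the authors' code; the injection scale is an arbitrary placeholder.

import torch

def erase_direction(resid, refusal_dir):
    # Remove the component of the residual stream along the refusal direction.
    # resid: (batch, seq, d_model); refusal_dir: (d_model,), unit norm.
    coeff = resid @ refusal_dir                      # (batch, seq) projection coefficients
    return resid - coeff.unsqueeze(-1) * refusal_dir

def inject_direction(resid, refusal_dir, alpha=8.0):
    # Add the refusal direction with some strength (alpha is an arbitrary choice here).
    return resid + alpha * refusal_dir

Applying the first function at every layer and position is what "preventing the model from representing this direction" amounts to; the second is the activation-addition counterpart.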
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superposition is not "just" neuron polysemanticity, published by Lawrence Chan on April 26, 2024 on The AI Alignment Forum. TL;DR: In this post, I distinguish between two related concepts in neural network interpretability: polysemanticity and superposition. Neuron polysemanticity is the observed phenomenon that many neurons seem to fire (have large, positive activations) on multiple unrelated concepts. Superposition is a specific explanation for neuron (or attention head) polysemanticity, where a neural network represents more sparse features than there are neurons (or number of/dimension of attention heads) in near-orthogonal directions. I provide three ways neurons/attention heads can be polysemantic without superposition: non-neuron-aligned orthogonal features, non-linear feature representations, and compositional representation without features. I conclude by listing a few reasons why it might be important to distinguish the two concepts. Epistemic status: I wrote this "quickly" in about 12 hours, as otherwise it wouldn't have come out at all. Think of it as a (failed) experiment in writing brief and unpolished research notes, along the lines of GDM or Anthropic Interp Updates. Introduction Meaningfully interpreting neural networks involves decomposing them into smaller interpretable components. For example, we might hope to look at each neuron or attention head, explain what that component is doing, and then compose our understanding of individual components into a mechanistic understanding of the model's behavior as a whole. It would be very convenient if the natural subunits of neural networks - neurons and attention heads - were monosemantic - that is, if each component corresponded to "a single concept". Unfortunately, by default, both neurons and attention heads seem to be polysemantic: many of them seemingly correspond to multiple unrelated concepts. For example, out of 307k neurons in GPT-2, GPT-4 was able to generate short explanations that captured over 50% of variance for only 5203 neurons, and a quick glance at OpenAI microscope reveals many examples of neurons in vision models that fire on unrelated clusters such as "poetry" and "dice". One explanation for polysemanticity is the superposition hypothesis: polysemanticity occurs because models are (approximately) linearly representing more features[1] than their activation space has dimensions (i.e. place features in superposition). Since there are more features than neurons, it immediately follows that some neurons must correspond to more than one feature.[2] It's worth noting that most written resources on superposition clearly distinguish between the two terms. For example, in the seminal Toy Models of Superposition,[3] Elhage et al. write: Why are we interested in toy models? We believe they are useful proxies for studying the superposition we suspect might exist in real neural networks. But how can we know if they're actually a useful toy model? Our best validation is whether their predictions are consistent with empirical observations regarding polysemanticity. (Source) Similarly, Neel Nanda's mech interp glossary explicitly notes that the two concepts are distinct: Subtlety: Neuron superposition implies polysemanticity (since there are more features than neurons), but not the other way round.
There could be an interpretable basis of features, just not the standard basis - this creates polysemanticity but not superposition. (Source) However, I've noticed empirically that many researchers and grantmakers conflate the two concepts, which often causes communication issues or even confused research proposals. Consequently, this post tries to more clearly point at the distinction and explain why it might matter. I start by discussing the two terms in more detail, give a few examples of why you might have po...
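A tiny numerical illustration of the superposition hypothesis as defined in the excerpt above: pack many more (approximately orthogonal) unit feature directions than there are dimensions and measure the interference between them. The specific numbers here are arbitrary.

import torch

d, n_features = 64, 512                      # many more candidate features than dimensions
W = torch.randn(n_features, d)
W = W / W.norm(dim=-1, keepdim=True)         # each row is a unit-norm feature direction

overlaps = W @ W.T                           # pairwise cosine similarities between features
off_diag = overlaps - torch.eye(n_features)
print(off_diag.abs().mean())                 # typical overlap is roughly 1/sqrt(d): small but nonzero
print(off_diag.abs().max())                  # the worst pair still overlaps noticeably

The point of the post's distinction is that observing polysemantic neurons does not by itself tell you the model is using a packing scheme like this one; the other three mechanisms it lists also produce polysemanticity.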
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Improving Dictionary Learning with Gated Sparse Autoencoders, published by Neel Nanda on April 25, 2024 on The AI Alignment Forum. Authors: Senthooran Rajamanoharan*, Arthur Conmy*, Lewis Smith, Tom Lieberum, Vikrant Varma, János Kramár, Rohin Shah, Neel Nanda A new paper from the Google DeepMind mech interp team: Improving Dictionary Learning with Gated Sparse Autoencoders! Gated SAEs are a new Sparse Autoencoder architecture that seems to be a significant Pareto-improvement over normal SAEs, verified on models up to Gemma 7B. They are now our team's preferred way to train sparse autoencoders, and we'd love to see them adopted by the community! (Or to be convinced that it would be a bad idea for them to be adopted by the community!) They achieve similar reconstruction with about half as many firing features, and while being either comparably or more interpretable (confidence interval for the increase is 0%-13%). See Sen's Twitter summary, my Twitter summary, and the paper! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
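As a very rough cartoon of the gating idea (based only on a casual reading of the paper announced above), a Gated SAE separates the question of which features fire from how strongly they fire. The sketch below omits the paper's weight sharing and auxiliary losses, so it should be treated as an intuition pump rather than the actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedEncoder(nn.Module):
    def __init__(self, d_model=768, d_features=16384):
        super().__init__()
        self.gate = nn.Linear(d_model, d_features)   # decides which features are active
        self.mag = nn.Linear(d_model, d_features)    # decides how strongly they activate

    def forward(self, x):
        active = (self.gate(x) > 0).float()          # hard 0/1 gate (training tricks omitted)
        return active * F.relu(self.mag(x))          # sparse feature activations

Decoupling the firing decision from the magnitude is, loosely, how the architecture can achieve similar reconstruction with roughly half as many firing features.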
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Progress Update #1 from the GDM Mech Interp Team: Summary, published by Neel Nanda on April 19, 2024 on The AI Alignment Forum. Introduction This is a progress update from the Google DeepMind mechanistic interpretability team, inspired by the Anthropic team's excellent monthly updates! Our goal was to write up a series of snippets, covering a range of things that we thought would be interesting to the broader community, but didn't yet meet our bar for a paper. This is a mix of promising initial steps on larger investigations, write-ups of small investigations, replications, and negative results. Our team's two main current goals are to scale sparse autoencoders to larger models, and to do further basic science on SAEs. We expect these snippets to mostly be of interest to other mech interp practitioners, especially those working with SAEs. One exception is our infrastructure snippet, which we think could be useful to mechanistic interpretability researchers more broadly. We present preliminary results in a range of areas to do with SAEs, from improving and interpreting steering vectors, to improving ghost grads, to replacing SAE encoders with an inference-time sparse approximation algorithm. Where possible, we've tried to clearly state our level of confidence in our results, and the evidence that led us to these conclusions so you can evaluate for yourself. We expect to be wrong about at least some of the things in here! Please take this in the spirit of an interesting idea shared by a colleague at a lab meeting, rather than as polished pieces of research we're willing to stake our reputation on. We hope to turn some of the more promising snippets into more fleshed out and rigorous papers at a later date. We also have a forthcoming paper on an updated SAE architecture that seems to be a moderate Pareto-improvement - stay tuned! How to read this post: This is a short summary post, accompanying the much longer post with all the snippets. We recommend reading the summaries of each snippet below, and then zooming in to whichever snippets seem most interesting to you. They can be read in any order. Summaries Activation Steering with SAEs We analyse the steering vectors used in Turner et al., 2023 using SAEs. We find that they are highly interpretable, and that in some cases we can get better performance by constructing interpretable steering vectors from SAE features, though in other cases we struggle to. We hope to better disentangle what's going on in future works. Replacing SAE Encoders with Inference-Time Optimisation There are two sub-problems in dictionary learning: learning the dictionary of feature vectors (an SAE's decoder, $W_{dec}$) and computing the sparse coefficient vector on a given input (an SAE's encoder). The SAE's encoder is a linear map followed by a ReLU, which is a weak function with a range of issues. We explore disentangling these problems by taking a trained SAE, throwing away the encoder, keeping the decoder, and learning the sparse coefficients at inference-time. This lets us study the question of how well the SAE encoder is working while holding the quality of the dictionary constant, and better evaluate the quality of different dictionaries. One notable finding is that high L0 SAEs have higher quality dictionaries than low L0 SAEs, even if we learn coefficients with low L0 at inference time.
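One simple way to picture the inference-time optimisation snippet summarised above: keep the trained decoder fixed and fit sparse coefficients for each activation directly. The sketch below does this with plain gradient descent and an L1 penalty; the team's actual snippet may well use a different sparse-approximation algorithm and enforce non-negative coefficients, so treat this as a stand-in rather than their method.

import torch

def fit_coefficients(x, W_dec, b_dec, l1_coeff=3e-3, steps=200, lr=1e-2):
    # x: (d_model,) activation; W_dec: (d_features, d_model); b_dec: (d_model,)
    coeffs = torch.zeros(W_dec.shape[0], requires_grad=True)
    opt = torch.optim.Adam([coeffs], lr=lr)
    for _ in range(steps):
        recon = coeffs @ W_dec + b_dec                                  # reconstruct from the fixed dictionary
        loss = (recon - x).pow(2).mean() + l1_coeff * coeffs.abs().sum()  # fidelity + sparsity
        opt.zero_grad(); loss.backward(); opt.step()
    return coeffs.detach()

Because the dictionary is frozen, any difference in reconstruction quality between this procedure and the SAE's own encoder isolates how much the linear-plus-ReLU encoder is leaving on the table.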
Improving Ghost Grads In their January update, the Anthropic team introduced a new auxiliary loss, "ghost grads", as a potential improvement on resampling for minimising the number of dead features in an SAE. We replicate their work, and find that it underperforms resampling. We present an improvement, multiplying the ghost grads loss by the proportion of dead features, which makes ghost grads competitive. We don't yet see a compelling reason to move away fro...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Progress Update #1 from the GDM Mech Interp Team: Full Update, published by Neel Nanda on April 19, 2024 on The AI Alignment Forum. This is a series of snippets about the Google DeepMind mechanistic interpretability team's research into Sparse Autoencoders that didn't meet our bar for a full paper. Please start at the summary post for more context, and a summary of each snippet. They can be read in any order. Activation Steering with SAEs Arthur Conmy, Neel Nanda. TL;DR: We use SAEs trained on GPT-2 XL's residual stream to decompose steering vectors into interpretable features. We find a single SAE feature for anger which is a Pareto-improvement over the anger steering vector from existing work (Section 3, 3-minute read). We have more mixed results with wedding steering vectors: we can partially interpret the vectors, but the SAE reconstruction is a slightly worse steering vector, and just taking the obvious features produces a notably worse vector. We can produce a better steering vector by removing SAE features which are irrelevant (Section 4). This is one of the first examples of SAEs having any success for enabling better control of language models, and we are excited to continue exploring this in future work. 1. Background and Motivation We are uncertain about how useful mechanistic interpretability research, including SAE research, will be for AI safety and alignment. Unlike RLHF and dangerous capability evaluation (for example), mechanistic interpretability is not currently very useful for downstream applications on models. Though there are ambitious goals for mechanistic interpretability research such as finding safety-relevant features in language models using SAEs, these are likely not tractable on the relatively small base models we study in all our snippets. To address these two concerns, we decided to study activation steering[1] (introduced in this blog post and expanded on in a paper). We recommend skimming the blog post for an explanation of the technique and examples of what it can do. Briefly, activation steering takes vector(s) from the residual stream on some prompt(s), and then adds these to the residual stream on a second prompt. This makes outputs from the second forward pass have properties inherited from the first forward pass. There is early evidence that this technique could help with safety-relevant properties of LLMs, such as sycophancy. We have tentative early research results that suggest SAEs are helpful for improving and interpreting steering vectors, albeit with limitations. We find these results particularly exciting as they provide evidence that SAEs can identify causally meaningful intermediate variables in the model, indicating that they aren't just finding clusters in the data or directions in logit space, which seemed much more likely before we did this research. We plan to continue this research to further validate SAEs and to gain more intuition about what features SAEs do and don't learn in practice. 2. Setup We use SAEs trained on the residual stream of GPT-2 XL at various layers, the model used in the initial activation steering blog post, inspired by the success of residual stream SAEs on GPT-2 Small (Bloom, 2024) and Pythia models (Cunningham et al., 2023). The SAEs have 131072 learned features, L0 of around 60[2], and loss recovered around 97.5% (e.g.
splicing in the SAE from Section 3 increases loss from 2.88 to 3.06, compared to the destructive zero ablation intervention resulting in Loss > 10). We don't think this was a particularly high-quality SAE, as the majority of its learned features were dead, and we found limitations with training residual stream SAEs that we will discuss in an upcoming paper. Despite this, we think the results in this work are tentative evidence for SAEs being useful. It is likely ea...
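To make the steering procedure described above concrete, here is a generic PyTorch sketch of adding a scaled SAE decoder direction (e.g. an "anger" feature) to the residual stream during a forward pass. It uses a plain forward hook rather than any particular interpretability library, assumes the hooked module returns a single activation tensor, and the layer, feature index, and scale are placeholders rather than values from the snippet.

import torch

def make_steering_hook(feature_dir, scale=10.0):
    # feature_dir: (d_model,) decoder row for the chosen SAE feature
    def hook(module, inputs, output):
        # output: residual stream activations, assumed shape (batch, seq, d_model)
        return output + scale * feature_dir
    return hook

# Hypothetical usage: `block` is whichever submodule emits the residual stream you
# want to steer, and `anger_dir = W_dec[feature_idx]` is the chosen decoder row.
# handle = block.register_forward_hook(make_steering_hook(anger_dir))
# ... run the model on the second prompt ...
# handle.remove()

Steering with a single decoder row, rather than the full difference-of-activations vector, is what the snippet means by constructing a steering vector from SAE features.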
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best Tacit Knowledge Videos on Every Subject, published by Parker Conley on March 31, 2024 on LessWrong. TL;DR Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Tacit Knowledge Videos could widen this bottleneck. This post is a Schelling point for aggregating these videos - aiming to be The Best Textbooks on Every Subject for Tacit Knowledge Videos. Scroll down to the list if that's what you're here for. Post videos that highlight tacit knowledge in the comments and I'll add them to the post. Experts in the videos include Stephen Wolfram, Holden Karnofsky, Andy Matuschak, Jonathan Blow, George Hotz, and others. What are Tacit Knowledge Videos? Samo Burja claims YouTube has opened the gates for a revolution in tacit knowledge transfer. Burja defines tacit knowledge as follows: Tacit knowledge is knowledge that can't properly be transmitted via verbal or written instruction, like the ability to create great art or assess a startup. This tacit knowledge is a form of intellectual dark matter, pervading society in a million ways, some of them trivial, some of them vital. Examples include woodworking, metalworking, housekeeping, cooking, dancing, amateur public speaking, assembly line oversight, rapid problem-solving, and heart surgery. In my observation, domains like housekeeping and cooking have already seen many benefits from this revolution. Could tacit knowledge in domains like research, programming, mathematics, and business be next? I'm not sure, but maybe this post will help push the needle forward. For the purpose of this post, a Tacit Knowledge Video is any video that communicates "knowledge that can't properly be transmitted via verbal or written instruction". Here are some examples: Neel Nanda, who leads the Google DeepMind mechanistic interpretability team, has a playlist of "Research Walkthroughs". AI Safety research is discussed a lot around here. Watching research videos could help instantiate what AI research really looks and feels like. GiveWell has public audio recordings of its Board Meetings from 2007-2020. Participants include Elie Hassenfeld, Holden Karnofsky, Timothy Ogden, Rob Reich, Tom Rutledge, Brigid Slipka, Cari Tuna, Julia Wise, and others. Influential business meetings are not usually made public. I feel I have learned a bit about business communication and business operations, among other things, by listening to these recordings. Andy Matuschak recorded himself studying Quantum Mechanics with Dwarkesh Patel and doing research. Andy Matuschak "helped build iOS at Apple and led R&D at Khan Academy". I found it interesting to have a peek into Matuschak's spaced repetition practice and various studying heuristics and habits, as well as his process of digesting and taking notes on papers. Call to Action Share links to Tacit Knowledge Videos below! Share them frivolously! These videos are uncommon - the bottleneck to the YouTube knowledge transfer revolution is quantity, not quality. I will add the shared videos to the post. Here are the loose rules: Recall a video that you've seen that communicates tacit knowledge - "knowledge that can't properly be transmitted via verbal or written instruction". A rule of thumb for sharing: could a reader find this video through one or two YouTube searches? If not, share it.
Post the title and the URL of the video. Provide information indicating why the expert in the video is credible. (However, don't let this last rule stop you from sharing a video! Again - quantity, not quality.)[1] For information on how to best use these videos, Cedric Chin and Jacob Steinhardt have some potentially relevant practical advice. Andy Matuschak also has some working notes about this idea generally. Additionally, DM or email me (email in L...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AtP*: An efficient and scalable method for localizing LLM behaviour to components, published by Neel Nanda on March 18, 2024 on The AI Alignment Forum. Authors: János Kramár, Tom Lieberum, Rohin Shah, Neel Nanda. A new paper from the Google DeepMind mechanistic interpretability team, from core contributors János Kramár and Tom Lieberum. Tweet thread summary, paper. Abstract: Activation Patching is a method of directly computing causal attributions of behavior to model components. However, applying it exhaustively requires a sweep with cost scaling linearly in the number of model components, which can be prohibitively expensive for SoTA Large Language Models (LLMs). We investigate Attribution Patching (AtP), a fast gradient-based approximation to Activation Patching, and find two classes of failure modes of AtP which lead to significant false negatives. We propose a variant of AtP called AtP*, with two changes to address these failure modes while retaining scalability. We present the first systematic study of AtP and alternative methods for faster activation patching and show that AtP significantly outperforms all other investigated methods, with AtP* providing further significant improvement. Finally, we provide a method to bound the probability of remaining false negatives of AtP* estimates. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
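For intuition, the core first-order idea behind attribution patching (as described in the abstract above) fits in one line: approximate the effect of patching a clean activation into a corrupted run by the activation difference times the gradient of the metric, instead of running one forward pass per component. The sketch below is a simplified illustration of plain AtP, not the paper's AtP* refinements; names and shapes are assumptions.

import torch

def attribution_patching_estimate(clean_act, corrupt_act, corrupt_grad):
    # All tensors shaped (batch, seq, d_model): activations cached on the clean and
    # corrupted prompts, and d(metric)/d(activation) taken on the corrupted run.
    # Returns an estimated patching effect per (batch, seq) position.
    return ((clean_act - corrupt_act) * corrupt_grad).sum(dim=-1)

Because the gradients for every component come from a single backward pass, this estimate scales to models where exhaustive activation patching would be prohibitively expensive; the paper's contribution is characterising where the linear approximation fails and how to correct for it.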
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Laying the Foundations for Vision and Multimodal Mechanistic Interpretability & Open Problems, published by Sonia Joseph on March 13, 2024 on The AI Alignment Forum. Join our Discord here. This article was written by Sonia Joseph, in collaboration with Neel Nanda, and incubated in Blake Richards's lab at Mila and in the MATS community. Thank you to the Prisma core contributors, including Praneet Suresh, Rob Graham, and Yash Vadi. Full acknowledgements of contributors are at the end. I am grateful to my collaborators for their guidance and feedback. Outline Part One: Introduction and Motivation Part Two: Tutorial Notebooks Part Three: Brief ViT Overview Part Four: Demo of Prisma's Functionality Key features, including logit attribution, attention head visualization, and activation patching. Preliminary research results obtained using Prisma, including emergent segmentation maps and canonical attention heads. Part Five: FAQ, including Key Differences between Vision and Language Mechanistic Interpretability Part Six: Getting Started with Vision Mechanistic Interpretability Part Seven: How to Get Involved Part Eight: Open Problems in Vision Mechanistic Interpretability Introducing the Prisma Library for Multimodal Mechanistic Interpretability I am excited to share with the mechanistic interpretability and alignment communities a project I've been working on for the last few months. Prisma is a multimodal mechanistic interpretability library based on TransformerLens, currently supporting vanilla vision transformers (ViTs) and their vision-text counterparts CLIP. With recent rapid releases of multimodal models, including Sora, Gemini, and Claude 3, it is crucial that interpretability and safety efforts remain in tandem. While language mechanistic interpretability already has strong conceptual foundations, many research papers, and a thriving community, research in non-language modalities lags behind. Given that multimodal capabilities will be part of AGI, field-building in mechanistic interpretability for non-language modalities is crucial for safety and alignment. The goal of Prisma is to make research in mechanistic interpretability for multimodal models both easy and fun. We are also building a strong and collaborative open source research community around Prisma. You can join our Discord here. This post includes a brief overview of the library, fleshes out some concrete problems, and gives steps for people to get started. Prisma Goals Build shared infrastructure (Prisma) to make it easy to run standard language mechanistic interpretability techniques on non-language modalities, starting with vision. Build shared conceptual foundation for multimodal mechanistic interpretability. Shape and execute on research agenda for multimodal mechanistic interpretability. Build an amazing multimodal mechanistic interpretability subcommunity, inspired by current efforts in language. Set the cultural norms of this subcommunity to be highly collaborative, curious, inventive, friendly, respectful, prolific, and safety/alignment-conscious. Encourage sharing of early/scrappy research results on Discord/Less Wrong. Co-create a web of high-quality research. Tutorial Notebooks To get started, you can check out three tutorial notebooks that show how Prisma works. 
Main ViT Demo Overview of the main mechanistic interpretability techniques on a ViT, including direct logit attribution, attention head visualization, and activation patching. The activation patching switches the net's prediction from tabby cat to Border collie with a minimal ablation. Emoji Logit Lens Deeper dive into layer- and patch-level predictions with interactive plots. Interactive Attention Head Tour Deeper dive into the various types of attention heads a ViT contains with interactive JavaScript. Brief ViT Overview A vision transf...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding SAE Features with the Logit Lens, published by Joseph Isaac Bloom on March 11, 2024 on The AI Alignment Forum. This work was produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort, with support from Neel Nanda and Arthur Conmy. Joseph Bloom is funded by the LTFF, Manifund Regranting Program, donors and LightSpeed Grants. This post makes extensive use of Neuronpedia, a platform for interpretability focusing on accelerating interpretability researchers working with SAEs. Links: SAEs on HuggingFace, Analysis Code Executive Summary This is an informal post sharing statistical methods which can be used to quickly / cheaply better understand Sparse Autoencoder (SAE) features. Firstly, we use statistics (standard deviation, skewness and kurtosis) of the logit weight distributions of features (W_U·W_dec[feature]) to characterize classes of features, showing that many features can be understood as promoting / suppressing interpretable classes of tokens. We propose 3 different kinds of features, analogous to previously characterized "universal neurons": Partition Features, which (somewhat) promote half the tokens and suppress the other half according to capitalization and spaces (example pictured below) Suppression Features, which act like partition features but are more asymmetric. Prediction Features which promote tokens in classes of varying sizes, ranging from promoting tokens that have a close bracket to promoting all present tense verbs. Secondly, we propose a statistical test for whether a feature's output direction is trying to distinguish tokens in some set (eg: "all caps tokens") from the rest. We borrowed this technique from systems biology where it is used at scale frequently. The key limitation here is that we need to know in advance which sets of tokens are promoted / inhibited. Lastly, we demonstrate the utility of the set-based technique by using it to locate features which enrich token categories of interest (defined by regex formulas, NLTK toolkit parts of speech tagger and common baby names for boys/girls). Feature 4467. Above: Feature Dashboard Screenshot from Neuronpedia. It is not immediately obvious from the dashboard what this feature does. Below: Logit Weight distribution classified by whether the token starts with a space, clearly indicating that this feature promotes tokens which lack an initial space character. Introduction In previous work, we trained and open-sourced a set of sparse autoencoders (SAEs) on the residual stream of GPT2 small. In collaboration with Neuronpedia, we've produced feature dashboards, auto-interpretability explanations and interfaces for browsing for ~300k+ features. The analysis in this post is performed on features from the layer 8 residual stream of GPT2 small (for no particular reason). SAEs might enable us to decompose model internals into interpretable components. Currently, we don't have a good way to measure interpretability at scale, but we can generate feature dashboards which show things like how often the feature fires, its direct effect on tokens being sampled (the logit weight distribution) and when it fires (see examples of feature dashboards below). Interpreting the logit weight distribution in feature dashboards for multi-layer models is implicitly using Logit Lens, a very popular technique in mechanistic interpretability.
Applying the logit lens to features means that we compute the product of a feature direction and the unembed (W_U·W_dec[feature]), referred to as the "logit weight distribution". Since SAEs haven't been around for very long, we don't yet know what the logit weight distributions typically look like for SAE features. Moreover, we find that the form of logit weight distribution can vary greatly. In most cases we see a vaguely normal distribution and s...
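A hedged sketch of the logit-lens computation described above: project a feature's decoder direction through the unembedding, then summarise the resulting per-token "logit weights" with standard moment statistics, and optionally compare a token set of interest against the rest. Variable names, and the vocab mask in the usage comment, are illustrative rather than taken from the post's code.

import torch

def logit_weight_stats(W_U, W_dec, feature_idx):
    # W_U: (d_model, d_vocab) unembedding; W_dec: (d_features, d_model) SAE decoder.
    logit_weights = W_dec[feature_idx] @ W_U              # (d_vocab,) effect on each token's logit
    mu, sigma = logit_weights.mean(), logit_weights.std()
    z = (logit_weights - mu) / sigma
    skew = (z ** 3).mean()                                  # asymmetry of the distribution
    kurtosis = (z ** 4).mean() - 3.0                        # heavy-tailedness (excess kurtosis)
    return logit_weights, sigma.item(), skew.item(), kurtosis.item()

# E.g. compare weights for tokens that start with a space against the rest, as in the
# partition-feature example above (hypothetical `vocab_tokens` list of token strings):
# space_mask = torch.tensor([t.startswith(" ") for t in vocab_tokens])
# print(logit_weights[space_mask].mean(), logit_weights[~space_mask].mean())

Features whose logit weights split cleanly along such a mask are the "partition" and "suppression" cases the post describes; the set-based statistical test formalises that comparison.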
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Chess-GPT Linear Emergent World Representation, published by karvonenadam on February 8, 2024 on LessWrong. A Chess-GPT Linear Emergent World Representation Introduction Among the many recent developments in ML, there were two I found interesting and wanted to dig into further. The first was gpt-3.5-turbo-instruct's ability to play chess at 1800 Elo. The fact that an LLM could learn to play chess well from random text scraped off the internet seemed almost magical. The second was Kenneth Li's Emergent World Representations paper. There is an excellent summary on The Gradient and a follow-up from Neel Nanda. In it, they trained a 25 million parameter GPT to predict the next character in an Othello game. It learns to accurately make moves in games unseen in its training dataset, and using both non-linear and linear probes it was found that the model accurately tracks the state of the board. However, this only worked for a model trained on a synthetic dataset of games uniformly sampled from the Othello game tree. They tried the same techniques on a model trained using games played by humans and had poor results. To me, this seemed like a major caveat to the findings of the paper which may limit its real world applicability. We cannot, for example, generate code by uniformly sampling from a code tree. There was also discussion on the implications of this on LessWrong, such as if pretraining should begin with synthetic data to improve interpretability. So I dug into it. I trained some models on chess games and used linear probes on the trained models. My results were very positive, and answered all of my previous questions (although of course, more questions were generated). A 50 million parameter GPT trained on 5 million games of chess learns to play at ~1300 Elo in one day on 4 RTX 3090 GPUs. This model is only trained to predict the next character in PGN strings (1.e4 e5 2.Nf3 ...) and is never explicitly given the state of the board or the rules of chess. Despite this, in order to better predict the next character, it learns to compute the state of the board at any point of the game, and learns a diverse set of rules, including check, checkmate, castling, en passant, promotion, pinned pieces, etc. In addition, to better predict the next character it also learns to estimate latent variables such as the Elo rating of the players in the game. All code, data, and models have been open sourced. Training Chess GPT My initial hypothesis was that Othello-GPT trained on human games performed poorly due to a lack of data. They only had 130k human Othello games, but the synthetic model was trained on 20 million games. I tried two different approaches to create my datasets: First, I had Stockfish Elo 3200 play 5 million games as White against a range of Stockfish 1300-3200 as Black. Hopefully, this synthetic dataset of superhuman chess bot games would provide higher quality data than human games. Second, I grabbed 16 million games from Lichess's public chess game database. I trained separate models on individual datasets and various mixes of datasets. Initially, I looked at fine-tuning open source models like LLama 7B or OpenLlama 3B. However, I almost immediately had to abandon that approach to keep my GPU costs down (I used RTX 3090s from runpod). Instead, I started training models from scratch using Andrej Karpathy's nanogpt repository. 
I experimented with 25M and 50M parameter models. It basically worked on the first try. The 50M parameter model played at 1300 Elo with 99.8% of its moves being legal within one day of training. I find it fairly impressive that a model with only 8 layers can correctly make a legal move 80 turns into a game. I left one training for a few more days and it reached 1500 Elo. So, gpt-3.5-turbo-instruct's performance is not magic. If you give an L...
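As an illustration of how one might compute the "percentage of legal moves" statistic quoted above, here is a hedged sketch using the python-chess library to validate each generated move. The generation side is omitted, and the assumption that the model's output has been split into per-game lists of SAN move strings is mine, not the author's.

import chess

def legal_move_rate(games):
    # games: list of lists of SAN move strings produced by the model, e.g. [["e4", "e5", "Nf3"], ...]
    legal, total = 0, 0
    for moves in games:
        board = chess.Board()
        for san in moves:
            total += 1
            try:
                board.push_san(san)   # raises if the move is illegal or unparsable in this position
                legal += 1
            except ValueError:
                break                  # stop scoring this game at the first illegal move
    return legal / max(total, 1)

Checking legality against an explicit board simulator is what makes "99.8% of moves legal 80 turns into a game" meaningful: the model itself is never given the board state, only the PGN characters.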
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small, published by Joseph Isaac Bloom on February 2, 2024 on The AI Alignment Forum. UPDATE: Since we posted this last night, someone pointed out that our implementation of ghost grads has a non-trivial error (which makes the results a priori quite surprising). We computed the ghost grad forward pass using Exp(Relu(W_enc(x)[dead_neuron_mask])) rather than Exp((W_enc(x)[dead_neuron_mask])). I'm running some ablation experiments now to get to the bottom of this. This work was produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort, under mentorship from Neel Nanda and Arthur Conmy. Funding for this work was provided by the Manifund Regranting Program and donors as well as LightSpeed Grants. This is intended to be a fairly informal post sharing a set of Sparse Autoencoders trained on the residual stream of GPT2-small which achieve fairly good reconstruction performance and contain fairly sparse / interpretable features. More importantly, advice from Anthropic and community members has enabled us to train these far more efficiently / faster than before. The specific methods that were most useful were: ghost gradients, learning rate warmup, and initializing the decoder bias with the geometric median. We discuss each of these in more detail below. 5 Minute Summary We're publishing a set of 12 Sparse AutoEncoders for the GPT2 Small residual stream. These dictionaries have approximately 25,000 features each, with very few dead features (mainly in the early layers) and high quality reconstruction (log loss when the activations are replaced with the output is 3.3 - 3.6 as compared with 3.3 normally). The L0s range from 5 in the first layer to 70 in the 9th SAE (increasing by about 5-10 per layer and dropping in the last two layers). By choosing a fixed dictionary size, we can see how statistics like the number of dead features or reconstruction cross entropy loss change with layer, giving some indication of how properties of the feature distribution change with layer depth. We haven't yet extensively analyzed these dictionaries, but will share the dashboards we've automatically generated. Readers can access the Sparse Autoencoder weights in this HuggingFace Repo. Training code and code for loading the weights / model and data loaders can be found in this Github Repository. Training curves and feature dashboards can also be found in this wandb report. Users can download all 25k feature dashboards generated for layer 2 and 10 SAEs and the first 5000 of the layer 5 SAE features here (note the left-hand column of the dashboards should currently be ignored).

Layer | Variance Explained | L1 Loss | L0* | % Alive Features | Reconstruction CE Log Loss
0 | 99.15% | 4.58 | 12.24 | 80.0% | 3.32
1 | 98.37% | 41.04 | 14.68 | 83.4% | 3.33
2 | 98.07% | 51.88 | 18.80 | 80.0% | 3.37
3 | 96.97% | 74.96 | 25.75 | 86.3% | 3.48
4 | 95.77% | 90.23 | 33.14 | 97.7% | 3.44
5 | 94.90% | 108.59 | 43.61 | 99.7% | 3.45
6 | 93.90% | 136.07 | 49.68 | 100% | 3.44
7 | 93.08% | 138.05 | 57.29 | 100% | 3.45
8 | 92.57% | 167.35 | 65.47 | 100% | 3.45
9 | 92.05% | 198.42 | 71.10 | 100% | 3.45
10 | 91.12% | 215.11 | 53.79 | 100% | 3.52
11 | 93.30% | 270.13 | 59.16 | 100% | 3.57
Original Model | - | - | - | - | 3.3

Summary Statistics for GPT2 Small Residual Stream SAEs. *L0 = Average number of features firing per token. Training SAEs that we were happy with used to take much longer than it is taking us now.
Last week, it took me 20 hours to train a 50k feature SAE on 1 billion tokens; over the weekend, it took us 3 hours to train a 25k feature SAE on 300M tokens with similar variance explained, L0, and CE loss recovered. We attribute the improvement to having implemented various pieces of advice that have made our lives a lot easier: Ghost Gradients / Avoiding Resampling: Prior to ghost gradients (which we were made aware of last week in the Anthropic January Update), we were training SAEs with approximately 50k features on 1 billion tokens with 3 resampling events to reduce the number of dead features. This took around 20 hours and might cost about $10 with an A6000 GPU. With ghost gradients, we don't need to resample (or wait for loss curves to plateau after resampling). Now we can train on only 300M tokens instead. Simultaneously, since we now...
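For readers who want a concrete picture of the object being trained here, below is a minimal PyTorch sketch of a residual-stream sparse autoencoder with an L1 sparsity penalty. The dimensions, initialisation scales, and the decoder-bias comment are illustrative assumptions, not the authors' exact code (that lives in the linked GitHub repository).

```python
# Minimal sparse autoencoder sketch: reconstruct residual-stream activations
# from a sparse, overcomplete set of features.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_sae=24576):  # GPT-2 small width, ~25k features
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        # In the setup described above, b_dec is initialised to the geometric
        # median of the activations rather than zeros.
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):
        feature_acts = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        reconstruction = feature_acts @ self.W_dec + self.b_dec
        return reconstruction, feature_acts

def sae_loss(x, reconstruction, feature_acts, l1_coeff=1e-3):
    # MSE reconstruction error plus an L1 penalty that pushes feature
    # activations towards sparsity (the L0 / L1 numbers in the table above).
    mse = ((reconstruction - x) ** 2).mean()
    l1 = feature_acts.abs().sum(dim=-1).mean()
    return mse + l1_coeff * l1
```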
Full episode on PATREON, along with bonus content, early access, and lots more :) Pranav discusses how we ratioed a leading presidential candidate on X when he had fewer than 100 followers (follow him there @pranahaha), the Maldives vs. India Lakshadweep dispute, Sheikh Hasina remaining in power, and a special tribute to our friend and past Mango Bae guest, Neel Nanda.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sparse Autoencoders Work on Attention Layer Outputs, published by Connor Kissane on January 16, 2024 on The AI Alignment Forum. This post is the result of a 2 week research sprint project during the training phase of Neel Nanda's MATS stream. Executive Summary We replicate Anthropic's MLP Sparse Autoencoder (SAE) paper on attention outputs and it works well: the SAEs learn sparse, interpretable features, which gives us insight into what attention layers learn. We study the second attention layer of a two layer language model (with MLPs). Specifically, rather than training our SAE on attn_output, we train our SAE on "hook_z" concatenated over all attention heads (aka the mixed values, aka the attention outputs before a linear map - see notation here). This is valuable as we can see how much of each feature's weights come from each head, which we believe is a promising direction to investigate attention head superposition, although we only briefly explore that in this work. We open source our SAE; you can use it via this Colab notebook. Shallow Dives: We do a shallow investigation to interpret each of the first 50 features. We estimate 82% of non-dead features in our SAE are interpretable (24% of the SAE features are dead). See this feature interface to browse the first 50 features. Deep dives: To verify our SAEs have learned something real, we zoom in on individual features for much more detailed investigations: the "'board' is next by induction" feature, the local context feature of "in questions starting with 'Which'", and the more global context feature of "in texts about pets". We go beyond the techniques from the Anthropic paper, and investigate the circuits used to compute the features from earlier components, including analysing composition with an MLP0 SAE. We also investigate how the features are used downstream, and whether it's via MLP1 or the direct connection to the logits. Automation: We automatically detect and quantify a large "{token} is next by induction" feature family. This represents ~5% of the living features in the SAE. Though the specific automation technique won't generalize to other feature families, this is notable: if there are many "one feature per vocab token" families like this, we may need impractically wide SAEs for larger models. Introduction In Anthropic's SAE paper, they find that training sparse autoencoders (SAEs) on a one layer model's MLP activations finds interpretable features, providing a path to break down these high dimensional activations into units that we can understand. In this post, we demonstrate that the same technique works on attention layer outputs and learns sparse, interpretable features! To see how interpretable our SAE is, we perform shallow investigations of the first 50 features of our SAE (i.e. randomly chosen features). We found that 76% are not dead (i.e. activate on at least some inputs), and within the alive features we think 82% are interpretable. To get a feel for the features we find, see our interactive visualizations of the first 50.
Here's one example.[1] Shallow investigations are limited and may be misleading or illusory, so we then do some deep dives to more deeply understand multiple individual features, including: "'board' is next, by induction" - one of many "{token} is next by induction" features; "In questions starting with 'Which'" - a local context feature, which interestingly is computed by multiple heads; "In pet context" - one of many high level context features. Similar to the Anthropic paper's "Detailed Investigations", we understand when these features activate and how they affect downstream computation. However, we also go beyond Anthropic's techniques, and look into the upstream circuits by which these features are computed from earlier components. An attention layer (with frozen att...
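As context for what "training on hook_z concatenated over all heads" means in practice, here is a short TransformerLens sketch that collects those activations; the model name, prompt, and layer index are illustrative assumptions rather than the authors' exact setup.

```python
# Sketch: gather per-head attention outputs (hook_z) and flatten them into the
# [n_heads * d_head] vectors an attention-output SAE would be trained on.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gelu-2l")  # a two-layer model with MLPs
tokens = model.to_tokens("The cat sat on the mat")

_, cache = model.run_with_cache(tokens)
z = cache["blocks.1.attn.hook_z"]                    # [batch, pos, n_heads, d_head]
batch, pos, n_heads, d_head = z.shape
z_concat = z.reshape(batch, pos, n_heads * d_head)   # per-head outputs before the W_O linear map
print(z_concat.shape)                                # SAE input dimension = n_heads * d_head
```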
NBA News, NFL News, MLB News, MLB hotstove, WNBA News, NHL News, CBS's Neighborhood, A Farewell to Kamar De Los Reyes, Lee Sun-Kyun, David Leland, Henry Sandon, Norby Walters, Garly Sojo, Ruth Seymour, Mars Williams, Ryan Minor, Terry Jill Saperstein, Essra Mohawk, Laura Lynch, Selma Archerd, Bobbie Jean Carter, Herman Rush, Seamus Lyte, Neel Nanda, Ian Pepperell, Richard Franklin, Casey Kramer, Matt Napolitano & Mike Nussbaum! --- Support this podcast: https://podcasters.spotify.com/pod/show/on-the-radar/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fact Finding: Do Early Layers Specialise in Local Processing? (Post 5), published by Neel Nanda on December 23, 2023 on The AI Alignment Forum. This is the fifth post in the Google DeepMind mechanistic interpretability team's investigation into how language models recall facts. This post is a bit tangential to the main sequence, and documents some interesting observations about how, in general, early layers of models somewhat (but not fully) specialise into processing recent tokens. You don't need to believe these results to believe our overall results about facts, but we hope they're interesting! And likewise you don't need to read the rest of the sequence to engage with this. Introduction In this sequence we've presented the multi-token embedding hypothesis: that a crucial mechanism behind factual recall is that on the final token of a multi-token entity there forms an "embedding", with linear representations of attributes of that entity. We further noticed that this seemed to be most of what early layers did, and that they didn't seem to respond much to prior context (e.g. adding "Mr Michael Jordan" didn't substantially change the residual). We hypothesised the stronger claim that early layers (e.g. the first 10-20%), in general, specialise in local processing, and that the prior context (e.g. more than 10 tokens back) is only brought in in early-mid layers. We note that this is stronger than the multi-token embedding hypothesis in two ways: it's a statement about how early layers behave on all tokens, not just the final tokens of entities about which facts are known; and it's a claim that early layers are not also doing longer range stuff in addition to producing the multi-token embedding (e.g. detecting the language of the text). We find this stronger hypothesis plausible, because tokens are a pretty messy input format, and analysing individual tokens in isolation can be highly misleading. We tested this by taking a bunch of arbitrary prompts from the Pile, taking residual streams on those, truncating the prompts to the most recent few tokens and taking residual streams on the truncated prompts, and looking at the mean-centred cosine sim at different layers. Our findings: Early layers do, in general, specialise in local processing, but it's a soft division of labour, not a hard split; there's a gradual transition where more context is brought in across the layers. Early layers do significant processing on recent tokens, not just the current token - this is not just a trivial result where the residual stream is dominated by the current token and slightly adjusted by each layer. Early layers do much more long-range processing on common tokens (punctuation, articles, pronouns, etc). Experiments The "early layers specialise in local processing" hypothesis concretely predicts that, for a given token X in a long prompt, if we truncate the prompt to just the most recent few tokens before X, the residual stream at X should be very similar at early layers and dissimilar at later layers. We can test this empirically by looking at the cosine sim of the original vs truncated residual streams, as a function of layer and truncated context length.
Taking cosine sims of residual streams naively can be misleading, as there's often a significant shared mean across all tokens, so we first subtract the mean residual stream across all tokens and then take the cosine sim. Set-Up: Model: Pythia 2.8B, as in the rest of our investigation. Dataset: Strings from the Pile, the Pythia pre-training distribution. Metric: To measure how similar the original and truncated residual streams are, we subtract the mean residual stream and then take the cosine sim; we compute a separate mean per layer, across all tokens in random prompts from the Pile. Truncated context: We vary the number of tokens i...
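Here is a rough sketch of that truncation experiment in TransformerLens. The small stand-in model and the crude per-prompt mean are simplifying assumptions; the post uses Pythia 2.8B and a per-layer mean estimated over many Pile tokens.

```python
# Sketch: mean-centred cosine similarity between the final-token residual
# stream of a full prompt and of a truncated prompt, layer by layer.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("pythia-160m")  # stand-in for Pythia 2.8B

def centred_cosine_sims(prompt, keep_last=5):
    full = model.to_tokens(prompt)
    trunc = full[:, -keep_last:]                      # keep only the most recent tokens
    _, full_cache = model.run_with_cache(full)
    _, trunc_cache = model.run_with_cache(trunc)
    sims = []
    for layer in range(model.cfg.n_layers):
        a = full_cache["resid_post", layer][0, -1]    # final-token residual, full prompt
        b = trunc_cache["resid_post", layer][0, -1]   # same token, truncated prompt
        # Crude mean-centring: the post subtracts a per-layer mean computed
        # over many random Pile prompts, not just the current prompt.
        mean = full_cache["resid_post", layer][0].mean(dim=0)
        sims.append(torch.cosine_similarity(a - mean, b - mean, dim=0).item())
    return sims  # high at early layers and lower later, if the hypothesis holds

print(centred_cosine_sims("The first man on the moon was Neil Armstrong, and he"))
```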
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intro to Superposition & Sparse Autoencoders (Colab exercises), published by CallumMcDougall on November 29, 2023 on The AI Alignment Forum. This is a linkpost for some exercises in sparse autoencoders, which I've recently finished working on as part of the upcoming ARENA 3.0 iteration. Having spoken to Neel Nanda and others in interpretability-related MATS streams, it seemed useful to make these exercises accessible out of the context of the rest of the ARENA curriculum. Links to Colabs: Exercises, Solutions. If you don't like working in Colabs, then you can clone the repo, download the exercises & solutions Colabs as notebooks, and run them in the same directory. The exercises were built out from the Toy Models of Superposition exercises from the previous iteration, but now with new Sparse Autoencoder content. These exercises fall into 2 groups. SAEs in toy models: We take the toy models from Anthropic's Toy Models of Superposition paper (which there are also exercises for), and train sparse autoencoders on the representations learned by these toy models. These exercises culminate in using neuron resampling to successfully recover all the learned features from the toy model of bottleneck superposition. SAEs in real models: There are also exercises on interpreting an SAE trained on a transformer, where you can discover some cool learned features (e.g. a neuron exhibiting skip trigram-like behaviour, which activates on left-brackets following Django-related syntax, and predicts the completion (' -> django). You can either read through the Solutions colab (which has all output displayed & explained), or go through the Exercises colab and fill in the functions according to the specifications you are given, looking at the Solutions when you're stuck. Both colabs come with test functions you can run to verify your solution works. List of all exercises: I've listed all the exercises down here, along with prerequisites (although I expect most readers will only be interested in the sparse autoencoder exercises). Each set of exercises is labelled with its prerequisites. For instance, the label (1*, 3) means the first set of exercises is essential, and the third is recommended but not essential. Abbreviations: TMS = Toy Models of Superposition, SAE = Sparse Autoencoders. 1. TMS: Superposition in a Nonprivileged Basis; 2. TMS: Correlated / Anticorrelated Features (1*); 3. TMS: Superposition in a Privileged Basis (1*); 4. TMS: Feature Geometry (1*); 5. SAEs in Toy Models (1*, 3); 6. SAEs in Real Models (1*, 5*, 3). Please reach out to me if you have any questions or suggestions about these exercises (either by email at cal.s.mcdougall@gmail.com, or a LessWrong private message / comment on this post). Happy coding! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
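For orientation, this is roughly the toy model of superposition that the TMS exercises build on: sparse features squeezed through a low-dimensional bottleneck and reconstructed with a ReLU. The shapes and the sparsity scheme here are illustrative assumptions, not the exercise code itself.

```python
# Toy model of superposition: n_features > d_hidden, so features must share
# directions in the bottleneck (superposition) to be reconstructed well.
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self, n_features=5, d_hidden=2):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_features, d_hidden) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_features))

    def forward(self, x):                         # x: [batch, n_features], mostly zeros
        h = x @ self.W                            # compress into the d_hidden bottleneck
        return torch.relu(h @ self.W.T + self.b)  # reconstruct the features

# Sparse inputs (each feature active with low probability) are what make
# storing more features than dimensions worthwhile.
x = (torch.rand(64, 5) < 0.1).float() * torch.rand(64, 5)
model = ToyModel()
loss = ((model(x) - x) ** 2).mean()
loss.backward()
```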
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Polysemantic Attention Head in a 4-Layer Transformer, published by Jett on November 9, 2023 on LessWrong. Produced as a part of the MATS Program, under @Neel Nanda and @Lee Sharkey's mentorship. Epistemic status: optimized to get the post out quickly, but we are confident in the main claims. TL;DR: head 1.4 in attn-only-4l exhibits many different attention patterns that are all relevant to the model's performance. Introduction In a previous post about the docstring circuit, we found that attention head 1.4 (Layer 1, Head 4) in a 4-layer attention-only transformer would act as either a fuzzy previous token head or as an induction head in different parts of the prompt. These results suggested that attention head 1.4 was polysemantic, i.e. performing different functions within different contexts. In Section 1, we classify ~5 million rows of attention patterns associated with 5,000 prompts from the model's training distribution. In doing so, we identify many more simple behaviours that this head exhibits. In Section 2, we explore 3 simple behaviours (induction, fuzzy previous token, and bigger indentation) more deeply. We construct a set of prompts for each behaviour, and we investigate its importance to model performance. This post provides evidence of the complex role that attention heads play within a model's computation, and that simplifying an attention head to a simple, singular behaviour can be misleading. Section 1 Methods We uniformly sample 5,000 prompts from the model's training dataset of web text and code. We collect approximately 5 million individual rows of attention patterns corresponding to these prompts, i.e. rows from the head's attention matrices that correspond to a single destination position. We then classify each of these patterns as (a mix of) simple, salient behaviours. If there is a behaviour that accounts for at least 95% of a pattern, then it is classified. Otherwise we refer to it as unknown (though there is a multitude of consistent behaviours that we did not define, and thus did not classify). Results Distribution of behaviours In Figure 1 we present results of the classification, where "all" refers to "all destination tokens" and other labels refer to specific destination tokens. The character · stands for a space, and labels such as [·K] mean "a new line followed by K spaces". We distinguish the following behaviours: previous (attention concentrated on a few previous tokens), inactive (attention to BOS and EOS), previous+induction (a mix of previous and basic induction), and unknown (not classified). Some observations: Across all the patterns, previous is the most common behaviour, followed by inactive and unknown. A big chunk of the patterns (unknown) were not automatically classified; there are many examples of consistent behaviours there, but we do not know for how many patterns they account. The destination token does not determine the attention pattern: [·3] and [·7] have basically the same distributions, with ~87% of patterns not classified. Prompt examples for each destination token: Token [·3], behaviour previous+induction - there are many ways to understand this pattern, and there is likely more going on than simple previous and induction behaviours. Token ·R, behaviour inactive. Token [·7], behaviour unknown - this is a very common pattern, where attention is paid from "new line and indentation" to "new line and bigger indentation".
We believe it accounts for most of what was classified as unknown for [·7] and [·3]. Token width, behaviour unknown - we did not see many examples like this, but it looks like attention is being paid to recent tokens representing arithmetic operations. Token dict, behaviour previous - mostly previous token, but ·collections gets more attention than . and default, which points at something more complicated. Section 2 Methods We select a few behaviours and construct pro...
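A toy sketch of the Section 1 classification scheme, purely for intuition: a destination position's attention row gets a behaviour label only if that behaviour accounts for at least 95% of the attention mass. The window size and the reduced behaviour set are illustrative assumptions (for example, detecting induction would also need the token identities, which are omitted here).

```python
# Toy sketch: label one row of an attention pattern by which simple behaviour
# accounts for >= 95% of its mass, otherwise call it "unknown".
import torch

def classify_row(attn_row: torch.Tensor, dest_pos: int, threshold: float = 0.95) -> str:
    # attn_row: [seq_len] attention probabilities from one destination position.
    bos_mass = attn_row[0].item()                         # "inactive": mass parked on BOS
    window = attn_row[max(dest_pos - 3, 1):dest_pos + 1]  # a few previous tokens (and the current one)
    prev_mass = window.sum().item()
    if bos_mass >= threshold:
        return "inactive"
    if prev_mass >= threshold:
        return "previous"
    return "unknown"

# Example: a row that puts almost all of its mass on the immediately preceding tokens.
row = torch.tensor([0.02, 0.0, 0.0, 0.18, 0.80])
print(classify_row(row, dest_pos=4))  # -> "previous"
```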
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Alignment Research Engineer Accelerator (ARENA): call for applicants, published by TheMcDouglas on November 7, 2023 on The Effective Altruism Forum. TL;DR Apply here for the third iteration of ARENA (Jan 8th - Feb 2nd)! Introduction We are excited to announce the third iteration of ARENA (Alignment Research Engineer Accelerator), a 4-week ML bootcamp with a focus on AI safety. Our mission is to prepare participants for full-time careers as research engineers in AI safety, e.g. at leading organizations or as independent researchers. The program will run from January 8th - February 2nd 2024[1], and will be held at the offices of the London Initiative for Safe AI. These offices are also being used by several safety orgs (BlueDot, Apollo, Leap Labs), as well as the current London MATS cohort and several independent researchers. We expect this to bring several benefits, e.g. facilitating productive discussions about AI safety & different agendas, and allowing participants to form a better picture of what working on AI safety can look like in practice. ARENA offers a unique opportunity for those interested in AI safety to learn valuable technical skills, work on their own projects, and make open-source contributions to AI safety-related libraries. The program is comparable to MLAB or WMLB, but extends over a longer period to facilitate deeper dives into the content and more open-ended project work with supervision. For more information, see our website. Outline of Content The 4-week program will be structured as follows: Chapter 0 - Fundamentals. Before getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them. We will also cover some subjects we expect to be useful going forwards, e.g. using GPT-3 and 4 to streamline your learning, good coding practices, and version control. Note - participants can optionally skip this week and instead join us at the start of Chapter 1, if they'd prefer this option and if we're confident that they are already comfortable with the material in this chapter. Topics include: PyTorch basics; CNNs and Residual Neural Networks; Optimization (SGD, Adam, etc); Backpropagation; Hyperparameter search with Weights and Biases; GANs & VAEs. Duration: 5 days. Chapter 1 - Transformers & Interpretability. In this chapter, you will learn all about transformers, and build and train your own. You'll also study LLM interpretability, a field which has been advanced by Anthropic's Transformer Circuits sequence, and open-source work by Neel Nanda. This chapter will also branch into areas more accurately classed as "model internals" than interpretability, e.g. recent work on steering vectors. Topics include: GPT models (building your own GPT-2); Training and sampling from transformers; TransformerLens; In-context Learning and Induction Heads; Indirect Object Identification; Superposition; Steering Vectors. Duration: 5 days. Chapter 2 - Reinforcement Learning. In this chapter, you will learn about some of the fundamentals of RL, and work with OpenAI's Gym environment to run your own experiments.
Topics include: Fundamentals of RL; Vanilla Policy Gradient; Proximal Policy Gradient; RLHF (& finetuning LLMs with RLHF); Gym & Gymnasium environments. Duration: 5 days. Chapter 3 - Paper Replications. We will conclude this program with paper replications, where participants will get guidance and mentorship while they replicate a paper containing material relevant to this course. This should draw on many of the skills and much of the knowledge participants will have accumulated over the last 3 weeks. Duration: 5 days. Below is a diagram of the curriculum as a whole, and the dependencies between sections. Note that this may change slightly in the lead-up to the program. Here is som...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxVirtual: Speaker announcements, timings, and other updates, published by Sasha Berezhnoi on November 2, 2023 on The Effective Altruism Forum. EAGxVirtual is fast approaching and we're excited to share some more details about the event! This post covers updates from the team, including dates and times, content, unique features, and demographic data. In the previous post, we covered the conference theme, reasons to attend, and reviews from previous attendees. Content: what to expect We're very excited to announce our key speakers for this event: Peter Singer on the most pressing moral issues facing humanity. Bruce Friedrich, President of The Good Food Institute, on longtermism and alternative proteins. Carl Robichaud, Co-lead on nuclear policy grantmaking at Longview Philanthropy, on a turning point in the story of nuclear weapons. Olga Kikou, Head of the EU Office of Compassion in World Farming, on ending the cage age in the EU. Neel Nanda, Research Engineer at DeepMind, on open problems in mechanistic interpretability. We are working hard on the program. Beyond the above talks (and many more talks and workshops!), you can expect office hours hosted by experts and EA orgs, fireside chats, group meetups and icebreakers, lightning talks from attendees, and unofficial satellite events. The tentative schedule is available here (all times are in UTC). Please note that the schedule is subject to change. The final schedule will be available on the Swapcard app, which we aim to launch next week. Taking action anywhere in the world We have already received 600 applications from people representing over 70 countries. We welcome all who have a genuine interest in learning more or connecting, including those who are new to effective altruism. If you are a highly-engaged EA, you can make a difference by being responsive to requests from first-time attendees. The map below shows the geographical distribution of the participants. We would love to see more applications. If you know someone who you think should attend the conference, please encourage them to apply by sending them this link: eagxvirtual.com. The deadline for applications is 11:59 pm UTC on Thursday, 16 November. Apply here if you haven't already. Dates and times The conference will be taking place from 10 am UTC on Friday, November 17th, until 11:59 pm UTC on Sunday, November 19th. We don't expect you to always be online; you can be flexible with your participation! It's completely okay if you can attend only on one of the days. Recordings will be available for registered attendees, so you can watch the sessions you missed later. Friday will feature introductory-level content for participants who are relatively new to EA and a career fair on Gather Town. Saturday and Sunday will have full-day schedules, starting at 7 am UTC each day. There will be a break in the program on Sunday between 2 am and 7 am UTC. Conference features Our main content and networking platform for the conference is Swapcard. We will share access to the app with all attendees on November 6 and provide guidance on how to use it and get the most out of the conference. We collaborate with EA Gather Town to make an always-available virtual venue for attendees to spark more connections and unstructured discussions throughout the conference. Extensive stewardship program.
We will highlight ambassadors across different cause areas whom you can talk to for advice or feedback on your career plans. Evergreen discussion space: we are inviting everyone to use the EA Anywhere Slack as a discussion space. No more Slacks that are abandoned immediately after the conference is over! Ways to contribute If you want to represent your organization at the career fair or host office hours, please fill out this form. Apply to give a Lightning talk if ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Neel Nanda on the Mechanistic Interpretability Researcher Mindset, published by Michaël Trazzi on September 22, 2023 on LessWrong. Some excerpts from my interview with Neel Nanda about how to productively carry out research in mechanistic interpretability. Posting this here since I believe his advice is relevant for building accurate world models in general. An Informal Definition Of Mechanistic Interpretability It's kind of this weird flavor of AI interpretability that says, "Bold hypothesis. Despite the entire edifice of established wisdom in machine learning saying that these models are bullshit, inscrutable black boxes, I'm going to assume there is some actual structure here. But the structure is not there because the model wants to be interpretable or because it wants to be nice to me. The structure is there because the model learns an algorithm, and the algorithms that are most natural to express in the model's structure and its particular architecture and stack of linear algebra are algorithms that make sense to humans." (context) Three Modes Of Mechanistic Interpretability Research: Confirming, Red Teaming And Gaining Surface Area I kind of feel a lot of my research style is dominated by this deep-seated conviction that models are comprehensible and that everything is fundamentally kind of obvious and that I should be able to just go inside the model and there should be this internal structure. And so one mode of research is I just have all of these hypotheses and guesses about what's going on. I generate experiment ideas for things that should be true if my hypothesis is true. And I just repeatedly try to confirm it. Another mode of research is trying to red team and break things, where I have this hypothesis, I do this experiment, I'm like, "oh my God, this is going so well", and then get kind of stressed because I'm concerned that I'm having wishful thinking and I try to break it and falsify it and come up with experiments that would show that actually life is complicated. A third mode of research is what I call "trying to gain surface area", where I just have a system that I'm pretty confused about. I just don't really know where to get started. Often, I'll just go and do things that I think will get me more information. Just go and plot stuff or follow random things I'm curious about in a fairly undirected fuzzy way. This mode of research has actually been the most productive for me. [...] You could paraphrase them as, "Isn't it really obvious what's going on?", "Oh man, am I so sure about this?" and "Fuck around and find out". (context) Strong Beliefs Weakly Held: Having Hypotheses But Being Willing To Be Surprised You can kind of think of it as "strong beliefs weakly held". I think you should be good enough that you can start to form hypotheses; being at the point where you can sit down, set a five minute timer and brainstorm what's going on and come up with four different hypotheses is just a much, much stronger research position than when you sit down and try to brainstorm and you come up with nothing. Yeah, maybe having two hypotheses is the best one. You want to have multiple hypotheses in mind. You also want to be aware that probably both of them are wrong, but you want to have enough engagement with the problem that you can generate experiment ideas.
Maybe one way to phrase it is if you don't have any idea what's going on, it's hard to notice what's surprising. And often noticing what's surprising is one of the most productive things you can do when doing research. (context) On The Benefits Of The Experimental Approach I think there is a strong trend among people, especially the kind of people who get drawn to alignment from very theory based arguments to go and just pure theory craft and play around with toy models and form beautiful, elegant hy...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper Walkthrough: Automated Circuit Discovery with Arthur Conmy, published by Neel Nanda on August 29, 2023 on The AI Alignment Forum. Arthur Conmy's Automated Circuit Discovery is a great paper that makes initial forays into automating parts of mechanistic interpretability (specifically, automatically finding a sparse subgraph for a circuit). In this three-part series of YouTube videos, I interview him about the paper, and we walk through it and discuss the key results and takeaways. We discuss the high-level point of the paper and what researchers should take away from it, the ACDC algorithm and its key nuances, existing baselines and how they adapted them to be relevant to circuit discovery, how well the algorithm works, and how you can even evaluate how well an interpretability method works. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against Almost Every Theory of Impact of Interpretability, published by Charbel-Raphaël on August 17, 2023 on LessWrong. Epistemic Status: I believe I am well-versed in this subject. I erred on the side of making claims that were too strong and allowing readers to disagree and start a discussion about precise points, rather than trying to edge-case every statement. I also think that using memes is important because safety ideas are boring and anti-memetic. So let's go! Many thanks to @scasper, @Sid Black, @Neel Nanda, @Fabien Roger, @Bogdan Ionut Cirstea, @WCargo, @Alexandre Variengien, @Jonathan Claybrough, @Edoardo Pona, @Andrea_Miotti, Diego Dorn, Angélina Gentaz, Clement Dumas, and Enzo Marsot for useful feedback and discussions. When I started this post, I began by critiquing the article A Longlist of Theories of Impact for Interpretability by Neel Nanda, but I later expanded the scope of my critique. Some ideas which are presented are not supported by anyone, but to explain the difficulties, I still need to 1. explain them and 2. criticize them. This gives the post an adversarial vibe; I'm sorry about that, and I think that doing research into interpretability, even if it's no longer what I consider a priority, is still commendable. How to read this document? Most of this document is not technical, except for the section "What does the end story of interpretability look like?", which can be mostly skipped at first. I expect this document to also be useful for people not doing interpretability research. The different sections are mostly independent, and I've added a lot of bookmarks to help modularize this post. If you have very little time, just read (this is also the part where I'm most confident): Auditing deception with Interp is out of reach (4 min); Enumerative safety critique (2 min); Technical Agendas with better Theories of Impact (1 min). Here is the list of claims that I will defend (bolded sections are the most important ones): The overall Theory of Impact is quite poor; Interp is not a good predictor of future systems; Auditing deception with interp is out of reach; What does the end story of interpretability look like? That's not clear at all (Enumerative safety? Reverse engineering? Olah's Interpretability dream? Retargeting the search? Relaxed adversarial training? Microscope AI?); Preventive measures against Deception seem much more workable; Steering the world towards transparency; Cognitive Emulations - Explainability By design; Interpretability May Be Overall Harmful; Outside view: The proportion of junior researchers doing Interp rather than other technical work is too high; So far my best ToI for interp: Nerd Sniping?; Even if we completely solve interp, we are still in danger; Technical Agendas with better Theories of Impact; Conclusion. Note: The purpose of this post is to criticize the Theory of Impact (ToI) of interpretability for deep learning models such as GPT-like models, and not the explainability and interpretability of small models. The emperor has no clothes? I gave a talk about the different risk models, followed by an interpretability presentation, then I got a problematic question: "I don't understand, what's the point of doing this?" Hum. Feature viz? (left image) Um, it's pretty, but is this useful? Is this reliable?
GradCam (a pixel attribution technique, like in the figure above right) - it's pretty, but I've never seen anybody use it in industry. Pixel attribution seems useful, but accuracy remains king. Induction heads? Ok, we are maybe on track to reverse-engineer the mechanism of regex in LLMs. Cool. The considerations in the last bullet points are based on feeling and are not real arguments. Furthermore, most mechanistic interpretability isn't even aimed at being useful right now. But in the rest of the post, we'll find out if...
Today's episode features stand-up comedian Neel Nanda and sex worker, writer and actress Bree Essrig. We discuss post nut clarity, pheromones, role play, pegging, dominant girls, foot fetishes and more. Whether or not vibrators outsource pleasure in the bedroom or just positively enhance it. Bree shares the strangest request she ever got from a client on OnlyFans. We learn Nina is actually vanilla… but don't worry she's still a slut, just a romantic one. FOLLOW OUR SOCIALS: https://www.flowcode.com/page/girlsonguys FOLLOW NINA: https://www.instagram.com/pizzaparty69 (00:00) Intro (01:15) Post Nut Clarity (02:59) Porn Categories (06:47) Vibrators in the Bedroom (08:28) Neel's Savior Michelle (14:24) Vibrators Pt. 2 (22:48) Dominant Girls and Roleplay (29:26) Fetish-Based Sex Work (30:18) Body Hair and Pheromones (34:30) Sh*t in a Box (36:31) Foot Fetish (48:09) She Put a Hair Tie Around My... (57:45) Bent D*cks
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Mech Interp Puzzle 2: Word2Vec Style Embeddings, published by Neel Nanda on July 28, 2023 on The AI Alignment Forum. Code can be found here. No prior knowledge of mech interp or language models is required to engage with this. Language model embeddings are basically a massive lookup table. The model "knows" a vocabulary of 50,000 tokens, and each one has a separate learned embedding vector. But these embeddings turn out to contain a shocking amount of structure! Notably, it's often linear structure, aka word2vec style structure. Word2Vec is a famous result (in old school language models, back in 2013!) that 'man - woman == king - queen'. Rather than being a black box lookup table, the embedded words were broken down into independent variables, "gender" and "royalty". Each variable gets its own direction, and the embedded word is seemingly the sum of its variables. One of the more striking examples of this I've found is a "number of characters per token" direction - if you do a simple linear regression mapping each token to the number of characters in it, this can be very cleanly recovered! (If you filter out ridiculous tokens, like 19979: 512 spaces.) Notably, this is a numerical feature, not a categorical feature - to go from three characters to four, or four to five, you just add this direction! This is in contrast to the model just learning to cluster tokens of length 3, of length 4, etc. Question 2.1: Why do you think the model cares about the "number of characters" feature? And why is it useful to store it as a single linear direction? There are tons more features to be uncovered! There are all kinds of fundamental syntax-level binary features that are represented strongly, such as "begins with a space". Question 2.2: Why is "begins with a space" an incredibly important feature for a language model to represent? (Playing around with a tokenizer may be useful for building intuition here.) You can even find some real word2vec style relationships between pairs of tokens! This is hard to properly search for, because most interesting entities are multiple tokens. One nice example of meaningful single token entities is common countries and capitals (idea borrowed from Merullo et al). If you take the average embedding difference for single token countries and capitals, this explains 18.58% of the variance of unseen countries! (0.25% is what I get for a randomly chosen vector.) Caveats: This isn't quite the level we'd expect for real word2vec (which should be closer to 100%), and cosine sim only tracks that the direction matters, not what its magnitude is (while word2vec should be constant magnitude, as it's additive). My intuition is that models think more in terms of meaningful directions though, and that the exact magnitude isn't super important for a binary variable. Question 2.3: A practical challenge: What other features can you find in the embedding? Here's the colab notebook I generated the above graphs from; it should be pretty plug and play. The three sections should give examples for looking for numerical variables (number of chars), categorical variables (begins with space) and relationships (country to capital). Here are some ideas - I encourage you to spend time brainstorming your own! Is a number; How frequent is it?
(Use pile-10k to get frequency data for the pile); Is all caps; Is the first token of a common multi-token word; Is a first name; Is a function word (the, a, of, etc); Is a punctuation character; Is unusually common in German (or language of your choice); The indentation level in code; Relationships between common English words and their French translations; Relationships between the male and female version of a word. Please share your thoughts and findings in the comments! (Please wrap them in spoiler tags.) Thanks for listening. To help us out with The Nonlinear Library o...
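A short sketch of the "number of characters per token" probe described above, using GPT-2's embedding matrix via TransformerLens and a plain scikit-learn linear regression. The token filtering and the even/odd train-test split are illustrative assumptions, not the exact setup in the linked colab.

```python
# Sketch: linear regression from GPT-2 token embeddings to token length in
# characters, evaluated on held-out tokens.
import numpy as np
from sklearn.linear_model import LinearRegression
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
W_E = model.W_E.detach().cpu().numpy()               # [d_vocab, d_model] embedding table

lengths = np.array([len(model.tokenizer.decode([i])) for i in range(W_E.shape[0])])
keep = lengths <= 20                                  # drop pathological tokens (e.g. long runs of spaces)

X, y = W_E[keep], lengths[keep]
reg = LinearRegression().fit(X[::2], y[::2])          # train on every other kept token
print("held-out R^2:", reg.score(X[1::2], y[1::2]))   # high R^2 => a clean linear "length" direction
```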
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Visible loss landscape basins don't correspond to distinct algorithms, published by Mikhail Samin on July 28, 2023 on LessWrong. Thanks to Justis, Arthur Conmy, Neel Nanda, Joseph Miller, and Tilman Räuker for their feedback on a draft. I feel like many people haven't noticed an important result of mechanistic interpretability analysis of grokking, and so haven't updated how they think about loss landscapes and algorithms that neural networks end up implementing. I think this has implications for alignment research. When thinking about grokking, people often imagine something like this: the neural network implements Algorithm 1 (e.g., memorizes the training data), achieves ~ the lowest loss available via memorization, then moves around the bottom of the Algorithm 1 basin and after a while, stumbles across a path to Algorithm 2 (e.g., the general algorithm for modular addition). But the mechanistic interpretability of grokking analysis has shown that this is not true! Approximately from the start of the training, Algorithm 1 is most of what the circuits are doing and what almost entirely determines the neural network's output; but at the same time, the entire time the neural network's parameters visibly move down the wider basin, they don't just become better at memorization; they increasingly implement the circuits for Algorithm 1 and the circuits for Algorithm 2, in superposition. (Neel Nanda et al. have shown that the circuits that at the end implement the general algorithm for modular addition start forming approximately at the start of the training: the gradient was mostly an arrow towards memorization, but also, immediately from the initialization of the weights, a bit of an arrow pointing towards the general algorithm. The circuits were gradually tuned throughout the training. The noticeable change in the test loss starts occurring when the circuits are already almost right.) A path through the loss landscape visible in 3D doesn't correspond to how and what the neural network is actually learning. Almost all of the changes to the loss are due to the increasingly good implementation of Algorithm 1; but apparently, the entire time, the gradient also points towards some faraway implementation of Algorithm 2. Somehow, the direction in which Algorithm 2 lies is also visible to the derivative, and moving the parameters in the direction the gradient points means mostly increasingly implementing Algorithm 1, and also increasingly implementing the faraway Algorithm 2. "Grokking", visible in the test loss, is due to the change that happens when the parameters already implement Algorithm 2 accurately enough for the switch from mostly outputting the results of an implementation of Algorithm 1 to the results of an improving implementation of Algorithm 2 not to hurt the performance. Once it's the case, the neural network puts more weight into Algorithm 2 and at the same time quickly tunes it to be even more accurate (which is increasingly easy as the output is increasingly determined by the implementation of Algorithm 2). This is something many people seem to have missed. I did not expect it to be the case, was surprised, and updated how I think about loss landscapes. Does this generalize? Maybe. 
I'm not sure whether it's correct to generalize from the mechanistic interpretability analysis of grokking to neural networks in general; real LLMs are under-parameterised while the grokking model is very over-parameterised, but I guess it might be reasonable to expect that this is how deep learning generally works. People seem to think that multi-dimensional loss landscapes of neural networks have basins for specific algorithms, and neural networks get into these depending on how relatively large these basins are, which might be caused by how simple the algorithms are, how path-depe...
In this wide-ranging conversation, Tim Scarfe interviews Neel Nanda, a researcher at DeepMind working on mechanistic interpretability, which aims to understand the algorithms and representations learned by machine learning models. Neel discusses how models can represent their thoughts using motifs, circuits, and linear directional features which are often communicated via a "residual stream", an information highway models use to pass information between layers. Neel argues that "superposition", the ability for models to represent more features than they have neurons, is one of the biggest open problems in interpretability. This is because superposition thwarts our ability to understand models by decomposing them into individual units of analysis. Despite this, Neel remains optimistic that ambitious interpretability is possible, citing examples like his work reverse engineering how models do modular addition. However, Neel notes we must start small, build rigorous foundations, and not assume our theoretical frameworks perfectly match reality. The conversation turns to whether models can have goals or agency, with Neel arguing they likely can based on heuristics like models executing long term plans towards some objective. However, we currently lack techniques to build models with specific goals, meaning any goals would likely be learned or emergent. Neel highlights how induction heads, circuits models use to track long range dependencies, seem crucial for phenomena like in-context learning to emerge. On the existential risks from AI, Neel believes we should avoid overly confident claims that models will or will not be dangerous, as we do not understand them enough to make confident theoretical assertions. However, models could pose risks through being misused, having undesirable emergent properties, or being imperfectly aligned. Neel argues we must pursue rigorous empirical work to better understand and ensure model safety, avoid "philosophizing" about definitions of intelligence, and focus on ensuring researchers have standards for what it means to decide a system is "safe" before deploying it. Overall, a thoughtful conversation on one of the most important issues of our time. Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk Neel Nanda: https://www.neelnanda.io/ TOC [00:00:00] Introduction and Neel Nanda's Interests (walk and talk) [00:03:15] Mechanistic Interpretability: Reverse Engineering Neural Networks [00:13:23] Discord questions [00:21:16] Main interview kick-off in studio [00:49:26] Grokking and Sudden Generalization [00:53:18] The Debate on Systematicity and Compositionality [01:19:16] How do ML models represent their thoughts [01:25:51] Do Large Language Models Learn World Models? [01:53:36] Superposition and Interference in Language Models [02:43:15] Transformers discussion [02:49:49] Emergence and In-Context Learning [03:20:02] Superintelligence/XRisk discussion Transcript: https://docs.google.com/document/d/1FK1OepdJMrqpFK-_1Q3LQN6QLyLBvBwWW_5z8WrS1RI/edit?usp=sharing Refs: https://docs.google.com/document/d/115dAroX0PzSduKr5F1V4CWggYcqIoSXYBhcxYktCnqY/edit?usp=sharing
[Bonus Episode] Future of Life Institute Podcast host Gus Docker interviews Conjecture CEO Connor Leahy to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev Future of Life Institute is the organization that recently published an open letter calling for a six-month pause on training new AI systems. FLI was founded by Jaan Tallinn, who we interviewed in Episode 16 of The Cognitive Revolution. We think their podcast is excellent. They frequently interview critical thinkers in AI like Neel Nanda, Ajeya Cotra, and Connor Leahy - whose episode we found particularly fascinating and are airing for our audience today. The FLI Podcast also recently interviewed Nathan Labenz for a 2-part episode: https://futureoflife.org/podcast/nathan-labenz-on-how-ai-will-transform-the-economy/ SUBSCRIBE: Future of Life Institute Podcast: Apple: https://podcasts.apple.com/us/podcast/future-of-life-institute-podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade-offs, and dynamics of constructing high-performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck, Patty McCord. https://link.chtbl.com/hrheretics TIMESTAMPS: (00:00) Episode introduction (01:55) GPT-4 (18:30) "Magic" in machine learning (29:43) Cognitive emulations (40:00) Machine learning VS explainability (49:50) Human data = human AI? (1:01:50) Analogies for cognitive emulations (1:28:10) Demand for human-like AI (1:33:50) Aligning superintelligence If you'd like to listen to Part 2 of this interview with Connor Leahy, you can head here: https://podcasts.apple.com/us/podcast/connor-leahy-on-the-state-of-ai-and-alignment-research/id1170991978?i=1000609972001
Patreon: www.patreon.com/thetastelessgentlemen Alex: www.instagram.com/tasteless_alex/ Dom: www.instagram.com/djdomking/ www.twitch.tv/djdomking Schoeny: www.instagram.com/hangtymemusic www.twitch.tv/djschoeny djschoeny.com/ www.youtube.com/user/djschoeny Scoop: www.twitch.tv/scoopttg Audio Version: Spotify: open.spotify.com/show/3c4htUxSEpZ…rLRyeL4bXBThLhwA Soundcloud: @thetastelessgentlemen Itunes: podcasts.apple.com/us/podcast/the-…en/id1050400644 Stitcher: www.stitcher.com/show/the-tasteless-gentlemen Youtube: www.youtube.com/c/TheTastelessGentlemen Please sub, thumbs up, rate on iTunes, follow on twitter, or whatever… it really helps spread the good word. FB: www.facebook.com/Tastelessgentlemenshow IG: www.instagram.com/thetastelessgentlemen/ TW: twitter.com/TastelessGents Outro – Born Sinners – Heavenly – audiograb.com/8msZpY5OfA Intro – Monkeys Spinning Monkeys – Kevin MacLeod (incompetech.com)