Linear representations in language models can change dramatically over a conversation

Andrew Kyle Lampinen, Yuxuan Li, Eghbal Hosseini, Sangnie Bhardwaj, Murray Shanahan
Published: January 28, 2026
Authors: 5
Word count: 8,854
Code: included

Language models dynamically change beliefs in conversations.

Abstract

Language model representations often contain linear directions that correspond to high-level concepts. Here, we study the dynamics of these representations: how representations evolve along these dimensions over the course of (simulated) conversations. We find that linear representations can change dramatically over a conversation; for example, information that is represented as factual at the beginning of a conversation can be represented as non-factual at the end, and vice versa. These changes are content-dependent: while representations of conversation-relevant information may change, generic information is generally preserved. The changes are robust even for dimensions that disentangle factuality from more superficial response patterns, and they occur across model families and across layers of a model. These representation changes do not require on-policy conversations; even replaying a conversation script written by an entirely different model can produce similar changes. However, adaptation is much weaker when the context merely contains a sci-fi story that is framed explicitly as fiction. We also show that steering along a representational direction can have dramatically different effects at different points in a conversation. These results are consistent with the idea that representations may evolve in response to the model playing a particular role that is cued by a conversation. Our findings may pose challenges for interpretability and steering; in particular, they imply that it may be misleading to use static interpretations of features or directions, or probes that assume a particular range of feature values consistently corresponds to a particular ground-truth value. However, these kinds of representational dynamics also point to exciting new research directions for understanding how models adapt to context.
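
The abstract describes measuring how activations move along a fixed linear direction as a conversation unfolds. As a rough illustration (not the authors' code), the sketch below tracks the projection of a hidden state onto a hypothetical "factuality" direction after each conversational turn. The model name, layer index, and the random direction are placeholder assumptions; a real direction would be fit, for example by a linear probe, on labeled factual versus non-factual statements.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper studies several model families
LAYER = 6            # illustrative middle layer, not the paper's choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Placeholder direction: in practice this would come from a probe trained
# to separate factual from non-factual statements.
direction = torch.randn(model.config.hidden_size)
direction = direction / direction.norm()

turns = [
    "User: Tell me a fact about the Moon.",
    "Assistant: The Moon orbits the Earth.",
    "User: Now let's write a sci-fi story where the Moon is hollow.",
]

context = ""
for turn in turns:
    context += turn + "\n"
    inputs = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # Project the final token's hidden state at the chosen layer onto the direction.
    h = out.hidden_states[LAYER][0, -1]
    proj = torch.dot(h, direction).item()
    print(f"projection {proj:+.3f} after: {turn!r}")
```

The abstract also notes that steering along a direction can have different effects at different points in a conversation. Continuing the sketch above, one common way to steer (an assumed setup, not the paper's specific method) is to add a scaled copy of the direction to the residual stream via a forward hook, then compare generations steered early versus late in the conversation:

```python
alpha = 4.0  # illustrative steering strength

def steer(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    return (output[0] + alpha * direction,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tokenizer(context, return_tensors="pt").input_ids
steered = model.generate(ids, max_new_tokens=20, do_sample=False)
handle.remove()
print(tokenizer.decode(steered[0, ids.shape[1]:]))
```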

Key Takeaways

  1. Language models adapt their internal representations during conversations.

  2. Model representations can flip between factual and non-factual.

  3. Representational changes have implications for interpretability and safety.

Limitations

  • Experiments are based on a small set of conversations.

  • The mechanisms behind the representational changes are not explored.

Keywords

linear directions, language model representations, conversation dynamics, factual representation, representational drift, context adaptation, model steering, interpretability
