Generative AI

Rethinking Global Text Conditioning in Diffusion Transformers

Nikita Starodubcev, Daniil Pakhomov, Zongze Wu, Ilya Drobyshevskiy, Yuchen Liu, Zhonghao Wang, Yuqian Zhou, Zhe Lin, Dmitry Baranchuk
Published: February 9, 2026
Authors: 9
Word count: 11,212

Pooled CLIP embeddings are useless for conditioning but powerful as guidance signals.

Abstract

Diffusion transformers typically incorporate textual information via attention layers and a modulation mechanism using a pooled text embedding. Nevertheless, recent approaches discard modulation-based text conditioning and rely exclusively on attention. In this paper, we address whether modulation-based text conditioning is necessary and whether it can provide any performance advantage. Our analysis shows that, in its conventional usage, the pooled embedding contributes little to overall performance, suggesting that attention alone is generally sufficient for faithfully propagating prompt information. However, we reveal that the pooled embedding can provide significant gains when used from a different perspective: serving as guidance and enabling controllable shifts toward more desirable properties. This approach is training-free, simple to implement, incurs negligible runtime overhead, and can be applied to various diffusion models, bringing improvements across diverse tasks, including text-to-image/video generation and image editing.

Key Takeaways

  1. Pooled CLIP embeddings contribute little to modern diffusion transformers such as FLUX and HiDream when used as conventional conditioning.

  2. Repurposing pooled embeddings as active guidance signals, rather than passive conditioning, makes them substantially more effective.

  3. Guidance signals built from the difference between positive and negative prompt embeddings steer generation toward desirable aesthetic qualities.
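The third takeaway amounts to a simple vector operation: shift the model's pooled text embedding along the direction pointing from a negative-attribute prompt toward a positive-attribute prompt. Below is a minimal NumPy sketch of that idea; `pooled_embed` is a hypothetical stand-in for the CLIP text encoder's pooled output (a real pipeline would call the actual encoder), and `guided_pooled` is an illustrative name, not the paper's API.

```python
import hashlib
import numpy as np

def pooled_embed(prompt: str, dim: int = 8) -> np.ndarray:
    """Hypothetical stand-in for a pooled CLIP text embedding.

    A real pipeline would run the prompt through the CLIP text
    encoder and take its pooled output; here we derive a
    deterministic unit vector from a hash of the prompt so the
    sketch is self-contained.
    """
    seed = int(hashlib.sha256(prompt.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def guided_pooled(prompt: str, positive: str, negative: str,
                  scale: float = 2.0) -> np.ndarray:
    """Shift the prompt's pooled embedding along the direction
    from the negative-attribute prompt to the positive one."""
    direction = pooled_embed(positive) - pooled_embed(negative)
    return pooled_embed(prompt) + scale * direction

# The shifted vector would replace the pooled embedding fed to the
# diffusion transformer's modulation layers at sampling time.
shifted = guided_pooled("a photo of a cat",
                        positive="highly aesthetic, detailed",
                        negative="blurry, low quality")
```

Here `scale` plays the role of a guidance weight: 0 recovers the original conditioning, while larger values push generations further toward the positive attribute.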

Limitations

  • The paper only tests on specific state-of-the-art models, limiting generalizability across all diffusion transformer architectures.

  • Effectiveness of the guidance approach depends on carefully crafted positive and negative prompts for desired results.

Keywords

diffusion transformers, attention layers, modulation mechanism, pooled text embedding, text-to-image generation, image editing, controllable generation
