
Transformers without normalization


Normalization layers are ubiquitous in modern neural networks and have long been considered essential. This work demonstrates that Transformers without normalization can achieve the same or better performance using a remarkably simple technique. We introduce Dynamic Tanh (DyT), an element-wise operation $\mathrm{DyT}(x) = \tanh(\alpha x)$, as a drop-in replacement for normalization layers in Transformers. DyT is inspired by the observation that layer normalization in Transformers often produces tanh-like, $S$-shaped input-output mappings. By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning. We validate the effectiveness of Transformers with DyT across diverse settings, ranging from recognition to generation, supervised to self-supervised learning, and computer vision to language models. These findings challenge the conventional understanding that normalization layers are indispensable in modern neural networks, and offer new insights into their role in deep networks.
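To make the idea concrete, below is a minimal sketch of how such a layer could look as a PyTorch module, built around the formula $\mathrm{DyT}(x) = \tanh(\alpha x)$ from the abstract. The learnable per-channel scale and shift (mirroring LayerNorm's affine parameters), the 0.5 initialization for $\alpha$, and all names are illustrative assumptions rather than the paper's reference code.

```python
import torch
import torch.nn as nn


class DyT(nn.Module):
    """Element-wise Dynamic Tanh: y = weight * tanh(alpha * x) + bias.

    A sketch of a drop-in replacement for LayerNorm in a Transformer block.
    The affine weight/bias and the 0.5 init for alpha are assumptions.
    """

    def __init__(self, num_features: int, alpha_init: float = 0.5):
        super().__init__()
        # Learnable scalar controlling how sharply tanh squashes activations.
        self.alpha = nn.Parameter(torch.full((1,), alpha_init))
        # Per-channel affine parameters, analogous to LayerNorm's elementwise affine.
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No mean/variance statistics are computed; the operation is purely element-wise.
        return torch.tanh(self.alpha * x) * self.weight + self.bias


# Usage: place it where a Transformer block would otherwise use nn.LayerNorm.
x = torch.randn(2, 16, 768)      # (batch, tokens, hidden dim)
layer = DyT(num_features=768)
print(layer(x).shape)            # torch.Size([2, 16, 768])
```

Unlike LayerNorm, nothing here depends on the statistics of the other elements in the token, which is what makes the operation purely element-wise.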



Read more on:

Transformers

normalization

Related news:

The Tradeoffs of SSMs and Transformers

Understanding Transformers via N-gram Statistics

Beyond transformers: Nvidia’s MambaVision aims to unlock faster, cheaper enterprise computer vision