Transformer (deep learning architecture)
{{Short description|Algorithm for modelling sequential data}}
{{Machine learning|Artificial neural network}}
File:Transformer,_full_architecture.png
The transformer is a deep learning architecture that was developed by researchers at Google and is based on the multi-head attention mechanism, which was proposed in the 2017 paper "Attention Is All You Need".{{cite journal |last1=Vaswani |first1=Ashish |author1-link=Ashish Vaswani |last2=Shazeer |first2=Noam |last3=Parmar |first3=Niki |last4=Uszkoreit |first4=Jakob |last5=Jones |first5=Llion |last6=Gomez |first6=Aidan N |author6-link=Aidan Gomez |last7=Kaiser |first7=Łukasz |last8=Polosukhin |first8=Illia |date=2017 |title=Attention is All you Need |url=https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=30}} Text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished.
Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM).{{cite journal |last1=Hochreiter |first1=Sepp |author-link=Sepp Hochreiter |last2=Schmidhuber |first2=Jürgen |author-link2=Jürgen Schmidhuber |date=1 November 1997 |title=Long Short-Term Memory |journal=Neural Computation |volume=9 |issue=8 |pages=1735–1780 |doi=10.1162/neco.1997.9.8.1735 |issn=0899-7667 |pmid=9377276 |s2cid=1915014}} Later variations have been widely adopted for training large language models (LLM) on large (language) datasets.{{cite web|url=https://openai.com/blog/better-language-models/|title=Better Language Models and Their Implications|date=2019-02-14|website=OpenAI|access-date=2019-08-25|archive-date=2020-12-19|archive-url=https://web.archive.org/web/20201219132206/https://openai.com/blog/better-language-models/|url-status=live}}
Transformers were first developed as an improvement over previous architectures for machine translation,{{cite arXiv |eprint=1409.0473 |class=cs.CL |last1=Bahdanau |first2=Kyunghyun |last2=Cho |title=Neural Machine Translation by Jointly Learning to Align and Translate |date=September 1, 2014 |last3=Bengio |first3=Yoshua}}{{cite arXiv |eprint=1508.04025 |class=cs.CL |first1=Minh-Thang |last1=Luong |first2=Hieu |last2=Pham |title=Effective Approaches to Attention-based Neural Machine Translation |date=August 17, 2015 |last3=Manning |first3=Christopher D.}} but have found many applications since. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning,{{Cite journal |last1=Parisotto |first1=Emilio |last2=Song |first2=Francis |last3=Rae |first3=Jack |last4=Pascanu |first4=Razvan |last5=Gulcehre |first5=Caglar |last6=Jayakumar |first6=Siddhant |last7=Jaderberg |first7=Max |last8=Kaufman |first8=Raphaël Lopez |last9=Clark |first9=Aidan |last10=Noury |first10=Seb |last11=Botvinick |first11=Matthew |last12=Heess |first12=Nicolas |last13=Hadsell |first13=Raia |date=2020-11-21 |title=Stabilizing Transformers for Reinforcement Learning |url=https://proceedings.mlr.press/v119/parisotto20a.html |journal=Proceedings of the 37th International Conference on Machine Learning |language=en |publisher=PMLR |pages=7487–7498}} audio,{{cite arXiv|eprint=2212.04356 |last1=Radford |first1=Alec |author2=Jong Wook Kim |last3=Xu |first3=Tao |last4=Brockman |first4=Greg |last5=McLeavey |first5=Christine |last6=Sutskever |first6=Ilya |title=Robust Speech Recognition via Large-Scale Weak Supervision |year=2022 |class=eess.AS }} multimodal learning, robotics,{{Cite journal |last1=Monastirsky |first1=Maxim |last2=Azulay |first2=Osher |last3=Sintov |first3=Avishai |date=February 2023 |title=Learning to Throw With a Handful of Samples Using Decision Transformers |url=https://ieeexplore.ieee.org/document/9984828 |journal=IEEE Robotics and Automation Letters |volume=8 |issue=2 |pages=576–583 |doi=10.1109/LRA.2022.3229266 |issn=2377-3766}} and even playing chess.{{cite arXiv |last1=Ruoss |first1=Anian |last2=Delétang |first2=Grégoire |last3=Medapati |first3=Sourabh |last4=Grau-Moya |first4=Jordi |last5=Wenliang |first5=Li |last6=Catt |first6=Elliot |last7=Reid |first7=John |last8=Genewein |first8=Tim |date=2024-02-07 |title=Grandmaster-Level Chess Without Search |class=cs.LG |eprint=2402.04494v1}} It has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs){{cite book|last1=Wolf |first1=Thomas| last2=Debut |first2=Lysandre| last3=Sanh |first3=Victor |last4=Chaumond |first4=Julien| last5=Delangue |first5=Clement| last6=Moi |first6=Anthony |last7=Cistac |first7=Pierric |last8=Rault |first8=Tim |last9=Louf |first9=Remi |last10=Funtowicz |first10=Morgan |last11=Davison |first11=Joe |last12=Shleifer |first12=Sam |last13=von Platen |first13=Patrick |last14=Ma |first14=Clara |last15=Jernite |first15=Yacine |last16=Plu |first16=Julien |last17=Xu |first17=Canwen |last18=Le Scao |first18=Teven |last19=Gugger |first19=Sylvain |last20=Drame |first20=Mariama |last21=Lhoest |first21=Quentin |last22=Rush |first22=Alexander |title=Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations |chapter=Transformers: State-of-the-Art Natural Language Processing |year=2020|pages=38–45 |doi=10.18653/v1/2020.emnlp-demos.6 |s2cid=208117506}} and BERT{{cite 
web|url=http://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html|title=Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing|website=Google AI Blog|date=2 November 2018 |access-date=2019-08-25|archive-date=2021-01-13|archive-url=https://web.archive.org/web/20210113211449/https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html|url-status=live}} (bidirectional encoder representations from transformers).{{TOC limit|3}}
History
{{See also|Timeline of machine learning}}
= Predecessors =
For many years, sequence modelling and generation were done using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough was LSTM (1995),{{NoteTag|Gated recurrent units (2014) further reduced its complexity.}} a RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units.{{Cite journal |last1=Feldman |first1=J. A. |last2=Ballard |first2=D. H. |date=1982-07-01 |title=Connectionist models and their properties |url=https://www.sciencedirect.com/science/article/pii/S0364021382800013 |journal=Cognitive Science |volume=6 |issue=3 |pages=205–254 |doi=10.1016/S0364-0213(82)80001-3 |issn=0364-0213}} Neural networks using multiplicative units were later called sigma-pi networks{{Cite book |last1=Rumelhart |first1=David E. |url=https://stanford.edu/~jlmcc/papers/PDP/Chapter2.pdf |title=Parallel Distributed Processing, Volume 1: Explorations in the Microstructure of Cognition: Foundations, Chapter 2 |last2=McClelland |first2=James L. |last3=Hinton |first3=Geoffrey E. |date=1987-07-29 |publisher=Bradford Books |isbn=978-0-262-68053-0 |location=Cambridge, Mass |language=en}} or higher-order networks.{{Cite journal |last1=Giles |first1=C. Lee |last2=Maxwell |first2=Tom |date=1987-12-01 |title=Learning, invariance, and generalization in high-order neural networks |url=https://opg.optica.org/abstract.cfm?URI=ao-26-23-4972 |journal=Applied Optics |language=en |volume=26 |issue=23 |pages=4972–4978 |doi=10.1364/AO.26.004972 |pmid=20523475 |issn=0003-6935}} LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers.
However, LSTM still used sequential processing, like most other RNNs.{{NoteTag|Some architectures, such as RWKV or state space models, avoid the issue.}} Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence.
Modern Transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window. The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input.{{cite journal |last1=Schmidhuber |first1=Jürgen |author-link1=Jürgen Schmidhuber |date=1992 |title=Learning to control fast-weight memories: an alternative to recurrent nets. |url=https://archive.org/download/wikipedia-scholarly-sources-corpus/10.1162.zip/10.1162%252Fneco.1992.4.1.131.pdf |journal=Neural Computation |volume=4 |issue=1 |pages=131–139 |doi=10.1162/neco.1992.4.1.131 |s2cid=16683347}} One of its two networks has "fast weights" or "dynamic links" (1981).Christoph von der Malsburg: The correlation theory of brain function. Internal Report 81-2, MPI Biophysical Chemistry, 1981. http://cogprints.org/1380/1/vdM_correlation.pdf See Reprint in Models of Neural Networks II, chapter 2, pages 95-119. Springer, Berlin, 1994.Jerome A. Feldman, "Dynamic connections in neural networks," Biological Cybernetics, vol. 46, no. 1, pp. 27-39, Dec. 1982.{{Cite journal |last1=Hinton |first1=Geoffrey E. |last2=Plaut |first2=David C. |date=1987 |title=Using Fast Weights to Deblur Old Memories |url=https://escholarship.org/uc/item/0570j1dp |journal=Proceedings of the Annual Meeting of the Cognitive Science Society |language=en |volume=9}} A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries. This was later shown to be equivalent to the unnormalized linear Transformer.{{cite conference |last1=Katharopoulos |first1=Angelos |last2=Vyas |first2=Apoorv |last3=Pappas |first3=Nikolaos |last4=Fleuret |first4=François |date=2020 |title=Transformers are RNNs: Fast autoregressive Transformers with linear attention |url=https://proceedings.mlr.press/v119/katharopoulos20a.html |publisher=PMLR |pages=5156–5165 |book-title=ICML 2020}}{{cite conference |last1=Schlag |first1=Imanol |last2=Irie |first2=Kazuki |last3=Schmidhuber |first3=Jürgen |author-link3=Juergen Schmidhuber |date=2021 |title=Linear Transformers Are Secretly Fast Weight Programmers |publisher=Springer |pages=9355–9366 |book-title=ICML 2021}}
= Attention with seq2seq =
{{Main|Seq2seq#History}}
The idea of encoder-decoder sequence transduction had been developed in the early 2010s (see previous papers{{Cite book |last1=Cho |first1=Kyunghyun |last2=van Merriënboer |first2=Bart |last3=Gulcehre |first3=Caglar |last4=Bahdanau |first4=Dzmitry |last5=Bougares |first5=Fethi |last6=Schwenk |first6=Holger |last7=Bengio |first7=Yoshua |chapter=Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation |date=October 2014 |editor-last=Moschitti |editor-first=Alessandro |editor2-last=Pang |editor2-first=Bo |editor3-last=Daelemans |editor3-first=Walter |title=Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) |chapter-url=https://aclanthology.org/D14-1179 |location=Doha, Qatar |publisher=Association for Computational Linguistics |pages=1724–1734 |doi=10.3115/v1/D14-1179|arxiv=1406.1078 }}{{cite arXiv |eprint=1409.3215 |class=cs.CL |first1=Ilya |last1=Sutskever |first2=Oriol |last2=Vinyals |title=Sequence to sequence learning with neural networks |date=14 Dec 2014 |last3=Le |first3=Quoc Viet}} [first version posted to arXiv on 10 Sep 2014]). The papers most commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.
A 380M-parameter model for machine translation uses two long short-term memories (LSTM). Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRU) instead of LSTM. Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.{{cite arXiv |eprint=1412.3555 |class=cs.NE |first1=Junyoung |last1=Chung |first2=Caglar |last2=Gulcehre |title=Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling |last3=Cho |first3=KyungHyun |last4=Bengio |first4=Yoshua |year=2014}}{{citation |last1=Gruber |first1=N. |title=Are GRU cells more specific and LSTM cells more sensitive in motive classification of text? |journal=Frontiers in Artificial Intelligence |volume=3 |page=40 |year=2020 |doi=10.3389/frai.2020.00040 |pmc=7861254 |pmid=33733157 |s2cid=220252321 |last2=Jockisch |first2=A. |doi-access=free}}
These early seq2seq models had no attention mechanism, and the state vector was accessible only after the last word of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.{{Cite journal |last1=Sutskever |first1=Ilya |last2=Vinyals |first2=Oriol |last3=Le |first3=Quoc V |date=2014 |title=Sequence to Sequence Learning with Neural Networks |url=https://proceedings.neurips.cc/paper/2014/hash/a14ac55a4f27472c5d894ec1c3c743d2-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=27|arxiv=1409.3215 }}
The RNNsearch model introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name reflects that the model "emulates searching through a source sentence during decoding a translation".
The relative performances were compared between global (that of RNNsearch) and local (sliding window) attention model architectures for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.{{Cite arXiv |eprint=1508.04025 |class=cs.CL |first1=Minh-Thang |last1=Luong |first2=Hieu |last2=Pham |title=Effective Approaches to Attention-based Neural Machine Translation |date=2015 |last3=Manning |first3=Christopher D.}}
In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM.{{cite arXiv |eprint=1609.08144 |class=cs.CL |first1=Yonghui |last1=Wu |first2=Mike |last2=Schuster |title=Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation |date=2016-09-01 |display-authors=1 |last3=Chen |first3=Zhifeng |last4=Le |first4=Quoc V. |last5=Norouzi |first5=Mohammad |last6=Macherey |first6=Wolfgang |last7=Krikun |first7=Maxim |last8=Cao |first8=Yuan |last9=Gao |first9=Qin |last10=Macherey |first10=Klaus |last11=Klingner |first11=Jeff |last12=Shah |first12=Apurva |last13=Johnson |first13=Melvin |last14=Liu |first14=Xiaobing |last15=Kaiser |first15=Łukasz}} It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.{{cite news |last=Lewis-Kraus |first=Gideon |date=2016-12-14 |title=The Great A.I. Awakening |url=https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html |archive-url=https://web.archive.org/web/20230524052626/https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html |archive-date=24 May 2023 |access-date=2023-06-22 |work=The New York Times |issn=0362-4331}}
= Parallelizing attention =
{{main|Attention (machine learning)#History}}
Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved SOTA result in textual entailment with an order of magnitude fewer parameters than LSTMs.{{cite arXiv|last1=Parikh |first1=Ankur P. |title=A Decomposable Attention Model for Natural Language Inference |date=2016-09-25 |eprint=1606.01933 |last2=Täckström |first2=Oscar |last3=Das |first3=Dipanjan |last4=Uszkoreit |first4=Jakob|class=cs.CL }} One of its authors, Jakob Uszkoreit, suspected that attention without recurrence is sufficient for language translation, thus the title "attention is all you need".{{Cite magazine |last=Levy |first=Steven |title=8 Google Employees Invented Modern AI. Here's the Inside Story |url=https://www.wired.com/story/eight-google-employees-invented-modern-ai-transformers-paper/ |url-status=live |archive-url=https://web.archive.org/web/20240320101528/https://www.wired.com/story/eight-google-employees-invented-modern-ai-transformers-paper/ |archive-date=20 Mar 2024 |access-date=2024-08-06 |magazine=Wired |language=en-US |issn=1059-1028}} That hypothesis was against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical. In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs.{{Cite book |last1=Cheng |first1=Jianpeng |title=Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing |last2=Dong |first2=Li |last3=Lapata |first3=Mirella |date=November 2016 |publisher=Association for Computational Linguistics |editor-last=Su |editor-first=Jian |location=Austin, Texas |pages=551–561 |chapter=Long Short-Term Memory-Networks for Machine Reading |doi=10.18653/v1/D16-1053 |editor2-last=Duh |editor2-first=Kevin |editor3-last=Carreras |editor3-first=Xavier |chapter-url=https://aclanthology.org/D16-1053/}}
In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance. This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor to its widespread use in large neural networks.{{Citation |last1=Peng |first1=Bo |title=RWKV: Reinventing RNNs for the Transformer Era |date=2023-12-10 |arxiv=2305.13048 |last2=Alcaide |first2=Eric |last3=Anthony |first3=Quentin |last4=Albalak |first4=Alon |last5=Arcadinho |first5=Samuel |last6=Biderman |first6=Stella |last7=Cao |first7=Huanqi |last8=Cheng |first8=Xin |last9=Chung |first9=Michael}}
{{anchor|Transformer boom}}
= AI boom era =
Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles.{{Cite magazine |last=Marche |first=Stephen |date=2024-08-23 |title=Was Linguistic A.I. Created by Accident? |url=https://www.newyorker.com/science/annals-of-artificial-intelligence/was-linguistic-ai-created-by-accident |access-date=2024-08-27 |magazine=The New Yorker |language=en-US |issn=0028-792X}} The transformer architecture is now used in many generative models that contribute to the ongoing AI boom.
In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model.{{cite arXiv |eprint=1810.04805v2 |class=cs.CL |first1=Jacob |last1=Devlin |first2=Ming-Wei |last2=Chang |title=BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding |date=11 October 2018 |last3=Lee |first3=Kenton |last4=Toutanova |first4=Kristina}} In October 2019, Google started using BERT to process search queries.{{Cite web |date=2020-10-15 |title=Google: BERT now used on almost every English query |url=https://searchengineland.com/google-bert-used-on-almost-every-english-query-342193 |access-date=2020-11-24 |website=Search Engine Land}} In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model.{{Cite web |title=Recent Advances in Google Translate |url=http://research.google/blog/recent-advances-in-google-translate/ |access-date=2024-05-08 |website=research.google |language=en}}
Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly{{Cite web |title=The inside story of how ChatGPT was built from the people who made it |url=https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/ |access-date=2024-08-06 |website=MIT Technology Review |language=en}} popular, triggering a boom around large language models.{{cite web |date=June 11, 2018 |title=Improving language understanding with unsupervised learning |url=https://openai.com/research/language-unsupervised |url-status=live |archive-url=https://web.archive.org/web/20230318210736/https://openai.com/research/language-unsupervised |archive-date=2023-03-18 |access-date=2023-03-18 |website=openai.com}}{{Citation |title=finetune-transformer-lm |date=June 11, 2018 |url=https://github.com/openai/finetune-transformer-lm |access-date=2023-05-01 |publisher=OpenAI}}
Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer,{{cite arXiv |eprint=2010.11929 |class=cs.CV |first1=Alexey |last1=Dosovitskiy |first2=Lucas |last2=Beyer |title=An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale |date=2021-06-03 |last3=Kolesnikov |first3=Alexander |last4=Weissenborn |first4=Dirk |last5=Zhai |first5=Xiaohua |last6=Unterthiner |first6=Thomas |last7=Dehghani |first7=Mostafa |last8=Minderer |first8=Matthias |last9=Heigold |first9=Georg |last10=Gelly |first10=Sylvain |last11=Uszkoreit |first11=Jakob}} speech recognition, robotics,{{Citation |last1=Chen |first1=Lili |title=Decision Transformer: Reinforcement Learning via Sequence Modeling |date=2021-06-24 |arxiv=2106.01345 |last2=Lu |first2=Kevin |last3=Rajeswaran |first3=Aravind |last4=Lee |first4=Kimin |last5=Grover |first5=Aditya |last6=Laskin |first6=Michael |last7=Abbeel |first7=Pieter |last8=Srinivas |first8=Aravind |last9=Mordatch |first9=Igor}} and multimodal.{{Citation |last1=Choromanski |first1=Krzysztof |title=Rethinking Attention with Performers |date=2022-11-19 |arxiv=2009.14794 |last2=Likhosherstov |first2=Valerii |last3=Dohan |first3=David |last4=Song |first4=Xingyou |last5=Gane |first5=Andreea |last6=Sarlos |first6=Tamas |last7=Hawkins |first7=Peter |last8=Davis |first8=Jared |last9=Mohiuddin |first9=Afroz}} The vision transformer, in turn, stimulated new developments in convolutional neural networks.{{Cite conference |last1=Liu |first1=Zhuang |last2=Mao |first2=Hanzi |last3=Wu |first3=Chao-Yuan |last4=Feichtenhofer |first4=Christoph |last5=Darrell |first5=Trevor |last6=Xie |first6=Saining |date=2022 |conference=Conference on Computer Vision and Pattern Recognition |title=A ConvNet for the 2020s |url=https://openaccess.thecvf.com/content/CVPR2022/html/Liu_A_ConvNet_for_the_2020s_CVPR_2022_paper.html |language=en |pages=11976–11986}} Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024),{{Citation |last1=Esser |first1=Patrick |title=Scaling Rectified Flow Transformers for High-Resolution Image Synthesis |date=2024-03-05 |arxiv=2403.03206 |last2=Kulal |first2=Sumith |last3=Blattmann |first3=Andreas |last4=Entezari |first4=Rahim |last5=Müller |first5=Jonas |last6=Saini |first6=Harry |last7=Levi |first7=Yam |last8=Lorenz |first8=Dominik |last9=Sauer |first9=Axel}} and Sora (2024), use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.
Training
= Methods for stabilizing training =
The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to its maximal value for the first part of training (usually recommended to be 2% of the total number of training steps), before decaying again.
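For illustration, the following is a minimal sketch of the warmup-then-decay schedule from the original paper: a linear ramp for the first warmup_steps steps followed by inverse-square-root decay. The default values (512 and 4000) are those reported in the paper; the function name is ours.
<syntaxhighlight lang="python">
def transformer_lr(step: int, d_model: int = 512, warmup_steps: int = 4000) -> float:
    """Learning-rate schedule of the original Transformer paper:
    linear warmup for `warmup_steps` steps, then inverse-square-root decay."""
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate rises linearly until step 4000, peaks there, then decays as 1/sqrt(step).
print(transformer_lr(100), transformer_lr(4000), transformer_lr(100000))
</syntaxhighlight>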
A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.{{cite arXiv |eprint=2002.04745 |class=cs.LG |first1=Ruibin |last1=Xiong |first2=Yunchang |last2=Yang |title=On Layer Normalization in the Transformer Architecture |date=2020-06-29 |last3=He |first3=Di |last4=Zheng |first4=Kai |last5=Zheng |first5=Shuxin |last6=Xing |first6=Chen |last7=Zhang |first7=Huishuai |last8=Lan |first8=Yanyan |last9=Wang |first9=Liwei |last10=Liu |first10=Tie-Yan}}
= Pretrain-finetune =
Transformers typically are first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include:
- language modeling
- next-sentence prediction
- question answering
- reading comprehension
- sentiment analysis
- paraphrasing
The T5 transformer report{{Cite journal |last1=Raffel |first1=Colin |last2=Shazeer |first2=Noam |last3=Roberts |first3=Adam |last4=Lee |first4=Katherine |last5=Narang |first5=Sharan |last6=Matena |first6=Michael |last7=Zhou |first7=Yanqi |last8=Li |first8=Wei |last9=Liu |first9=Peter J. |date=2020-01-01 |title=Exploring the limits of transfer learning with a unified text-to-text transformer |url=https://dl.acm.org/doi/abs/10.5555/3455716.3455856 |journal=The Journal of Machine Learning Research |volume=21 |issue=1 |pages=140:5485–140:5551 |arxiv=1910.10683 |issn=1532-4435}} documents a large number of natural language pretraining tasks. Some examples are:
- restoring or repairing incomplete or corrupted text. For example, the input, "Thank you{{nnbsp|~~}}me to your party{{nnbsp|~~}}week", might generate the output, "Thank you for inviting me to your party last week".
- translation between natural languages (machine translation)
- judging the pragmatic acceptability of natural language. For example, the following sentence might be judged "not acceptable",{{cite arXiv | eprint=1910.10683 | last1=Raffel | first1=Colin | last2=Shazeer | first2=Noam | last3=Roberts | first3=Adam | last4=Lee | first4=Katherine | last5=Narang | first5=Sharan | last6=Matena | first6=Michael | last7=Zhou | first7=Yanqi | last8=Li | first8=Wei | last9=Liu | first9=Peter J. | title=Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | date=2019 | class=cs.LG }} because even though it is syntactically well-formed, it is improbable in ordinary human usage: The course is jumping well.
Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architecture.
= Tasks =
{{See also|Large language model#Evaluation}}
In general, there are 3 classes of language modelling tasks: "masked",{{Cite web |title=Masked language modeling |url=https://huggingface.co/docs/transformers/tasks/masked_language_modeling |access-date=2023-10-05 |website=huggingface.co}} "autoregressive",{{Cite web |title=Causal language modeling |url=https://huggingface.co/docs/transformers/tasks/language_modeling |access-date=2023-10-05 |website=huggingface.co}} and "prefixLM". These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer.
In a masked task, one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically the sum of log-perplexities for the masked-out tokens: <math display="block">\text{Loss} = -\sum_{t \in \text{masked tokens}} \ln(\text{probability of } x_t \text{ conditional on its context})</math> and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and another task.
In an autoregressive task, the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks.
In a prefixLM task, the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks.
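As a concrete illustration of this shared loss, the sketch below computes the autoregressive language-modelling loss as the sum of negative log-probabilities of each next token; masked and prefixLM training use the same cross-entropy, only the set of predicted positions differs. The logits and targets here are random placeholders, not the output of a real model.
<syntaxhighlight lang="python">
import numpy as np

def causal_lm_loss(logits: np.ndarray, targets: np.ndarray) -> float:
    """Sum of negative log-probabilities of the target tokens.

    logits  -- shape (sequence_length, vocab_size); row t is the model's
               prediction for the token at position t+1.
    targets -- integer array of shape (sequence_length,) with the true next tokens.
    """
    # numerically stable log-softmax over the vocabulary dimension
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return float(-log_probs[np.arange(len(targets)), targets].sum())

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100))      # 5 positions, vocabulary of 100 tokens
targets = rng.integers(0, 100, size=5)  # the "revealed" next tokens
print(causal_lm_loss(logits, targets))
</syntaxhighlight>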
Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model).
Architecture
All transformers have the same primary components:
- Tokenizers, which convert text into tokens.
- Embedding layer, which converts tokens and positions of the tokens into vector representations.
- Transformer layers, which carry out repeated transformations on the vector representations, extracting more and more linguistic information. These consist of alternating attention and feedforward layers. There are two major types of transformer layers: encoder layers and decoder layers, with further variants.
- Un-embedding layer, which converts the final vector representations back to a probability distribution over the tokens.
The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section.
By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as <math>xW</math>.
= Tokenization =
{{Main|Lexical analysis}}
As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between texts and token sequences is a tokenizer.
The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size <math>n_\text{vocabulary}</math>. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown".
Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece.
= Embedding =
{{See|Word embedding}}
Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix <math>M</math>. For example, if the input token is <math>3</math>, then the one-hot representation is <math>[0, 0, 0, 1, 0, 0, \dots]</math>, and its embedding vector is <math display="block">\mathrm{Embed}(3) = [0, 0, 0, 1, 0, 0, \dots] M</math> The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors.
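A minimal NumPy sketch of this lookup-equals-one-hot-multiplication equivalence; the vocabulary size, embedding size, and token index are arbitrary, and the embedding matrix is random rather than learned.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n_vocab, d_emb = 10, 4
M = rng.normal(size=(n_vocab, d_emb))   # embedding matrix, one row per token

token = 3
one_hot = np.zeros(n_vocab)
one_hot[token] = 1.0

# Row lookup and multiplication by the one-hot vector give the same embedding.
assert np.allclose(M[token], one_hot @ M)
print(M[token])
</syntaxhighlight>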
The number of dimensions in an embedding vector is called hidden size or embedding size and written as <math>d_\text{emb}</math>. This size is written as <math>d_\text{model}</math> in the original Transformer paper.
= Un-embedding =
An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens.
The un-embedding layer is a linear-softmax layer: <math display="block">\mathrm{UnEmbed}(x) = \mathrm{softmax}(x W + b)</math> The matrix <math>W</math> has shape <math>d_\text{emb} \times n_\text{vocabulary}</math>. The embedding matrix <math>M</math> and the un-embedding matrix <math>W</math> are sometimes required to be transposes of each other, a practice called weight tying.{{Citation |last1=Press |first1=Ofir |title=Using the Output Embedding to Improve Language Models |date=2017-02-21 |arxiv=1608.05859 |last2=Wolf |first2=Lior}}
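Continuing the previous sketch, the un-embedding can be written as a linear map followed by a softmax; here the un-embedding matrix is tied to the transpose of the embedding matrix, the bias is omitted for brevity, and all values are random placeholders.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n_vocab, d_emb = 10, 4
M = rng.normal(size=(n_vocab, d_emb))    # embedding matrix

def unembed(x: np.ndarray) -> np.ndarray:
    """Map a d_emb vector to a probability distribution over the vocabulary,
    tying the un-embedding weights to the embedding matrix (W = M^T, no bias)."""
    logits = x @ M.T
    logits -= logits.max()               # numerical stability
    p = np.exp(logits)
    return p / p.sum()

x = rng.normal(size=d_emb)               # a final-layer token representation
print(unembed(x).sum())                  # 1.0
</syntaxhighlight>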
= Positional encoding =
File:Positional encoding.png [A sinusoidal positional encoding]
A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. This shall induce a bias towards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man".
The positional encoding is defined as a function of type <math>f: \mathbb{R} \to \mathbb{R}^d</math>, where <math>d</math> is a positive even integer. The full positional encoding defined in the original paper is: <math display="block">(f(t)_{2k}, f(t)_{2k+1}) = (\sin(\theta), \cos(\theta)) \quad \forall k \in \{0, 1, \ldots, d/2 - 1\}</math> where <math>\theta = \frac{t}{r^k}, r = N^{2/d}</math>.
Here, <math>N</math> is a free parameter that should be significantly larger than the biggest <math>t</math> that would be input into the positional encoding function. The original paper uses <math>N = 10000</math>.
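A direct transcription of this definition into NumPy might look like the following sketch; the small dimension is chosen for readability, and the function name is ours.
<syntaxhighlight lang="python">
import numpy as np

def positional_encoding(t: float, d: int = 8, N: float = 10000.0) -> np.ndarray:
    """Sinusoidal positional encoding of position t: pairs (sin(theta), cos(theta))
    with theta = t / r**k and r = N**(2/d), for k = 0, ..., d/2 - 1."""
    assert d % 2 == 0, "d must be a positive even integer"
    k = np.arange(d // 2)
    theta = t / (N ** (2 / d)) ** k
    enc = np.empty(d)
    enc[0::2] = np.sin(theta)   # even components 2k
    enc[1::2] = np.cos(theta)   # odd components 2k + 1
    return enc

print(positional_encoding(5.0))
</syntaxhighlight>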
The function is in a simpler form when written as a complex function of type <math>f: \mathbb{R} \to \mathbb{C}^{d/2}</math>: <math display="block">f(t) = \left(e^{it/r^k}\right)_{k = 0, 1, \ldots, \frac{d}{2} - 1}</math> where <math>r = N^{2/d}</math>.
The main reason for using this positional encoding function is that using it, shifts are linear transformations: <math display="block">f(t + \Delta t) = \mathrm{diag}(f(\Delta t))\, f(t)</math> where <math>\Delta t</math> is the distance one wishes to shift. This allows the transformer to take any encoded position, and find the encoding of the position n-steps-ahead or n-steps-behind, by a matrix multiplication.
By taking a linear sum, any convolution can also be implemented as linear transformations: <math display="block">\sum_j c_j f(t + \Delta t_j) = \left(\sum_j c_j\, \mathrm{diag}(f(\Delta t_j))\right) f(t)</math> for any constants <math>c_j</math>. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model. In the author's words, "we hypothesized it would allow the model to easily learn to attend by relative position."
In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
= Encoder-decoder (overview) =
File:Transformer,_one_encoder-decoder_block.png
File:Transformer,_stacked_layers_and_sublayers.png
Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far.
The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time).{{cite web|url=https://indico.io/blog/sequence-modeling-neural-networks-part2-attention-models/|title=Sequence Modeling with Neural Networks (Part 2): Attention Models|date=2016-04-18|website=Indico|access-date=2019-10-15|archive-date=2020-10-21|archive-url=https://web.archive.org/web/20201021203352/https://indico.io/blog/sequence-modeling-neural-networks-part2-attention-models/|url-status=live |last1=Lintz |first1=Nathan }}{{cite web |last=Alammar |first=Jay |title=The Illustrated Transformer |url=http://jalammar.github.io/illustrated-transformer/ |url-status=live |archive-url=https://web.archive.org/web/20201018061610/https://jalammar.github.io/illustrated-transformer/ |archive-date=2020-10-18 |access-date=2019-10-15 |website=jalammar.github.io}}
Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps. These feed-forward layers contain most of the parameters in a Transformer model.
= Feedforward network =
{{Anchor|FFN|Feedforward network|Feedforward module}}File:Transformer architecture - FFN module.png
The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons: <math display="block">\mathrm{FFN}(x) = \phi(x W^{(1)} + b^{(1)}) W^{(2)} + b^{(2)}</math> where <math>W^{(1)}</math> and <math>W^{(2)}</math> are weight matrices, <math>b^{(1)}</math> and <math>b^{(2)}</math> are bias vectors, and <math>\phi</math> is its activation function. The original Transformer used ReLU activation.
The number of neurons in the middle layer is called intermediate size (GPT),{{Cite web |last=Team |first=Keras |title=Keras documentation: GPT2Backbone model |url=https://keras.io/api/keras_nlp/models/gpt2/gpt2_backbone/ |access-date=2024-08-08 |website=keras.io |language=en}} filter size (BERT), or feedforward size (BERT). It is typically larger than the embedding size. For example, in both the GPT-2 series and the BERT series, the intermediate size of a model is 4 times its embedding size: <math>d_\text{ffn} = 4\, d_\text{emb}</math>.
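A minimal sketch of such a module, assuming ReLU activation (as in the original Transformer) and an intermediate size of four times the embedding size; the weights are random placeholders rather than trained parameters.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d_emb, d_ffn = 8, 32                     # intermediate size = 4 * embedding size

W1, b1 = rng.normal(size=(d_emb, d_ffn)), np.zeros(d_ffn)
W2, b2 = rng.normal(size=(d_ffn, d_emb)), np.zeros(d_emb)

def ffn(x: np.ndarray) -> np.ndarray:
    """Two-layer perceptron applied independently to each row (token) of x."""
    relu = lambda z: np.maximum(z, 0.0)  # activation of the original Transformer
    return relu(x @ W1 + b1) @ W2 + b2

tokens = rng.normal(size=(5, d_emb))     # 5 token vectors
print(ffn(tokens).shape)                 # (5, 8)
</syntaxhighlight>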
= Scaled dot-product attention =
{{Main|Dot-product attention}}
== Attention head ==
File:Transformer,_attention_block_diagram.png
File:Transformer architecture - Attention Head module.png
The attention mechanisms used in the Transformer architecture are scaled dot-product attention units. For each unit, the transformer model learns three weight matrices: the query weights <math>W^Q</math>, the key weights <math>W^K</math>, and the value weights <math>W^V</math>.
The module takes three sequences, a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length <math>\ell_\text{seq, query}</math>, and each entry is a vector of dimension <math>d_\text{emb, query}</math>. Similarly for the key and value sequences.
For each vector <math>x_{i, \text{query}}</math> in the query sequence, it is multiplied by a matrix <math>W^Q</math> to produce a query vector <math>q_i = x_{i, \text{query}} W^Q</math>. The matrix of all query vectors is the query matrix: <math display="block">Q = X_\text{query} W^Q</math> Similarly, we construct the key matrix <math>K = X_\text{key} W^K</math> and the value matrix <math>V = X_\text{value} W^V</math>.
It is usually the case that all <math>W^Q, W^K, W^V</math> are square matrices, meaning <math>d_\text{emb, query} = d_\text{query}</math>, etc.
Attention weights are calculated using the query and key vectors: the attention weight <math>a_{ij}</math> from token <math>i</math> to token <math>j</math> is the dot product between <math>q_i</math> and <math>k_j</math>. The attention weights are divided by the square root of the dimension of the key vectors, <math>\sqrt{d_k}</math>, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that <math>W^Q</math> and <math>W^K</math> are different matrices allows attention to be non-symmetric: if token <math>i</math> attends to token <math>j</math> (i.e. <math>q_i \cdot k_j</math> is large), this does not necessarily mean that token <math>j</math> will attend to token <math>i</math> (i.e. <math>q_j \cdot k_i</math> could be small). The output of the attention unit for token <math>i</math> is the weighted sum of the value vectors of all tokens, weighted by <math>a_{ij}</math>, the attention from token <math>i</math> to each token.
The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training due to computational matrix operation optimizations that quickly compute matrix operations. The matrices <math>Q</math>, <math>K</math> and <math>V</math> are defined as the matrices where the <math>i</math>th rows are the vectors <math>q_i</math>, <math>k_i</math>, and <math>v_i</math> respectively. Then we can represent the attention as
<math display="block">\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^\mathrm{T}}{\sqrt{d_k}}\right)V</math>
where the softmax is applied over each of the rows of the matrix.
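The formula translates nearly line for line into NumPy. The sketch below assumes a self-attention setting, so the same token matrix supplies the queries, keys, and values; the dimensions and randomly drawn weights are illustrative only.
<syntaxhighlight lang="python">
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)   # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """softmax(Q K^T / sqrt(d_k)) V, with the softmax applied to each row."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 tokens, embedding size 8
W_Q, W_K, W_V = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(X @ W_Q, X @ W_K, X @ W_V).shape)   # (5, 8)
</syntaxhighlight>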
The number of dimensions in a query vector is the query size <math>d_\text{query}</math>, and similarly for the key size <math>d_\text{key}</math> and value size <math>d_\text{value}</math>. The output dimension of an attention head is its head dimension <math>d_\text{head}</math>. The attention mechanism requires the following three equalities to hold: <math display="block">\ell_\text{seq, key} = \ell_\text{seq, value}, \quad d_\text{query} = d_\text{key}, \quad d_\text{value} = d_\text{head}</math> but is otherwise unconstrained.
If the attention head is used in a self-attention fashion, then <math>X_\text{query} = X_\text{key} = X_\text{value}</math>. If the attention head is used in a cross-attention fashion, then usually <math>X_\text{query} \neq X_\text{key} = X_\text{value}</math>. It is theoretically possible for all three to be different, but that is rarely the case in practice.
== Multiheaded attention ==
File:Multiheaded_attention,_block_diagram.png
File:Transformer architecture - Multiheaded Attention module.png
One set of <math>\left(W^Q, W^K, W^V\right)</math> matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices, <math>W^Q</math> and <math>W^K</math>, which are involved in the attention score computation, define the "relevance". Meanwhile, the value projection matrix <math>W^V</math>, in combination with the part of the output projection matrix <math>W^O</math>, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects.{{cite journal|last1=Clark|first1=Kevin |last2=Khandelwal|first2=Urvashi|last3=Levy|first3=Omer|last4=Manning|first4=Christopher D.|date=August 2019|title=What Does BERT Look at? An Analysis of BERT's Attention|url=https://www.aclweb.org/anthology/W19-4828|journal=Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP|location=Florence, Italy|publisher=Association for Computational Linguistics|pages=276–286|doi=10.18653/v1/W19-4828|doi-access=free|access-date=2020-05-20|archive-date=2020-10-21|archive-url=https://web.archive.org/web/20201021211357/https://www.aclweb.org/anthology/W19-4828/|url-status=live|arxiv=1906.04341}} The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into the feed-forward neural network layers.
Concretely, let the multiple attention heads be indexed by <math>i</math>, then we have <math display="block">\text{MultiheadedAttention}(Q, K, V) = \text{Concat}_{i \in [n_\text{heads}]}\left(\text{Attention}(X W^Q_i, X W^K_i, X W^V_i)\right) W^O</math> where the matrix <math>X</math> is the concatenation of word embeddings, the matrices <math>W^Q_i, W^K_i, W^V_i</math> are "projection matrices" owned by individual attention head <math>i</math>, and <math>W^O</math> is a final projection matrix owned by the whole multi-headed attention head.
It is theoretically possible for each attention head to have a different head dimension <math>d_\text{head}</math>, but that is rarely the case in practice.
As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions: <math display="block">d_\text{emb} = 768, \quad n_\text{head} = 12, \quad d_\text{head} = 64</math> Since <math>12 \times 64 = 768</math>, its output projection matrix <math>W^O</math> is a square matrix.
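The following sketch puts the pieces together into a multi-headed self-attention with per-head projection matrices and a shared output projection; the toy dimensions and randomly drawn weights are ours, not those of any particular model (in a real model all projection matrices are learned parameters).
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d_emb, n_head, d_head = 12, 3, 4          # toy sizes with n_head * d_head = d_emb

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multihead_self_attention(X: np.ndarray) -> np.ndarray:
    heads = []
    for _ in range(n_head):
        # per-head projection matrices (random stand-ins for W_i^Q, W_i^K, W_i^V)
        W_Q, W_K, W_V = (rng.normal(size=(d_emb, d_head)) for _ in range(3))
        Q, K, V = X @ W_Q, X @ W_K, X @ W_V
        heads.append(softmax(Q @ K.T / np.sqrt(d_head)) @ V)
    W_O = rng.normal(size=(n_head * d_head, d_emb))   # shared output projection
    return np.concatenate(heads, axis=-1) @ W_O       # concatenate heads, then project

X = rng.normal(size=(5, d_emb))           # 5 token vectors
print(multihead_self_attention(X).shape)  # (5, 12)
</syntaxhighlight>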
== Masked attention ==
The Transformer architecture is constructed to calculate output tokens iteratively: a token produced at an earlier step must remain unchanged at every later step, which gives the model properties similar to autoregressive models. Therefore, the calculation of the output at position <math>i</math> should not have access to tokens at positions <math>j</math> for <math>j > i</math> (as is naturally the case at the time step when those tokens have not yet been calculated). This behavior may be accomplished before the softmax stage by adding a mask matrix <math>M</math> that is <math>-\infty</math> at entries where the attention link must be cut, and <math>0</math> at other places:
<math display="block">\text{MaskedAttention}(Q, K, V) = \text{softmax}\left(M + \frac{QK^\mathrm{T}}{\sqrt{d_k}}\right)V</math>
The following matrix is commonly used in decoder self-attention modules, called "causal masking":
<math display="block">M_\text{causal} = \begin{bmatrix}
0 & -\infty & -\infty & \dots & -\infty \\
0 & 0 & -\infty & \dots & -\infty \\
0 & 0 & 0 & \dots & -\infty \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \dots & 0
\end{bmatrix}</math>
In words, it means that each token can pay attention to itself, and every token before it, but not any after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of mask matrix, the XLNet considers all masks of the form <math>P M_\text{causal} P^{-1}</math>, where <math>P</math> is a random permutation matrix.{{Cite journal |last1=Yang |first1=Zhilin |last2=Dai |first2=Zihang |last3=Yang |first3=Yiming |last4=Carbonell |first4=Jaime |last5=Salakhutdinov |first5=Russ R |last6=Le |first6=Quoc V |date=2019 |title=XLNet: Generalized Autoregressive Pretraining for Language Understanding |url=https://proceedings.neurips.cc/paper/2019/hash/dc6a7e655d7e5840e66733e9ee67cc69-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=32|arxiv=1906.08237 }}
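A sketch of the causal mask and of its effect on the attention weights: after the softmax, the entries set to negative infinity become exactly zero, so each token only attends to itself and to earlier tokens. The scores here are random placeholders standing in for scaled query-key dot products.
<syntaxhighlight lang="python">
import numpy as np

def causal_mask(n: int) -> np.ndarray:
    """n-by-n matrix with 0 on and below the diagonal and -inf above it."""
    mask = np.zeros((n, n))
    mask[np.triu_indices(n, k=1)] = -np.inf
    return mask

def masked_softmax_rows(scores: np.ndarray) -> np.ndarray:
    z = scores + causal_mask(scores.shape[0])   # cut the forbidden attention links
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4))                # raw Q K^T / sqrt(d_k) scores
print(np.round(masked_softmax_rows(scores), 2)) # strictly upper-triangular part is 0
</syntaxhighlight>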
= Encoder =
File:Transformer,_one_encoder_block.png
An encoder consists of an embedding layer, followed by multiple encoder layers.
Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes an input as a sequence of input vectors, applies the self-attention mechanism, to produce an intermediate sequence of vectors, then applies the feed-forward layer for each vector individually. Schematically, we have:
<math display="block">\begin{aligned}
\text{given input vectors } & h_0, h_1, \dots \\
\text{combine them into a matrix } H &= \begin{bmatrix} h_0 \\ h_1 \\ \vdots \end{bmatrix} \\
\text{EncoderLayer}(H) &= \begin{bmatrix} \text{FFN}(\text{MultiheadedAttention}(H, H, H)_0) \\ \text{FFN}(\text{MultiheadedAttention}(H, H, H)_1) \\ \vdots \end{bmatrix}
\end{aligned}</math>
where <math>\text{FFN}</math> stands for "feed-forward network". We can more succinctly write it as <math display="block">\text{EncoderLayer}(H) = \text{FFN}(\text{MultiheadedAttention}(H, H, H))</math> with the implicit convention that the <math>\text{FFN}</math> is applied to each row of the matrix individually.
The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder, and so on. The output from the final encoder layer is then used by the decoder.
As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
= Decoder =
File:Transformer,_one_decoder_block.png
A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer.
Each decoder consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention.
Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked.
In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which is computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism.
Schematically, we have:
<math display="block">\begin{aligned}
H' &= \text{MaskedMultiheadedAttention}(H, H, H) \\
\text{DecoderLayer}(H) &= \text{FFN}(\text{MultiheadedAttention}(H', H^E, H^E))
\end{aligned}</math>
where <math>H^E</math> is the matrix whose rows are the output vectors from the encoder.
The last decoder is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then, one of the tokens is sampled according to the probabilities, and the decoder can be run again to produce the next token, etc., autoregressively generating output text.
=Adapted architectures=
Many large language models, since they do not need to predict a whole new sequence from an input sequence, only use the encoder or decoder of the original transformer architecture. Early GPT models are decoder-only models trained to predict the next token in a sequence.{{cite web
|url = https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
|title = Improving Language Understanding by Generative Pre-Training
|last1 = Radford
|first1 = Alec
|last2 = Narasimhan
|first2 = Karthik
|last3 = Salimans
|first3 = Tim
|last4 = Sutskever
|first4 = Ilya
|page = 12
|publisher = OpenAI
|date = 11 June 2018
|access-date = 23 January 2021
|archive-date = 26 January 2021
|archive-url = https://web.archive.org/web/20210126024542/https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
|url-status = live}} BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence.
Full transformer architecture
= Sublayers =
File:Transformer,_stacked_multilayers.png
Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network.
File:Transformer_encoder,_with_norm-first_and_norm-last.png
File:Transformer_decoder,_with_norm-first_and_norm-last.png
File:Transformer,_full_architecture.png
File:Transformer,_schematic_object_hierarchy,_for_implementation_in_object-oriented_programming.png [Schematic object hierarchy for the full Transformer architecture, in object-oriented programming style]
The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are necessary for numerical stability and convergence.
The residual connection, which is introduced to avoid vanishing gradient issues and stabilize the training process, can be expressed as follows: y = F(x) + x. The expression indicates that an output y is the sum of the transformation of input x (F(x)) and the input itself (x). Adding the input x can preserve the input information and avoid issues when the gradient of F(x) is close to zero.
Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.
{{Anchor|pre-LN}}There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is <math display="block">\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))</math> where <math>\mathrm{Sublayer}(x)</math> is the function implemented by the sublayer itself.
In the pre-LN convention, the output of each sublayer is <math display="block">x + \mathrm{Sublayer}(\mathrm{LayerNorm}(x))</math> The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018,{{Citation |last1=Wang |first1=Qiang |title=Learning Deep Transformer Models for Machine Translation |date=2019-06-04 |arxiv=1906.01787 |last2=Li |first2=Bei |last3=Xiao |first3=Tong |last4=Zhu |first4=Jingbo |last5=Li |first5=Changliang |last6=Wong |first6=Derek F. |last7=Chao |first7=Lidia S.}} was found to be easier to train, requiring no warm-up, leading to faster convergence.
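The difference between the two conventions is only where the normalization sits relative to the residual connection, as in the following sketch; the simplified LayerNorm omits the learnable gain and bias, and the stand-in sublayer is an arbitrary random transformation rather than a real attention or FFN module.
<syntaxhighlight lang="python">
import numpy as np

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each row to zero mean and unit variance (gain/bias omitted)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def post_ln_sublayer(x, sublayer):
    return layer_norm(x + sublayer(x))        # original 2017 (post-LN) convention

def pre_ln_sublayer(x, sublayer):
    return x + sublayer(layer_norm(x))        # pre-LN convention

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
sublayer = lambda h: np.maximum(h @ W, 0.0)   # stand-in for attention or FFN
x = rng.normal(size=(5, 8))
print(post_ln_sublayer(x, sublayer).shape, pre_ln_sublayer(x, sublayer).shape)
</syntaxhighlight>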
= Pseudocode =
The following is the pseudocode for a standard pre-LN encoder-decoder Transformer, adapted from{{Citation |last1=Phuong |first1=Mary |title=Formal Algorithms for Transformers |date=2022-07-19 |arxiv=2207.09238 |last2=Hutter |first2=Marcus}}
input: Encoder input t_e
Decoder input t_d
output: Array of probability distributions, with shape (decoder vocabulary size x length(decoder output sequence))
/* encoder */
z_e ← encoder.tokenizer(t_e)
for each t in 1:length(z_e) do
    z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)
for each l in 1:length(encoder.layers) do
    layer ← encoder.layers[l]
    /* first sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]
    /* second sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.feedforward(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]
for each t in 1:length(z_e) do
    z_e[t] ← encoder.final_layer_norm(z_e[t])
/* decoder */
z_d ← decoder.tokenizer(t_d)
for each t in 1:length(z_d) do
    z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)
for each l in 1:length(decoder.layers) do
    layer ← decoder.layers[l]
    /* first sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
    /* second sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
    /* third sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.feedforward(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
z_d ← decoder.final_layer_norm(z_d)
output_distributions ← []
for each t in 1:length(z_d) do
    output_distributions.append(decoder.unembed(z_d[t]))
return output_distributions
= Terminology =
The Transformer architecture, being modular, allows variations. Several common variations are described here.{{Cite journal |last1=Raffel |first1=Colin |last2=Shazeer |first2=Noam |last3=Roberts |first3=Adam |last4=Lee |first4=Katherine |last5=Narang |first5=Sharan |last6=Matena |first6=Michael |last7=Zhou |first7=Yanqi |last8=Li |first8=Wei |last9=Liu |first9=Peter J. |date=2020 |title=Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |url=http://jmlr.org/papers/v21/20-074.html |journal=Journal of Machine Learning Research |volume=21 |issue=140 |pages=1–67 |arxiv=1910.10683 |issn=1533-7928}}
{{Anchor|encoder-only}}An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. They are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer, then taking just the encoder.
{{Anchor|decoder-only}}A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only.
{{Anchor|encoder-decoder}}An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. They might have minor architectural improvements, such as alternative activation functions, changing the location of normalization, etc. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.
{{Anchor|prefixLM}}A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has a mask of the form{{Pg|location=Figure 3}}
<math display="block">M_\text{prefixLM} = \begin{bmatrix}
\mathbf{0} & -\infty \\
\mathbf{0} & M_{\text{causal}}
\end{bmatrix}</math>
where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. They resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and in benchmarked comparisons.{{Citation |last1=Tay |first1=Yi |title=UL2: Unifying Language Learning Paradigms |date=2023-02-28 |arxiv=2205.05131 |last2=Dehghani |first2=Mostafa |last3=Tran |first3=Vinh Q. |last4=Garcia |first4=Xavier |last5=Wei |first5=Jason |last6=Wang |first6=Xuezhi |last7=Chung |first7=Hyung Won |last8=Shakeri |first8=Siamak |last9=Bahri |first9=Dara}}
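A sketch of this prefix mask, assembled from an all-zero block for the prefix columns and the causal mask for the generated part; the prefix and total lengths are arbitrary.
<syntaxhighlight lang="python">
import numpy as np

def causal_mask(n: int) -> np.ndarray:
    m = np.zeros((n, n))
    m[np.triu_indices(n, k=1)] = -np.inf
    return m

def prefix_lm_mask(prefix_len: int, total_len: int) -> np.ndarray:
    """Full (bidirectional) attention over the prefix columns,
    causal attention over the autoregressively generated part."""
    n_gen = total_len - prefix_len
    mask = np.zeros((total_len, total_len))
    mask[:prefix_len, prefix_len:] = -np.inf             # prefix rows: -inf block
    mask[prefix_len:, prefix_len:] = causal_mask(n_gen)  # generated rows: causal block
    return mask

print(prefix_lm_mask(prefix_len=2, total_len=5))
</syntaxhighlight>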
There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model, on the argument that an RNN decoder runs much faster than a Transformer decoder when run autoregressively.{{Cite web |date=June 8, 2020 |title=Recent Advances in Google Translate |url=http://research.google/blog/recent-advances-in-google-translate/ |url-status=live |archive-url=https://web.archive.org/web/20240704042433/https://research.google/blog/recent-advances-in-google-translate/ |archive-date=4 Jul 2024 |access-date=2024-08-07 |website=Google Research |language=en}}
Subsequent work
= Alternative activation functions =
The original transformer uses the ReLU activation function. Other activation functions have since been developed. The Llama series and PaLM used SwiGLU;{{Cite arXiv |eprint=2002.05202 |class=cs.LG |first=Noam |last=Shazeer |title=GLU Variants Improve Transformer |date=2020-02-01}} both GPT-1 and BERT used GELU.{{Cite arXiv |last1=Hendrycks |first1=Dan |last2=Gimpel |first2=Kevin |date=2016-06-27 |title=Gaussian Error Linear Units (GELUs) |class=cs.LG |eprint=1606.08415v5 |language=en}}
Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module.
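A minimal sketch of a feedforward block with a SwiGLU gated linear unit, of the kind used in the Llama series, is given below; the weight names are illustrative and bias terms are omitted:
<syntaxhighlight lang="python">
import numpy as np

def silu(x):
    """SiLU / Swish activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def swiglu_feedforward(x, W_gate, W_up, W_down):
    """Feedforward block with a SwiGLU gated linear unit (sketch).

    The hidden activation is the elementwise product of a Swish-activated
    "gate" projection and a linear "up" projection, followed by a
    down-projection back to the model dimension.
    """
    return (silu(x @ W_gate) * (x @ W_up)) @ W_down

d_model, d_ff = 64, 256
x = np.random.randn(d_model)
W_gate, W_up = np.random.randn(d_model, d_ff), np.random.randn(d_model, d_ff)
W_down = np.random.randn(d_ff, d_model)
y = swiglu_feedforward(x, W_gate, W_up, W_down)
</syntaxhighlight>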
= Alternative normalizations =
The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm,{{Cite journal |last1=Zhang |first1=Biao |last2=Sennrich |first2=Rico |date=2019 |title=Root Mean Square Layer Normalization |url=https://proceedings.neurips.cc/paper/2019/hash/1e8a19426224ca89e83cef47f1e7f53b-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=32|arxiv=1910.07467 }} which is used in the Llama series. Other examples include CapsuleNorm,{{Cite journal |last1=Tembine |first1=Hamidou |last2=Khan |first2=Manzoor Ahmed |last3=Bamia |first3=Issa |date=2024 |title=Mean-Field-Type Transformers |journal=Mathematics |volume=12 |issue=22 |page=3506 |doi=10.3390/math12223506}} ScaleNorm,{{Cite journal |last1=Nguyen |first1=Toan Q. |last2=Salazar |first2=Julian |date=2019-11-02 |editor-last=Niehues |editor-first=Jan |editor2-last=Cattoni |editor2-first=Rolando |editor3-last=Stüker |editor3-first=Sebastian |editor4-last=Negri |editor4-first=Matteo |editor5-last=Turchi |editor5-first=Marco |editor6-last=Ha |editor6-first=Thanh-Le |editor7-last=Salesky |editor7-first=Elizabeth |editor8-last=Sanabria |editor8-first=Ramon |editor9-last=Barrault |editor9-first=Loic |title=Transformers without Tears: Improving the Normalization of Self-Attention |url=https://aclanthology.org/2019.iwslt-1.17 |journal=Proceedings of the 16th International Conference on Spoken Language Translation |location=Hong Kong |publisher=Association for Computational Linguistics|doi=10.5281/zenodo.3525484 |arxiv=1910.05895 }} or FixNorm.
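The following is a minimal NumPy sketch of RMSNorm with a learnable per-dimension gain; the variable names are illustrative:
<syntaxhighlight lang="python">
import numpy as np

def rms_norm(x: np.ndarray, gain: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """RMSNorm: rescale by the root-mean-square of the activations.

    Unlike LayerNorm, no mean is subtracted and no bias is added;
    only a learned per-dimension gain is applied.
    """
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return gain * x / rms

d_model = 8
x = np.random.randn(d_model)
print(rms_norm(x, gain=np.ones(d_model)))
</syntaxhighlight>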
= Alternative positional encodings =
Transformers may use other positional encoding methods than sinusoidal.{{cite journal |last1=Dufter |first1=Philipp |last2=Schmitt |first2=Martin |last3=Schütze |first3=Hinrich |date=2022-06-06 |title=Position Information in Transformers: An Overview |journal=Computational Linguistics |volume=48 |issue=3 |pages=733–763 |doi=10.1162/coli_a_00445 |issn=0891-2017 |s2cid=231986066 |doi-access=free|arxiv=2102.11090 }}
The original Transformer paper reported using a learned positional encoding,{{Cite journal |last1=Gehring |first1=Jonas |last2=Auli |first2=Michael |last3=Grangier |first3=David |last4=Yarats |first4=Denis |last5=Dauphin |first5=Yann N. |date=2017-07-17 |title=Convolutional Sequence to Sequence Learning |url=https://proceedings.mlr.press/v70/gehring17a.html |journal=Proceedings of the 34th International Conference on Machine Learning |language=en |publisher=PMLR |pages=1243–1252}} but found it no better than the sinusoidal one. A later study{{Citation |last1=Haviv |first1=Adi |title=Transformer Language Models without Positional Encodings Still Learn Positional Information |date=2022-12-05 |arxiv=2203.16634 |last2=Ram |first2=Ori |last3=Press |first3=Ofir |last4=Izsak |first4=Peter |last5=Levy |first5=Omer}} found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without the positional encoding module.
== RoPE ==
{{Anchor|Rotary positional embedding}}RoPE (rotary positional embedding),{{Cite arXiv|last1=Su |first1=Jianlin |last2=Lu |first2=Yu |last3=Pan |first3=Shengfeng |last4=Murtadha |first4=Ahmed |last5=Wen |first5=Bo |last6=Liu |first6=Yunfeng |date=2021-04-01 |title=RoFormer: Enhanced Transformer with Rotary Position Embedding |class=cs.CL |eprint=2104.09864 }} is best explained by considering a list of 2-dimensional vectors <math>(x^{(1)}_1, x^{(2)}_1), (x^{(1)}_2, x^{(2)}_2), \ldots</math>. Now pick some angle <math>\theta</math>. Then RoPE encoding is
<math display="block">\text{RoPE}\big(x^{(1)}_m, x^{(2)}_m, m\big) = \begin{pmatrix} \cos m \theta & - \sin m \theta \\
\sin m \theta & \cos m \theta \end{pmatrix}
\begin{pmatrix} x^{(1)}_m \\ x^{(2)}_m \\ \end{pmatrix} = \begin{pmatrix} x^{(1)}_m \cos m\theta - x^{(2)}_m \sin m \theta \\ x^{(2)}_m \cos m\theta + x^{(1)}_m \sin m \theta \\ \end{pmatrix}</math>
Equivalently, if we write the 2-dimensional vectors as complex numbers <math>z_m := x^{(1)}_m + i x^{(2)}_m</math>, then RoPE encoding is just multiplication by an angle:
<math display="block">\text{RoPE}\big(z_m, m\big) = e^{i m \theta} z_m</math>
For a list of <math>2n</math>-dimensional vectors, a RoPE encoder is defined by a sequence of angles <math>\theta^{(1)}, \ldots, \theta^{(n)}</math>. Then the RoPE encoding is applied to each pair of coordinates.
The benefit of RoPE is that the dot-product between two vectors depends on their relative location only:
<math display="block">\text{RoPE}\big(x, m\big)^T\text{RoPE}\big(y, n\big)
=
\text{RoPE}\big(x, m+k\big)^T\text{RoPE}\big(y, n+k\big)</math>
for any integer <math>k</math>.
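The pairwise rotation can be sketched in a few lines of NumPy. The geometric angle schedule below follows the RoFormer paper's convention; the function name is illustrative:
<syntaxhighlight lang="python">
import numpy as np

def rope(x: np.ndarray, m: int, base: float = 10000.0) -> np.ndarray:
    """Rotate each consecutive pair of coordinates of x by m * theta_i.

    x has even dimension 2n; theta_i follows the geometric schedule
    base**(-2i/(2n)) used in the RoFormer paper.
    """
    d = x.shape[-1]
    assert d % 2 == 0
    theta = base ** (-np.arange(d // 2) * 2.0 / d)   # angles theta^(1..n)
    angles = m * theta
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[0::2], x[1::2]                        # pair up consecutive coordinates
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

# Relative-position property: the dot product depends only on the offset m - n.
q, k = np.random.randn(8), np.random.randn(8)
a = rope(q, 5) @ rope(k, 2)
b = rope(q, 105) @ rope(k, 102)   # both positions shifted by 100
print(np.allclose(a, b))          # True
</syntaxhighlight>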
== ALiBi ==
ALiBi (Attention with Linear Biases){{Cite arXiv|last1=Press |first1=Ofir |last2=Smith |first2=Noah A. |last3=Lewis |first3=Mike |date=2021-08-01 |title=Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation |class=cs.CL |eprint=2108.12409 }} is not a replacement for the positional encoder on the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is
<math display="block">\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^\mathrm{T}}{\sqrt{d_k}} + s B\right)V</math>
Here, <math>s</math> is a real number ("scalar"), and <math>B</math> is the linear bias matrix defined by
<math display="block">B = \begin{pmatrix}
0 & 1 & 2 & 3 & \cdots \\
-1 & 0 & 1 & 2 & \cdots \\
-2 & -1 & 0 & 1 & \cdots \\
-3 & -2 & -1 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots \\
\end{pmatrix}</math>
in other words, <math>B_{i,j} = j - i</math>. The idea is that the linear bias matrix is a softened mask: just as <math>0</math> represents full attention paid and <math>-\infty</math> represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction.
ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located).
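A minimal single-head NumPy sketch of the bias term is given below; the causal mask is omitted to keep the example focused, and the function names are illustrative:
<syntaxhighlight lang="python">
import numpy as np

def alibi_bias(seq_len: int) -> np.ndarray:
    """Linear bias matrix B with B[i, j] = j - i."""
    idx = np.arange(seq_len)
    return idx[None, :] - idx[:, None]

def alibi_attention(q, k, v, slope: float):
    """Single-head attention with an ALiBi bias added to the scores (sketch)."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k) + slope * alibi_bias(q.shape[0])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

n, d = 4, 8
q, k, v = (np.random.randn(n, d) for _ in range(3))
out = alibi_attention(q, k, v, slope=-0.5)  # the sign of the slope sets which direction is penalized
</syntaxhighlight>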
== Relative Position Encodings==
Relative Position Encodings{{Cite arXiv |last1=Shaw |first1=Peter |last2=Uszkoreit |first2=Jakob |last3=Vaswani |first3=Ashish |date=2018 |title=Self-Attention with Relative Position Representations |class=cs.CL |eprint=1803.02155}} is similar to ALiBi, but more generic:
<math display="block">\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^\mathrm{T}}{\sqrt{d_k}} + B\right)V</math>
where <math>B</math> is a Toeplitz matrix, that is, <math>B_{i,j} = B_{i',j'}</math> whenever <math>j - i = j' - i'</math>. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".{{Citation |last1=Ke |first1=Guolin |title=Rethinking Positional Encoding in Language Pre-training |date=2021-03-15 |arxiv=2006.15595 |last2=He |first2=Di |last3=Liu |first3=Tie-Yan}}
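A short sketch of such a Toeplitz bias, assuming one learned scalar per relative offset (an illustrative simplification, not the exact parameterization of the cited paper):
<syntaxhighlight lang="python">
import numpy as np

def toeplitz_bias(rel_bias: np.ndarray, seq_len: int) -> np.ndarray:
    """Build a Toeplitz matrix B with B[i, j] determined only by j - i.

    rel_bias holds one learned scalar per relative offset, from
    -(seq_len-1) to +(seq_len-1).
    """
    idx = np.arange(seq_len)
    return rel_bias[idx[None, :] - idx[:, None] + seq_len - 1]

rel_bias = np.random.randn(2 * 5 - 1)   # offsets -4 .. +4 for seq_len = 5
B = toeplitz_bias(rel_bias, 5)
print(B[0, 1] == B[2, 3])               # True: same relative offset j - i = 1
</syntaxhighlight>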
= Efficient implementation =
The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.
== KV caching ==
When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching.{{Cite book |last1=Kwon |first1=Woosuk |title=Proceedings of the 29th Symposium on Operating Systems Principles |last2=Li |first2=Zhuohan |last3=Zhuang |first3=Siyuan |last4=Sheng |first4=Ying |last5=Zheng |first5=Lianmin |last6=Yu |first6=Cody Hao |last7=Gonzalez |first7=Joseph |last8=Zhang |first8=Hao |last9=Stoica |first9=Ion |date=2023-10-23 |publisher=Association for Computing Machinery |isbn=979-8-4007-0229-7 |series=SOSP '23 |location=New York, NY, USA |pages=611–626 |chapter=Efficient Memory Management for Large Language Model Serving with PagedAttention |doi=10.1145/3600006.3613165 |chapter-url=https://dl.acm.org/doi/10.1145/3600006.3613165 |arxiv=2309.06180}}{{Citation |title=vllm-project/vllm |date=2024-06-20 |url=https://github.com/vllm-project/vllm |access-date=2024-06-20 |publisher=vLLM}}{{Cite web |last=Contribution) |first=Woosuk Kwon*, Zhuohan Li*, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Yu, Joey Gonzalez, Hao Zhang, and Ion Stoica (* Equal |date=2023-06-20 |title=vLLM: Easy, Fast, and Cheap LLM Serving with PagedAttention |url=https://blog.vllm.ai/2023/06/20/vllm.html |access-date=2024-06-20 |website=vLLM Blog |language=en}}
If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt, and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.
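A minimal single-head sketch of a KV cache during autoregressive decoding is shown below; the class and weight names are illustrative:
<syntaxhighlight lang="python">
import numpy as np

class KVCache:
    """Stores the key and value vectors already computed for past tokens,
    so each decoding step only projects the newest token."""

    def __init__(self):
        self.keys = []    # one (d_k,) vector per past token
        self.values = []  # one (d_v,) vector per past token

    def append(self, k: np.ndarray, v: np.ndarray):
        self.keys.append(k)
        self.values.append(v)

    def attend(self, q: np.ndarray) -> np.ndarray:
        K = np.stack(self.keys)          # (t, d_k)
        V = np.stack(self.values)        # (t, d_v)
        scores = K @ q / np.sqrt(q.shape[-1])
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V

d = 8
W_q, W_k, W_v = (np.random.randn(d, d) for _ in range(3))
cache = KVCache()
for x_t in np.random.randn(5, d):        # 5 decoding steps (toy hidden states)
    cache.append(W_k.T @ x_t, W_v.T @ x_t)   # only the new token's K/V are computed
    out = cache.attend(W_q.T @ x_t)          # attend over all cached K/V
</syntaxhighlight>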
== FlashAttention ==
FlashAttention{{cite journal |last1=Dao |first1=Tri |last2=Fu |first2=Dan |last3=Ermon |first3=Stefano |last4=Rudra |first4=Atri |last5=Ré |first5=Christopher |date=2022-12-06 |title=FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness |url=https://proceedings.neurips.cc/paper_files/paper/2022/hash/67d57c32e20fd0a7a302cb81d36e40d5-Abstract-Conference.html |journal=Advances in Neural Information Processing Systems |volume=35 |pages=16344–16359|arxiv=2205.14135}} is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page on softmax for details.
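The core idea can be illustrated with an "online softmax" over blocks of keys. The following NumPy sketch shows the running-maximum and running-normalizer bookkeeping for a single query; it is conceptual only, not the fused GPU kernel of the cited paper:
<syntaxhighlight lang="python">
import numpy as np

def blockwise_attention(q, K, V, block: int = 64):
    """Attention for a single query, computed over K/V in blocks.

    Keeps a running maximum m and running normalizer l so the softmax is
    exact even though the full score vector is never materialized at once.
    """
    d_k = q.shape[-1]
    m = -np.inf                      # running max of scores
    l = 0.0                          # running sum of exp(scores - m)
    acc = np.zeros(V.shape[-1])      # running weighted sum of values
    for start in range(0, K.shape[0], block):
        Kb, Vb = K[start:start + block], V[start:start + block]
        s = Kb @ q / np.sqrt(d_k)
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)    # rescale previous partial results
        p = np.exp(s - m_new)
        l = l * scale + p.sum()
        acc = acc * scale + p @ Vb
        m = m_new
    return acc / l

# Matches the naive computation:
q, K, V = np.random.randn(16), np.random.randn(200, 16), np.random.randn(200, 32)
naive = (lambda s: (np.exp(s - s.max()) / np.exp(s - s.max()).sum()) @ V)(K @ q / 4.0)
print(np.allclose(blockwise_attention(q, K, V), naive))   # True
</syntaxhighlight>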
An improved version, FlashAttention-2,{{cite web |title=Stanford CRFM |url=https://crfm.stanford.edu/2023/07/17/flash2.html |access-date=2023-07-18 |website=crfm.stanford.edu}}{{cite web |date=2023-06-17 |title=FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning |url=https://princeton-nlp.github.io/flash-atttention-2/ |access-date=2023-07-18 |website=Princeton NLP }}{{cite web |title=Introducing Together AI Chief Scientist Tri Dao, as he releases FlashAttention-2 to speed up model training and inference |url=https://together.ai/blog/tri-dao-flash-attention |access-date=2023-07-18 |website=TOGETHER }} was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention.
Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA).{{cite arXiv |last1=Ainslie |first1=Joshua |title=GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints |date=2023-12-23 |eprint=2305.13245 |last2=Lee-Thorp |first2=James |last3=de Jong |first3=Michiel |last4=Zemlyanskiy |first4=Yury |last5=Lebrón |first5=Federico |last6=Sanghai |first6=Sumit|class=cs.CL }}
Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8.
== Multi-Query Attention ==
{{Anchor|Multi-query attention|Grouped-query attention}}
File:DeepSeek KV cache comparison between MHA, GQA, MQA, MLA.svg
Multi-Query Attention changes the multiheaded attention mechanism.{{Cite arXiv|last1=Chowdhery |first1=Aakanksha |last2=Narang |first2=Sharan |last3=Devlin |first3=Jacob |last4=Bosma |first4=Maarten |last5=Mishra |first5=Gaurav |last6=Roberts |first6=Adam |last7=Barham |first7=Paul |last8=Chung |first8=Hyung Won |last9=Sutton |first9=Charles |last10=Gehrmann |first10=Sebastian |last11=Schuh |first11=Parker |last12=Shi |first12=Kensen |last13=Tsvyashchenko |first13=Sasha |last14=Maynez |first14=Joshua |last15=Rao |first15=Abhishek |date=2022-04-01 |title=PaLM: Scaling Language Modeling with Pathways |class=cs.CL |eprint=2204.02311 }} Whereas normally each attention head has its own key and value projections <math>W^K_i, W^V_i</math>,
<math display="block">\text{MultiheadedAttention}(Q, K, V) = \text{Concat}_{i \in [\#\text{heads}]}\left(\text{Attention}(XW^Q_i, XW^K_i, XW^V_i)\right)W^O</math>
with Multi-Query Attention, there is just one <math>W^K, W^V</math>, shared by all heads:
<math display="block">\text{MultiQueryAttention}(Q, K, V) = \text{Concat}_{i \in [\#\text{heads}]}\left(\text{Attention}(XW^Q_i, XW^K, XW^V)\right)W^O</math>
This has a neutral effect on model quality and training speed, but increases inference speed.
More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.{{Citation |last1=Ainslie |first1=Joshua |title=GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints |date=2023-12-23 |arxiv=2305.13245 |last2=Lee-Thorp |first2=James |last3=de Jong |first3=Michiel |last4=Zemlyanskiy |first4=Yury |last5=Lebrón |first5=Federico |last6=Sanghai |first6=Sumit}}
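A minimal NumPy sketch of grouped-query attention is given below; setting the number of key-value heads to 1 gives MQA, and setting it equal to the number of query heads recovers standard multi-head attention. The function and weight names are illustrative:
<syntaxhighlight lang="python">
import numpy as np

def grouped_query_attention(X, Wq, Wk, Wv, n_heads: int, n_kv_heads: int):
    """Grouped-query attention (sketch): n_heads query heads share n_kv_heads K/V heads."""
    t, d = X.shape
    d_head = d // n_heads
    group = n_heads // n_kv_heads
    Q = (X @ Wq).reshape(t, n_heads, d_head)
    K = (X @ Wk).reshape(t, n_kv_heads, d_head)
    V = (X @ Wv).reshape(t, n_kv_heads, d_head)
    outs = []
    for h in range(n_heads):
        kv = h // group                      # which shared K/V head this query head uses
        s = Q[:, h, :] @ K[:, kv, :].T / np.sqrt(d_head)
        w = np.exp(s - s.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        outs.append(w @ V[:, kv, :])
    return np.concatenate(outs, axis=-1)     # (t, d), before the output projection

t, d, n_heads, n_kv_heads = 6, 32, 8, 2
X = np.random.randn(t, d)
Wq = np.random.randn(d, d)
Wk = np.random.randn(d, n_kv_heads * (d // n_heads))
Wv = np.random.randn(d, n_kv_heads * (d // n_heads))
out = grouped_query_attention(X, Wq, Wk, Wv, n_heads, n_kv_heads)
</syntaxhighlight>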
File:DeepSeek_MoE_and_MLA_(DeepSeek-V2).svg{{Citation |author1=DeepSeek-AI |title=DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model |date=19 June 2024 |arxiv=2405.04434 |last2=Liu |first2=Aixin |last3=Feng |first3=Bei |last4=Wang |first4=Bin |last5=Wang |first5=Bingxuan |last6=Liu |first6=Bo |last7=Zhao |first7=Chenggang |last8=Dengr |first8=Chengqi |last9=Ruan |first9=Chong}}.{{Pg|location=Figure 2}}]]
{{Anchor|MLA|Multihead Latent Attention}}
Multihead Latent Attention (MLA) is a low-rank approximation to standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for query and one for key-value (KV vector). This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.
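The low-rank idea on the key-value side can be sketched as follows; only the KV latent is shown, the dimensions and weight names are illustrative, and the actual DeepSeek parameterization differs in detail:
<syntaxhighlight lang="python">
import numpy as np

d_model, d_latent, d_head = 512, 64, 64

# Down-projection to the low-dimensional KV latent (this is what gets cached)
W_down_kv = np.random.randn(d_model, d_latent)
# Up-projections that reconstruct a per-head key and value from the latent
W_up_k = np.random.randn(d_latent, d_head)
W_up_v = np.random.randn(d_latent, d_head)

def kv_latent(h: np.ndarray) -> np.ndarray:
    """Compress a hidden vector to the small latent stored in the KV cache."""
    return h @ W_down_kv                 # (d_latent,), cached instead of per-head keys and values

def key_value_from_latent(c: np.ndarray):
    """Reconstruct key and value for one head from the cached latent."""
    return c @ W_up_k, c @ W_up_v

h = np.random.randn(d_model)
c = kv_latent(h)                         # only c (64 numbers per token) is cached
k, v = key_value_from_latent(c)
</syntaxhighlight>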
== Speculative decoding ==
Speculative decoding{{Citation |last1=Leviathan |first1=Yaniv |title=Fast Inference from Transformers via Speculative Decoding |date=2023-05-18 |arxiv=2211.17192 |last2=Kalman |first2=Matan |last3=Matias |first3=Yossi}}{{cite web|url=https://yaofu.notion.site/Towards-100x-Speedup-Full-Stack-Transformer-Inference-Optimization-43124c3688e14cffaf2f1d6cbdf26c6c|title=Towards 100x Speedup: Full Stack Transformer Inference Optimization|first=Yao|last=Fu|date=2023-12-13}} is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly.
The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense.
Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run for 512 times, each time generating a token <math>x_1, x_2, \ldots, x_{512}</math>, taking time <math>512 T_{\text{GPT-3}}</math>. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each <math>x_t</math> is indeed the token with the largest log-likelihood in the <math>t</math>-th output.
In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens: <math>\tilde{x}_1, \tilde{x}_2, \tilde{x}_3, \tilde{x}_4</math>. This only takes <math>4 T_{\text{GPT-3-small}}</math>. These tokens are then run through the larger GPT-3 in one go. Suppose that <math>\tilde{x}_1</math> and <math>\tilde{x}_2</math> are verified by GPT-3 as what it would have picked, then those are kept, but <math>\tilde{x}_3</math> is not, so <math>\tilde{x}_3, \tilde{x}_4</math> are discarded, and GPT-3 is run on those positions. This would take <math>4 T_{\text{GPT-3-small}} + 3 T_{\text{GPT-3}}</math>, which might be shorter than <math>4 T_{\text{GPT-3}}</math>.
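The following sketch shows one round of such a greedy draft-and-verify loop. Here draft_model and target_model are hypothetical callables that return, for each position, a distribution over the next token; they are stand-ins, not part of any cited implementation:
<syntaxhighlight lang="python">
import numpy as np

def speculative_decode_step(prefix, draft_model, target_model, n_draft: int = 4):
    """One round of greedy speculative decoding (sketch)."""
    # 1. Draft: the small model proposes n_draft tokens autoregressively.
    draft = list(prefix)
    for _ in range(n_draft):
        draft.append(int(np.argmax(draft_model(draft)[-1])))
    proposed = draft[len(prefix):]

    # 2. Verify: one pass of the large model scores all proposed positions at once.
    target_dists = target_model(draft)
    accepted = []
    for i, tok in enumerate(proposed):
        pos = len(prefix) - 1 + i            # distribution predicting this token
        best = int(np.argmax(target_dists[pos]))
        if best == tok:
            accepted.append(tok)
        else:
            accepted.append(best)            # the verification pass already supplies
            break                            # the correct token at the first mismatch
    return prefix + accepted

# Toy usage with dummy "models" that return random distributions, just to show the data flow:
vocab = 50
rng = np.random.default_rng(0)
dummy = lambda tokens: rng.random((len(tokens), vocab))
print(speculative_decode_step([1, 2, 3], dummy, dummy))
</syntaxhighlight>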
For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding was not used.{{Citation |last1=Chen |first1=Charlie |title=Accelerating Large Language Model Decoding with Speculative Sampling |date=2023-02-02 |arxiv=2302.01318 |last2=Borgeaud |first2=Sebastian |last3=Irving |first3=Geoffrey |last4=Lespiau |first4=Jean-Baptiste |last5=Sifre |first5=Laurent |last6=Jumper |first6=John}}
File:Multi-Token Prediction (DeepSeek) 01.svg
{{Anchor|Multi-Token Prediction}}In Multi-Token Prediction, a single forward pass creates a final embedding vector, which then is un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict the next token, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.{{Citation |last=Gloeckle |first=Fabian |title=Better & Faster Large Language Models via Multi-token Prediction |date=2024-04-30 |url=https://arxiv.org/abs/2404.19737 |publisher=arXiv |doi=10.48550/arXiv.2404.19737 |id=arXiv:2404.19737 |last2=Idrissi |first2=Badr Youbi |last3=Rozière |first3=Baptiste |last4=Lopez-Paz |first4=David |last5=Synnaeve |first5=Gabriel}}{{Citation |last=DeepSeek-AI |title=DeepSeek-V3 Technical Report |date=2024-12-27 |url=https://arxiv.org/abs/2412.19437 |publisher=arXiv |doi=10.48550/arXiv.2412.19437 |id=arXiv:2412.19437 |last2=Liu |first2=Aixin |last3=Feng |first3=Bei |last4=Xue |first4=Bing |last5=Wang |first5=Bingxuan |last6=Wu |first6=Bochao |last7=Lu |first7=Chengda |last8=Zhao |first8=Chenggang |last9=Deng |first9=Chengqi}}
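A minimal sketch of the idea: the final embedding is un-embedded for the next token, and an extra block reuses it to predict the token after that. The weight names and the single-layer "extra block" are illustrative simplifications, not the exact architecture of the cited papers:
<syntaxhighlight lang="python">
import numpy as np

d_model, vocab = 64, 1000
W_unembed = np.random.randn(d_model, vocab)
W_extra_block = np.random.randn(d_model, d_model)   # stand-in for one more Transformer block

def multi_token_logits(h_final: np.ndarray, steps: int = 2):
    """Predict several future tokens from one forward pass (sketch).

    h_final is the final embedding vector of the last position. Each extra
    step reuses it through one additional block rather than rerunning the
    whole stack.
    """
    logits = []
    h = h_final
    for _ in range(steps):
        logits.append(h @ W_unembed)          # un-embed into token scores
        h = np.tanh(h @ W_extra_block)        # cheap extra block for the next step
    return logits

next_logits, next_next_logits = multi_token_logits(np.random.randn(d_model))
</syntaxhighlight>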
= Sub-quadratic transformers =
Training transformer-based architectures can be expensive, especially for long inputs.{{cite arXiv |eprint=2001.04451 |class=cs.LG |first1=Nikita |last1=Kitaev |first2=Łukasz |last2=Kaiser |title=Reformer: The Efficient Transformer |last3=Levskaya |first3=Anselm |year=2020}} Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows.{{Cite book |last1=Liu |first1=Ze |last2=Lin |first2=Yutong |last3=Cao |first3=Yue |last4=Hu |first4=Han |last5=Wei |first5=Yixuan |last6=Zhang |first6=Zheng |last7=Lin |first7=Stephen |last8=Guo |first8=Baining |chapter=Swin Transformer: Hierarchical Vision Transformer using Shifted Windows |year=2021 |title=2021 IEEE/CVF International Conference on Computer Vision (ICCV) |chapter-url=https://ieeexplore.ieee.org/document/9710580 |publisher=IEEE |pages=9992–10002 |doi=10.1109/ICCV48922.2021.00986 |isbn=978-1-6654-2812-5|arxiv=2103.14030 }} In the audio domain, SepTr decouples the attention in time and frequency domains.{{Cite journal |last1=Ristea |first1=Nicolaea Catalin |last2=Ionescu |first2=Radu Tudor |last3=Khan |first3=Fahad Shahbaz |date=2022-09-18 |title=SepTr: Separable Transformer for Audio Spectrogram Processing |url=https://www.isca-archive.org/interspeech_2022/ristea22_interspeech.html |journal=Interspeech |language=en |publisher=ISCA |pages=4103–4107 |doi=10.21437/Interspeech.2022-249|arxiv=2203.09581 }} Long Range Arena (2020){{cite arXiv |eprint=2011.04006 |class=cs.LG |first1=Yi |last1=Tay |first2=Mostafa |last2=Dehghani |title=Long Range Arena: A Benchmark for Efficient Transformers |date=2020-11-08 |last3=Abnar |first3=Samira |last4=Shen |first4=Yikang |last5=Bahri |first5=Dara |last6=Pham |first6=Philip |last7=Rao |first7=Jinfeng |last8=Yang |first8=Liu |last9=Ruder |first9=Sebastian |last10=Metzler |first10=Donald}} is a standard benchmark for comparing the behavior of transformer architectures over long inputs.
== Alternative attention graphs ==
The standard attention graph is either all-to-all or causal, both of which scale as <math>O(N^2)</math> where <math>N</math> is the number of tokens in a sequence.
Reformer (2020){{cite web |date=16 January 2020 |title=Reformer: The Efficient Transformer |url=http://ai.googleblog.com/2020/01/reformer-efficient-transformer.html |url-status=live |archive-url=https://web.archive.org/web/20201022210019/https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html |archive-date=2020-10-22 |access-date=2020-10-22 |website=Google AI Blog}} reduces the computational load from <math>O(N^2)</math> to <math>O(N\ln N)</math> by using locality-sensitive hashing and reversible layers.{{Cite journal |last1=Gomez |first1=Aidan N |last2=Ren |first2=Mengye |last3=Urtasun |first3=Raquel |last4=Grosse |first4=Roger B |date=2017 |title=The Reversible Residual Network: Backpropagation Without Storing Activations |url=https://proceedings.neurips.cc/paper/2017/hash/f9be311e65d81a9ad8150a60844bb94c-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=30|arxiv=1707.04585 }}
Sparse attention{{Citation |last1=Child |first1=Rewon |title=Generating Long Sequences with Sparse Transformers |date=2019-04-23 |arxiv=1904.10509 |last2=Gray |first2=Scott |last3=Radford |first3=Alec |last4=Sutskever |first4=Ilya}} uses attention graphs that grow more slowly than <math>O(N^2)</math>. For example, BigBird (2020){{cite web |date=25 March 2021 |title=Constructing Transformers For Longer Sequences with Sparse Attention Methods |url=https://ai.googleblog.com/2021/03/constructing-transformers-for-longer.html |url-status=live |archive-url=https://web.archive.org/web/20210918150757/https://ai.googleblog.com/2021/03/constructing-transformers-for-longer.html |archive-date=2021-09-18 |access-date=2021-05-28 |website=Google AI Blog}} uses random small-world networks which grow as <math>O(N)</math>.
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers{{cite arXiv |eprint=2105.14103 |class=cs.LG |first1=Shuangfei |last1=Zhai |first2=Walter |last2=Talbott |title=An Attention Free Transformer |date=2021-09-21 |last3=Srivastava |first3=Nitish |last4=Huang |first4=Chen |last5=Goh |first5=Hanlin |last6=Zhang |first6=Ruixiang |last7=Susskind |first7=Josh}} reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
== Random Feature Attention ==
Random Feature Attention (2021){{cite arXiv |last1=Peng |first1=Hao |last2=Pappas |first2=Nikolaos |last3=Yogatama |first3=Dani |last4=Schwartz |first4=Roy |last5=Smith |first5=Noah A. |last6=Kong |first6=Lingpeng |date=2021-03-19 |title=Random Feature Attention |class=cs.CL |eprint=2103.02143}} uses Fourier random features:
<math display="block">\varphi(x) = \frac{1}{\sqrt{D}}\left[\cos\langle w_1, x\rangle, \sin\langle w_1, x\rangle, \cdots, \cos\langle w_D, x\rangle, \sin\langle w_D, x\rangle\right]^T</math>
where <math>w_1, \ldots, w_D</math> are independent samples from the normal distribution <math>N(0, \sigma^2 I)</math>. This choice of parameters satisfies <math>\mathbb{E}[\langle \varphi(x), \varphi(y)\rangle] = e^{-\frac{\|x - y\|^2}{2\sigma^2}}</math>, or
<math display="block">e^{\langle x, y\rangle / \sigma^2} \approx e^{\|x\|^2/2\sigma^2}\, \langle \varphi(x), \varphi(y)\rangle\, e^{\|y\|^2/2\sigma^2}</math>
Consequently, the one-headed attention, with one query, can be written as
<math display="block">\text{Attention}(q, K, V) = \text{softmax}\left(\frac{qK^\mathrm{T}}{\sqrt{d_k}}\right)V
\approx \frac{\varphi(q)^T \sum_i e^{\|k_i\|^2/2\sigma^2}\varphi(k_i) v_i^T}{\varphi(q)^T \sum_i e^{\|k_i\|^2/2\sigma^2}\varphi(k_i)}</math>
where <math>\sigma = d_K^{1/4}</math>. Similarly for multiple queries, and for multiheaded attention.
This approximation can be computed in linear time, as we can compute the matrix <math>\sum_i e^{\|k_i\|^2/2\sigma^2}\varphi(k_i) v_i^T</math> first, then multiply it with the query. In essence, we have managed to obtain a more precise version of the generic linear-attention approximation
<math display="block">\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^\mathrm{T}}{\sqrt{d_k}}\right)V \approx \varphi(Q)\left(\varphi(K)^T V\right)</math>
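A short NumPy sketch of the single-query approximation follows. The frequencies are scaled by <math>1/\sigma</math>, the convention under which the feature inner product estimates the Gaussian kernel used above; the function and variable names are illustrative:
<syntaxhighlight lang="python">
import numpy as np

def fourier_features(x, W):
    """phi(x): random Fourier features; rows of W are the random frequencies w_i."""
    proj = W @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(W.shape[0])

def random_feature_attention(q, K, V, W, sigma):
    """Linear-time approximation of single-query softmax attention (sketch)."""
    phi_q = fourier_features(q, W)
    phi_k = np.stack([fourier_features(k, W) for k in K])       # (n, 2D)
    scale = np.exp(np.sum(K ** 2, axis=-1) / (2 * sigma ** 2))  # e^{||k_i||^2 / 2 sigma^2}
    weights = scale * (phi_k @ phi_q)   # approximates e^{<q, k_i>/sigma^2} up to a common factor
    return (weights @ V) / weights.sum()

d_k, n, D = 16, 10, 256
sigma = d_k ** 0.25                     # sigma^2 = sqrt(d_k), matching the softmax scaling
W = np.random.randn(D, d_k) / sigma     # random frequencies, scaled by 1/sigma
q, K, V = np.random.randn(d_k), np.random.randn(n, d_k), np.random.randn(n, 4)
approx = random_feature_attention(q, K, V, W, sigma)
</syntaxhighlight>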
Performer (2022){{cite arXiv |last1=Choromanski |first1=Krzysztof |last2=Likhosherstov |first2=Valerii |last3=Dohan |first3=David |last4=Song |first4=Xingyou |last5=Gane |first5=Andreea |last6=Sarlos |first6=Tamas |last7=Hawkins |first7=Peter |last8=Davis |first8=Jared |last9=Belanger |first9=David |last10=Colwell |first10=Lucy |last11=Weller |first11=Adrian |date=2020-09-30 |title=Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers |class=cs.LG |eprint=2006.03555}} uses the same Random Feature Attention, but the random vectors <math>w_1, \ldots, w_D</math> are first independently sampled from the normal distribution, then they are Gram-Schmidt processed (orthogonalized).
= Multimodality =
Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality.
Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning.{{Cite journal |last1=Lu |first1=Kevin |last2=Grover |first2=Aditya |last3=Abbeel |first3=Pieter |last4=Mordatch |first4=Igor |date=2022-06-28 |title=Frozen Pretrained Transformers as Universal Computation Engines |url=https://ojs.aaai.org/index.php/AAAI/article/view/20729 |journal=Proceedings of the AAAI Conference on Artificial Intelligence |language=en |volume=36 |issue=7 |pages=7628–7636 |doi=10.1609/aaai.v36i7.20729 |issn=2374-3468|doi-access=free }} The LLaVA was a vision-language model composed of a language model (Vicuna-13B){{Cite web |title=Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality {{!}} LMSYS Org |url=https://lmsys.org/blog/2023-03-30-vicuna |access-date=2024-08-11 |website=lmsys.org |language=en}} and a vision model (ViT-L/14), connected by a linear layer. Only the linear layer is finetuned.{{Cite journal |last1=Liu |first1=Haotian |last2=Li |first2=Chunyuan |last3=Wu |first3=Qingyang |last4=Lee |first4=Yong Jae |date=2023-12-15 |title=Visual Instruction Tuning |url=https://proceedings.neurips.cc/paper_files/paper/2023/hash/6dcf277ea32ce3288914faf369fe6de0-Abstract-Conference.html |journal=Advances in Neural Information Processing Systems |language=en |volume=36 |pages=34892–34916}}
Vision transformers adapt the transformer to computer vision by breaking down input images into a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
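A minimal sketch of this patching step is shown below; the patch size, projection dimension, and names are illustrative:
<syntaxhighlight lang="python">
import numpy as np

def patchify(image: np.ndarray, patch: int) -> np.ndarray:
    """Split an (H, W, C) image into flattened non-overlapping patches.

    Returns an array of shape (num_patches, patch*patch*C); each row is then
    linearly projected to the model dimension and treated as one token.
    """
    H, W, C = image.shape
    rows, cols = H // patch, W // patch
    return (image[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch, C)
            .transpose(0, 2, 1, 3, 4)
            .reshape(rows * cols, patch * patch * C))

image = np.random.rand(224, 224, 3)
tokens_raw = patchify(image, 16)              # (196, 768)
W_embed = np.random.randn(768, 512)           # learned projection to d_model = 512
tokens = tokens_raw @ W_embed                 # sequence of 196 "image tokens"
</syntaxhighlight>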
Conformer{{cite arXiv |eprint=2005.08100 |first1=Anmol |last1=Gulati |first2=James |last2=Qin |title=Conformer: Convolution-augmented Transformer for Speech Recognition |last3=Chiu |first3=Chung-Cheng |last4=Parmar |first4=Niki |last5=Zhang |first5=Yu |last6=Yu |first6=Jiahui |last7=Han |first7=Wei |last8=Wang |first8=Shibo |last9=Zhang |first9=Zhengdong |last10=Wu |first10=Yonghui |last11=Pang |first11=Ruoming |year=2020 |page=|class=eess.AS }} and later Whisper{{cite arXiv |eprint=2212.04356 |first1=Alec |last1=Radford |first2=Jong Wook |last2=Kim |title=Robust Speech Recognition via Large-Scale Weak Supervision |last3=Xu |first3=Tao |last4=Brockman |first4=Greg |last5=McLeavey |first5=Christine |last6=Sutskever |first6=Ilya |year=2022 |page=|class=eess.AS }} follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer.
Perceivers{{cite arXiv |eprint=2103.03206 |class=cs.CV |first1=Andrew |last1=Jaegle |first2=Felix |last2=Gimeno |title=Perceiver: General Perception with Iterative Attention |date=2021-06-22 |last3=Brock |first3=Andrew |last4=Zisserman |first4=Andrew |last5=Vinyals |first5=Oriol |last6=Carreira |first6=Joao}}{{cite arXiv |eprint=2107.14795 |class=cs.LG |first1=Andrew |last1=Jaegle |first2=Sebastian |last2=Borgeaud |title=Perceiver IO: A General Architecture for Structured Inputs & Outputs |date=2021-08-02 |last3=Alayrac |first3=Jean-Baptiste |last4=Doersch |first4=Carl |last5=Ionescu |first5=Catalin |last6=Ding |first6=David |last7=Koppula |first7=Skanda |last8=Zoran |first8=Daniel |last9=Brock |first9=Andrew |last10=Shelhamer |first10=Evan |last11=Hénaff |first11=Olivier}} are a variant of Transformers designed for multimodality.
For image generation, notable architectures are DALL-E 1 (2021), Parti (2022),{{Cite web |title=Parti: Pathways Autoregressive Text-to-Image Model |url=https://sites.research.google/parti/ |access-date=2024-08-09 |website=sites.research.google}} Phenaki (2023),{{Cite journal |last1=Villegas |first1=Ruben |last2=Babaeizadeh |first2=Mohammad |last3=Kindermans |first3=Pieter-Jan |last4=Moraldo |first4=Hernan |last5=Zhang |first5=Han |last6=Saffar |first6=Mohammad Taghi |last7=Castro |first7=Santiago |last8=Kunze |first8=Julius |last9=Erhan |first9=Dumitru |date=2022-09-29 |title=Phenaki: Variable Length Video Generation from Open Domain Textual Descriptions |url=https://openreview.net/forum?id=vOEXS39nOF |language=en}} and Muse (2023).{{cite arXiv |last1=Chang |first1=Huiwen |title=Muse: Text-To-Image Generation via Masked Generative Transformers |date=2023-01-02 |eprint=2301.00704 |last2=Zhang |first2=Han |last3=Barber |first3=Jarred |last4=Maschinot |first4=A. J. |last5=Lezama |first5=Jose |last6=Jiang |first6=Lu |last7=Yang |first7=Ming-Hsuan |last8=Murphy |first8=Kevin |last9=Freeman |first9=William T.|class=cs.CV }} Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image.{{Citation |last1=Ramesh |first1=Aditya |title=Zero-Shot Text-to-Image Generation |date=2021-02-26 |arxiv=2102.12092 |last2=Pavlov |first2=Mikhail |last3=Goh |first3=Gabriel |last4=Gray |first4=Scott |last5=Voss |first5=Chelsea |last6=Radford |first6=Alec |last7=Chen |first7=Mark |last8=Sutskever |first8=Ilya}} Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image.{{Citation |last1=Yu |first1=Jiahui |title=Scaling Autoregressive Models for Content-Rich Text-to-Image Generation |date=2022-06-21 |arxiv=2206.10789 |last2=Xu |first2=Yuanzhong |last3=Koh |first3=Jing Yu |last4=Luong |first4=Thang |last5=Baid |first5=Gunjan |last6=Wang |first6=Zirui |last7=Vasudevan |first7=Vijay |last8=Ku |first8=Alexander |last9=Yang |first9=Yinfei}} Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted. Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video.
Applications
The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:
- machine translation
- time series prediction
- document summarization
- document generation
- named entity recognition (NER){{cite journal |last1=Kariampuzha |first1=William |last2=Alyea |first2=Gioconda |last3=Qu |first3=Sue |last4=Sanjak |first4=Jaleal |last5=Mathé |first5=Ewy |last6=Sid |first6=Eric |last7=Chatelaine |first7=Haley |last8=Yadaw |first8=Arjun |last9=Xu |first9=Yanji |last10=Zhu |first10=Qian |date=2023 |title=Precision information extraction for rare disease epidemiology at scale |journal=Journal of Translational Medicine |volume=21 |issue=1 |page=157 |doi=10.1186/s12967-023-04011-y |pmc=9972634 |pmid=36855134 |doi-access=free}}
- writing computer code based on requirements expressed in natural language.
- speech-to-text
Beyond traditional NLP, the transformer architecture has had success in other applications, such as:
- biological sequence analysis
- video understanding
- protein folding (such as AlphaFold)
- evaluating chess board positions. Using static evaluation alone (that is, with no Minimax search), a transformer achieved an Elo of 2895, putting it at grandmaster level.
See also
- {{annotated link|seq2seq}}
- {{annotated link|Perceiver}}
- {{annotated link|Vision transformer}}
- {{annotated link|Large language model}}
- {{annotated link|BERT (language model)}}
- {{annotated link|Generative pre-trained transformer}}
- {{annotated link|T5 (language model)}}
Notes
{{reflist|group=note}}
References
{{Reflist}}
Further reading
{{refbegin}}
- Alexander Rush, [https://nlp.seas.harvard.edu/2018/04/03/attention.html The Annotated transformer] {{Webarchive|url=https://web.archive.org/web/20210922093841/https://nlp.seas.harvard.edu/2018/04/03/attention.html |date=2021-09-22 }}, Harvard NLP group, 3 April 2018
- {{cite arXiv |last1=Phuong |first1=Mary |last2=Hutter |first2=Marcus |title=Formal Algorithms for Transformers |date=2022 |class=cs.LG |eprint=2207.09238 }}
- {{cite arXiv |last1=Ferrando |first1=Javier |title=A Primer on the Inner Workings of Transformer-based Language Models |date=2024-05-01 |eprint=2405.00208 |last2=Sarti |first2=Gabriele |last3=Bisazza |first3=Arianna |last4=Costa-jussà |first4=Marta R.|class=cs.CL }}
{{refend}}
{{Google AI}}
{{Artificial intelligence navbox}}