Language Modeling Head
Here, the word head refers to the additional neural circuitry that is added on top of the basic transformer architecture when we apply pretrained transformer models to various tasks. The language model head is the circuitry we need to do language modeling. The task of the language modeling head is to take the output of the final transformer layer at the last token position $N$ and use it to predict the upcoming word at position $N+1$.
The language modeling head consists of a linear layer and a softmax layer.
Unembedding Layer The unembedding layer is a linear layer which projects the output $\mathbf{h}_N^L$ (the output token embedding at position $N$ from the final transformer block $L$) to the logit vector $\mathbf{u}$, which has a single score for each of the $|V|$ possible words in the vocabulary $V$. The logit vector $\mathbf{u}$ thus has a dimensionality of $1 \times |V|$.
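Below is a minimal sketch of the unembedding step in NumPy; the hidden size `d_model`, the vocabulary size `vocab_size`, and the weight matrix `W_unembed` are illustrative placeholders rather than values or names from any particular model.

```python
import numpy as np

# Illustrative sizes; real models use their own hidden and vocabulary sizes.
d_model, vocab_size = 512, 32000
rng = np.random.default_rng(0)

# h_N_L stands in for the final-block output at the last token position N;
# W_unembed is a hypothetical projection matrix of shape (d_model, vocab_size).
h_N_L = rng.standard_normal(d_model)
W_unembed = rng.standard_normal((d_model, vocab_size))

# The unembedding layer is a linear projection: one logit (score) per vocabulary word.
logits = h_N_L @ W_unembed
print(logits.shape)  # (32000,)
```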
Softmax Layer The logits from the unembedding layer are converted into probabilities over the vocabulary by a softmax layer.
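As a sketch of this conversion, the softmax can be computed as follows; the function name `softmax` and the toy logit values are illustrative.

```python
import numpy as np

def softmax(u: np.ndarray) -> np.ndarray:
    # Shift by the max logit for numerical stability, then normalize to sum to 1.
    z = np.exp(u - u.max())
    return z / z.sum()

# Toy logits for a three-word vocabulary.
u = np.array([2.0, 0.5, -1.0])
y = softmax(u)
print(y, y.sum())  # probabilities over the vocabulary; the probabilities sum to 1.0
```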
To generate text, a word is then decoded from the probabilities produced by the softmax layer.
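A minimal sketch of this decoding step, assuming we already have the probability vector `y` from the softmax; `decode_next_word` is a hypothetical helper illustrating greedy decoding and sampling, two common ways of picking the next word.

```python
import numpy as np

def decode_next_word(y, greedy=True, seed=None):
    # Greedy decoding returns the id of the most probable word;
    # otherwise we sample a word id in proportion to its probability.
    if greedy:
        return int(np.argmax(y))
    rng = np.random.default_rng(seed)
    return int(rng.choice(len(y), p=y))

# Toy probability distribution over a three-word vocabulary.
y = np.array([0.7, 0.2, 0.1])
print(decode_next_word(y))                        # greedy: always word 0
print(decode_next_word(y, greedy=False, seed=0))  # sampled according to y
```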