Decoding
The task of choosing a word to generate on the basis of the probabilities that the model assigns to the possible words is called decoding. Decoding from a language model by repeatedly choosing the next word conditioned on the previous choices is called autoregressive generation or causal LM generation. Some of the common methods for decoding are described below:
- Greedy Decoding
- Beam Search
- Random Sampling
- Top-k Sampling
- Nucleus or Top-p Sampling
- Temperature Sampling
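All of these methods plug into the same autoregressive loop. The snippet below is a minimal sketch of that loop, assuming a hypothetical `next_token_probs(tokens)` function that stands in for the language model (returning a probability for every word in the vocabulary given the context) and a pluggable `choose(probs)` decoding strategy; the methods in the following sections can be read as different implementations of `choose`.

```python
def generate(next_token_probs, choose, eos="<eos>", max_len=50):
    """Autoregressive generation: repeatedly choose the next word until EOS."""
    tokens = []
    while len(tokens) < max_len:
        probs = next_token_probs(tokens)  # dict: word -> probability, given context
        word = choose(probs)              # decoding strategy (greedy, sampling, ...)
        if word == eos:
            break
        tokens.append(word)
    return tokens
```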
Greedy Decoding
In greedy decoding, the word to which the model assigns the highest probability is chosen as the next word. It is a greedy algorithm because it makes the locally optimal choice, without considering whether that choice will turn out to have been the best one overall.
In practice, it is not used with large language models because the words it chooses are extremely predictable, which makes the resulting text generic and often quite repetitive.
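As a minimal sketch (assuming, as above, that the model's output is available as a dictionary mapping each word to its probability), greedy decoding is just an argmax over that dictionary:

```python
def greedy_choose(probs):
    """Greedy decoding: always return the single most probable next word."""
    return max(probs, key=probs.get)

# Toy example:
greedy_choose({"the": 0.5, "a": 0.3, "ocean": 0.2})  # -> "the"
```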
Random Sampling
In random sampling, the next word is chosen by sampling from the distribution $P(w \mid w_{<t})$ computed by the model. To generate a sequence, this sampling step is repeated, each time conditioning on the words generated so far, until the end-of-sequence token is reached. Random sampling also doesn't work well: even though it mostly generates sensible, high-probability words, there are many odd, low-probability words in the tail of the distribution, and even though each one individually has low probability, together those rare words make up a large enough portion of the distribution that they get chosen often enough to result in weird sentences.
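A sketch of random sampling under the same assumptions: each word is drawn in proportion to the probability the model assigns it, which is why tail words are occasionally selected.

```python
import random

def random_choose(probs):
    """Sample the next word in proportion to its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]
```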
Top-k Sampling
Top-$k$ sampling is a generalization of greedy decoding. It first selects the $k$ words to which the model assigns the highest probabilities, renormalizes those probabilities into a proper distribution, and then randomly samples from it. More formally (see the sketch after this list):
- Choose a number of words $k$.
- For each word $w$ in the vocabulary $V$, use the language model to compute the likelihood of this word given the context, $P(w \mid w_{<t})$.
- Sort the words by their likelihood, and select the top-$k$ most probable words.
- Renormalize the scores of the selected words to be a legitimate probability distribution.
- Randomly sample a word from the renormalized distribution.
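The steps above can be sketched as follows (again assuming a word-to-probability dictionary); note that with $k = 1$ this reduces to greedy decoding.

```python
import random

def top_k_choose(probs, k=5):
    """Top-k sampling: truncate to the k most probable words, renormalize, sample."""
    # Sort by probability and keep the k most probable words.
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Renormalize the kept probabilities so they sum to 1.
    total = sum(p for _, p in top)
    words = [w for w, _ in top]
    weights = [p / total for _, p in top]
    # Sample from the renormalized distribution.
    return random.choices(words, weights=weights, k=1)[0]
```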
Nucleus or Top-p Sampling
A problem with top-$k$ sampling is that $k$ is fixed, but the shape of the probability distribution over words differs from context to context. An alternative that solves this problem is top-$p$ sampling, or nucleus sampling, which selects the top $p$ percent of the probability mass instead of the top $k$ words. The hope of top-$p$ sampling is that this measure will be more robust across very different contexts, dynamically increasing and decreasing the pool of word candidates.
Given a distribution $P(w_t \mid w_{<t})$, the words are sorted by their likelihoods, and the top-$p$ vocabulary $V^{(p)}$ is the smallest set of words such that

$$\sum_{w \in V^{(p)}} P(w \mid w_{<t}) \geq p$$
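A sketch under the same assumptions: words are accumulated in order of decreasing probability until the cumulative mass reaches $p$, and the next word is sampled from that renormalized nucleus.

```python
import random

def top_p_choose(probs, p=0.9):
    """Nucleus (top-p) sampling: sample from the smallest set of most probable
    words whose cumulative probability is at least p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for word, prob in ranked:
        nucleus.append((word, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(pr for _, pr in nucleus)
    words = [w for w, _ in nucleus]
    weights = [pr / total for _, pr in nucleus]
    return random.choices(words, weights=weights, k=1)[0]
```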
Temperature Sampling
In temperature sampling, we reshape the distribution by simply dividing the logit vector $u$ by a temperature parameter $\tau$ before we normalize it by passing it through a softmax. Thus, the probability vector $y$ is calculated as

$$y = \mathrm{softmax}(u / \tau)$$
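Because the temperature is applied to the logits rather than to the final probabilities, the sketch below starts from a hypothetical word-to-logit dictionary instead of the word-to-probability dictionary used above. With $\tau < 1$ the distribution becomes peakier (closer to greedy decoding), while $\tau > 1$ flattens it; $\tau = 1$ recovers ordinary random sampling.

```python
import math
import random

def temperature_choose(logits, tau=0.7):
    """Temperature sampling: apply softmax(u / tau) to the logits, then sample."""
    words = list(logits)
    scaled = [logits[w] / tau for w in words]
    m = max(scaled)                          # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(words, weights=weights, k=1)[0]
```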