Transformer#
This page discusses the transformer architecture. The following picture shows the classical schema used to explain the transformer.
Encoder/decoder#
The transformer uses an encoder/decoder architecture. The idea behind this architecture is the following:
In the encoder layer, positional encoding is applied to the input. Then multi-head attention is used to create a representation of the input sequence.
In the decoder layer, a transformation similar to the one in the encoder is applied to the incomplete output sequence. Then multi-head attention is used a second time to combine the output of the encoder with the representation of the output sequence that was passed to the decoder. From this combination the output can be generated.
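As a concrete illustration of this flow, here is a minimal sketch using PyTorch's `nn.Transformer`. The sizes, the random token ids, and the embedding layer are arbitrary choices for this example and are not part of the description above.

```python
import torch
import torch.nn as nn

# Illustrative sizes only.
d_model, n_heads, vocab = 64, 4, 100

transformer = nn.Transformer(d_model=d_model, nhead=n_heads,
                             num_encoder_layers=2, num_decoder_layers=2,
                             batch_first=True)

embed = nn.Embedding(vocab, d_model)
src = embed(torch.randint(0, vocab, (1, 10)))  # input sequence (batch of 1, length 10)
tgt = embed(torch.randint(0, vocab, (1, 7)))   # incomplete output sequence (length 7)

# The encoder builds a representation of src; the decoder processes tgt and
# attends to the encoder output (cross-attention), yielding one vector per
# target position, from which the next tokens can be predicted.
out = transformer(src, tgt)
print(out.shape)  # torch.Size([1, 7, 64])
```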
Masked attention#
Masked attention is used to prevent the model from using the keys of future elements when computing the value for the current position. To the matrix \(QK^T\) that contains all possible \(q_i, k_j\) combinations, a mask matrix \(M\) is added:

\[
M_{ij} =
\begin{cases}
0, & j \le i \\
-\infty, & j > i
\end{cases}
\]
Or, in a more visual representation (for a sequence of length 4):

\[
M =
\begin{pmatrix}
0 & -\infty & -\infty & -\infty \\
0 & 0 & -\infty & -\infty \\
0 & 0 & 0 & -\infty \\
0 & 0 & 0 & 0
\end{pmatrix}
\]
So the expression under the softmax takes the form \(QK^T + M\), and the attention weights become

\[
S = \mathrm{softmax}\left(QK^T + M\right).
\]
In the softmax transformation, \(-\infty\) elements are mapped to zero (since \(e^{-\infty} = 0\)). That's why

\[
s_{ij} = 0 \quad \text{for } j > i,
\]

where \(s_{ij}\) is the result of the softmax transformation for the \(j\)-th element of the \(i\)-th row.
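The following short sketch illustrates this effect numerically. It uses PyTorch with arbitrary toy sizes (sequence length 4, key dimension 8); after the masked scores pass through the softmax, every entry with \(j > i\) is exactly zero.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
seq_len, d_k = 4, 8

# Toy queries and keys; in a real decoder these come from the output sequence.
Q = torch.randn(seq_len, d_k)
K = torch.randn(seq_len, d_k)

# Raw attention scores: every (q_i, k_j) combination.
# (Scaling by sqrt(d_k) is standard but not essential to the masking argument.)
scores = Q @ K.T / d_k**0.5

# Causal mask M: 0 on and below the diagonal, -inf strictly above it.
M = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

S = F.softmax(scores + M, dim=-1)
print(S)  # every entry with j > i is 0: position i never attends to future keys
```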