Neural machine translation is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.
Properties
NMT models require only a fraction of the memory needed by traditional statistical machine translation (SMT) models. Furthermore, unlike conventional translation systems, all parts of the neural translation model are trained jointly to maximize translation performance.
NMT departs from phrase-based statistical approaches that use separately engineered subcomponents. It is not a drastic step beyond what has traditionally been done in statistical machine translation; its main departure is the use of vector representations ("embeddings") for words and internal states. The structure of the models is simpler than that of phrase-based models: there is no separate language model, translation model, and reordering model, but just a single sequence model that predicts one word at a time. However, this sequence prediction is conditioned on the entire source sentence and the entire target sequence produced so far.

NMT models use deep learning and representation learning. Word sequence modeling was at first typically done with a recurrent neural network (RNN). A bidirectional RNN, known as the encoder, encodes the source sentence for a second RNN, known as the decoder, which predicts the words in the target language. Recurrent neural networks face difficulties in encoding long inputs into a single vector. This can be compensated for by an attention mechanism, which allows the decoder to focus on different parts of the input while generating each word of the output. Coverage models further address shortcomings of such attention mechanisms, such as the neglect of past alignment information, which leads to over-translation and under-translation.

Convolutional neural networks (CNNs) are in principle somewhat better suited to long continuous sequences, but were initially not used because of several weaknesses; these were successfully compensated for in 2017 by attention mechanisms. The transformer, an attention-based architecture, remains the dominant architecture for several language pairs.
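In probabilistic terms, the single sequence model described above can be written as the standard autoregressive factorization of the probability of a target sentence y = (y_1, ..., y_T) given a source sentence x (stated here only for illustration):

P(y_1, \dots, y_T \mid x) = \prod_{t=1}^{T} P(y_t \mid y_1, \dots, y_{t-1}, x)

Each factor is the probability of the next target word given the source sentence and the target words already produced, which is what the decoder estimates at every step.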
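The following is a minimal sketch of one attention-weighted decoding step, loosely in the spirit of the encoder–decoder with attention described above. The dimensions, variable names, and dot-product scoring function are assumptions chosen for brevity, not details of any particular system.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)

src_len, hidden, vocab = 6, 8, 20                 # toy sizes (assumed)
enc_states = rng.normal(size=(src_len, hidden))   # encoder outputs, one vector per source word
dec_state = rng.normal(size=hidden)               # current decoder hidden state

# Attention: score each source position against the decoder state
# (dot-product scoring here; other scoring functions are common),
# then form a context vector as the weighted sum of encoder states.
scores = enc_states @ dec_state
weights = softmax(scores)          # how strongly the decoder "focuses" on each source word
context = weights @ enc_states

# Predict a distribution over the target vocabulary for the next word,
# conditioned on the decoder state and the attention context.
W_out = rng.normal(size=(vocab, 2 * hidden))
logits = W_out @ np.concatenate([dec_state, context])
next_word_probs = softmax(logits)
print(next_word_probs.argmax())    # index of the most likely next target word
```

In a full system, the decoder state would itself be updated by an RNN (or replaced by self-attention layers in the transformer), and the attention weights would be recomputed for every generated target word.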