Author: Admin

Six Challenges for Neural Machine Translation

Reading Time: 1 minute
Abstract

We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrase-based statistical machine translation.
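Of the six, beam search is perhaps the easiest to picture in code. The sketch below is a toy illustration of that decoding strategy only; the scoring function stands in for a real NMT decoder and none of it comes from the paper itself.

```python
# A minimal beam search sketch. `log_probs` is a toy stand-in for an NMT
# decoder step: given a prefix, it returns a log-probability for every
# candidate next token.
VOCAB = ["the", "cat", "sat", "</s>"]

def log_probs(prefix):
    # Toy scoring: penalise repeated tokens, allow ending after 3 tokens.
    scores = {tok: -1.0 - 0.5 * prefix.count(tok) for tok in VOCAB}
    scores["</s>"] = -0.5 if len(prefix) >= 3 else -5.0
    return scores

def beam_search(beam_size=2, max_len=6):
    beams = [([], 0.0)]                      # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == "</s>":    # finished hypotheses are carried over
                candidates.append((seq, score))
                continue
            for tok, lp in log_probs(seq).items():
                candidates.append((seq + [tok], score + lp))
        # Prune: keep only the `beam_size` highest-scoring hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

for seq, score in beam_search():
    print(" ".join(seq), f"(log-prob {score:.2f})")
```

The paper's observation is that enlarging `beam_size` does not always improve translation quality, which is what makes beam search one of the six challenges.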

See the paper here

Quantisation of Neural Machine Translation models

Reading Time: 1 minute

When large amounts of training data are available, the quality of Neural MT engines increases with the size of the model. However, larger models mean decoding with more parameters, which makes the engine slower at test time. Improving the trade-off between model compactness and translation quality is an active research topic. One way to achieve more compact models is quantisation, that is, requiring each parameter value to occupy a fixed number of bits, thus limiting the computational cost. In this post we take a look at a paper that obtains Transformer Neural MT models four times more compact by quantising parameters into 8-bit values, with no loss in translation quality according to BLEU.
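As a rough illustration of why 8-bit quantisation gives a four-fold reduction in storage, here is a minimal sketch of per-tensor post-training quantisation in NumPy. It is not the paper's exact scheme; the scaling rule and the tensor shape are assumptions for illustration only.

```python
# Minimal sketch: quantise a float32 weight matrix to int8 with one scale factor.
import numpy as np

def quantise_int8(weights):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0            # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)  # hypothetical layer weights

q, scale = quantise_int8(w)
w_hat = dequantise(q, scale)

print("storage ratio:", w.nbytes / q.nbytes)         # 4.0: 32 bits down to 8 bits
print("max abs error:", np.abs(w - w_hat).max())     # small rounding error per weight
```

The storage ratio of 4 comes directly from replacing 32-bit floats with 8-bit integers; whether translation quality survives the rounding error is exactly what the paper measures with BLEU.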

Read more here

Issue #55 – Word Alignment from Neural Machine Translation

Reading Time: 1 minute

Word alignments were the cornerstone of all previous approaches to statistical MT: you take your parallel corpus, align the words, and build from there. In Neural MT, however, word alignment is no longer needed as an input to the system. That said, research is coming back around to the idea that it remains useful in practical, real-world scenarios, for tasks such as replacing tags in the MT output.
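One common way to recover alignments from a trained Neural MT model is to read them off the encoder-decoder attention weights, linking each target token to the source token it attends to most. The sketch below illustrates only that basic idea with a hand-made attention matrix; in practice the matrix would come from the model's cross-attention layer, and the work discussed in the issue goes beyond this simple argmax.

```python
# Minimal sketch: induce word alignments from an attention matrix.
import numpy as np

src = ["das", "Haus", "ist", "klein"]
tgt = ["the", "house", "is", "small"]

# attention[i, j] = weight on source token j when generating target token i
# (values below are made up for illustration, not taken from a real model)
attention = np.array([
    [0.82, 0.08, 0.05, 0.05],
    [0.10, 0.78, 0.07, 0.05],
    [0.06, 0.09, 0.75, 0.10],
    [0.04, 0.06, 0.12, 0.78],
])

alignments = attention.argmax(axis=1)   # best source position for each target token
for i, j in enumerate(alignments):
    print(f"{tgt[i]} -> {src[j]}")
```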

Read more here

Evaluating machine translation in a low-resource language combination

Reading Time: 1 minute

Aim
• Main aim: determining which type of MT system (RBMT, PBMT or NMT) is perceived as more adequate in the context of a minoritized language such as Galician in an MT+PE workflow.
• Specific aims:
  • BLEU automatic evaluation (a minimal sketch follows below)
  • Human evaluation (a quality perception survey conducted among experienced professional post-editors)
  • Error analysis framework (MQM)

Evaluating machine translation in a low-resource language combination: Spanish-Galician
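For the BLEU part of the evaluation, a minimal sketch using the sacreBLEU library is shown below. The library choice and the Galician placeholder sentences are assumptions for illustration; the study's actual data and tooling are described in the paper.

```python
# Minimal sketch of corpus-level BLEU scoring with sacreBLEU.
import sacrebleu

# Hypothetical system outputs and references (placeholders, not study data).
hypotheses = [
    "O tempo está moi bo hoxe.",
    "Gústame moito este libro.",
]
references = [[
    "Hoxe vai moi bo tempo.",
    "Este libro gústame moito.",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```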

Read the full paper here

Hungarian translators’ perceptions of neural machine translation in the European Commission

Reading Time: 1 minute

In the framework of its investigative mandate, the Office collects information of investigative interest, including personal data, from various sources – public authorities, private entities and natural persons – and exchanges it with Union institutions, bodies, offices and agencies, with competent authorities of Member States and third countries, as well as with international organisations before, during and after the investigation or coordination activities.
(Commission Decision (EU) 2018/1962 )

Learn more about the paper here

Improving CAT Tools in the Translation Workflow: New Approaches and Evaluation – Mihaela Vela

Reading Time: 1 minute

CAT Tools
• Important part of modern translation workflow
– Trados Studio
– MemoQ
– DejaVu
– XTM
– MateCAT
– Etc.
• Increase translators’ productivity
• Improve consistency in translation
• Reduce costs

Read more from this paper here