Tag: Translation Memory

NEURAL MACHINE TRANSLATION: THE RISING STAR

These days, language industry professionals simply can’t escape hearing about neural machine translation (NMT). However, there still isn’t enough information about the practical facts of NMT for translation buyers, language service providers, and translators. People often ask: is NMT intended for me? How will it change my life?

A Short History and Comparison

At the beginning of time – around the 1970s – the story began with rule-based machine translation (RBMT) solutions. The idea was to create grammatical rule sets for the source and target languages, so that machine translation becomes a kind of conversion process between the two languages based on these rule sets. This concept works well with generic content, but adding new content and new language pairs, and maintaining the rule sets, is very time-consuming and expensive.

This problem was solved with statistical machine translation (SMT) around the late ‘80s and early ‘90s. SMT systems create statistical models by analyzing aligned source-target language data (training set) and use them to generate the translation. The advantage of SMT is the automatic learning process and the relatively easy adaptation by simply changing or extending the training set. The limitation of SMT is the training set itself: to create a usable engine, a large database of source-target segments is required. Additionally, SMT is not language independent in the sense that it is highly sensitive to the language combination and has a very hard time dealing with grammatically rich languages.

This is where neural machine translation (NMT) begins to shine: it can look at the sentence as a whole and create associations between phrases even over long distances within the sentence. The result is convincing fluency and improved grammatical correctness compared to SMT.

Statistical MT vs Neural MT

Both SMT and NMT work on a statistical basis and use source-target language segment pairs as their foundation. What’s the difference? What we typically call SMT is actually Phrase-Based Statistical Machine Translation (PBSMT), meaning SMT splits the source segments into phrases. During the training process, SMT creates a translation model and a language model. The translation model stores the different translations of the phrases, and the language model stores the probability of the sequence of phrases on the target side. During the translation phase, the decoder chooses the translation that gives the best result based on these two models. On a phrase or expression level, SMT (or PBSMT) performs well, but language fluency and grammar are weak.

‘Buch’ is aligned with ‘book’ twice and only once with ‘the’ and ‘a’ – the winner is the ‘Buch’-’book’ combination
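
To make the counting idea concrete, here is a minimal sketch in Python. The two toy segment pairs are invented for illustration; real SMT training runs proper alignment algorithms over millions of segment pairs.

    # Minimal sketch of the counting idea behind phrase alignment in SMT.
    # Toy data for illustration only; real systems use proper alignment
    # algorithms over millions of segment pairs.
    from collections import Counter

    pairs = [
        ("das Buch", "the book"),   # hypothetical German-English segment pairs
        ("ein Buch", "a book"),
    ]

    cooccurrences = Counter()
    for src, tgt in pairs:
        for s in src.split():
            for t in tgt.split():
                cooccurrences[(s, t)] += 1

    # 'Buch' co-occurs with 'book' twice, but with 'the' and 'a' only once each,
    # so 'book' wins as the most likely translation of 'Buch'.
    count, word = max((c, t) for (s, t), c in cooccurrences.items() if s == "Buch")
    print(word, count)  # book 2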

Neural Machine Translation, on the other hand, uses deep, neural network-based machine learning technology. Words or even word chunks are transformed into “word vectors”. This means that ‘dog’ does not just represent the characters d, o and g; it also carries contextual information from the training data. During the training phase, the NMT system tries to set the parameter weights of the neural network based on the reference values (source-target translations). Words appearing in similar contexts get similar word vectors. The result is a neural network that can process source segments and transfer them into target segments. During translation, NMT looks at the complete sentence, not just chunks (phrases). Thanks to the neural approach, it is not translating words, it is transferring information and context. This is why fluency is much better than in SMT, but terminology accuracy is sometimes not perfect.

Similar words are closer to each other in a vector space
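
The notion of “closeness” can be made concrete with cosine similarity. The three-dimensional vectors below are made up purely for illustration; real NMT systems learn vectors with hundreds of dimensions.

    # Minimal sketch: similar words have similar (close) vectors.
    # The 3-dimensional vectors are invented for illustration.
    import math

    vectors = {
        "dog":   [0.9, 0.1, 0.3],
        "puppy": [0.8, 0.2, 0.35],
        "car":   [0.1, 0.9, 0.7],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    print(cosine(vectors["dog"], vectors["puppy"]))  # close to 1.0 -> similar
    print(cosine(vectors["dog"], vectors["car"]))    # much lower -> dissimilar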

The Hardware

A popular GPU: NVIDIA Tesla

One big difference between SMT and NMT systems is that NMT requires Graphics Processing Units (GPUs), which were originally designed to help computers process graphics. These GPUs can calculate astonishingly fast – the latest cards have about 3,500 cores which can process data simultaneously. In fact, there is a small ongoing hardware revolution, and GPU-based computers are the foundation for almost all deep learning and machine learning solutions. One of the great perks of this revolution is that nowadays, NMT is available not only to large enterprises but also to small and medium-sized companies.

The Software

The main element, or ‘kernel’, of any NMT solution is the so-called NMT toolkit. There are a couple of NMT toolkits available, such as Nematus or OpenNMT, but the landscape is changing fast and more companies and universities are now developing their own toolkits. Since many of these toolkits are open-source solutions and hardware resources have become more affordable, the industry is experiencing an accelerating pace in toolkit R&D and NMT-related solutions.

On the other hand, as important as toolkits are, they are only one small part of a complex system, which contains frontend, backend, pre-processing and post-processing elements, parsers, filters, converters, and so on. These are all factors for anyone to consider before jumping into the development of an individual system. However, it is worth noting that the success of MT is highly community-driven and would not be where it is today without the open source community.

Corpora

A famous bilingual corpus: the Rosetta Stone

And here comes one of the most curious questions: what are the requirements of creating a well-performing NMT engine? Are there different rules compared to SMT systems? There are so many misunderstandings floating around on this topic that I think it’s a perfect opportunity to go into the details a little bit.

The main rules are nearly the same for both SMT and NMT systems. The main difference is that an NMT system is less sensitive to the factors below and performs better under the same circumstances. As I have explained in an earlier blog post about SMT engine quality, the quality of an engine should always be measured in relation to the particular translation project for which you would like to use it.

These are the factors which will eventually influence the performance of an NMT engine:

Volume

Regardless of what you may have heard, volume is still very important for NMT engines, just like in the SMT world. There is no explicit rule on entry volumes, but what we can safely say is that the bare minimum is about 100,000 segment pairs. There are Globalese users who are successfully using engines created from 150,000 segments, but to be honest, this is more of an exception and requires special circumstances (like the right language combination, see below). The optimum volume starts around 500,000 segment pairs (2 million words).

Quality

The quality of the training set plays an important role (garbage in, garbage out). Don’t add unqualified content to your engine just to increase the overall size of the training set.

Relevance

Applying the right engine to the right project is the first key to success. An engine trained on automotive content will perform well on car manual translation but will give disappointing results when you try to use it for web content for the food industry.

This raises the question of whether the content (TMs) should be mixed. If you have enough domain-specific content you shouldn’t necessarily add more out-of-domain data to your engine, but if you have an insufficient volume of domain-specific data then adding generic content (e.g. from public sources) may help improve the quality. We always encourage our Globalese users to try different engine combinations with different training sets.

Content type

Content generated by possibly non-native speakers on a chat forum, or marketing material requiring transcreation, is always a challenge for any MT system. On the other hand, technical documentation with controlled language is a very good candidate for NMT.

Language combination

Unfortunately, the language combination still has an impact on quality. The good news is that NMT has now opened up the option of using machine translation for languages like Japanese, Turkish, or Hungarian – languages which had nearly been excluded from the machine translation club because of the poor results provided by SMT. NMT has also helped solve the problem of long-distance dependencies for German, and the translation output is much smoother for almost all languages. But English combined with Romance languages still provides better results than, for example, English combined with Russian when using similar volumes and training set quality.

Expectations for the future

Neural Machine Translation is a big step ahead in quality, but it still isn’t magic. Nobody should expect that NMT will replace human translators anytime soon. What you CAN expect is that NMT can be a powerful productivity tool in the translation process and open new service options both for translation buyers and language service providers (see post-editing experience).

Training and Translation Time

When we started developing Globalese NMT, one of the most surprising experiences for us was that the training time was far shorter than we had previously anticipated. This is due to the amazingly fast evolution of hardware and software. With Globalese, we currently have an average training speed of 50,000 segments per hour – this means that an average engine with 1 million segments can be trained within one day. The situation is even better when looking at translation times: with Globalese, we currently see an average translation speed of between 100 and 400 segments per minute, depending on the corpus size, the segment length in the translation, and the training content.

Neural MT Post-editing Experience

One of the great changes neural machine translation brings along is that the overall language quality is much better compared to the SMT world. This does not mean that the translation is always perfect. As stated by one of our testers: if it is right, then it is astonishingly good quality. The ratio of good to poor translations naturally varies depending on the engine, but good engines can provide about 50% (or even more) really good target text.

Here are some examples showcasing what NMT post-editors can expect:

DE original:

Der Rechnungsführer sorgt für die gebotenen technischen Vorkehrungen zur wirksamen Anwendung des FWS und für dessen Überwachung.

Reference human translation:

The accounting officer shall ensure appropriate technical arrangements for an effective functioning of the EWS and its monitoring.

Globalese NMT:

The accounting officer shall ensure the necessary technical arrangements for the effective use of the EWS and for its monitoring.

As you can see, the output is fluent, and the differences are more or less just preferential ones. This highlights another issue: automated quality metrics like the BLEU score are not really sufficient to measure quality. The example above scores only around 50% in BLEU, but if we look at the quality, the rating should be much higher.
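
For the curious, a sentence-level BLEU check of the example above can be reproduced with an off-the-shelf scorer. This is only a sketch assuming the open-source sacrebleu package is installed; the exact figure depends on tokenization and smoothing settings.

    # Sentence-level BLEU for the EWS example above.
    # Sketch only: assumes the open-source sacrebleu package is installed;
    # the exact score depends on tokenization and smoothing settings.
    import sacrebleu

    reference = ("The accounting officer shall ensure appropriate technical "
                 "arrangements for an effective functioning of the EWS and its monitoring.")
    hypothesis = ("The accounting officer shall ensure the necessary technical "
                  "arrangements for the effective use of the EWS and for its monitoring.")

    score = sacrebleu.sentence_bleu(hypothesis, [reference])
    print(score.score)  # roughly mid-range, although the differences are preferential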

Let’s look at another example:

EN original:

The concept of production costs must be understood as being net of any aid but inclusive of a normal level of profit.

Reference human translation:

Die Produktionskosten verstehen sich ohne Beihilfe, aber einschließlich eines normalen Gewinns.

Globalese NMT:

Der Begriff der Produktionskosten bezieht sich auf die Höhe der Beihilfe, aber einschließlich eines normalen Gewinns.

What is interesting here is that the first part of the sentence sounds good, but if you look at the content, the translation is not good. This is an example of fluent output with a bad translation. This is a typical case in the NMT world, and it emphasizes the point that post-editors must examine NMT output differently than they did for SMT – in SMT, bad grammar was a clear indicator that the translation had to be post-edited.

Post-editors who used to proof and correct SMT output have to change the way they work and be more careful with proofreading, even if the NMT output looks alright at first glance. Services related to light post-editing will also change – instead of correcting serious grammatical errors without checking the correctness of the translation in order to create readable content, the focus will shift to sorting out serious mistranslations. The funny thing is that one of the main problems in the SMT world was weak fluency and grammar, and now, in the NMT world, we have good fluency and grammar as an issue…

And finally:

DE original:

Aufgrund des rechtlichen Status der Beteiligten ist ein solcher Vorgang mit einer Beauftragung des liefernden Standorts und einer Berechnung der erbrachten Leistung verbunden.

Reference human translation:

The legal status of the companies involved in these activities means that this process is closely connected with placing orders at the location that is to supply the goods/services and calculating which goods/services they supply.

Globalese NMT:

Due to the legal status of the person, it may lead to this process at the site of the plant, and also a calculation of the completed technician.

This example shows that unfortunately, NMT can produce bad translations too. As I mentioned before, the ratio of good to bad NMT output you will face in a project always depends on the circumstances. Another weak point of NMT is that it currently cannot handle terminology directly and acts as a kind of “black box”, with no option to directly influence the results.

Reference: https://bit.ly/2hBGsVh

How to Cut Localization Costs with Translation Technology

What is translation technology?

Translation technologies are sets of software tools designed to process translation materials and help linguists in their everyday tasks. They are divided into three main subcategories:

Machine Translation (MT)

Translation tasks are performed by machines (computers) either on the basis of statistical models (MT engines execute translation tasks on the basis of accumulated translated materials) or neural models (MT engines based on artificial intelligence). The computer-translated output is edited by professional human linguists through the process of post-editing, which may be more or less demanding depending on the language combination, the complexity of the materials, and the volume of content.

Computer-Aided Translation (CAT)

Computer-aided or computer-assisted translation is performed by professional human translators who use specific CAT or productivity software tools to optimize their process and increase their output.

Providing a perfect combination of technological advantages and human expertise, CAT software packages are the staple tools of the language industry. CAT tools are essentially advanced text editors that break the source content into segments, and split the screen into source and target fields which in and of itself makes the translator’s job easier. However, they also include an array of advanced features that enable the optimization of the translation/localization process, enhance the quality of output and save time and resources. For this reason, they are also called productivity tools.

Figure 1 – CAT software in use
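
As a rough illustration of the segmentation step, the snippet below splits a source text on sentence-final punctuation. This is a deliberately naive sketch; real CAT tools use configurable, language-aware segmentation rules (e.g. SRX) and handle abbreviations, lists and inline formatting tags.

    # Naive illustration of how a CAT tool breaks source text into segments.
    # Real tools use configurable, language-aware segmentation rules.
    import re

    source_text = ("The printer supports duplex printing. "
                   "Load paper into tray 2 before you start. "
                   "Do not open the cover during operation.")

    segments = re.split(r"(?<=[.!?])\s+", source_text.strip())

    for number, segment in enumerate(segments, start=1):
        print(number, segment)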

The most important features of productivity tools include:

  • Translation Asset Management
  • Advanced grammar and spell checkers
  • Advanced source and target text search
  • Concordance search.

Standard CAT tools include Across Language Server, SDL Trados Studio, SDL GroupShare, SDL Passolo, memoQ, Memsource Cloud, Wordfast, Translation Workspace and others, and they come both in the form of installed software and cloud solutions.

Quality Assurance (QA)

Quality assurance tools are used for various quality control checks during and after the translation/localization process. These tools use sophisticated algorithms to check spelling, consistency, general and project-specific style, code and layout integrity and more.

All productivity tools have built-in QA features, but there are also dedicated quality assurance tools such as Xbench and Verifika QA.
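
To give an idea of what such a check looks like, here is a minimal sketch of a target-consistency check. The segment pairs are hypothetical, and dedicated QA tools run many more checks (tags, numbers, terminology, style) than this.

    # Minimal sketch of one typical QA check: flagging identical source
    # segments that were translated in two different ways.
    from collections import defaultdict

    # Hypothetical bilingual segment pairs from a project
    segments = [
        ("Press the start button.", "Drücken Sie die Starttaste."),
        ("Close the cover.",        "Schließen Sie die Abdeckung."),
        ("Press the start button.", "Drücken Sie den Startknopf."),
    ]

    translations = defaultdict(set)
    for source, target in segments:
        translations[source].add(target)

    for source, targets in translations.items():
        if len(targets) > 1:
            print("Inconsistent translations for:", source)
            for target in sorted(targets):
                print("  -", target)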

What is a translation asset?

We all know that information has value and the same holds true for translated information. This is why previously translated/localized and edited textual elements in a specific language pair are regarded as translation assets in the language industry – once translated/localized and approved, textual elements do not need to be translated again and no additional resources are spent. These elements that are created, managed and used with productivity tools include:

Translation Memories (TM)

Translation memories are segmented databases containing previously translated elements in a specific language pair that can be reused and recycled in further projects. Productivity software calculates the percentage of similarity between the new content for translation/localization and the existing segments that were previously translated, edited and proofread, and the linguist team is able to access this information, use it and adapt it where necessary. This percentage has a direct impact on costs associated with a translation/localization project and the time required for project completion, as the matching segments cost less and require less time for processing.

Figure 2 – Translation memory in use (aligned sample from English to German)
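
As a rough illustration of how such a match percentage can be derived, the sketch below compares a new segment against a one-entry toy TM using a generic string-similarity ratio. This is an illustration only; each CAT tool applies its own, more sophisticated matching and weighting algorithm.

    # Rough illustration of fuzzy matching a new segment against a TM entry.
    # Each CAT tool uses its own, more sophisticated matching algorithm.
    from difflib import SequenceMatcher

    tm = {
        "Press the start button to begin printing.":
            "Drücken Sie die Starttaste, um den Druck zu starten.",
    }

    new_segment = "Press the stop button to begin printing."

    for source, target in tm.items():
        match = SequenceMatcher(None, new_segment, source).ratio() * 100
        print(f"{match:.0f}% match")        # a high fuzzy match
        print("Suggested target:", target)  # shown to the linguist for adaptation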

Translation memories are usually developed during the initial stages of a translation/localization project and they grow over time, progressively cutting localization costs and reducing the time required for project completion. However, for this very reason translation memories require regular maintenance, i.e. cleaning, as the original content may change and new terminology may be adopted.

In cases where an approved translation of a document exists but was produced without productivity tools, a translation memory can be created through the process of alignment:

Figure 3 – Document alignment example

Source and target documents are broken into segments that are subsequently matched to produce a TM file that can be used for a project.
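
As an illustration of the end of that process, the sketch below writes two already-paired segments into a tiny TMX file. It assumes the pairing has already been done; real alignment tools also handle segment splits, merges and mismatches.

    # Minimal sketch: turning already-paired segments into a small TMX file.
    # Assumes the alignment (pairing) step has been done.
    import xml.etree.ElementTree as ET

    aligned_pairs = [
        ("Open the file menu.", "Öffnen Sie das Dateimenü."),
        ("Save your changes.",  "Speichern Sie Ihre Änderungen."),
    ]

    tmx = ET.Element("tmx", version="1.4")
    header_attributes = {
        "srclang": "en", "adminlang": "en", "segtype": "sentence",
        "datatype": "plaintext", "o-tmf": "none",
        "creationtool": "alignment-sketch", "creationtoolversion": "0.1",
    }
    ET.SubElement(tmx, "header", header_attributes)
    body = ET.SubElement(tmx, "body")

    for english, german in aligned_pairs:
        tu = ET.SubElement(body, "tu")
        for lang, text in (("en", english), ("de", german)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = text

    ET.ElementTree(tmx).write("aligned.tmx", encoding="utf-8", xml_declaration=True)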

Termbases (TB)

Termbases or terminology bases (TB) are databases containing translations of specific terms in a specific language pair that provide assistance to the linguist team and assure lexical consistency throughout projects.

Termbases can be developed before the project, when specific terminology translations have been confirmed by all stakeholders (client, content producer, linguist), or during the project, as the terms are defined. They are particularly useful in the localization of medical devices, technical materials and software.

Glossaries

Unlike termbases, glossaries are monolingual documents explaining specific terminology in either source or target language. They provide further context to linguists and can be used for the development of terminology bases.

Benefits of Translation Technology

The primary purpose of all translation technology is the optimization and unification of the translation/localization process, as well as providing the technological infrastructure that facilitates work and full utilization of the expertise of professional human translators.

As we have already seen, translation memories, once developed, provide an immediate price reduction (it varies depending on the source materials and the amount of matching segments, but it may run up to 20% in the initial stages and only grows over time). However, the long-term, more subtle benefits of the smart integration of translation technology are the ones that really make a difference. They include:

Human Knowledge with Digital Infrastructure

While it has limited applications, machine translation still does not yield satisfactory results that can be used for commercial purposes. All machine translation needs to be post-edited by professional linguists, and this process can end up taking more time and resources rather than less.

On the other hand, translation performed in productivity tools is performed by people, translation assets are checked and approved by people, specific terminology is developed in collaboration with the client, content producers, marketing managers, subject-field experts and all other stakeholders, eventually providing a perfect combination of human expertise, feel and creativity, and technological solutions.

Time Saving

Professional human linguists are able to produce more in less time. Productivity software, TMs, TBs and glossaries all reduce the valuable hours of research and translation, and enable linguists to perform their tasks in a timely manner, with technological infrastructure acting as a stylistic and lexical guide.

This eventually enables the timely release of a localized product/service, with all the necessary quality checks performed.

Consistent Quality Control

The use of translation technology itself represents real-time quality control, as linguists rely on previously proofread and quality-checked elements, and maintain the established style, terminology and quality used in previous translations.

Brand Message Consistency

Translation assets enable the consistent use of a particular tone, style and intent of the brand in all translation/localization projects. This means that the specific features of a corporate message for a particular market/target group will remain intact even if the linguist team changes on future projects.

Code / Layout Integrity Preservation

Translation technology enables the preservation of features of the original content across translated/localized versions, regardless of whether the materials are intended for printing or online publishing.

Different solutions are developed for different purposes. For example, advanced cloud-based solutions for the localization of WordPress-powered websites enable full preservation of codes and other technical elements, save a lot of time and effort in advance and optimize complex multilingual localization projects.

Wrap-up

In the larger scheme of things, all these benefits eventually spell long-term cost and time savings and a leaner translation/localization process, thanks to their preventive functions that, in addition to direct price reduction, provide consistency, quality control and preservation of the integrity of source materials.

Reference: https://goo.gl/r5kmCJ

Adaptive MT – Trados 2017 New Feature


SDL Trados Studio 2017 includes a new generation of machine translation.

How does it work?

It allows users to adapt SDL Language Cloud machine translation to their own preferred style. There is a free plan that offers these features:

  • 400,000 machine translated characters per month.
  • Only access to the baseline engines, which means no industry-specific or vertically trained engines.
  • 5 termbases, or dictionaries, which can be used to “force” the engine to use the translation you want for certain words/phrases.
  • 1 Adaptive engine.
  • Translator – basically a feature similar to FreeTranslation.com, except that it is personalized with your engine(s) and your termbases.

How does it help?

  • Faster translation with smarter MT suggestions.
  • Easy to use and get started.
  • Completely secure – no data is collected or shared.
  • Unique MT output, personal to you.
  • Access directly within Studio 2017.
  • No translation memory needed to train the MT.
  • Automatic, real time learning – no pre-training required.

What are the available language pairs?

So far, Adaptive MT is available in these language pairs:

English <-> French
English <-> German
English <-> Italian
English <-> Spanish
English <-> Dutch
English <-> Portuguese
English <-> Japanese
English <-> Chinese

For reference: https://www.sdltrados.com/products/trados-studio/adaptivemt/

Heartsome TM Editor… is now FREE

Are you looking for a good translation memory editor, free of charge? Do you need to perform TM maintenance tasks, such as editing large TMX files, cleaning translation memories in batches, cleaning tags in translation memories, and running quality assurance on translation memories? Do you prefer a cross-platform application that works on Windows, Mac, and Linux? Heartsome TMX Editor can be a good choice.

Read More

Opening Trados 2007 TMW Translation Memories in Trados Studio or Other Tools

TMW is the native translation memory format of Trados 2007 and earlier versions. You may receive TMW translation memories (actually five files per translation memory: *.iix, *.mdf, *.mtf, *.mwf, and *.tmw) while you need to use Trados Studio or another tool. You cannot use TMW translation memories directly in SDL Trados Studio or another tool; however, there are a couple of methods that will enable you to make use of your legacy TMs.

Read More

Translating SDL Trados projects in memoQ

SDL Trados is one of the more popular translation tools besides memoQ. memoQ provides interoperability with SDL Trados 2007 and SDL Trados Studio 2009. Using memoQ you can accept jobs in SDL Trados TagEditor’s TTX format, SDL Trados Translator’s Workbench’s bilingual DOC/RTF format, or SDL Trados 2011 SDLXLIFF files and packages.

SDL Trados 2007 does not accept all segmentation and can crash on files segmented by other translation tools, so prior to opening a file it is advisable to pre-segment it using a demo or paid-up version of SDL Trados 2007. You can do this by opening Translator’s Workbench, creating or opening an empty translation memory, clicking Tools/Translate, enabling the Segment unknown sentences checkbox, and then running a pre-translation. If you don’t pre-segment the files, memoQ will import an empty file by default. You can click Add document as and select Import unsegmented content; however, be careful with this – we cannot guarantee that SDL Trados will accept a file translated this way.

Thousands of translators and companies are using memoQ to process SDL Trados jobs. Many language service providers use the memoQ server to add teamwork capabilities while translating SDL Trados jobs. This is a reliable solution.

Translation memories from SDL Trados can be imported in TMX format. If you use TMX 1.4b, and your translation memories come from a tagged document such as HTML or XML, memoQ will also perform a tag conversion which goes beyond what’s described in the standard. This tag conversion is specifically targeted at converting SDL Trados tags into memoQ tags.

memoQ, just like SDL Trados Studio 2009, supports XLIFF as a bilingual format, and the two systems are interoperable through XLIFF. You cannot export a memoQ file in SDL Trados Studio 2009 into the underlying format such as Microsoft Word, and you cannot export an SDLXLIFF file in memoQ into Microsoft Word either.

In a server scenario you cannot expect memoQ to connect to an SDL Trados server. Server technologies are, unfortunately, not interoperable. This is, however, a rare scenario and most translation companies are not expected to translate online.

memoQ-prepared projects can also be processed by SDL Trados 2007 and SDL Trados Studio 2009 through XLIFF.

Source: http://kilgray.com/faq/translating-sdl-trados-projects-memoq

Among others, memoQ is known for its interoperability with other CAT tools. Join this webinar and experience how easily you can translate SDL Trados®, Wordfast® and other translation package formats.

Watch these two videos to learn more about memoQ interoperability features:

http://vimeo.com/36840546

http://vimeo.com/75015202