Category: Learning Zone

Machine Translation: From the Cold War to Deep Learning

In the beginning

The story begins in 1933. Soviet scientist Peter Troyanskii presented “the machine for the selection and printing of words when translating from one language to another” to the Academy of Sciences of the USSR. The invention was super simple — it had cards in four different languages, a typewriter, and an old-school film camera.

The operator took the first word from the text, found a corresponding card, took a photo, and typed its morphological characteristics (noun, plural, genitive) on the typewriter. The typewriter’s keys encoded one of the features. The tape and the camera’s film were used simultaneously, making a set of frames with words and their morphology.

Despite all this, as often happened in the USSR, the invention was considered “useless”. Troyanskii died of stenocardia after trying for 20 years to finish his invention. No one in the world knew about the machine until two Soviet scientists found his patents in 1956.

That was at the beginning of the Cold War. On January 7th, 1954, at IBM headquarters in New York, the Georgetown–IBM experiment took place. The IBM 701 computer automatically translated 60 Russian sentences into English for the first time in history.

However, the triumphant headlines hid one little detail. No one mentioned that the translated examples had been carefully selected and tested to exclude any ambiguity. For everyday use, that system was no better than a pocket phrasebook. Nevertheless, it launched a sort of arms race: Canada, Germany, France, and especially Japan all joined the race for machine translation.

The race for machine translation

The vain struggles to improve machine translation lasted for forty years. In 1966, the US ALPAC committee, in its famous report, called machine translation expensive, inaccurate, and unpromising. They instead recommended focusing on dictionary development, which eliminated US researchers from the race for almost a decade.

Even so, it was precisely these scientists’ attempts, research, and developments that created the basis for modern Natural Language Processing. All of today’s search engines, spam filters, and personal assistants appeared thanks to a bunch of countries spying on each other.

Rule-based machine translation (RBMT)

The first ideas surrounding rule-based machine translation appeared in the 70s. The scientists peered over the interpreters’ work, trying to compel the tremendously sluggish computers to repeat those actions. These systems consisted of:

  • Bilingual dictionary (RU -> EN)
  • A set of linguistic rules for each language (For example, nouns ending in certain suffixes such as -heit, -keit, -ung are feminine)

That’s it. If needed, systems could be supplemented with hacks, such as lists of names, spelling correctors, and transliterators.

PROMT and Systran are the most famous examples of RBMT systems. Just take a look at Aliexpress to feel the soft breath of this golden age.

But even they had some nuances and subspecies.

Direct Machine Translation

This is the most straightforward type of machine translation. It divides the text into words, translates them, slightly corrects the morphology, and harmonizes syntax to make the whole thing sound right, more or less. When the sun goes down, trained linguists write the rules for each word.

The output is some kind of translation. Usually, it’s quite crappy. It seems the linguists wasted their time for nothing.

Modern systems do not use this approach at all, and modern linguists are grateful.

Transfer-based Machine Translation

In contrast to direct translation, we prepare first by determining the grammatical structure of the sentence, as we were taught at school. Then we manipulate whole constructions, not individual words. This helps to get a quite decent conversion of word order in translation. In theory.

In practice, it still resulted in verbatim translation and exhausted linguists. On the one hand, it brought simplified general grammar rules. But on the other, it became more complicated because of the increased number of word constructions in comparison with single words.

Interlingual Machine Translation

In this method, the source text is transformed into an intermediate representation that is unified for all the world’s languages (interlingua). It’s the same interlingua Descartes dreamed of: a meta-language that follows universal rules and turns translation into a simple “back and forth” task. Next, the interlingua would be converted into any target language, and there was the singularity!

Because of the conversion, interlingual translation is often confused with transfer-based systems. The difference is that the linguistic rules are specific to each individual language and the interlingua, not to language pairs. This means we can add a third language to an interlingua system and translate between all three. We can’t do this with transfer-based systems.

It looks perfect, but in real life it’s not. It was extremely hard to create such a universal interlingua — many scientists worked on it their whole lives. They did not succeed, but thanks to them we now have morphological, syntactic, and even semantic levels of representation. Yet Meaning-text theory alone costs a fortune!

The idea of an intermediate language will be back. Let’s wait awhile.

As you can see, all RBMT systems are dumb and terrifying, which is why they are rarely used except for specific cases (like weather report translation, and so on). Among the advantages often mentioned for RBMT are its morphological accuracy (it doesn’t confuse words), reproducibility of results (all translations come out the same), and the ability to tune it to a subject area (to teach it the terms used by economists or programmers, for example).

Even if anyone were to succeed in creating an ideal RBMT, and linguists enhanced it with all the spelling rules, there would always be some exceptions: all the irregular verbs in English, separable prefixes in German, suffixes in Russian, and situations when people just say it differently. Any attempt to take into account all the nuances would waste millions of man hours.

And don’t forget about homonyms. The same word can have a different meaning in a different context, which leads to a variety of translations. How many meanings can you catch here: “I saw a man on a hill with a telescope”?

Languages did not develop based on a fixed set of rules — a fact which linguists love. They were much more influenced by the history of invasions in the past three hundred years. How could you explain that to a machine?

Forty years of the Cold War didn’t help in finding any distinct solution. RBMT was dead.

Example-based Machine Translation (EBMT)

Japan was especially interested in fighting for machine translation. There was no Cold War, but there were reasons: very few people in the country knew English. It promised to be quite an issue at the upcoming globalization party. So the Japanese were extremely motivated to find a working method of machine translation.

Rule-based English-Japanese translation is extremely complicated. The language structure is completely different, and almost all words have to be rearranged and new ones added. In 1984, Makoto Nagao from Kyoto University came up with the idea of using ready-made phrases instead of repeated translation.

Let’s imagine that we have to translate a simple sentence — “I’m going to the cinema.” And let’s say we’ve already translated another similar sentence — “I’m going to the theater” — and we can find the word “cinema” in the dictionary.

All we need is to figure out the difference between the two sentences, translate the missing word, and then not screw it up. The more examples we have, the better the translation.

I build phrases in unfamiliar languages exactly the same way!

EBMT showed scientists from all over the world the way forward: it turns out you can simply feed the machine existing translations and not spend years forming rules and exceptions. Not a revolution yet, but clearly the first step towards it. The revolutionary invention of statistical translation would happen in just five years.

Statistical Machine Translation (SMT)

In early 1990, at the IBM Research Center, a machine translation system was shown for the first time that knew nothing about rules or linguistics at all. It analyzed similar texts in two languages and tried to understand the patterns.

The idea was simple yet beautiful. An identical sentence in two languages was split into words, which were matched afterwards. This operation was repeated about 500 million times to count, for example, how many times the word “Das Haus” was translated as “house” vs “building” vs “construction”, and so on.

If most of the time the source word was translated as “house”, the machine used this. Note that we did not set any rules nor use any dictionaries — all conclusions were drawn by the machine, guided by statistics and the logic that “if people translate that way, so will I.” And so statistical translation was born.

The method was much more efficient and accurate than all the previous ones. And no linguists were needed. The more texts we used, the better translation we got.

There was still one question left: how would the machine correlate the word “Das Haus” and the word “building” — and how would we know these were the right translations?

The answer was that we wouldn’t know. At the start, the machine assumed that the word “Das Haus” correlated equally with every word in the translated sentence. Next, when “Das Haus” appeared in other sentences, the number of correlations with “house” would increase. That’s the “word alignment algorithm,” a typical task for university-level machine learning.

The machine needed millions and millions of sentences in two languages to collect the relevant statistics for each word. How did we get them? Well, we decided to take the abstracts of the European Parliament and the United Nations Security Council meetings — they were available in the languages of all member countries and are now available for download as the UN Corpora and the Europarl Corpora.

Word-based SMT

In the beginning, statistical translation systems worked by splitting the sentence into words, since this approach was straightforward and logical. IBM’s first statistical translation model was called Model 1. Quite elegant, right? Guess what they called the second one?

Model 1: “the bag of words”

Model 1 used a classical approach — splitting the text into words and counting statistics. Word order wasn’t taken into account. The only trick was translating one word into multiple words. For example, “Der Staubsauger” could turn into “Vacuum Cleaner,” but that didn’t mean the reverse would happen.

Here are some simple implementations in Python: shawa/IBM-Model-1.
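
To make this concrete, here is a minimal toy sketch of the Model 1 training loop (expectation-maximization over a three-sentence corpus). It is illustrative only, not the code from the repository above, and it skips details such as the NULL word:

```python
# Minimal IBM Model 1 sketch (illustrative only): estimate word-translation
# probabilities t(e|f) from sentence pairs via expectation-maximization.
from collections import defaultdict

corpus = [
    ("das haus", "the house"),
    ("das buch", "the book"),
    ("ein buch", "a book"),
]
pairs = [(f.split(), e.split()) for f, e in corpus]

f_vocab = {f for fs, _ in pairs for f in fs}
e_vocab = {e for _, es in pairs for e in es}
t = {(e, f): 1.0 / len(e_vocab) for f in f_vocab for e in e_vocab}  # uniform start

for _ in range(10):                       # a few EM iterations
    count = defaultdict(float)            # expected counts c(e, f)
    total = defaultdict(float)            # expected counts c(f)
    for fs, es in pairs:
        for e in es:
            norm = sum(t[(e, f)] for f in fs)
            for f in fs:
                delta = t[(e, f)] / norm  # soft alignment weight
                count[(e, f)] += delta
                total[f] += delta
    t = {(e, f): count[(e, f)] / total[f] for (e, f) in count}

print(max(e_vocab, key=lambda e: t.get((e, "haus"), 0.0)))  # -> 'house'
```

After a few iterations the probabilities settle, and “haus” pairs up with “house” rather than with “the”, which is exactly the kind of correlation described above.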

Model 2: considering the word order in sentences

Model 1’s lack of knowledge about word order became a problem, because word order is very important in some cases.

Model 2 dealt with that: it memorized the usual place a word takes in the output sentence and shuffled the words for a more natural sound at the intermediate step. Things got better, but they were still kind of crappy.

Model 3: extra fertility

New words appeared in the translation quite often, such as articles in German or using “do” when negating in English. “Ich will keine Persimonen” → “I do not want Persimmons.” To deal with it, two more steps were added to Model 3.

  • NULL token insertion, when the machine considers a new word necessary
  • Choosing the right grammatical particle or word for each token-word alignment

Model 4: word alignment

Model 2 considered word alignment, but knew nothing about reordering. For example, adjectives would often switch places with the noun, and no matter how well the order was memorized, it wouldn’t make the output better. Therefore, Model 4 took into account the so-called “relative order” — the model learned whether two words always switched places.

Model 5: bugfixes

Nothing new here. Model 5 got some more learning parameters and fixed the issue of conflicting word positions.

Despite their revolutionary nature, word-based systems still failed to deal with cases, gender, and homonymy. Every single word was translated in a single “true” way, as far as the machine was concerned. Such systems are not used anymore, as they’ve been replaced by the more advanced phrase-based methods.

Phrase-based SMT

This method is based on all the word-based translation principles: statistics, reordering, and lexical hacks. However, for learning, it split the text not only into words but also into phrases. These were n-grams, to be precise: contiguous sequences of n words in a row.

Thus, the machine learned to translate steady combinations of words, which noticeably improved accuracy.
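
As a toy illustration of those “steady combinations” (a sketch only, not a real phrase-extraction pipeline, which works from word alignments), we can count how often source n-grams co-occur with target n-grams in a couple of aligned sentences:

```python
# Toy sketch: extract n-grams (n = 1..3) and count how often each source
# n-gram co-occurs with each target n-gram in aligned sentence pairs.
from collections import Counter, defaultdict

def ngrams(tokens, max_n=3):
    return [tuple(tokens[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(tokens) - n + 1)]

corpus = [
    ("ich gehe ins kino", "i am going to the cinema"),
    ("ich gehe ins theater", "i am going to the theater"),
]

cooc = defaultdict(Counter)
for src, tgt in corpus:
    for s in ngrams(src.split()):
        for tg in ngrams(tgt.split()):
            cooc[s][tg] += 1

# n-grams seen in both sentence pairs get the highest counts.
print(cooc[("ich", "gehe")].most_common(5))
```

Target n-grams such as “i am going” show up with “ich gehe” in both pairs, which is exactly the kind of statistic a phrase table accumulates.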

The trick was that the phrases were not always simple syntax constructions, and the quality of the translation dropped significantly if anyone aware of linguistics and sentence structure interfered. Frederick Jelinek, a pioneer of computational linguistics, once joked about it: “Every time I fire a linguist, the performance of the speech recognizer goes up.”

Besides improving accuracy, phrase-based translation provided more options in choosing bilingual texts for learning. For word-based translation, an exact match of the sources was critical, which excluded any literary or free translation. Phrase-based translation had no problem learning from them. To improve the translation, researchers even started to parse news websites in different languages for that purpose.

Starting in 2006, everyone began to use this approach. Google Translate, Yandex, Bing, and other high-profile online translators worked as phrase-based right up until 2016. Each of you can probably recall the moments when Google either translated the sentence flawlessly or resulted in complete nonsense, right? The nonsense came from phrase-based features.

The good old rule-based approach consistently provided a predictable though terrible result. The statistical methods were surprising and puzzling. Google Translate turns “three hundred” into “300” without any hesitation. That’s called a statistical anomaly.

Phrase-based translation became so popular that when you hear “statistical machine translation,” it is what is actually meant. Up until 2016, all studies lauded phrase-based translation as the state of the art. Back then, no one even thought that Google was already stoking its fires, getting ready to change our whole image of machine translation.

Syntax-based SMT

This method should also be mentioned, briefly. Many years before the emergence of neural networks, syntax-based translation was considered “the future of translation,” but the idea did not take off.

The proponents of syntax-based translation believed it was possible to merge it with the rule-based method. It’s necessary to do quite a precise syntax analysis of the sentence — to determine the subject, the predicate, and other parts of the sentence, and then to build a sentence tree. Using it, the machine learns to convert syntactic units between languages and translates the rest by words or phrases. That would have solved the word alignment issue once and for all.

The problem is that syntactic parsing works terribly, despite the fact that we considered it solved a while ago (since we have ready-made libraries for many languages). I tried to use syntactic trees for tasks a bit more complicated than parsing out the subject and the predicate, and every single time I gave up and used another method.

Let me know in the comments if you succeed using it at least once.

Neural Machine Translation (NMT)

A quite amusing paper on using neural networks in machine translation was published in 2014. The Internet didn’t notice it at all, except Google — they took out their shovels and started to dig. Two years later, in November 2016, Google made a game-changing announcement.

The idea was close to transferring style between photos. Remember apps like Prisma, which enhanced pictures in some famous artist’s style? There was no magic. The neural network was taught to recognize the artist’s paintings. Next, the last layers containing the network’s decision were removed. The resulting stylized picture was just the intermediate image that the network produced. That’s the network’s fantasy, and we consider it beautiful.

If we can transfer style to a photo, what if we try to impose another language on a source text? The text would be that precise “artist’s style,” and we would try to transfer it while keeping the essence of the image (in other words, the essence of the text).

Imagine I’m trying to describe my dog — average size, sharp nose, short tail, always barks. If I gave you this set of the dog’s features, and if the description was precise, you could draw it, even though you have never seen it.

Now, imagine the source text is a set of specific features. Basically, it means that you encode it, and then let another neural network decode it back into text, but in another language. The decoder only knows its own language. It has no idea about the features’ origin, but it can express them in, for example, Spanish. Continuing the analogy, it doesn’t matter how you draw the dog — with crayons, watercolor or your finger. You paint it as you can.

Once again — one neural network can only encode the sentence into a specific set of features, and another one can only decode them back into text. Neither knows anything about the other, and each of them knows only its own language. Recall something? Interlingua is back. Ta-da.

The question is, how do we find those features? It’s obvious when we’re talking about the dog, but how do we deal with text? Thirty years ago scientists already tried to create a universal language code, and it ended in total failure.

Nevertheless, we have deep learning now. And that’s its essential task! The primary distinction between deep learning and classic neural networks lies precisely in the ability to search for those specific features, without any idea of their nature. If the neural network is big enough, and there are a couple of thousand video cards at hand, it’s possible to find those features in the text as well.

Theoretically, we can pass the features obtained from the neural networks to linguists, so that they can open brave new horizons for themselves.

The question is, what type of neural network should be used for encoding and decoding? Convolutional Neural Networks (CNN) fit perfectly for pictures since they operate with independent blocks of pixels.

But there are no independent blocks in text — every word depends on its surroundings. Text, speech, and music are always sequential. So recurrent neural networks (RNN) would be the best choice to handle them, since they remember the previous result — the prior word, in our case.

Now RNNs are used everywhere — Siri’s speech recognition (parsing a sequence of sounds, where the next depends on the previous), keyboard suggestions (memorize the prior word, guess the next), music generation, and even chatbots.
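
To show what “remembering the previous result” looks like in code, here is a bare-bones recurrent step in NumPy (random, untrained weights, purely illustrative):

```python
# Minimal recurrent step (illustrative only): the hidden state h carries
# information about everything seen so far, so each word is processed
# in the context of the words before it.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["i", "am", "going", "to", "the", "cinema"]
emb_dim, hid_dim = 8, 16

E   = rng.normal(size=(len(vocab), emb_dim))   # word embeddings
W_x = rng.normal(size=(hid_dim, emb_dim))      # input-to-hidden weights
W_h = rng.normal(size=(hid_dim, hid_dim))      # hidden-to-hidden weights
b   = np.zeros(hid_dim)

h = np.zeros(hid_dim)                          # initial hidden state
for word in "i am going to the cinema".split():
    x = E[vocab.index(word)]
    h = np.tanh(W_x @ x + W_h @ h + b)         # new state depends on the prior one

print(h.shape)   # (16,) - a running summary of the whole sequence so far
```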

In two years, neural networks surpassed everything that had appeared in the past 20 years of translation. Neural translation contains 50% fewer word order mistakes, 17% fewer lexical mistakes, and 19% fewer grammar mistakes. The neural networks even learned to harmonize gender and case in different languages. And no one taught them to do so.

The most noticeable improvements occurred in fields where direct translation was never used. Statistical machine translation methods always worked using English as the key source. Thus, if you translated from Russian to German, the machine first translated the text into English and then from English into German, which led to a double loss.

Neural translation doesn’t need that — only a decoder is required so it can work. That was the first time that direct translation between languages with no common dictionary became possible.

The conclusion and the future

Everyone’s still excited about the idea of a “Babel fish” — instant speech translation. Google has made steps towards it with its Pixel Buds, but in fact, it’s still not what we were dreaming of. Instant speech translation is different from the usual kind of translation: you need to know when to start translating and when to shut up and listen. I haven’t seen suitable approaches to solve this yet. Unless, maybe, Skype…

And here’s one more empty area: all the learning is limited to sets of parallel text blocks. The deepest neural networks still learn from parallel texts. We can’t teach a neural network without providing it with a source. People, instead, can complement their lexicon by reading books or articles, even without translating them into their native language.

If people can do it, the neural network can do it too, in theory. I found only one prototype attempting to incite the network, which knows one language, to read the texts in another language in order to gain experience. I’d try it myself, but I’m silly. Ok, that’s it.

Reference: https://bit.ly/2HCmT6v

The GDPR for translators: all you need to know (and do!)

1. What is the General Data Protection Regulation?

The General Data Protection Regulation, in short GDPR, is a European regulatory framework that is designed to harmonize data privacy laws across Europe. Preparation of the GDPR took four years and the regulation was finally approved by the EU Parliament on 14 April 2016. Afterwards there was a striking silence all over Europe, but with the enforcement date set for 25 May 2018, companies have worked increasingly hard in recent months to make sure that they uphold the requirements of the regulation.

The GDPR replaces the Data Protection Directive 95/46/EC. It was designed to protect and empower the data privacy of all European citizens and to reshape the way organizations approach data privacy. While the term GDPR is used all over the world, many countries have their own designation. For instance, in the Netherlands the term is translated as ‘Algemene Verordening Gegevensbescherming’ (AVG).
More information about the GDPR can be found on the special portal created by the European Union.

2. To whom does the GDPR apply?

The GDPR applies to the processing of personal data by controllers and processors in the EU. It does not matter whether the processing takes place in the EU or not. It is, however, even more extensive as it also applies to the processing of personal data of data subjects in the EU by a controller or processor who is not established in the EU when they offer goods or services to EU citizens (irrespective of whether payment is required). Finally, the GDPR applies to the monitoring of behaviour that takes place within the EU as well. If a business outside the EU processes the data of EU citizens, it is required to appoint a representative in the EU.
So in short, the GDPR applies to every party that

  • processes personal data from EU citizens (whether they process these data in the EU or not),
  • monitors behaviour that takes place in the EU.

In fact, this means that companies inside and outside the EU that offer or sell goods or services to EU citizens (paid or not) should apply the principles.

3. Controllers, processors, data subjects?

Yes, it is confusing, but let’s keep it short:

  • Controllers are parties that control the data.
  • Processors are parties that process the data, such as third parties that process the data for … ehm controllers.
  • Data subjects are parties whose data are controlled and processed by … you guessed it.

A controller is the entity that determines the purposes, conditions and means of processing personal data. The processor processes the personal data on behalf of the controller.

4. Sounds like a business horror. Can I opt out?

Not in any easy way. Oh wait, you can by moving outside the EU, getting rid of your European clients and clients with translation jobs about their European clients, and only focus on everything that is not EU related. But staying safe is much easier for the future, although it offers considerable hassle for the time being.

5. What happens if I do not take it seriously?

Of course the European Union thought about that before you did and they included a generous clause: if you breach the GDPR, you can be fined up to 4% of your annual global turnover or €20 Million (whichever is greater). This is the maximum fine that can be imposed for the most serious infringements, like insufficient customer consent to process data or violating the core of Privacy by Design concepts.
There is a tiered approach to fines. For instance a company can be fined 2% if it does not have its records in order (article 28), if it does not notify the supervising authority and data subject (remember?) about a breach or if it does not conduct an impact assessment.

6. So adhering to the GDPR is a no-brainer?

Yes indeed. Although you certainly should use your brains. Until now it was easy to impress all parties involved by using long and unreadable contracts, but the GDPR finally puts an end to that. Companies will no longer be able to use long unintelligible terms and conditions full of legalese. They need to ask consent for processing data and the request for consent must be given in an understandable and accessible form. Consent must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form, using clear and plain language. Apart from that, all data subjects (just to check) should be able to withdraw their consent as easily as they gave it.

7. So I need to involve all people for whom I process data?

Yes. You need to ask their consent, but you also need to give them access to the data you hold about them. EU citizens whose data you collect or process have a few rights:

  • Right to access
    People can ask you to confirm whether or not personal data concerning them is being processed. They can also ask where these data are processed and for what purpose. If someone makes use of their right to access, you need to provide a copy of the personal data in an electronic format. And yes, that should happen free of charge.
  • Right to be Forgotten
    The right to be forgotten entitles the people you collect data from to require you to erase their personal data, cease further dissemination of the data, and potentially have third parties halt processing of the data. There are a few conditions however: article 17 states that the data should no longer be relevant to the original purposes for processing, or a data subject should have withdrawn his or her consent.
  • Data Portability
    The GDPR introduces the concept of data portability. This grants persons a right to receive the personal data they have previously provided about themselves in a ‘commonly us[able] and machine readable format’. EU citizens can then transmit that data to another controller.

8. What are these personal data you are talking about?

The GDPR pivots around the concept of ‘personal data’. This is any information related to a natural person that can be used to directly or indirectly identify the person. You might think about a person’s name, photo, email address, bank details, posts on social networking websites, medical information, or a computer IP address.

9. How does this affect my translation business?

As a freelance translator or translation agency you are basically a processor. (And if you are an EU citizen you are a data subject as well, but let’s keep that out of the scope of this discussion.)
The actual impact of the GDPR on your translation business differs greatly. If you are a technical translator or literary translator, chances are that you do not process the personal data of the so-called ‘data subjects’. In that case compliance should not be a heavy burden, although you should, of course, make sure that everything is in order.
However, if you are a medical translator for instance, translating personal health records, or if you are a sworn translator, translating certificates and other personal stuff, you have somewhat more work to do.

10. Great, you made it perfectly clear. How to proceed?

The best approach to ensure compliance with the GDPR is to follow a checklist. You might choose this 5-step guide, for instance. However, if that sounds too easy, you might use this 10-page document with complex language to show off your GDPR skills. You will find a short summary below:

1. Get insight into your data
Understand which kind of personal data you own and look at where the data comes from, how you collected it and how you plan to use it.

2. Ask explicit consent to collect data
People need to give free, specific, informed and unambiguous consent. If someone does not respond, does not opt in themselves or is inactive, you should not consider them as having given consent. This also means you should re-consider the ways you ask for consent: chances are that your current methods to get the necessary consent are not GDPR compliant.

3. Communicate how and why you collect data
Tell your clients how you collect data, why you do that and how long you plan to retain the data. Do not forget to include which personal data you collect, how you do that, for which purpose you process them, which rights the person in question has, in what way they can complain and what process you use to send their data to third parties.
NOTE: This needs thorough consideration if you make use of the cloud (e.g. Dropbox or Google Drive) to share translations with clients or if you use cloud-based CAT tools for translation.

4. Show that you are GDPR compliant
The GDPR requires you to show that you are compliant. So identify the legal basis for data processing, document your procedures and update your privacy policy.
NOTE: If you are outsourcing translation jobs to other translators, you should sign a data processing agreement (DPA) with them.

5. Make sure you have a system to remove personal data
Imagine what happens when someone makes use of their right to access or to be forgotten. If you do not have their data readily available, you will waste time finding it and risk still not being compliant. So make sure you have an efficient system to fulfil the rights of all those people whose data you are processing.

So, the GDPR is no joke

It is definitely not funny for any of us, but we need to comply. To be compliant or not to be compliant: that is the question. The easiest way to do that is the required Privacy Impact Assessment, so you know which data you collect or process and what the weak links and bottlenecks are. Following an easy guide will then help to establish the necessary controls. Opting out is not an option, but making sure your data subjects (still know what they are?) are opting in is.

Reference: https://bit.ly/2L3GVZL

A Gentle Introduction to Neural Machine Translation

One of the earliest goals for computers was the automatic translation of text from one language to another.

Automatic or machine translation is perhaps one of the most challenging artificial intelligence tasks given the fluidity of human language. Classically, rule-based systems were used for this task, which were replaced in the 1990s with statistical methods. More recently, deep neural network models achieve state-of-the-art results in a field that is aptly named neural machine translation.

In this post, you will discover the challenge of machine translation and the effectiveness of neural machine translation models.

After reading this post, you will know:

  • Machine translation is challenging given the inherent ambiguity and flexibility of human language.
  • Statistical machine translation replaces classical rule-based systems with models that learn to translate from examples.
  • Neural machine translation models fit a single model rather than a pipeline of fine-tuned models and currently achieve state-of-the-art results.

Let’s get started.

What is Machine Translation?

Machine translation is the task of automatically converting source text in one language to text in another language.

In a machine translation task, the input already consists of a sequence of symbols in some language, and the computer program must convert this into a sequence of symbols in another language.

— Page 98, Deep Learning, 2016.

Given a sequence of text in a source language, there is no one single best translation of that text to another language. This is because of the natural ambiguity and flexibility of human language. This makes the challenge of automatic machine translation difficult, perhaps one of the most difficult in artificial intelligence:

The fact is that accurate translation requires background knowledge in order to resolve ambiguity and establish the content of the sentence.

— Page 21, Artificial Intelligence, A Modern Approach, 3rd Edition, 2009.

Classical machine translation methods often involve rules for converting text in the source language to the target language. The rules are often developed by linguists and may operate at the lexical, syntactic, or semantic level. This focus on rules gives the name to this area of study: Rule-based Machine Translation, or RBMT.

RBMT is characterized with the explicit use and manual creation of linguistically informed rules and representations.

— Page 133, Handbook of Natural Language Processing and Machine Translation, 2011.

The key limitations of the classical machine translation approaches are both the expertise required to develop the rules, and the vast number of rules and exceptions required.

What is Statistical Machine Translation?

Statistical machine translation, or SMT for short, is the use of statistical models that learn to translate text from a source language to a target language given a large corpus of examples.

This task of using a statistical model can be stated formally as follows:

Given a sentence T in the target language, we seek the sentence S from which the translator produced T. We know that our chance of error is minimized by choosing that sentence S that is most probable given T. Thus, we wish to choose S so as to maximize Pr(S|T).

— A Statistical Approach to Machine Translation, 1990.

This formal specification makes the maximizing of the probability of the output sequence given the input sequence of text explicit. It also makes explicit the notion that there is a suite of candidate translations and the need for a search process, or decoder, to select the single most likely translation from the model’s output probability distribution.
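
To make that concrete, here is a toy sketch of the ranking step with made-up log-probabilities and hand-picked candidates. By Bayes’ rule, maximizing Pr(S|T) amounts to maximizing Pr(T|S) multiplied by Pr(S), that is, a translation model score combined with a language model score:

```python
# Toy noisy-channel ranking (hypothetical numbers): given an observed sentence T,
# rank candidate sentences S by log P(T|S) + log P(S) and keep the best one.
candidates = {
    # candidate S            log P(T|S)  log P(S)
    "the house is small":   (-1.2,       -4.0),
    "the home is small":    (-1.0,       -5.5),
    "house the small is":   (-0.9,       -9.0),  # close translation, poor language model score
}

def score(logp_t_given_s, logp_s):
    return logp_t_given_s + logp_s     # log P(T|S) + log P(S)

best = max(candidates, key=lambda s: score(*candidates[s]))
print(best)   # -> "the house is small"
```

A real decoder searches an enormous space of candidate sentences rather than three hand-written strings, but the ranking principle is the same.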

Given a text in the source language, what is the most probable translation in the target language? […] how should one construct a statistical model that assigns high probabilities to “good” translations and low probabilities to “bad” translations?

— Page xiii, Syntax-based Statistical Machine Translation, 2017.

The approach is data-driven, requiring only a corpus of examples with both source and target language text. This means linguists are no longer required to specify the rules of translation.

This approach does not need a complex ontology of interlingua concepts, nor does it need handcrafted grammars of the source and target languages, nor a hand-labeled treebank. All it needs is data—sample translations from which a translation model can be learned.

— Page 909, Artificial Intelligence, A Modern Approach, 3rd Edition, 2009.

Quickly, the statistical approach to machine translation outperformed the classical rule-based methods to become the de-facto standard set of techniques.

Since the inception of the field at the end of the 1980s, the most popular models for statistical machine translation […] have been sequence-based. In these models, the basic units of translation are words or sequences of words […] These kinds of models are simple and effective, and they work well for many language pairs

— Syntax-based Statistical Machine Translation, 2017.

The most widely used techniques were phrase-based and focused on translating sub-sequences of the source text piecewise.

Statistical Machine Translation (SMT) has been the dominant translation paradigm for decades. Practical implementations of SMT are generally phrase-based systems (PBMT) which translate sequences of words or phrases where the lengths may differ

— Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016.

Although effective, statistical machine translation methods suffered from a narrow focus on the phrases being translated, losing the broader nature of the target text. The hard focus on data-driven approaches also meant that methods may have ignored important syntax distinctions known by linguists. Finally, the statistical approaches required careful tuning of each module in the translation pipeline.

What is Neural Machine Translation?

Neural machine translation, or NMT for short, is the use of neural network models to learn a statistical model for machine translation.

The key benefit of the approach is that a single system can be trained directly on source and target text, no longer requiring the pipeline of specialized systems used in statistical machine translation.

Unlike the traditional phrase-based translation system which consists of many small sub-components that are tuned separately, neural machine translation attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation.

— Neural Machine Translation by Jointly Learning to Align and Translate, 2014.

As such, neural machine translation systems are said to be end-to-end systems as only one model is required for the translation.

The strength of NMT lies in its ability to learn directly, in an end-to-end fashion, the mapping from input text to associated output text.

— Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016.

Encoder-Decoder Model

Multilayer Perceptron neural network models can be used for machine translation, although the models are limited by a fixed-length input sequence where the output must be the same length.

These early models have been greatly improved upon recently through the use of recurrent neural networks organized into an encoder-decoder architecture that allow for variable length input and output sequences.

An encoder neural network reads and encodes a source sentence into a fixed-length vector. A decoder then outputs a translation from the encoded vector. The whole encoder–decoder system, which consists of the encoder and the decoder for a language pair, is jointly trained to maximize the probability of a correct translation given a source sentence.

— Neural Machine Translation by Jointly Learning to Align and Translate, 2014.

Key to the encoder-decoder architecture is the ability of the model to encode the source text into an internal fixed-length representation called the context vector. Interestingly, once encoded, different decoding systems could be used, in principle, to translate the context into different languages.

… one model first reads the input sequence and emits a data structure that summarizes the input sequence. We call this summary the “context” C. […] A second model, usually an RNN, then reads the context C and generates a sentence in the target language.

— Page 461, Deep Learning, 2016.
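
Below is a schematic NumPy sketch of that flow (untrained random weights and greedy decoding, intended only to show how the context vector C connects the two networks, not to produce a real translation):

```python
# Schematic encoder-decoder (illustrative only): an RNN encoder compresses the
# source sentence into one fixed-length context vector C, and an RNN decoder
# generates target words conditioned on C and its own previous state.
import numpy as np

rng = np.random.default_rng(0)
src_vocab = ["das", "haus", "ist", "klein"]
tgt_vocab = ["<s>", "the", "house", "is", "small", "</s>"]
d = 16

E_src = rng.normal(size=(len(src_vocab), d))
E_tgt = rng.normal(size=(len(tgt_vocab), d))
W_enc = rng.normal(size=(d, 2 * d)) * 0.1
W_dec = rng.normal(size=(d, 3 * d)) * 0.1
W_out = rng.normal(size=(len(tgt_vocab), d)) * 0.1

# Encoder: fold the source sentence into a single context vector C.
h = np.zeros(d)
for w in "das haus ist klein".split():
    h = np.tanh(W_enc @ np.concatenate([E_src[src_vocab.index(w)], h]))
C = h

# Decoder: emit target words one at a time, conditioned on C.
s, word = np.zeros(d), "<s>"
for _ in range(6):
    s = np.tanh(W_dec @ np.concatenate([E_tgt[tgt_vocab.index(word)], s, C]))
    word = tgt_vocab[int(np.argmax(W_out @ s))]   # greedy choice of the next word
    print(word, end=" ")
```

With trained weights, this same loop is what produces the translation; here the output is noise because the weights are random.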

Encoder-Decoders with Attention

Although effective, the Encoder-Decoder architecture has problems with long sequences of text to be translated.

The problem stems from the fixed-length internal representation that must be used to decode each word in the output sequence.

The solution is the use of an attention mechanism that allows the model to learn where to place attention on the input sequence as each word of the output sequence is decoded.

Using a fixed-sized representation to capture all the semantic details of a very long sentence […] is very difficult. […] A more efficient approach, however, is to read the whole sentence or paragraph […], then to produce the translated words one at a time, each time focusing on a different part of the input sentence to gather the semantic details required to produce the next output word.

— Page 462, Deep Learning, 2016.
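
A minimal sketch of a single attention step (illustrative NumPy with random values): score every encoder state against the current decoder state, normalize the scores with a softmax, and build a fresh context vector as the weighted sum.

```python
# Minimal attention step (illustrative only): instead of one fixed context
# vector, the decoder builds a new context at every output step by weighting
# all encoder hidden states by their relevance to its current state.
import numpy as np

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(5, 16))   # one hidden state per source word
dec_state  = rng.normal(size=16)        # decoder state at the current output step

scores  = enc_states @ dec_state                   # dot-product relevance scores
weights = np.exp(scores) / np.exp(scores).sum()    # softmax over source positions
context = weights @ enc_states                     # weighted sum, shape (16,)

print(np.round(weights, 3))   # how much attention each source word receives
print(context.shape)
```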

The encoder-decoder recurrent neural network architecture with attention is currently the state-of-the-art on some benchmark problems for machine translation. And this architecture is used at the heart of the Google Neural Machine Translation system, or GNMT, used in their Google Translate service.

… current state-of-the-art machine translation systems are powered by models that employ attention.

— Page 209, Neural Network Methods in Natural Language Processing, 2017.

Although effective, neural machine translation systems still suffer from some issues, such as scaling to larger vocabularies of words and the slow speed of training the models. These are the current areas of focus for large production neural translation systems, such as the Google system.

Three inherent weaknesses of Neural Machine Translation […]: its slower training and inference speed, ineffectiveness in dealing with rare words, and sometimes failure to translate all words in the source sentence.

— Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016.

Reference: https://bit.ly/2Cx8zxI

How to become a localization project manager

Excerpts from an article with the same title, written by Olga Melnikova in Multilingual Magazine.  Olga Melnikova is a project manager at Moravia and an adjunct professor at the Middlebury Institute of International Studies. She has ten years of experience in the language industry. She holds an MA in translation and localization management and two degrees in language studies.

I decided to talk to people who have been in the industry for a while, who have seen it evolve and know where it’s going. My main question was: what should a person do to start a localization project manager career? I interviewed several experts who shared their vision and perspectives — academics, industry professionals and recruiters. I spoke with Mimi Moore, account manager at Anzu Global, a recruiting company for the localization industry; Tucker Johnson, managing director of Nimdzi Insights; Max Troyer, translation and localization management program coordinator at MIIS, and Jon Ritzdorf, senior solution architect at Moravia and an adjunct professor at the University of Maryland and at MIIS. All of them are industry veterans and have extensive knowledge and understanding of its processes.

Why localization project management?

The first question is: Why localization project management? Why is this considered a move upwards compared to the work of linguists who are the industry lifeblood? According to Renato Beninatto and Tucker Johnson’s The General Theory of the Translation Company, “project management is the most crucial function of the LSP. Project management has the potential to most powerfully impact an LSP’s ability to add value to the language services value chain.” “Project managers are absolutely the core function in a localization company,” said Johnson. “It is important to keep in mind that language services providers do not sell translation, they sell services. Project managers are responsible for coordinating and managing all of the resources that need to be coordinated in order to deliver to the client: they are managing time, money, people and technology.”


Nine times out of ten, Johnson added, the project manager is the face of the company to the client. “Face-to-face contact and building the relationship are extremely important.” This is why The General Theory of the Translation Company regards project management as one of the core functions of any language service provider (LSP). This in no way undermines the value of all the other industry players, especially linguists who do the actual translation work. However, the industry cannot do without PMs because “total value is much higher than the original translations. This added value is at the heart of the language services industry.” This is why clients are happy to pay higher prices to work with massive multiple services providers instead of working directly with translators.

Who are they?

The next question is, how have current project managers become project managers? “From the beginning, when the industry started 20 years ago, there were no specialized training programs for project managers,” Troyer recounted. “So there were two ways. One is you were a translator, but wanted to do something else — become an editor, for example, or start to manage translators. The other route was people working in a business that goes global. So there were two types of people who would become project managers — former translators or people who were assigned localization as a job task.”

According to Ritzdorf, this is still the case in many companies. “I am working with project managers from three prospective clients right now, all of whom do not have a localization degree and are all in localization positions. Did they end up there because they wanted to? Maybe not. They did not end up there because they said ‘Wow, I really want to become a head of localization.’ They just ended up there by accident, like a lot of people do.”

“There are a lot of people who work in a company and who have never heard of localization, but guess what? It is their job now to do localization, and they have to figure it out all by themselves,” Moore confirmed. “When the company decides to go international, they have to find somebody to manage that,” said Ritzdorf.

Regionalization


The first to mention regionalization was Ritzdorf, and other interviewees then confirmed it exists. Ritzdorf lives on the East Coast of the United States, but comes to the West Coast to teach at MIIS, so he sees the differences. “There are areas where localization is a thing, which means when you walk into a company, they actually know about localization. Since there are enough people who understand what localization is, they want someone with a background in it.” Silicon Valley is a great example, said Ritzdorf. MIIS is close; there is a localization community that includes organizations like Women in Localization; and there are networking events like IMUG. “People live and breathe localization. However, there is a totally different culture in other regions, which is very fragmented. There are tons of little companies in other parts of the US, and the situation there is different. If I am a small LSP owner in Wisconsin or Ohio, what are my chances of finding someone with a degree or experience to fill a localization position for a project manager? Extremely low. This is why I may hire a candidate who has an undergraduate degree in French literature, for example. Or in linguistics, languages — at least something.”

The recruiters’ perspective


Nimdzi Insights conducted an interesting study about hiring criteria for localization project manager positions (Figure 1). Some 75 respondents (both LSPs and clients) were asked how important, on a scale of 1 to 5, a variety of qualifications are for project management positions. The responses show a few trends. Top priorities for clients are previous localization experience and a college degree, followed by years of experience and proficiency in more than one language. Top criteria for LSPs are reputation and a college degree, also followed by experience and proficiency in more than one language.

Moore said that when clients want to hire a localization project manager, the skills they are looking for are familiarity with computer assisted translation (CAT) tools “and an understanding of issues that can arise during localization — like quality issues, for example. Compared to previous years, more technical skills are required by both clients and vendors: CAT tools, WorldServer, machine translation knowledge, sometimes WordPress or basic engineering. When I started, they were nice-to-haves, but certainly not mandatory.”

Technical skill is not enough, however. “Both hard and soft skills are important. You need hard skills because the industry has become a lot more technical as far as software, tools and automation are concerned. You need soft skills to deal with external and internal stakeholders, and one of the main things is working under pressure because you are juggling so many things.”

Moore also mentioned some red flags that would cause Anzu not to hire a candidate. “Sometimes an applicant does not demonstrate good English skills in phone interviews. Having good communication skills is important for a client-facing position. Also, people sometimes exaggerate their skills or experience. Another red flag is if the person has a bad track record (if they change jobs every nine months, for example).”

Anzu often hires for project management contract positions in large companies. “Clients usually come to us when they need a steady stream of contractors (three or six months), then in three or six months there will be other contractors. The positions are usually project managers or testers. If you already work full-time, a contract position may not be that attractive. However, if you are a newcomer or have just graduated, and you want to get some experience, then it is a great opportunity. You would spend three, six or 12 months at a company, and it is a very good line on the résumé.”

Do you need a localization degree? 

There is no firm answer to the question of whether or not you need a degree. If you don’t know what you should do, it can certainly help. Troyer discussed how the localization program at MIIS has evolved to fit current real-world pressures. “The program was first started in 2004, and it started small. We were first giving CAT tools, localization project management and software localization courses. This is the core you need to become a project manager. Then the program evolved and we introduced introductory and then advanced levels for many courses. There are currently four or five courses focusing on translation technology.” Recent additions to the curriculum include advanced JavaScript classes, advanced project management and program management. Natural language processing and computational linguistics will be added down the road. “The industry is driving this move because students will need skills to go in and localize Siri into many languages,” said Troyer.

The program at MIIS is a two-year master’s. It can be reduced to one year for those who already have experience. There are other degrees available, as well as certification programs offered by institutions such as the University of Washington and The Localization Institute.

Moore said that though a localization degree is not a must, it has a distinct advantage. A lot of students have internships that give them experience. They also know tools, which makes their résumés better fit clients’ job descriptions.

However, both Troyer and Ritzdorf said you don’t necessarily need a degree. “If you have passion for languages and technology, you can get the training on your own,” said Troyer. “Just teach yourself these skills, network on your own and try to break into the industry.”

The future of localization project management

Automation, artificial intelligence and machine learning are affecting all industries, and localization is not an exception. However, all the interviewees forecast that there will be more localization jobs in the future.

According to Johnson, there is high project management turnover on the vendor side because if a person is a good manager, they never stay in this position for more than five years. “After that, they either get a job on the client’s side to make twice as much money and have a much easier job, or their LSP has to promote them to senior positions such as group manager or program director.”

“There is a huge opportunity to stop doing things that are annoying,” said Troyer. “Automation will let professionals work on the human side of things and let the machines run day-to-day tasks. Letting the machine send files back and forth will allow humans to spend more time looking at texts and thinking about what questions a translator can ask. This will give them more time for building a personal relationship with the client. We are taking these innovations into consideration for the curriculum, and I often spend time during classes asking, ‘How can you automate this?’”

Moore stated that “we have seen automation change workflows over the last ten years and reduce the project manager’s workload, with files being automatically moved through each step in the localization process. Also, automation and machine translation go hand-in-hand to make the process faster, more efficient and cost-effective.”

NEURAL MACHINE TRANSLATION: THE RISING STAR

These days, language industry professionals simply can’t escape hearing about neural machine translation (NMT). However, there still isn’t enough information about the practical facts of NMT for translation buyers, language service providers, and translators. People often ask: is NMT intended for me? How will it change my life?

A Short History and Comparison

At the beginning of time – around the 1970s – the story began with rule-based machine translation (RBMT) solutions. The idea was to create grammatical rule sets for source and target languages, where machine translation is a kind of conversion process between the languages based on these rule sets. This concept works well with generic content, but adding new content, new language pairs, and maintaining the rule set is very time-consuming and expensive.

This problem was solved with statistical machine translation (SMT) around the late ‘80s and early ‘90s. SMT systems create statistical models by analyzing aligned source-target language data (training set) and use them to generate the translation. The advantage of SMT is the automatic learning process and the relatively easy adaptation by simply changing or extending the training set. The limitation of SMT is the training set itself: to create a usable engine, a large database of source-target segments is required. Additionally, SMT is not language independent in the sense that it is highly sensitive to the language combination and has a very hard time dealing with grammatically rich languages.

This is where neural machine translation (NMT) begins to shine: it can look at the sentence as a whole and can create associations between the phrases over an even longer distance within the sentence. The result is a convincing fluency and an improved grammatical correctness compared to SMT.

Statistical MT vs Neural MT

Both SMT and NMT work on a statistical basis and use source-target language segment pairs as a foundation. What’s the difference? What we typically call SMT is actually Phrase-Based Statistical Machine Translation (PBSMT), meaning SMT splits the source segments into phrases. During the training process, SMT creates a translation model and a language model. The translation model stores the different translations of the phrases and the language model stores the probability of the sequence of phrases on the target side. During the translation phase, the decoder chooses the translation that gives the best result based on these two models. On a phrase or expression level, SMT (or PBSMT) performs well, but language fluency and grammar are not good.

‘Buch’ is aligned with ‘book’ twice and only once with ‘the’ and ‘a’ – the winner is the ‘Buch’-’book’ combination

Neural Machine Translation, on the other hand, uses neural network-based deep machine learning technology. Words or even word chunks are transformed into “word vectors”. This means that ‘dog’ does not only represent the characters d, o and g; it can also contain contextual information from the training data. During the training phase, the NMT system tries to set the parameter weights of the neural network based on the reference values (source-target translations). Words appearing in similar contexts get similar word vectors. The result is a neural network which can process source segments and transfer them into target segments. During translation, NMT looks at the complete sentence, not just chunks (phrases). Thanks to the neural approach, it is not translating words, it’s transferring information and context. This is why fluency is much better than in SMT, but terminology accuracy is sometimes not perfect.

Similar words are closer to each other in a vector space
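Here is a toy sketch of what “similar words get similar vectors” means in practice. The three-dimensional vectors below are invented for illustration only (real systems learn vectors with hundreds of dimensions during training), and cosine similarity is used as the closeness measure:

```python
# Toy word vectors: the numbers are invented for demonstration purposes.
import numpy as np

vectors = {
    "dog":     np.array([0.80, 0.10, 0.60]),
    "puppy":   np.array([0.75, 0.15, 0.65]),
    "invoice": np.array([0.05, 0.90, 0.10]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["dog"], vectors["puppy"]))    # high: similar context
print(cosine_similarity(vectors["dog"], vectors["invoice"]))  # low: unrelated context
```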

The Hardware

A popular GPU: NVIDIA Tesla

One big difference between SMT and NMT systems is that NMT requires Graphics Processing Units (GPUs), which were originally designed to help computers process graphics. These GPUs can calculate astonishingly fast – the latest cards have about 3,500 cores which can process data simultaneously. In fact, there is a small ongoing hardware revolution, and GPU-based computers are the foundation for almost all deep learning and machine learning solutions. One of the great perks of this revolution is that nowadays NMT is available not only to large enterprises, but to small and medium-sized companies as well.

The Software

The main element, or ‘kernel’, of any NMT solution is the so-called NMT toolkit. There are a couple of NMT toolkits available, such as Nematus or OpenNMT, but the landscape is changing fast and more companies and universities are now developing their own toolkits. Since many of these toolkits are open-source solutions and hardware resources have become more affordable, the industry is experiencing an accelerating pace of toolkit R&D and NMT-related solutions.

On the other hand, as important as toolkits are, they are only one small part of a complex system, which contains frontend, backend, pre-processing and post-processing elements, parsers, filters, converters, and so on. These are all factors to consider before jumping into the development of one’s own system. However, it is worth noting that the success of MT is highly community-driven and would not be where it is today without the open-source community.

Corpora

A famous bilingual corpus: the Rosetta Stone

And here comes one of the most curious questions: what are the requirements of creating a well-performing NMT engine? Are there different rules compared to SMT systems? There are so many misunderstandings floating around on this topic that I think it’s a perfect opportunity to go into the details a little bit.

The main rules are nearly the same for both SMT and NMT systems. The difference is mainly that an NMT system is less sensitive to these factors and performs better under the same circumstances. As I have explained in an earlier blog post about SMT engine quality, the quality of an engine should always be measured in relation to the particular translation project for which you would like to use it.

These are the factors which will eventually influence the performance of an NMT engine:

Volume

Regardless of what you may have heard, volume is still very important for NMT engines, just as in the SMT world. There is no explicit rule on entry volumes, but what we can safely say is that the bare minimum is about 100,000 segment pairs. There are Globalese users who are successfully using engines trained on 150,000 segments, but to be honest, this is more of an exception and requires special circumstances (like the right language combination, see below). The optimum volume starts around 500,000 segment pairs (2 million words).

Quality

The quality of the training set plays an important role (garbage in, garbage out). Don’t add unqualified content to your engine just to increase the overall size of the training set.

Relevance

Applying the right engine to the right project is the first key to success. An engine trained on automotive content will perform well on car manual translation but will give back disappointing results when you try to use it for web content for the food industry.

This raises the question of whether content (TMs) should be mixed. If you have enough domain-specific content, you don’t necessarily need to add out-of-domain data to your engine, but if you have an insufficient volume of domain-specific data, then adding generic content (e.g. from public sources) may help improve the quality. We always encourage our Globalese users to try different engine combinations with different training sets.

Content type

Content generated by possibly non-native speakers on a chat forum, or marketing material requiring transcreation, is always a challenge for any MT system. On the other hand, technical documentation with controlled language is a very good candidate for NMT.

Language combination

Unfortunately, the language combination still has an impact on quality. The good news is that NMT has now opened up the option of using machine translation for languages like Japanese, Turkish, or Hungarian – languages which had nearly been excluded from the machine translation club because of the poor results provided by SMT. NMT has also helped solve the problem of long-distance dependencies for German, and the translation output is much smoother for almost all languages. But English combined with Romance languages still provides better results than, for example, English combined with Russian when using similar volumes and a similar training set quality.

Expectations for the future

Neural Machine Translation is a big step ahead in quality, but it still isn’t magic. Nobody should expect NMT to replace human translators anytime soon. What you CAN expect is that NMT can be a powerful productivity tool in the translation process and open new service options for both translation buyers and language service providers (see the post-editing experience below).

Training and Translation Time

When we started developing Globalese NMT, one of the most surprising experiences for us was that the training time was far shorter than we had previously anticipated. This is due to the amazingly fast evolution of hardware and software. With Globalese, we currently see an average training throughput of 50,000 segments per hour – this means that an average engine with 1 million segments can be trained within one day. The situation is even better when looking at translation times: with Globalese, we currently see an average translation speed of between 100 and 400 segments per minute, depending on the corpus size, the segment length in the translation, and the training content.
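As a quick sanity check, the arithmetic behind the “within one day” claim looks roughly like this (a back-of-the-envelope sketch using only the figures quoted above):

```python
# Back-of-the-envelope check of the throughput figures quoted above.
segments = 1_000_000
training_throughput = 50_000                     # segments per hour (average quoted above)
print(segments / training_throughput, "hours")   # 20.0 hours, i.e. within one day

translation_low, translation_high = 100, 400     # segments per minute (range quoted above)
print(segments / (translation_high * 60), "to",
      segments / (translation_low * 60), "hours to translate the same volume")
```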

Neural MT Post-editing Experience

One of the great changes neural machine translation brings along is that the overall language quality is much better compared to the SMT world. This does not mean that the translation is always perfect. As one of our testers put it: when it is right, it is astonishingly good quality. The ratio of good to poor translations naturally varies depending on the engine, but good engines can deliver really good target text for about 50% of segments, or even more.

Here are some examples showcasing what NMT post-editors can expect:

DE original:

Der Rechnungsführer sorgt für die gebotenen technischen Vorkehrungen zur wirksamen Anwendung des FWS und für dessen Überwachung.

Reference human translation:

The accounting officer shall ensure appropriate technical arrangements for an effective functioning of the EWS and its monitoring.

Globalese NMT:

The accounting officer shall ensure the necessary technical arrangements for the effective use of the EWS and for its monitoring.

As you can see, the output is fluent, and the differences are more or less just preferential. This highlights another issue: automated quality metrics like the BLEU score are not really sufficient to measure quality. The example above scores only about 50% in BLEU, but if we look at the quality, the rating should be much higher.
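To see how such a metric behaves, here is a minimal sketch using NLTK’s sentence-level BLEU on the example above. The exact number depends on tokenization and smoothing, so treat the output as illustrative rather than an official score:

```python
# Sentence-level BLEU for the EWS example above, using NLTK.
# Scores vary with tokenization and smoothing; this only illustrates how the
# metric penalizes wording differences that a human reviewer would call preferential.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ("The accounting officer shall ensure appropriate technical arrangements "
             "for an effective functioning of the EWS and its monitoring.").split()
hypothesis = ("The accounting officer shall ensure the necessary technical arrangements "
              "for the effective use of the EWS and for its monitoring.").split()

score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 2))
```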

Let’s look at another example:

EN original:

The concept of production costs must be understood as being net of any aid but inclusive of a normal level of profit.

Reference human translation:

Die Produktionskosten verstehen sich ohne Beihilfe, aber einschließlich eines normalen Gewinns.

Globalese NMT:

Der Begriff der Produktionskosten bezieht sich auf die Höhe der Beihilfe, aber einschließlich eines normalen Gewinns.

What is interesting here is that the first part of the sentence sounds good, but if you look at the content, the translation is not good. This is an example of fluent output with a bad translation. It is a typical case in the NMT world, and it emphasizes the point that post-editors must examine NMT output differently than they did for SMT – in SMT, bad grammar was a clear indicator that the translation had to be post-edited.

Post-editors who used to proof and correct SMT output have to change the way they work and be more careful with proofreading, even if the NMT output looks alright at first glance. Also, services related to light post-editing will change – instead of correcting serious grammatical errors without checking the correctness of the translation in order to create readable content, the focus will shift to sorting out serious mistranslations. The funny thing is that one of the main problems in the SMT world was weak fluency and grammar, and now, in the NMT world, good fluency and grammar have become the issue…

And finally:

DE original:

Aufgrund des rechtlichen Status der Beteiligten ist ein solcher Vorgang mit einer Beauftragung des liefernden Standorts und einer Berechnung der erbrachten Leistung verbunden.

Reference human translation:

The legal status of the companies involved in these activities means that this process is closely connected with placing orders at the location that is to supply the goods/services and calculating which goods/services they supply.

Globalese NMT:

Due to the legal status of the person, it may lead to this process at the site of the plant, and also a calculation of the completed technician.

This example shows that, unfortunately, NMT can produce bad translations too. As I mentioned before, the ratio of good to bad NMT output you will face in a project always depends on the circumstances. Another weak point of NMT is that it currently cannot handle terminology directly and acts as a kind of “black box”, with no option to directly influence the results.

Reference: https://bit.ly/2hBGsVh


DQF: What Is It and How Does It Work?

What does DQF stand for?

DQF stands for the Dynamic Quality Framework. Quality is considered Dynamic as translation quality requirements change depending on the content type, the purpose of the content and its audience.

Why is DQF the industry benchmark?

DQF has been co-created since January 2011 by over fifty companies and organizations. Contributors include translation buyers, translation service providers, and translation technology suppliers. Practitioners continue to define requirements and best practices as they evolve through regular meetings and events.

How does DQF work?

DQF provides a commonly agreed approach for selecting the most appropriate translation quality evaluation model(s) and metrics depending on specific quality requirements. The underlying process, technology, and resources affect the choice of quality evaluation model. DQF Content Profiling, Guidelines, and the Knowledge base are used when creating or refining a quality assurance program. DQF provides a shared language, guidance on process, and standardized metrics to help users execute quality programs more consistently and effectively, improving efficiency within organizations and across supply chains. The result is increased customer satisfaction and a more credible quality assurance function in the translation industry.

The Content Profiling feature is used to help select the most appropriate quality evaluation model for specific requirements. This leads to the Knowledge base where you find best practices, metrics, step-by-step guides, reference templates, and use cases. The Guidelines are publicly available summaries for parts of the Knowledge base as well as related topics.

What is included in DQF?

1. Content Profiling and Knowledge base

The DQF Content Profiling Wizard is used to help select the most appropriate quality evaluation model for specific requirements. In the Knowledge Base you find supporting best practices, metrics, step-by-step guides, reference templates, use cases and more.

2. Tools

A set of tools that allows users to do different types of evaluations: adequacy, fluency, error review, productivity measurement, MT ranking and comparison. The DQF tools can be used in the cloud, offline or indirectly through the DQF API.

3. Quality Dashboard

The Quality Dashboard is available as an industry-shared platform. In the dashboard, evaluation and productivity data is visualized in a flexible reporting environment. Users can create customized reports or filter the data reflected in the charts. Both internal and external benchmarking are supported, offering the possibility to monitor one’s own development and to compare results against industry highs, lows, and averages.

4. API

The DQF API allows users to assess productivity, efficiency, and quality on the fly during translation production. Developers and integrators are invited to use the API and connect with DQF from within their TMS or CAT tool environments.

Reference: TAUS


Localizing Slogans: When Language Translation Gets Tricky

A slogan. It seems pretty straightforward. Translating a few words, or even a sentence, shouldn’t be all that complicated, right?
And yet we’ve seen countless examples of when localizing slogans has gone awry—from big global brands—illustrating just how tricky translating slogans can be.
Anybody recall Pepsi’s “Come alive with the Pepsi generation” tagline being translated into “Pepsi brings your ancestors back from the grave” in Chinese?
While humorous, this language translation misfortune can be costly—and not just in a monetary sense. We’re talking time-to-market and brand reputation costs, too.

Why slogans pose language translation difficulties

The very nature of slogans makes them challenging to translate. Many times slogans are very creative, playing on cultural idioms and puns.
There often isn’t a direct translation that carries the exact meaning of your slogan, and linguists may well struggle if they attempt to translate it word for word.
Local nuances come into play as well. Some words may have entirely different meanings than in your source language and can be misinterpreted. Just think of product names that are often used in slogans. The Chevy Nova name was criticized in Latin America because “Nova” reads like “no va,” which means “doesn’t go.”
Also, different cultures have unique emotional reactions to given words. Take McDonald’s and its famous slogan “I’m lovin’ it.” The fast-food giant localized this slogan to “Me encanta,” or “I really like it,” so the mantra was more culturally appropriate for Spanish-speaking countries, where love is a strong word and only used in certain situations.
Because of the language translation difficulties involved, you may need a more specialized form of translation to ensure that your slogan makes a positive impact in your international markets.

How to approach localizing slogans

First and foremost, communication is vital throughout the entire localization process. When approaching slogans, we’ll collaborate with your marketing experts—whether internal or outside creative agencies—as well as your in-country linguists with marketing expertise.

Having in-country linguists work on your slogan is absolutely critical. These language translation experts are fully immersed in the target culture. They are cognizant of cultural nuances, slang and idioms, which ensures that your slogan will make sense—and go over well—in your target locales.

We’ll review the concepts in the tagline or slogan as a team, identify any challenging words or phrases, and assess how to approach them. Oftentimes, a direct translation won’t work. We may need to localize it in a way that’s more appropriate, such as in the McDonald’s “Me encanta” example above.

If the slogan poses too much difficulty, then we may need to turn to transcreation services.

Transcreation process and your slogan

Transcreation is a specialized, highly involved, and creative form of language translation.

Copywriter linguists will identify your brand qualities and portray those in a way that perfectly resonates with your target audience. Think of it as a mix of “translation” and “creation.” It’s not a word-for-word translation, but rather re-creating an idea or message so it fosters an emotional connection in a different culture.

Looking at a quick example, Nike’s celebrated slogan “Just do it” had no meaningful translation in Chinese. So instead, the message was transcreated to mean “Use sports” or “Have sport,” which had a more prominent impact in that culture.

Localizing slogans, or more specifically your slogan, correctly can mean a stronger global brand reputation—driving revenue and increased market share worldwide. Taking a hasty, nonchalant approach can mean just the opposite. And you may find yourself having to spend time and resources rectifying the fallout of a language translation error.

Reference: https://bit.ly/2GSx36x

Edit Distance in the Translation Industry

In computational linguistics, edit distance, or Levenshtein distance, is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other. The edit distance between two strings a and b is the minimum-weight series of edit operations that transforms a into b. One of the simplest sets of edit operations is the one defined by Levenshtein in 1966:

1- Insertion.

2- Deletion.

3- Substitution.

In Levenshtein’s original definition, each of these operations has unit cost (except that substituting a character for itself has zero cost), so the Levenshtein distance is equal to the minimum number of operations required to transform a into b.

For example, the Levenshtein distance between “kitten” and “sitting” is 3. A minimal edit script that transforms the former into the latter is:

  • kitten – sitten (substitution of “s” for “k”).
  • sitten – sittin (substitution of “i” for “e”).
  • sittin – sitting (insertion of “g” at the end).
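For readers who want to experiment, here is a short, self-contained Python sketch of the classic dynamic-programming algorithm for Levenshtein distance; it reproduces the distance of 3 for the example above:

```python
# Straightforward dynamic-programming implementation of Levenshtein distance.
def levenshtein(a: str, b: str) -> int:
    # previous[j] holds the distance between a[:i-1] and b[:j]
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,              # deletion
                current[j - 1] + 1,           # insertion
                previous[j - 1] + (ca != cb)  # substitution (free if characters match)
            ))
        previous = current
    return previous[-1]

print(levenshtein("kitten", "sitting"))  # 3
```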

What are the applications of edit distance in the translation industry?

1- Spell Checkers

In automatic spelling correction, edit distance is used to determine candidate corrections for a misspelled word by selecting words from a dictionary that have a low distance to the word in question.
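A minimal sketch of this idea, assuming the third-party Levenshtein package is installed (the levenshtein() function from the previous sketch would work just as well); the mini-dictionary and the misspelling are invented examples:

```python
# Candidate selection for a misspelled word, using edit distance.
# Assumes the third-party "Levenshtein" package (pip install Levenshtein).
import Levenshtein

dictionary = ["translation", "transcreation", "transliteration", "transaction"]
misspelled = "translaton"

# Rank dictionary words by their distance to the misspelled word.
candidates = sorted(dictionary, key=lambda w: Levenshtein.distance(misspelled, w))
print(candidates[0])  # "translation" - the word with the lowest edit distance
```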

2- Machine Translation Evaluation and Post Editing

Edit distance can be used to compare a post-edited file to the machine-translated output that was the starting point for the post-editing. When you calculate the edit distance, you are calculating the “effort” that the post-editor made to improve the machine translation to a certain quality level. Starting from the same source content and the same MT output, a light post-editing and a full post-editing will produce different edit distances: the full post-editing, which aims at human quality, is expected to show a higher edit distance, because more changes are needed. In this way, edit distance can be used to measure and distinguish light and full post-editing.

Therefore, the edit distance is a kind of “word count” measure of the effort, similar in a way to the word count used to quantify the work of translators throughout the localization industry. It also helps in evaluating the quality of an MT engine by comparing the raw MT output to the version post-edited by a human translator.
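One possible way to turn this into a number is to normalize the edit distance by the length of the raw MT output, as in the rough sketch below. The sentences are invented examples, and the normalization choice is just one option, not an industry standard; the same third-party Levenshtein package as in the spell-checker sketch is assumed:

```python
# Rough "effort" measure for post-editing: edit distance between the raw MT
# output and the post-edited segment, normalized by the MT output length.
import Levenshtein

mt_output = "The machine translate output contain some error."
light_pe  = "The machine translation output contains some errors."
full_pe   = "The output of the machine translation contains several errors."

def pe_effort(mt: str, postedited: str) -> float:
    """Character-level edit distance per character of raw MT output."""
    return Levenshtein.distance(mt, postedited) / len(mt)

print(round(pe_effort(mt_output, light_pe), 2))  # lower: light post-editing
print(round(pe_effort(mt_output, full_pe), 2))   # higher: full post-editing changes more
```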

3- Fuzzy Match

In translation memories, edit distance underlies fuzzy matching: the technique of finding strings that match a pattern approximately rather than exactly. Translation memories provide suggestions to translators, and fuzzy match scores are used to estimate the effort needed to improve those suggestions.
