Tag: Quality Assurance

A New Way to Measure NMT Quality

Neural Machine Translation (NMT) systems produce very high-quality translations and are poised to radically change the professional translation industry. These systems require quality feedback and scores on an ongoing basis. Today, the prevalent method is the Bilingual Evaluation Understudy (BLEU) metric, but methods like this are no longer fit for purpose.

A better approach is to have a number of native speakers assess NMT output and rate the quality of each translation. One Hour Translation (OHT) is doing just that: our new NMT index was released in late April 2018 and is fully available for the translation community to use.

A new age of MT

NMT marks a new age in automatic machine translation. Unlike the technologies developed over the past 60 years, the well-trained and tested NMT systems available today have the potential to replace human translators.

Aside from processing power, the main factors that impact NMT performance are:

  •      the amount and quality of initial training materials, and
  •      an ongoing quality-feedback process

For an NMT system to work well, it needs to be properly trained, i.e. “fed” with hundreds of thousands (and in some cases millions) of correct translations. It also requires feedback on the quality of the translations it produces.

NMT is the future of translation. It is already much better than previous MT technologies, but issues with training and quality assurance are impeding progress.

NMT is a “disruptive technology” that will change the way most translations are performed. It has taken over 50 years, but machine translation can now be used to replace human translators in many cases.

So what is the problem?

While NMT systems could potentially revolutionize the translation market, their development and adoption are hampered by the lack of quality input, insufficient means of testing the quality of the translations and the challenge of providing translation feedback.

These systems also require a lot of processing power, an issue which should be solved in the next few years, thanks to two main factors. Firstly, Moore’s law, which predicts that processing power doubles every 18 months, also applies to NMT, meaning that processing power will continue to increase exponentially. Secondly, as more companies become aware of the cost benefit of using NMT, more and more resources will be allocated for NMT systems.

Measuring quality is a different and more problematic challenge. Today, algorithms such as BLEU, METEOR, and TER try to predict automatically what a human being would say about the quality of a given machine translation. While these tests are fast, easy, and inexpensive to run (because they are simply software applications), their value is very limited. They do not provide an accurate quality score for the translation, and they fail to estimate what a human reviewer would say about the translation quality (a quick scan of the text in question by a human would reveal the issues with the existing quality tests).
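To make the limitation concrete, a metric like BLEU simply measures n-gram overlap between the machine output and one or more reference translations. Below is a minimal sketch using the open-source sacrebleu library; the sentences are invented for illustration.

    # Minimal BLEU sketch with sacrebleu; the sentences are invented examples.
    import sacrebleu

    hypotheses = ["The cat sat down on the mat."]        # NMT output
    references = [["The cat is sitting on the mat."]]    # one human reference stream

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"BLEU: {bleu.score:.1f}")  # a single overlap-based number, not a human judgement

A translation can score poorly here simply because it uses different but perfectly valid wording, which is exactly the gap a human reviewer would catch.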

Simply put, translation quality scores generated by computer programs that predict what a human would say about the translation are just not good enough.

With more major players, including Google, Amazon, Facebook, Microsoft (Bing), Systran, Baidu, and Yandex, joining the game, producing an accurate quality score for NMT translations becomes a major problem that has a direct negative impact on the adoption of NMT systems.

There must be a better way!

We need a better way to evaluate NMT systems, i.e. something that replicates the original intention more closely and can mirror what a human would say about the translation.

The solution seems simple: instead of having some software try to predict what a human would say about the translation, why not just ask enough people to rate the quality of each translation? While this solution is simple, direct, and intuitive, doing it right and in a way that is statistically significant means running numerous evaluation projects at one time.

NMT systems are highly specialized, meaning that if a system has been trained using travel and tourism content, testing it with technical material will not produce the best results. Thus, each type of material has to be tested and scored separately. In addition, the rating must be done for every major language pair, since some NMT engines perform better in particular languages. Furthermore, to be statistically significant, at least 40 people need to rate each project per language, per type of material, per engine. Besides that, each project should have at least 30 strings.

Checking one language pair with one type of material translated with one engine is relatively straightforward: 40 reviewers each check and rate the same neural machine translation consisting of about 30 strings. This approach produces relatively solid (statistically significant) results, and repeating it over time also produces a trend, i.e. making it possible to find out whether or not the NMT system is getting better.
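As an illustration of how those 40 ratings might be aggregated (the scores below are invented), the mean and a simple 95% confidence interval give the kind of statistically grounded result described here:

    # Illustrative aggregation of 40 hypothetical reviewer ratings (1-5 scale).
    import math
    import random
    import statistics

    random.seed(0)
    ratings = [random.randint(3, 5) for _ in range(40)]  # invented ratings

    mean = statistics.mean(ratings)
    margin = 1.96 * statistics.stdev(ratings) / math.sqrt(len(ratings))  # normal approximation
    print(f"mean rating: {mean:.2f} +/- {margin:.2f} (n={len(ratings)})")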

The key to doing this one isolated evaluation is selecting the right reviewers and making sure they do their job correctly. As one might expect, using freelancers for the task requires some solid quality control procedures to make sure the answers are not “fake” or “random.”

At that magnitude (one language pair, one type of material, one NMT engine, etc.), the task is manageable, even when run manually. It becomes more difficult when an NMT vendor, user, or LSP wants to test 10 languages and 10 different types of material with 40 reviewers each. In this case, the evaluation requires between 400 reviewers (1 NMT engine x 1 type of material x 10 language pairs x 40 reviewers) and 4,000 reviewers (1 NMT engine x 10 types of material x 10 language pairs x 40 reviewers).

Running a human-based quality evaluation is a major task, even for just one NMT vendor. It requires up to 4,000 reviewers working on thousands of projects.

This procedure is relevant for every NMT vendor who wants to know the real value of their system and obtain real human feedback for the translations it produces.

The main challenge is, of course, finding, testing, screening, training, and monitoring thousands of reviewers in various countries and languages, and overseeing their work while they handle tens of thousands of projects in parallel.

The greater good: an industry-level quality score

Looking at the greater good, what is really needed is a standardised NMT quality score for the industry to employ, measuring all of the various systems using the same benchmark, strings, and reviewers, in order to compare like-for-like performance. Since the performance of NMT systems can vary dramatically between different types of materials and languages, a real human-based comparison using the same group of linguists and the same source material is the only way to produce real comparative results. Such scores will be useful both for the individual NMT vendor or user and for the end customer or LSP trying to decide which engine to use.

To produce the same tests on an industry-relevant level is a larger undertaking. Using 10 NMT engines, 10 types of material, 10 language pairs and 40 reviewers, the parameters of the project can be outlined as follows:

  •      Assuming the top 10 language pairs are evaluated, i.e. EN > ES, FR, DE, PT-BR, AR, RU, CN, JP, IT and KR;
  •      10 types of material – general, legal, marketing, finance, gaming, software, medical, technical, scientific, and tourism;
  •      10 leading (web-based) engines – Google, Microsoft (Bing), Amazon, DeepL, Systran, Baidu, Promt, IBM Watson, Globalese and Yandex;
  •      40 reviewers rating each project;
  •      30 strings per test; and
  •      12 words on average per string

This comes to a total of 40,000 separate tests (10 language pairs x 10 types of material x 10 NMT engines x 40 reviewers), each with at least 30 strings, i.e. 1,200,000 strings of 12 words each, resulting in an evaluation of approximately 14.4 million words. This evaluation is needed to create just one instance (!) of a real, comparative, human-based NMT quality index.
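The arithmetic behind those figures, spelled out as a quick sketch:

    # The evaluation volume implied by the parameters listed above.
    engines, domains, language_pairs, reviewers = 10, 10, 10, 40
    strings_per_test, words_per_string = 30, 12

    tests = engines * domains * language_pairs * reviewers  # 40,000 tests
    strings = tests * strings_per_test                       # 1,200,000 strings
    words = strings * words_per_string                       # 14,400,000 words
    print(tests, strings, words)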

The challenge is clear: to produce just one instance of a real viable and useful NMT score, 4,000 linguists need to evaluate 1,200,000 strings equating to well over 14 million words!

The magnitude of the project, the number of people involved and the requirement to recruit, train, and monitor all the reviewers, as well as making sure, in real time, that they are doing the job correctly, are obviously daunting tasks, even for large NMT players, and certainly for traditional translation agencies.

Completing the entire process within a reasonable time (e.g. less than one day), so that the results are “fresh” and relevant, makes it even harder.

There are not many translation agencies with the capacity, technology, and operational capability to run a project of that magnitude on a regular basis.

This is where One Hour Translation (OHT) excels. OHT has recruited, trained, and tested thousands of linguists in over 50 languages, and has already run well over 1,000,000 NMT rating and testing projects for its customers. By the end of April 2018, OHT published the first human-based NMT quality index (initially covering several engines and domains and later expanding), with the goal of promoting the use of NMT across the industry.

A word about the future

In the future, a better NMT quality index can be built using the same technology NMT itself is built on, i.e. deep-learning neural networks. Building a neural quality system is just like building an NMT system. The required ingredients are high-quality translations in high volume, plus quality ratings and feedback.

With these ingredients, it is possible to build a deep-learning, neural-network-based quality control system that will read a translation and score it the way a human does. Once the NMT systems are working smoothly and a reliable, human-based quality score and feedback process has been developed, the next step will be to create a neural quality score.
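As a purely illustrative sketch of that idea (not a description of any vendor's system; the model name and the toy data below are assumptions), a quality estimator can be trained to predict human ratings from source/translation pairs:

    # Illustrative sketch only: train a simple regressor to predict human quality
    # ratings from sentence embeddings of (source, translation) pairs.
    # The model name and the tiny toy dataset are assumptions for the example.
    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import Ridge

    encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    pairs = [
        ("The cat sat on the mat.", "Le chat était assis sur le tapis."),
        ("Press the red button.", "Appuyez sur le bouton rouge."),
    ]
    human_scores = [4.8, 4.5]  # hypothetical 1-5 reviewer ratings

    # Concatenate source and translation embeddings as features.
    feats = np.hstack([encoder.encode([s for s, _ in pairs]),
                       encoder.encode([t for _, t in pairs])])

    model = Ridge().fit(feats, human_scores)
    print(model.predict(feats))  # predicted quality scores for the same pairs

In practice such a model would need the large volume of rated translations described above before its predictions approached human judgement.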

Once a neural quality score is available, it will be further possible to have engines improve each other and to create a self-learning, self-improving translation system by linking the neural quality score to the NMT engine (obviously, a fully closed-loop system does not make sense, as it cannot improve without additional external data).

With additional external translation data, this system will “teach itself” and learn to improve without the need for human feedback.

Google has done it already. Its AI subsidiary, DeepMind, developed AlphaGo, a neural network computer program that beat the world’s (human) Go champion. AlphaGo is now improving, becoming better and better, by playing against itself again and again – no people involved.

Reference: https://bit.ly/2HDXbTf

DQF: What Is It and How Does It Work?

What does DQF stand for?

DQF stands for the Dynamic Quality Framework. Quality is considered dynamic because translation quality requirements change depending on the content type, the purpose of the content, and its audience.

Why is DQF the industry benchmark?

DQF has been co-created since January 2011 by over fifty companies and organizations. Contributors include translation buyers, translation service providers, and translation technology suppliers. Practitioners continue to define requirements and best practices as they evolve through regular meetings and events.

How does DQF work?

DQF provides a commonly agreed approach for selecting the most appropriate translation quality evaluation model(s) and metrics for specific quality requirements. The underlying process, technology, and resources affect the choice of quality evaluation model. The DQF Content Profiling feature, Guidelines, and Knowledge base are used when creating or refining a quality assurance program. DQF provides a shared language, guidance on process, and standardized metrics to help users execute quality programs more consistently and effectively, improving efficiency within organizations and throughout supply chains. The result is increased customer satisfaction and a more credible quality assurance function in the translation industry.

The Content Profiling feature is used to help select the most appropriate quality evaluation model for specific requirements. This leads to the Knowledge base where you find best practices, metrics, step-by-step guides, reference templates, and use cases. The Guidelines are publicly available summaries for parts of the Knowledge base as well as related topics.

What is included in DQF?

1. Content Profiling and Knowledge base

The DQF Content Profiling Wizard is used to help select the most appropriate quality evaluation model for specific requirements. In the Knowledge Base you find supporting best practices, metrics, step-by-step guides, reference templates, use cases and more.

2. Tools

A set of tools that allows users to do different types of evaluations: adequacy, fluency, error review, productivity measurement, MT ranking and comparison. The DQF tools can be used in the cloud, offline or indirectly through the DQF API.

3. Quality Dashboard

The Quality Dashboard is available as an industry-shared platform. In the dashboard, evaluation and productivity data is visualized in a flexible reporting environment. Users can create customized reports or filter data to be reflected in the charts. Both internal and external benchmarking is supported, offering the possibility to monitor one’s own development and to compare results to industry highs, lows and averages.

4. API

The DQF API allows users to assess productivity, efficiency, and quality on the fly while in translation production mode. Developers and integrators are invited to use the API and connect with DQF from within their TMS or CAT tool environments.
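As a rough illustration of the integration pattern only (the endpoint URL and payload fields below are hypothetical placeholders, not the documented DQF API), a CAT tool plug-in might report an evaluation event over HTTPS like this:

    # Hypothetical sketch of reporting an evaluation event to a quality API.
    # The URL and field names are invented for illustration; consult the TAUS
    # DQF API documentation for the real endpoints and schemas.
    import requests

    payload = {
        "project_id": "example-project",   # hypothetical identifiers
        "segment_id": 42,
        "error_category": "terminology",
        "severity": "minor",
        "reviewer_id": "reviewer-007",
    }

    resp = requests.post(
        "https://dqf.example.com/api/v1/evaluations",  # placeholder URL
        json=payload,
        headers={"Authorization": "Bearer <API_KEY>"},
        timeout=10,
    )
    resp.raise_for_status()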

Reference: TAUS

SDL and TAUS Integration Offers Brands Industry Benchmarking Framework

SDL, a leader in global content management, translation and digital experience, today announced an integration between SDL Translation Management System (TMS) and the TAUS Dynamic Quality Framework (DQF), a comprehensive set of tools that helps brands benchmark the quality, productivity and efficiency of translation projects against industry standards.

The SDL TMS integration with TAUS DQF enables everyone involved in the translation supply chain – from translators and reviewers to managers – to improve the performance of their translation projects by learning from peers and implementing industry best practice. Teams can also use TAUS’ dynamic tools and models to assess and compare the quality of their translation output – both human and machine – with the industry’s averages for errors, fluency and post-editing productivity.

This enables brands to maintain quality – at extreme scale – and eliminate inefficiencies in the way content is created, managed, translated, and delivered to global audiences.

“One marketing campaign alone could involve translating over 50 pieces of content – and that’s just in one language. Imagine the complexity involved in translating content into over a dozen languages?” said Jim Saunders, Chief Product Officer, SDL. “Brands need a robust way to ensure quality when dealing with such high volumes of content. Our ongoing integrations with TAUS DQF tackle this challenge by fostering a knowledge-filled environment that creates improved ways to deliver and translate content.”

“Translating large volumes of content quickly can present enormous quality issues, and businesses are increasingly looking to learn from peers – and implement best-practices that challenge traditional approaches,” said TAUS Director, Jaap van der Meer. “Our development teams have worked closely with SDL to develop an integration that encourages companies not just to maintain high standards, but innovate and grow their business.”

The TAUS DQF offers a comprehensive set of tools, best practices, metrics, reports and data to help the industry set benchmarking standards. Its Quality Dashboard is available as an industry-shared platform, where evaluation and productivity data is presented in a flexible reporting environment. SDL TMS, now integrated within the TAUS DQF, is used by many Fortune 500 companies across most industries.

SDL already provides TAUS-ready packages for enterprise with our other language solutions. Customers of SDL WorldServer benefit from a connector to the TAUS DQF platform, enabling project managers to add and track a project’s productivity on the TAUS Quality Dashboard. Users can access both SDL WorldServer and SDL TMS through their SDL Trados Studio desktop, making it easy to share projects with the TAUS platform.

All SDL’s integrations with TAUS are designed to help centralize and manage a brand’s translation operations, resulting in lower translation costs, higher-quality translations and more efficient translation processes.

Reference: https://bit.ly/2EslqhA

Localizing Slogans: When Language Translation Gets Tricky

A slogan. It seems pretty straightforward. Translating a few words, or even a sentence, shouldn’t be all that complicated, right?
And yet we’ve seen countless examples of when localizing slogans has gone awry—from big global brands—illustrating just how tricky translating slogans can be.
Anybody recall Pepsi’s “Come alive with the Pepsi generation” tagline being translated into “Pepsi brings your ancestors back from the grave” in Chinese?
While humorous, this language translation misfortune can be costly—and not just in a monetary sense. We’re talking time-to-market and brand reputation costs, too.

Why slogans pose language translation difficulties

The very nature of slogans makes them challenging to translate. Many times slogans are very creative, playing on cultural idioms and puns.
There often isn’t a direct translation that can take on the exact meaning of your slogan. And, in fact, linguists may experience translation difficulties in attempting to complete the translation word for word.
Local nuances come into play as well. Some words may take on entirely different meanings in the target language and can be misinterpreted. Just think of product names that are often used in slogans. The Chevy Nova name was criticized in Latin America because “Nova” reads like “no va,” which translates to “doesn’t go.”
Also, different cultures have unique emotional reactions to given words. Take McDonald’s and its famous slogan “I’m lovin’ it.” The fast-food giant localized this slogan to “Me encanta,” or “I really like it,” so the mantra was more culturally appropriate for Spanish-speaking countries, where love is a strong word and only used in certain situations.
Because of the language translation difficulties involved, you may need a more specialized form of translation to ensure that your slogan makes a positive impact in your international markets.

How to approach localizing slogans

First and foremost, communication is vital throughout the entire localization process. When approaching slogans, we’ll collaborate with your marketing experts—whether internal or outside creative agencies—as well as your in-country linguists with marketing expertise.

Having in-country linguists work on your slogan is absolutely critical. These language translation experts are fully immersed in the target culture. They are cognizant of cultural nuances, slang, and idioms, which ensures that your slogan will make sense—and go over well—in your target locales.

We’ll review the concepts in the tagline or slogan as a team, identify any challenging words or phrases, and assess how to approach them. Oftentimes, a direct translation won’t work. We may need to localize it in a way that’s more appropriate, as in the McDonald’s “Me encanta” example above.

If it poses much difficulty, then we may need to turn to transcreation services.

Transcreation process and your slogan

Transcreation is a specialized, highly involved, and creative form of language translation.

Copywriter linguists will identify your brand qualities and portray those in a way that perfectly resonates with your target audience. Think of it as a mix of “translation” and “creation.” It’s not a word-for-word translation, but rather re-creating an idea or message so it fosters an emotional connection in a different culture.

Looking at a quick example, Nike’s celebrated slogan “Just do it” had no meaningful translation in Chinese. So instead, the message was transcreated to mean “Use sports” or “Have sport,” which had a more prominent impact in that culture.

Localizing slogans, or more specifically your slogan, correctly can mean a stronger global brand reputation—driving revenue and increased market share worldwide. Taking a hasty, nonchalant approach can mean just the opposite, and you may find yourself spending time and resources rectifying the consequences of a language translation error.

Reference: https://bit.ly/2GSx36x

New Frontiers in Linguistic Quality Evaluation

When it comes to translating content, everyone wants the highest quality translation at the lowest price. A recent report on the results of an industry survey of over 550 respondents revealed that translation quality is over four times more important than cost. For translation buyers, finding a language service provider (LSP) that can consistently deliver high-quality translation is significantly more important than price. For LSPs, finding linguists who can deliver cost-effective, high-quality translation is key to customer happiness.

Meeting quality expectations is even more difficult with the demand for higher volume. The Common Sense Advisory Global Market Survey 2018 of the top 100 LSPs predicts a trend toward continued growth. Three-quarters of those surveyed reported increased revenue and 80 percent reported an increase in work volume. That’s why improving and automating the tools and processes for evaluating linguistic quality are more important than ever. LSPs and enterprise localization groups need to look for quality evaluation solutions that are scalable and agile enough to meet the growing demand.

Evaluating the quality of translation is a two-step process. The first step involves the Quality Assurance (QA) systems and tools used by the translator to monitor and correct the quality of their work, and the second step is Translation Quality Assessment (TQA) or Linguistic Quality Evaluation (LQE), which evaluates quality using a model of defined values, parameters, and scoring based on representative sampling.

The Current State of Linguistic Quality Evaluation & Scoring

Many enterprise-level localization departments have staff specifically dedicated to evaluating translation quality. The challenge for these quality managers is creating and maintaining an easy-to-use system for efficiently scoring vendor quality.

Today, even the most sophisticated localization departments resort to spreadsheets and labor-intensive manual processes. The most commonly used LQE and scoring methods rely on offline, sequential processing. A March 2017 Common Sense Advisory, Inc. brief, “Translation Quality and In-Context Review Tools,” observed that the most widely used translation quality scorecards “suffer from a lack of integration.”

“Many LSPs continue to rely on in-house spreadsheet-based scorecards. They may be reluctant to switch to other tools that require process changes or that would raise direct costs. Unfortunately, these simple error-counting tools are typically inefficient because they don’t tie in with other production tools and cannot connect errors to specific locations in translated content. In addition, they are seldom updated to reflect TQA best practices, and it is common for providers to use scorecards that were developed many years earlier with unclear definitions and procedures.”

In an age of digital transformation and real-time cloud technology, LQE is overdue for an automated, integrated solution.

Reducing manual processes = reducing human error

One critical step to ensure quality translation is to reduce the number of manual processes and to automate evaluation as much as possible. There is a direct correlation between the number of manual processes and the likelihood of errors, which usually occur when cutting and pasting from the content management system (CMS) into spreadsheets and back again.

Evaluation scorecards, typically managed with spreadsheets, are very labor-intensive. The spreadsheets usually include columns for languages, projects, sample word counts, categories, and error types. They can also include complex algorithms for scoring severity. Evaluating quality segment by segment requires copying and pasting each correction, its severity, and so on.

To perform sample testing, localization quality managers extract some percentage of the total project to examine. If the project contains thousands of documents, they may use an equation – ten percent of the total word count, for example. They will then export those documents, load them into ApSIC Xbench, Okapi CheckMate, or some other tool for checking quality programmatically, and open a spreadsheet to enter quality feedback and/or issues. When the quality evaluation is complete, it is cut and pasted back into the CAT tool, often with annotations.
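A minimal sketch of that sampling step (the segment data and the ten-percent target are invented for illustration):

    # Illustrative sketch: draw a review sample of roughly ten percent of the
    # total word count from a list of translated segments. Data is invented.
    import random

    segments = [
        {"id": i, "text": f"Translated segment {i}", "words": random.randint(5, 40)}
        for i in range(1, 1001)
    ]

    total_words = sum(s["words"] for s in segments)
    target = total_words * 0.10  # ten percent of the total word count

    random.shuffle(segments)
    sample, sampled_words = [], 0
    for seg in segments:
        if sampled_words >= target:
            break
        sample.append(seg)
        sampled_words += seg["words"]

    print(f"sampled {len(sample)} segments, {sampled_words}/{total_words} words")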

LSPs resort to these less-than-desirable scoring methods because, until now, there haven’t been any tools on the market to create or administer a quality program at scale.

The New Frontier of Linguistic Quality Evaluation

Centralized quality management inside a TMS

Top-tier cloud translation management system (TMS) platforms now have the ability to make assessing vendor quality easier and more automated, with LQE and scoring inside the TMS. This capability can be purchased as a TMS add-on, or clients can outsource quality evaluation and assessment to LSPs offering quality services based on this innovative LQE technology.

The centralized storage of information and the agile change management that a full API and cloud technology provide eliminate the need to rely on error-prone manual processes. This approach centralizes quality management, supports flexible and dynamic scoring, and incorporates LQE as a seamless part of the workflow.

Currently, localization quality managers have to go into the TMS to get their sample, then bulk-select and download the information. With integrated LQE, there are no offline tasks to slow down the evaluation process or introduce human error. Quality evaluation is easily added to the workflow template by selecting from a list of published quality programs. From there, tasks are automatically assigned, and quality evaluation is performed in an integrated CAT tool/workbench, including running programmatic quality checks on the translated content.

Creating an LQE program inside the TMS

Creating and setting up a quality program can be challenging and time-consuming, but it will ensure that everyone identifies quality issues the same way, which will simplify and improve communication over what constitutes quality. It also requires a sophisticated level of experience: those who aren’t particularly skilled at LQE run the risk of costly inefficiencies and unreliable reporting.

The latest LQE software has the ability to base a quality program on an industry standard, such as the TAUS Dynamic Quality Framework (DQF) or the EU Multidimensional Quality Metrics (MQM). Because these standards can be overly complex and may contain more error types than needed, the software allows you to create a custom quality program by selecting elements of each.

Define error types, categories and severities

Inside the TMS, quality managers can create and define the core components of their quality program by defining error types, categories, and severities.

Severity levels range from major (errors that can affect product delivery or legal liability) to minor (errors that don’t impact comprehension but could have been stated more clearly). An error-rate model counts the errors and produces a percentage score, starting at 100% and deducting points for each error. Because it is important to differentiate how serious each error is, a numerical multiplier is applied to account for severity. The less common rubric model begins at zero, and points are added if the translation meets specific requirements, for example, awarding points for adherence to terminology and style guides.
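A minimal sketch of such an error-rate score (the severity multipliers and per-point penalty below are assumptions for illustration, not values taken from any published standard):

    # Illustrative error-rate scoring sketch. Severity multipliers and the
    # per-point penalty are assumptions, not values from DQF or MQM.
    SEVERITY_MULTIPLIER = {"minor": 1, "major": 5, "critical": 10}
    PENALTY_PER_POINT = 0.25  # percentage points deducted per weighted error point

    errors = [
        {"category": "terminology", "severity": "major"},
        {"category": "style", "severity": "minor"},
        {"category": "accuracy", "severity": "minor"},
    ]

    weighted_points = sum(SEVERITY_MULTIPLIER[e["severity"]] for e in errors)
    score = max(0.0, 100.0 - weighted_points * PENALTY_PER_POINT)
    print(f"quality score: {score:.2f}%")  # starts at 100% and deducts per weighted error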

Publishing

After creating your quality program, you need to think about how you are going to publish and distribute the quality program. Change management can become a nightmare if the program isn’t centralized. A cloud-based program allows you to publish, change, and unpublish quickly, so if you make an adjustment to a severity level designation, you have the ability to notify all users of the change immediately.

A cloud LQE app lets you keep prior versions of quality programs for historical reference, so translations will be held to the standards that applied at the time of translation, and not necessarily the most current standard. If your TMS doesn’t include this functionality, consider publishing your quality program on a wiki or in one of the many options for cloud-storage. This provides a centralized place that everyone is referring back to, instead of an offline spreadsheet.

Flexible and dynamic scoring

Scorecards, as CSA mentioned, need to be dynamic (based on content type, domain, connector, etc.) to manage translation in and out of the translation technology. Not all content requires the same quality level. A discussion forum or blog post may not need the level of review that a legal document or customer-facing brochure might require. The new frontier in flexible and dynamic scoring is an algorithm that can set up scorecards automatically depending on content type.

The algorithm also lets you establish a standardized word count as a baseline for comparing quality scores among documents of different sizes. This gives you an apples-to-apples comparison, because the same number of errors should be viewed differently in a 500-word document than in a 5,000-word sample. To create an accurate and efficient weighting or total error point system, flexibility is important.
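For instance, normalizing weighted error points to a standard word count (per 1,000 words is assumed here as the baseline) makes a 500-word document and a 5,000-word sample directly comparable:

    # Illustrative normalization: express weighted error points per 1,000 words
    # so that documents of different sizes can be compared on the same scale.
    def errors_per_thousand(weighted_points: float, word_count: int) -> float:
        return weighted_points * 1000 / word_count

    print(errors_per_thousand(3, 500))    # 6.0 points per 1,000 words
    print(errors_per_thousand(10, 5000))  # 2.0 points per 1,000 words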

Feedback loop

The most critical component for improving quality is making feedback accessible to all parties involved: linguists and translators, reviewers, quality managers, and clients. When all parties have access to feedback, it improves communication and reduces the back-and-forth that occurs when debating the subjective elements of scoring. Clear communication and consistently presented scoring help reviewers provide appropriate feedback quickly and easily.

Continuous, real-time feedback also creates an opportunity for improvement that is immediate. In offline scoring, a linguist may continue making the same mistake in several other projects before learning about the error. Cloud LQE enables real-time feedback that not only corrects an issue, but also trains linguists to improve the quality for the next (or even current) project.

The transparency this provides moves the entire process toward more objectivity, and the more objective the feedback, the less discussion is required to get clarification when a quality issue arises.

Quality reporting

Once linguistic quality evaluation has been done, you want to be able to review the data for quality reporting purposes. Cloud LQE allows reporting to be shared, so that clients can see issues affecting quality over time. You can track quality over time, by project and by locale, for all targets. Easy-to-read pie charts display the number of quality issues in each category such as terminology, style, language, and accuracy. This lets you monitor trends over time and to use that objective data for insights into improving quality delivery.

Conclusion

The new frontier in LQE is a cloud-based solution that improves user experience by streamlining quality evaluation. It reduces ambiguity, improves communication, and creates an objective platform to discuss and resolve quality issues.

With a single app for managing quality, LSPs and enterprise quality managers can streamline project setup and don’t have to rely on labor-intensive spreadsheets to describe or score the quality program. The minimal effort required to set up an online program is more than offset by the efficiency gains. You don’t have to move from Microsoft Excel to Word and then to a computer-assisted translation (CAT) tool; it’s now all in one place.

Efficiency of communication is also improved, making it easier for everyone to be on the same page when it comes to creation, scoring, publishing, and feedback. Improved quality data collection and reporting lets you monitor trends over time and use the objective data to inform your strategic decision making to improve translation quality.

As the CSA industry survey discovered, it’s not the price of translation that matters most, it’s the quality, so now may be the time to go boldly into this new LQE frontier.

Reference: https://bit.ly/2ItLRWF

Writefull: Improve Your Writing Skills

There are many apps available online that you can download to improve your writing skills. One free English-improvement tool that caught our attention is the Writefull app. Relatively new in the market, Writefull is a lightweight, feature-rich app with an intuitive user interface. It works by checking your written text against Google’s language data. Here is a detailed tutorial on how to use the Writefull application.

Terminology Sharing with GoldenDict & multiQA

Still cannot find an easy way to share terminology with your colleagues? Exchanging glossaries via email every day is not convenient. Many translators want to share new terms simultaneously with fellow linguists working on the same project, even while using different CAT tools. However, many terminology sharing systems are either expensive or complex. multiQA offers an out-of-the-box method for terminology collaboration.

Acrolinx: Content Quality Control

Acrolinx provides content optimization software based on a linguistic analysis engine that helps users create engaging, understandable, and search-ready content. Acrolinx offers a client-server architecture that analyzes content to give users feedback and metrics on content quality.

Translation Quality Assurance Tools

The most conventional definition of translation quality is that the translated text should be grammatically correct, have correct spelling and punctuation, and read as if it was originally written by a native speaker of the target language. We will refer to all quality assurance tasks performed to ensure this type of quality as linguistic. Obviously, most of these tasks require human intervention and are hard to automate.
