Tag: Quality Assurance

DQF: What Is It and How Does It Work?

What does DQF stand for?

DQF stands for the Dynamic Quality Framework. Quality is considered "dynamic" because translation quality requirements change depending on the content type, the purpose of the content, and its audience.

Why is DQF the industry benchmark?

DQF has been co-created since January 2011 by over fifty companies and organizations. Contributors include translation buyers, translation service providers, and translation technology suppliers. Practitioners continue to define requirements and best practices as they evolve through regular meetings and events.

How does DQF work?

DQF provides a commonly agreed approach for selecting the most appropriate translation quality evaluation model(s) and metrics for specific quality requirements. The underlying process, technology, and resources affect the choice of evaluation model. DQF Content Profiling, the Guidelines, and the Knowledge Base are used when creating or refining a quality assurance program. DQF provides a shared language, guidance on process, and standardized metrics that help users execute quality programs more consistently and effectively, improving efficiency within organizations and across supply chains. The result is increased customer satisfaction and a more credible quality assurance function in the translation industry.

The Content Profiling feature helps select the most appropriate quality evaluation model for specific requirements. This leads to the Knowledge Base, where you find best practices, metrics, step-by-step guides, reference templates, and use cases. The Guidelines are publicly available summaries of parts of the Knowledge Base and related topics.

What is included in DQF?

1. Content Profiling and Knowledge Base

The DQF Content Profiling Wizard is used to help select the most appropriate quality evaluation model for specific requirements. In the Knowledge Base you find supporting best practices, metrics, step-by-step guides, reference templates, use cases and more.

2. Tools

A set of tools that allows users to perform different types of evaluation: adequacy, fluency, error review, productivity measurement, and MT ranking and comparison. The DQF tools can be used in the cloud, offline, or indirectly through the DQF API.

3. Quality Dashboard

The Quality Dashboard is available as an industry-shared platform. In the dashboard, evaluation and productivity data is visualized in a flexible reporting environment. Users can create customized reports or filter data to be reflected in the charts. Both internal and external benchmarking is supported, offering the possibility to monitor one’s own development and to compare results to industry highs, lows and averages.

4. API

The DQF API allows users to assess productivity, efficiency, and quality on the fly during translation production. Developers and integrators are invited to use the API to connect with DQF from within their TMS or CAT tool environments.
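As a sketch of what such an integration might look like, the snippet below assembles the kind of per-segment productivity record a CAT tool plugin could send to a DQF-style REST endpoint. The field names and the endpoint mentioned in the comments are illustrative assumptions, not the documented TAUS DQF API.

```python
import json

# Hypothetical per-segment productivity record, as a CAT tool plugin
# might report it to a DQF-style API. All field names are illustrative
# assumptions, not the documented TAUS DQF schema.
def build_segment_record(source, target, edit_time_ms, mt_engine=None):
    return {
        "sourceSegment": source,
        "targetSegment": target,
        "editTimeMs": edit_time_ms,              # time spent editing the segment
        "origin": "MT" if mt_engine else "HT",   # machine vs. human translation
        "mtEngine": mt_engine,
    }

record = build_segment_record("Hello world", "Hallo Welt", 4200,
                              mt_engine="generic-nmt")
payload = json.dumps(record)  # body for a hypothetical POST to the API
```

Collecting this data segment by segment, as the translator works, is what makes "on the fly" productivity assessment possible.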

Reference: TAUS

SDL and TAUS Integration Offers Brands Industry Benchmarking Framework

SDL, a leader in global content management, translation and digital experience, today announced an integration between SDL Translation Management System (TMS) and the TAUS Dynamic Quality Framework (DQF), a comprehensive set of tools that help brands benchmark the quality, productivity and efficiency of translation projects against industry standards.

The SDL TMS integration with TAUS DQF enables everyone involved in the translation supply chain – from translators and reviewers to managers – to improve the performance of their translation projects by learning from peers and implementing industry best practice. Teams can also use TAUS’ dynamic tools and models to assess and compare the quality of their translation output – both human and machine – with the industry’s averages for errors, fluency and post-editing productivity.

This enables brands to maintain quality – at extreme scale – and eliminate inefficiencies in the way content is created, managed, translated, and delivered to global audiences.

“One marketing campaign alone could involve translating over 50 pieces of content – and that’s just in one language. Imagine the complexity involved in translating content into over a dozen languages?” said Jim Saunders, Chief Product Officer, SDL. “Brands need a robust way to ensure quality when dealing with such high volumes of content. Our ongoing integrations with TAUS DQF tackle this challenge by fostering a knowledge-filled environment that creates improved ways to deliver and translate content.”

“Translating large volumes of content quickly can present enormous quality issues, and businesses are increasingly looking to learn from peers – and implement best-practices that challenge traditional approaches,” said TAUS Director, Jaap van der Meer. “Our development teams have worked closely with SDL to develop an integration that encourages companies not just to maintain high standards, but innovate and grow their business.”

The TAUS DQF offers a comprehensive set of tools, best practices, metrics, reports and data to help the industry set benchmarking standards. Its Quality Dashboard is available as an industry-shared platform, where evaluation and productivity data is presented in a flexible reporting environment. SDL TMS, now integrated within the TAUS DQF, is used by many Fortune 500 companies across most industries.

SDL already provides TAUS-ready packages for the enterprise with its other language solutions. Customers of SDL WorldServer benefit from a connector to the TAUS DQF platform, enabling project managers to add and track a project’s productivity on the TAUS Quality Dashboard. Users can access both SDL WorldServer and SDL TMS through their SDL Trados Studio desktop, making it easy to share projects with the TAUS platform.

All SDL’s integrations with TAUS are designed to help centralize and manage a brand’s translation operations, resulting in lower translation costs, higher-quality translations and more efficient translation processes.

Reference: https://bit.ly/2EslqhA

Localizing Slogans: When Language Translation Gets Tricky

A slogan. It seems pretty straightforward. Translating a few words, or even a sentence, shouldn’t be all that complicated, right?
And yet we’ve seen countless examples of when localizing slogans has gone awry—from big global brands—illustrating just how tricky translating slogans can be.
Anybody recall Pepsi’s “Come alive with the Pepsi generation” tagline being translated into “Pepsi brings your ancestors back from the grave” in Chinese?
While humorous, this language translation misfortune can be costly—and not just in a monetary sense. We’re talking time-to-market and brand reputation costs, too.

Why slogans pose language translation difficulties

The very nature of slogans makes them challenging to translate. Many times slogans are very creative, playing on cultural idioms and puns.
There often isn’t a direct translation that captures the exact meaning of your slogan, and linguists may run into real difficulty when attempting to translate it word for word.
Local nuances come into play as well. Some words may carry entirely different meanings in the target language and can be misinterpreted. Just think of product names, which are often used in slogans. The Chevy Nova name drew criticism in Latin America because “no va” reads in Spanish as “it doesn’t go.”
Also, different cultures have unique emotional reactions to given words. Take McDonald’s and its famous slogan “I’m lovin’ it.” The fast food mogul localized this slogan to “Me encanta” or “I really like it,” so the mantra was more culturally appropriate for Spanish-speaking countries, where love is a strong word and only used in certain situations.
Because of the language translation difficulties involved, you may need a more specialized form of translation to ensure that your slogan makes a positive impact in your international markets.

How to approach localizing slogans

First and foremost, communication is vital throughout the entire localization process. When approaching slogans, we’ll collaborate with your marketing experts—whether internal or outside creative agencies—as well as your in-country linguists with marketing expertise.

Having in-country linguists work on your slogan is absolutely critical. These language translation experts are fully immersed in the target culture. They are cognizant of cultural nuances, slang and idioms, which ensures that your slogan will make sense – and go over well – in your target locales.

We’ll review the concepts in the tagline or slogan as a team, identify any challenging words or phrases, and assess how to approach them. Oftentimes, a direct translation won’t work. We may need to localize it in a way that’s more appropriate, as in the McDonald’s “Me encanta” example above.

If it poses much difficulty, then we may need to turn to transcreation services.

Transcreation process and your slogan

Transcreation is a specialized form of language translation that is a highly involved and creative process.

Copywriter linguists will identify your brand qualities and portray those in a way that perfectly resonates with your target audience. Think of it as a mix of “translation” and “creation.” It’s not a word-for-word translation, but rather re-creating an idea or message so it fosters an emotional connection in a different culture.

Looking at a quick example, Nike’s celebrated slogan “Just do it” had no meaningful translation in Chinese. So instead, the message was transcreated to mean “Use sports” or “Have sport,” which had a more prominent impact in that culture.

Localizing slogans, or more specifically, your slogan, correctly can mean a stronger global brand reputation – driving revenue and increased market share worldwide. Taking a hasty, nonchalant approach can mean just the opposite, and you may find yourself spending time and resources rectifying the fallout from a language translation error.

Reference: https://bit.ly/2GSx36x

New Frontiers in Linguistic Quality Evaluation

When it comes to translating content, everyone wants the highest-quality translation at the lowest price. A recent report on an industry survey of over 550 respondents revealed that translation quality is over four times more important than cost. For translation buyers, finding a language service provider (LSP) that can consistently deliver high quality is significantly more important than price. For LSPs, finding linguists who can deliver cost-effective, high-quality translation is key to customer happiness.

Meeting quality expectations is even more difficult with the demand for higher volume. The Common Sense Advisory Global Market Survey 2018 of the top 100 LSPs predicts a trend toward continued growth. Three-quarters of those surveyed reported increased revenue and 80 percent reported an increase in work volume. That’s why improving and automating the tools and processes for evaluating linguistic quality are more important than ever. LSPs and enterprise localization groups need to look for quality evaluation solutions that are scalable and agile enough to meet the growing demand.

Evaluating the quality of translation is a two-step process. The first step involves Quality Assurance (QA) systems and tools used by the translator to monitor and correct the quality of their work, and the second step is Translation Quality Assessment (TQA) or Linguistic Quality Evaluation (LQE), which evaluates quality using a model of defined values, parameters, and scoring based on representative sampling.

The Current State of Linguistic Quality Evaluation & Scoring

Many enterprise-level localization departments have staff specifically dedicated to evaluating translation quality. The challenge for these quality managers is creating and maintaining an easy-to-use system for efficiently scoring vendor quality.

Today, even the most sophisticated localization departments resort to spreadsheets and labor-intensive manual processes. The most commonly used LQE and scoring methods rely on offline, sequential processing. A March 2017 Common Sense Advisory, Inc. brief, “Translation Quality and In-Context Review Tools,” observed that the most widely used translation quality scorecards “suffer from a lack of integration.”

“Many LSPs continue to rely on in-house spreadsheet-based scorecards. They may be reluctant to switch to other tools that require process changes or that would raise direct costs. Unfortunately, these simple error-counting tools are typically inefficient[1] because they don’t tie in with other production tools and cannot connect errors to specific locations in translated content. In addition, they are seldom updated to reflect TQA best practices, and it is common for providers to use scorecards that were developed many years earlier with unclear definitions and procedures.”

In an age of digital transformation and real-time cloud technology, LQE is overdue for an automated, integrated solution.

Reducing manual processes = reducing human error

One critical step to ensure quality translation is to reduce the number of manual processes and to automate evaluation as much as possible. There is a direct correlation between the number of manual processes and the increased likelihood of errors. These usually occur when cutting and pasting from the content management system (CMS) into spreadsheets and back again.

Evaluation scorecards, typically managed with spreadsheets, are very labor intensive. The spreadsheets usually include columns for languages, projects, sample word counts, categories, and error types. They also can include complex algorithms for scoring severity. To evaluate quality segment by segment requires copying and pasting what was corrected, the severities of each, etc.

To perform sample testing, localization quality managers extract some percentage of the total project to examine. If the project contains thousands of documents, they may use an equation – ten percent of the total word count, for example. They will then export those documents, load them into ApSIC Xbench, Okapi CheckMate, or some other tool for checking quality programmatically, and open a spreadsheet to enter quality feedback and/or issues. When the quality evaluation is complete, it is cut and pasted back into the CAT tool, often with annotations.
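As a rough sketch of the sampling step just described, using the ten-percent rule as the example equation, the snippet below picks documents at random until the sample reaches the target share of the project's word count. The `(doc_id, word_count)` data structure is an assumption made for illustration.

```python
import random

def sample_for_review(documents, fraction=0.10, seed=42):
    """Pick documents until the sample covers ~`fraction` of total word count.

    `documents` is a list of (doc_id, word_count) pairs. A fixed seed keeps
    the sample reproducible across evaluation runs.
    """
    total_words = sum(wc for _, wc in documents)
    target = total_words * fraction
    rng = random.Random(seed)
    shuffled = documents[:]
    rng.shuffle(shuffled)          # randomize so the sample is representative
    sample, sampled_words = [], 0
    for doc_id, wc in shuffled:
        if sampled_words >= target:
            break
        sample.append(doc_id)
        sampled_words += wc
    return sample, sampled_words

docs = [(f"doc{i}", 500) for i in range(100)]   # 50,000 words in total
picked, words = sample_for_review(docs)          # ~5,000 words to review
```

An integrated TMS would run this selection server-side instead of requiring the manager to export and count documents by hand.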

LSPs resort to these less-than-desirable scoring methods because, until now, there haven’t been any tools on the market for creating or administering a quality program at scale.

The New Frontier of Linguistic Quality Evaluation

Centralized quality management inside a TMS

Top-tier cloud translation management system (TMS) platforms now have the ability to make assessing vendor quality easier and more automated, with LQE and scoring inside the TMS. It can be purchased as a TMS add-on, or clients can outsource quality evaluation and assessment to LSPs that offer quality services built on this LQE technology.

The centralized storage of information and the agile change management that a full API and cloud technology can provide eliminates the need to rely on error-prone manual processes. It centralizes quality management, supports flexible and dynamic scoring, and incorporates LQE as a seamless part of the workflow.

Currently, localization quality managers have to go into the TMS to get their sample, bulk select and download the information. With integrated LQE, there are no offline tasks to slow down the evaluation process or that can lead to human error. Quality evaluation is easily added to the workflow template by selecting from a list of published quality programs. From there, tasks are automatically assigned, and quality evaluation is performed in an integrated CAT tool/workbench, including running programmatic quality checks on the translated content.

Creating an LQE program inside the TMS

Creating and setting up a quality program can be challenging and time-consuming, and it requires a sophisticated level of experience; those who aren’t particularly skilled at LQE run the risk of costly inefficiencies and unreliable reporting. Done well, however, it ensures that everyone identifies quality issues the same way, which simplifies and improves communication about what constitutes quality.

The latest LQE software has the ability to base a quality program on an industry standard, such as the TAUS Dynamic Quality Framework (DQF) or the EU Multidimensional Quality Metrics (MQM). Because these standards can be overly complex and may contain more error types than needed, the software allows you to create a custom quality program by selecting elements of each.

Define error types, categories and severities

Inside the TMS, quality managers can create and define the core components of their quality program by defining error types, categories, and severities.

Severity levels range from major (errors that can affect product delivery or create legal liability) to minor (errors that don’t impact comprehension but could have been stated more clearly). An error-rate model counts the errors and produces a percentage score, starting at 100% and deducting points for each error. Because it is important to differentiate how serious each error is, a numerical multiplier is applied to account for severity. The less common rubric model begins at zero, and points are added when the translation meets specific requirements – for example, awarding points for adherence to terminology and style guides.
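A minimal sketch of the error-rate model described above, assuming illustrative severity multipliers and a per-error point weight (real quality programs define their own values):

```python
# Illustrative severity multipliers; real quality programs define their own.
SEVERITY_MULTIPLIER = {"minor": 1, "major": 5, "critical": 10}

def error_rate_score(errors, points_per_error=0.1):
    """Start at 100% and deduct severity-weighted points for each error.

    `errors` is a list of (category, severity) tuples from the review.
    """
    deduction = sum(SEVERITY_MULTIPLIER[sev] * points_per_error
                    for _, sev in errors)
    return max(0.0, 100.0 - deduction)   # score never drops below zero

score = error_rate_score([("terminology", "minor"), ("accuracy", "major")])
```

With these assumed weights, one minor and one major error deduct 0.6 points, giving a score of about 99.4%; a rubric model would instead start at zero and add points for requirements met.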

Publishing

After creating your quality program, you need to think about how you are going to publish and distribute the quality program. Change management can become a nightmare if the program isn’t centralized. A cloud-based program allows you to publish, change, and unpublish quickly, so if you make an adjustment to a severity level designation, you have the ability to notify all users of the change immediately.

A cloud LQE app lets you keep prior versions of quality programs for historical reference, so translations will be held to the standards that applied at the time of translation, and not necessarily the most current standard. If your TMS doesn’t include this functionality, consider publishing your quality program on a wiki or in one of the many options for cloud-storage. This provides a centralized place that everyone is referring back to, instead of an offline spreadsheet.
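One way to sketch the "standards in force at translation time" rule is a lookup over dated program versions. The dates, version labels, and fields below are invented for illustration only:

```python
from datetime import date

# Invented example versions of a quality program, ordered by effective date.
PROGRAM_VERSIONS = [
    (date(2017, 1, 1), {"version": "v1", "minor_multiplier": 1}),
    (date(2018, 6, 1), {"version": "v2", "minor_multiplier": 2}),
]

def program_in_force(translation_date):
    """Return the latest program version published on or before the date,
    so historical translations are scored against the standard of their day."""
    applicable = [prog for effective, prog in PROGRAM_VERSIONS
                  if effective <= translation_date]
    return applicable[-1] if applicable else None

old_standard = program_in_force(date(2017, 7, 1))   # v1 applied back then
new_standard = program_in_force(date(2019, 1, 1))   # v2 applies today
```

Keeping this lookup centralized in the cloud is what lets every reviewer resolve the same version, instead of guessing from a stale spreadsheet.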

Flexible and dynamic scoring

Scorecards, as CSA noted, need to be dynamic – varying by content type, domain, connector, and so on – to manage translation in and out of the translation technology. Not all content requires the same quality level: a discussion forum or blog post may not need the level of review that a legal document or customer-facing brochure requires. The new frontier in flexible and dynamic scoring is an algorithm that can set up scorecards automatically depending on content type.

The algorithm also lets you establish a standardized word count as a baseline for comparing quality scores across documents of different sizes. This gives you an apples-to-apples comparison, because the same number of errors should be viewed differently in a 500-word document than in a 5,000-word sample. Flexibility is important for creating an accurate and efficient weighting or total-error-point system.
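The baseline comparison described above can be sketched as a simple normalization to errors per 1,000 words; the unit size is an assumption here, since each quality program picks its own standardized word count:

```python
def errors_per_standard_unit(error_count, word_count, unit=1000):
    """Normalize a raw error count to errors per `unit` words, so documents
    of different sizes can be compared on the same scale."""
    return error_count * unit / word_count

short_doc = errors_per_standard_unit(5, 500)    # 5 errors in a 500-word doc
long_doc = errors_per_standard_unit(5, 5000)    # same count, 10x the words
```

The same five errors come out to 10 errors per thousand words in the short document but only 1 per thousand in the long one, which is exactly why raw error counts mislead without normalization.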

Feedback loop

The most critical component for improving quality is making feedback accessible to all parties involved: linguists and translators, reviewers, quality managers, and clients. When all parties have access to feedback, it improves communication and reduces the back-and-forth that occurs when debating the subjective elements of scoring. When communication is clear and scoring is consistently applied, reviewers can provide the appropriate feedback quickly and easily.

Continuous, real-time feedback also creates an opportunity for improvement that is immediate. In offline scoring, a linguist may continue making the same mistake in several other projects before learning about the error. Cloud LQE enables real-time feedback that not only corrects an issue, but also trains linguists to improve the quality for the next (or even current) project.

The transparency this provides moves the entire process toward more objectivity, and the more objective the feedback, the less discussion is required to get clarification when a quality issue arises.

Quality reporting

Once linguistic quality evaluation has been done, you want to be able to review the data for quality reporting purposes. Cloud LQE allows reporting to be shared, so that clients can see issues affecting quality over time. You can track quality over time, by project and by locale, for all targets. Easy-to-read pie charts display the number of quality issues in each category such as terminology, style, language, and accuracy. This lets you monitor trends over time and to use that objective data for insights into improving quality delivery.

Conclusion

The new frontier in LQE is a cloud-based solution that improves user experience by streamlining quality evaluation. It reduces ambiguity, improves communication, and creates an objective platform to discuss and resolve quality issues.

With a single app for managing quality, LSPs and enterprise quality managers can streamline project setup and don’t have to rely on labor-intensive spreadsheets to describe or score the quality program. The minimal effort required to set up an online program is more than offset by the efficiency gains. You don’t have to move from Microsoft Excel to Word and then to a computer-assisted translation (CAT) tool; it’s now all in one place.

Efficiency of communication is also improved, making it easier for everyone to be on the same page when it comes to creation, scoring, publishing, and feedback. Improved quality data collection and reporting lets you monitor trends over time and use the objective data to inform your strategic decision making to improve translation quality.

As the CSA industry survey discovered, it’s not the price of translation that matters most, it’s the quality – so now may be the time to go boldly into this new LQE frontier.

Reference: https://bit.ly/2ItLRWF

Writefull: Improve Your Writing Skills

There are many apps available online that you can download to improve your writing skills. One free English-improvement tool that caught our attention is the Writefull app. Relatively new to the market, Writefull is a lightweight, feature-rich app with an intuitive user interface. It works on the basic principle of checking your written text against Google’s language data. Here is a detailed tutorial on how to use the Writefull application.

Read More

Terminology Sharing with GoldenDict & multiQA

Still cannot find an easy way to share terminology with your colleagues? Exchanging glossaries via email every day is not convenient. Many translators want to share new terms simultaneously with fellow linguists working on the same project, even while using different CAT tools. However, most terminology sharing systems are either expensive or overly complex. multiQA offers an out-of-the-box method for terminology collaboration.

Read More

Acrolinx: Content Quality Control

Acrolinx provides content optimization software based on a linguistic analysis engine that helps users create engaging, understandable, and search-ready content. Acrolinx offers a client-server architecture that analyzes content to give users feedback and metrics on content quality.

Read More

Translation Quality Assurance Tools

The most conventional definition of translation quality is that the translated text should be grammatically correct, have correct spelling and punctuation, and read as if it were originally written by a native speaker of the target language. We will refer to all quality assurance tasks performed to ensure this type of quality as linguistic. Obviously, most of these tasks require human intervention and are hard to automate.

Read More