Specified categories create a consistent standard that can be applied across various industries and therefore produces comparable results.
The Dynamic Quality Framework (DQF) was developed by TAUS (the Translation Automation User Society) and is intended to respond flexibly and dynamically to the various factors involved in assessing and measuring quality.
These include, for example, constantly changing market and customer requirements as well as the industry's rapid technological progress, particularly in the field of machine translation. To turn these diverse requirements into a simple and usable metric, the DQF was combined with the Multidimensional Quality Metrics (MQM) developed by the German Research Center for Artificial Intelligence (Deutsches Forschungszentrum für Künstliche Intelligenz, DFKI). The resulting DQF-MQM metric was created as part of the QT21 project, which was funded by the European Union.
When conducting a review in accordance with the DQF-MQM metric, the document is not checked for linguistic quality alone – the focus is above all on the relationship between the translated content and the expectations of the target group or target market. This comprehensive metric is tailored to the customer's specific requirements in collaboration with them and adjusted flexibly as those requirements change.
Thanks to this flexible approach, proofreading according to the DQF-MQM framework is recommended above all for text types that regularly generate customer or market feedback and in which the text's functionality for the target group is the main priority.
Are you interested in proofreading in accordance with the DQF framework for your next translation, and would you like more information? Contact us by telephone on +49 (0) 221 9259860 or by email at email@example.com.
To tailor the review, the following questions are clarified with the customer in advance:
Does the proofreading relate to tsd’s own work or to that of another service provider?
Should individual translations be reviewed (e.g. as random samples), or should all texts be included?
Which categories of the framework should be included in the review?
Are there any other customer-specific specifications, such as extended error categories, that need to be taken into account?
Should tsd send the results for each individual proofreading job or provide a cumulative result in a monthly dashboard?