Translation Memories and Term Bases were amazing breakthroughs…back in the 1970s. That’s nearly 50 years ago, and we are still working with the same archaic technology. Sure, we have refined the matching algorithms and introduced adaptive machine translation, but the basic paradigm remains unaltered.
If you want to translate, you have two main tools: a translation memory and a glossary. Compared to the technology out there disrupting other industries, we are translating in hunter-gatherer, Neanderthal style.

There is nothing intrinsically wrong with TMs and TBs. They are great tools.
The challenge is that they are static file dumps that do not properly account for the dynamic nature of language workflows.

Here is an example. A translator thinks a term should be added to a term base. They can suggest the term in the CAT tool they are working with, add it to a shared spreadsheet, or communicate it to a project manager, among other options. All of these, though, fall short of completing the full journey of a term.
Here are the four easy steps: birth, translation, approval, and publishing.
A term needs to be born, either out of statistical or AI mining, and needs to be evaluated by an editor for context and relevance. Once it is deemed worthy as a term, it then requires translations. Not just any kind of translation, but the well-researched, in-context, thoughtful translation a key term deserves.
These translations need to flow through to an expert editor who signs off on that translation for that term. Once that is done, the term is published.

Now that you have that term published for all translations moving forward, how do you handle the entire linguistic corpus managed before that term was promoted to glossary level? Let’s say you changed the translation of “pen” to “stylograph”. Now the entire translation memory needs to be updated, strings potentially need to be republished, and all of this is a manual, time-consuming, error-prone process.
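To make the ripple effect concrete, here is a rough Python sketch of what such a global update involves. Everything in it is hypothetical, for illustration only: a toy translation memory of source/target segments, a made-up `propagate_term_change` function, and naive whole-word matching (real TMs live in formats like TMX, and real tools have to handle inflection, casing, and context):

```python
import re
from dataclasses import dataclass

@dataclass
class Segment:
    source: str  # source-language text
    target: str  # target-language text

def propagate_term_change(tm: list[Segment], source_term: str,
                          old_target: str, new_target: str) -> list[Segment]:
    """Rewrite TM targets that still use the old translation of a term.

    Only segments whose source actually contains the term are touched,
    and matching is whole-word to avoid corrupting unrelated text.
    """
    source_pat = re.compile(rf"\b{re.escape(source_term)}\b")
    target_pat = re.compile(rf"\b{re.escape(old_target)}\b")
    updated = []
    for seg in tm:
        if source_pat.search(seg.source) and target_pat.search(seg.target):
            seg = Segment(seg.source, target_pat.sub(new_target, seg.target))
        updated.append(seg)
    return updated

# "pen" was previously carried over as "pen"; the approved term is now "stylograph".
tm = [Segment("Click the pen icon.", "Click the pen icon.")]
tm = propagate_term_change(tm, "pen", "pen", "stylograph")
print(tm[0].target)  # Click the stylograph icon.
```

Even this toy version hints at why the manual process is so painful: languages inflect, terms show up in many surface forms, and a blind find-and-replace across thousands of segments can silently corrupt translations that a human then has to catch.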
We believe that term bases are dynamic entities that can, will, and in fact should change over time. One of the main reasons people treat term bases as static is that it is so darn complicated to manage the ripple effects of a change across translation memories and the full body of work.
So due to technological limitations, we become lazy and complacent. It’s a nightmare scenario to be continuously updating hundreds of terms across dozens of languages. We don’t need to anymore.
Our terminology management tool treats terms as living, breathing, adaptable entities that will change over time. Our tech focuses on keeping track of these workflows and ensuring that the right people are involved at the right level to suggest, translate, and sign off on a term.
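As a rough illustration of that workflow idea (not our actual data model; the stage and role names below are made up), the four-step journey can be modeled as a small state machine where each transition requires the right role:

```python
from enum import Enum, auto

class Stage(Enum):
    SUGGESTED = auto()   # born out of mining or a translator's suggestion
    TRANSLATED = auto()  # in-context translations drafted
    APPROVED = auto()    # an expert editor signed off
    PUBLISHED = auto()   # live in the glossary going forward

# Who may move a term from one stage to the next (hypothetical roles).
TRANSITIONS = {
    (Stage.SUGGESTED, Stage.TRANSLATED): "translator",
    (Stage.TRANSLATED, Stage.APPROVED): "editor",
    (Stage.APPROVED, Stage.PUBLISHED): "terminologist",
}

class Term:
    def __init__(self, text: str):
        self.text = text
        self.stage = Stage.SUGGESTED  # every term starts as a suggestion

    def advance(self, to: Stage, role: str) -> None:
        required = TRANSITIONS.get((self.stage, to))
        if required is None:
            raise ValueError(f"cannot go from {self.stage.name} to {to.name}")
        if role != required:
            raise PermissionError(f"moving to {to.name} requires a {required}")
        self.stage = to

term = Term("pen")
term.advance(Stage.TRANSLATED, role="translator")
term.advance(Stage.APPROVED, role="editor")
term.advance(Stage.PUBLISHED, role="terminologist")
```

The point is not the code itself but its shape: every change becomes an auditable transition with an owner, rather than an edit to a shared spreadsheet.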
We also provide tools so that global changes can be made across multiple TMs more easily than before.

However, the true vision is a glossary that dynamically cross-references translation memories, flagging and fixing inconsistencies to keep the linguistic corpus consistent throughout.
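Sketching that vision under the same toy assumptions as before (a TM as plain source/target string pairs, one approved translation per term, naive whole-word matching, and a hypothetical `flag_inconsistencies` function), the cross-referencing step might look like this:

```python
import re

def flag_inconsistencies(glossary: dict[str, str],
                         tm: list[tuple[str, str]]) -> list[tuple[int, str]]:
    """Return (segment index, term) pairs where the source uses a glossary
    term but the target does not use its approved translation."""
    flagged = []
    for i, (source, target) in enumerate(tm):
        for term, approved in glossary.items():
            uses_term = re.search(rf"\b{re.escape(term)}\b", source, re.I)
            uses_approved = re.search(rf"\b{re.escape(approved)}\b", target, re.I)
            if uses_term and not uses_approved:
                flagged.append((i, term))  # candidate for review or auto-fix
    return flagged

glossary = {"pen": "stylograph"}
tm = [("Click the pen icon.", "Click the pen icon."),            # stale: flag it
      ("Open the stylograph menu.", "Open the stylograph menu.")]  # consistent
print(flag_inconsistencies(glossary, tm))  # [(0, 'pen')]
```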
That’s just a teeny-tiny example of how weak knowledge management is, and how it prevents us from truly shining at linguistic management. So much more could be done in string management, translation memory, and terminology management if the tech actually focused on solving use cases based on how people actually work, rather than just improving on a 50-year-old, obsolete feature set.
Written by Gabriel Fairman
Gabriel is the founder and CEO of Bureau Works. He loves change—and eating grass.