TRAINING CASE STUDY: TenCate
TenCate Geosynthetics Experiment with Automatic Voice Translation in the Classroom Confirms Speech Translation as the Future of Global Education
Case: Fred Chuck, TenCate’s Director of Corporate Training & Industry Affairs
After a week-long corporate training called “Mirafi® University”, and while the experience was still fresh in his mind, Fred Chuck, Director of Corporate Training & Industry Affairs for TenCate, took time to share highlights from his use of the Translate Your World automatic voice translation system for the event.
TenCate, a multi-national corporation known for its futuristic materials and fabrics for the protection and safety of humans, products, and the environment, flew 24 people from South America to the USA. That audience found itself part of an experiment in global communication. For the first time, TenCate offered cross-language training using the automatic voice translation system called “TYWI-Live”, developed by Translate Your World International (“TYWI”). TYWI is an automatic interpretation system that creates instant captions and translated subtitles from a speaker’s voice, then turns those subtitles into a synthesized computer voice in the translation language. Each member of the audience selects the language they desire and chooses their preference for subtitles and/or voice.
“Before I get into my notes, I thank Translate Your World for the use of their automatic speech translation service for our program. Simply put, this translation system is phenomenal. The software absorbed our speaking voices and generated instant captions and translated subtitles. Plus, we were able to send a computerized voice to the audience’s earphones.
There appear to be three steps to voice translation:
(1) recognizing what we say and turning it into text (speech recognition),
(2) translating that text into another language (automatic translation), and
(3) turning the text into an attractive computerized voice (text-to-speech).
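The three steps above can be sketched as a simple pipeline. This is only an illustrative sketch, not TYWI’s actual implementation: the stage functions below are hypothetical placeholders standing in for real speech-recognition, machine-translation, and text-to-speech engines.

```python
# Hypothetical sketch of the three-stage voice-translation pipeline.
# Each stage is a placeholder; a real system would call actual speech,
# translation, and TTS engines here.

def speech_to_text(audio: bytes) -> str:
    """Step 1: speech recognition (placeholder: pretend the audio is text)."""
    return audio.decode("utf-8")

def translate(text: str, target_lang: str) -> str:
    """Step 2: automatic translation (placeholder: tag with the language)."""
    return f"[{target_lang}] {text}"

def text_to_speech(text: str) -> bytes:
    """Step 3: text-to-speech synthesis (placeholder: bytes, not audio)."""
    return text.encode("utf-8")

def interpret(audio: bytes, target_lang: str) -> tuple[str, bytes]:
    """Chain the three steps; return (translated subtitle, synthesized voice)."""
    subtitle = translate(speech_to_text(audio), target_lang)
    return subtitle, text_to_speech(subtitle)

subtitle, voice = interpret(b"Technology is the future", "es")
```

Each audience member effectively selects which of the two outputs they want: the subtitle, the synthesized voice, or both.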
TYWI gave us several options for recognizing our voices and turning our words into text, plus several options for automatic translation software (far more than Google), each of which seems to have strengths in certain subject matters. Then there was a choice of how the audience would receive the translation: as subtitles on the wall or on their devices; as a translated voice on their device, channel earphones, or a loudspeaker; or a hybrid of the two.
For recognizing our voices, one of the choices was to use Nuance.com’s Dragon NaturallySpeaking 13+, either on our laptop or using TYWI’s online version. We opted to put Dragon on one laptop. To that we added a wireless microphone so that we could walk around as we talked; the mic sent our speaking voice to the USB port of the laptop. To use Translate Your World, we turned on Dragon, opened our TYWI webpage, clicked in a web field, and started to talk. The subtitles flowed like crazy all day long.
On the first day I was the presenter. We used TYWI to produce subtitles (on a separate projector and screen) and an audio feed that we sent to the participants via the Plant-Tours.com transmitter/receiver system. Both systems worked very well. The audience finally settled on subtitles as being the most useful for this particular course, partly because they liked to hear our original speaking voices, and partly because the subtitles arrive even faster than the translated voice.
On the second day there were three presenters for a total of 8 hours. Overnight we voice-trained Dragon using the Voice Profile feature, for which we selected the more detailed script reading (20 minutes) for each of the presenters. The translations became even more accurate than they had been on the first day, and the subtitles kept pace with the presenters.
Having three presenters gave us an interesting opportunity for comparison. The subtitles translated quite well. They functioned ideally and immediately for one presenter who was accustomed to speaking to international audiences. The other two learned quickly that they were speaking too fast when they saw their subtitles piling up over multiple lines. They began breaking their sentences into phrases or “chunks”, and taking breaths more often. Every time they took a breath, a subtitle would be sent to the screen. So, speaking at “normal speed” definitely had better results than talking at lightning speed.
We all learned to speak deliberately and try to limit our inflections and colloquialisms. As a result, the subtitle translations were in shorter segments, quicker to read, and well translated.
On the third day there were two presenters for 4 hours each. Only one of them had the opportunity to train with the detailed voice profile; the other’s voice profile training was done quickly during a 15-minute break. Both translations still seemed to run accurately, even for the one who did not train in detail, although periodically there were very long subtitles when the presenters did not breathe often enough.
As to vocabulary, our company makes advanced textiles, composites, geosynthetics, and specialized grass for the protection of people and property in both Westernized and Third World countries. The terminology we use can be highly specialized. Without any special preparation, even our ‘technical’ terms translated well enough for the group to understand. There is a Personal Dictionary feature that we are excited to try for the next training, where we will enter our corporate terminology for even more precision.
Maybe the best comment came from the spokesperson for the Latin/South American attendees at the end of the program. He said that they had been worried about the translation issue and that using TYWI voice translations worked better than they could ever have hoped. Also, they were impressed that we took the extra effort to provide the translations using TYWI.
Next time we will use the “Bilingual subtitles” feature (with English over Spanish on screen); many people in the audience specifically stated that this would be helpful. Additionally, we want to offer the participants the opportunity to use TYWI on their own laptops. This would permit them to talk into TYWI themselves and take an interactive part in the conversation, in addition to chatting back comments and questions as text in their native language for us to read in English.
Based upon what we experienced during this multi-day training university, using automatic speech translation technology is the future of global education. Or, as TYWI would say:
“→ La tecnología es el futuro de la educación global.” (“Technology is the future of global education.”)