In the United States today, over one in five households uses a language other than English, and people from all over the world seek care in American emergency rooms, clinics, and hospitals.
The result? Nurses, doctors, and hospital staff regularly encounter a language barrier when treating patients.
This language barrier is frustrating for patients and providers, and in health care, it can even be dangerous. Clear communication is critical to delivering good care. If medical and cultural information gets lost in translation when patients are describing their symptoms during intake or when physicians explain treatment procedures, the results can be deadly.
Health care translation reduces risk and can improve patient survival rates during health emergencies. In the past, hospitals usually relied on certified medical interpreters, professionals specially trained to serve as communicators between patients and medical staff.
However, medical interpreters can be a costly addition to a health care team, and it’s impossible to prepare adequately for every possible language using human interpreters alone. Given the sheer number of languages in the world, even a hospital with several interpreters on staff is unlikely to cover every language that doctors and nurses encounter.
Technology Can Bridge Language Gaps
This is where emerging technology may be able to break down the language barrier. Voice recognition technologies, including those provided by Google Assistant, Google Home, and Google Translate, allow for two-way interpretation and can transcribe and translate single-speaker dictations such as a doctor’s instructions.
During office visits, doctors can speak or type into a digital device, and then patients can read the translated text on the screen or use an embedded speech engine to hear instructions in their own languages. Such technologies enable a medical team to communicate with patients without always needing a certified medical interpreter.
While these methods can’t be used for informed consent processes and other areas where precise communication is required, they are already going a long way toward supporting better communication between doctors and patients, without as much need for human translators.
New voice recognition research is currently focusing on technologies that can decipher multiple-speaker conversations. Even in busy or noisy health care environments, these programs could soon translate complex medical diagnoses and patient interactions, according to a recent report in Health IT Analytics.
According to another study on the subject, two newer speech recognition approaches, Connectionist Temporal Classification (CTC) and Listen, Attend and Spell (LAS), are also getting better at working through “messy” data such as a complicated conversation between multiple people in uncontrolled settings.
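The core idea behind CTC can be illustrated with a toy example. A CTC model labels every short slice of audio, including a special “blank” symbol for silence or uncertainty, and a decoder then collapses repeated labels and removes blanks to produce the final transcript. The sketch below is purely illustrative (the blank symbol and the frame labels are made up for this example), not a real speech recognition system:

```python
BLANK = "_"  # hypothetical CTC blank symbol marking silence/uncertainty

def ctc_greedy_decode(frame_labels):
    """Collapse per-frame labels: merge consecutive repeats, drop blanks."""
    decoded = []
    prev = None
    for label in frame_labels:
        # Keep a label only when it differs from the previous frame
        # and is not the blank symbol.
        if label != prev and label != BLANK:
            decoded.append(label)
        prev = label
    return "".join(decoded)

# Imagined per-frame best guesses from an acoustic model,
# one label per time step of audio:
frames = list("hh_e_ll_llo__")
print(ctc_greedy_decode(frames))  # -> "hello"
```

Because the blank symbol separates genuinely repeated letters (the two “l”s above) from one letter stretched across several frames, this collapsing step lets the model cope with variable-speed, overlapping, or noisy speech, which is exactly the “messy” data the study describes.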
Translation technology is already advanced enough to help providers and patients communicate more easily across even the widest language divides. In emergency settings, the nearly instantaneous translation provided by digital translation can save lives. Throughout health care, translation tech can improve each step of the experience, from intake through release.
As the technology improves over time, it can be used in ever more complex and precise settings, and may eventually reach a point where medical information in any language can be instantly and accurately translated.