New research evaluates Google Translate for medical context

Dr Susy Macqueen and Associate Professor Christine Phillips (left to right).

Wednesday 24 January 2018
Digital translation tools have made huge strides in recent years due to advancements in machine learning. As they get better and better, we can expect them to be used in ways far beyond just getting around in a foreign country. 
 
At The Australian National University, a collaboration has formed between linguists and scholars of medicine to research one such novel application. 
 
The team, led by Dr Susy Macqueen from the ANU School of Literature, Languages and Linguistics, and Associate Professor Christine Phillips from the ANU Medical School, is exploring how safe and trustworthy Google Translate is for obtaining consent for surgical procedures – focusing specifically on caesarean sections. 
 
The attraction of Google Translate is obvious. The problem, however, is a lack of evidence showing how reliable it is for clinicians. The rapid uptake of unevaluated tools in medical practice – in this case, Google Translate – was the key driver of this research. 
 
“The rapid dissemination of a superficially attractive innovation before it's been proven to work is a recurring problem in medical practice – and can pose risks to patient safety,” Associate Professor Phillips says.
 
“So we are trying to put in some evidence at the beginning.”
 
Obtaining consent for a caesarean section is deemed a high-stakes consultation – if the person hasn’t consented, or has consented to something they misunderstood, the procedure is legally an assault. 
 
Their study looks closely at simulated interactions between real obstetricians and Chinese- and Indonesian-background speakers playing the role of a patient with limited English proficiency. To obtain consent for surgery, the obstetricians used Google Translate to give information about the surgery and ask questions. 
 
Apart from the premise that the obstetricians would recommend a caesarean because the simulated patient’s baby was in breech position, none of the conversation was pre-determined.
 
Dr Macqueen says that it’s simple enough to put a document through a digital translation mechanism and then evaluate the result. 
 
“But a lot of the interaction that happens in medical environments is spoken and some of that's quite high stakes interaction. So being able to capture and evaluate what Google Translate does in interaction was our goal.”
 
Associate Professor Phillips stressed that the two measures they’re evaluating, safety and trustworthiness, do not equate to accuracy and reliability.
 
“There are studies that try to look at the accuracy of Google Translate,” she says. “But they are so removed from the real life situation, and all they're talking about is the adequacy of the translation.” 
 
“In certain circumstances it is quite accurate, but whether it is safe is a different question. And from a medical perspective that's the relevant question.”
 
Safety, in the context of their study and in its real life correlate, is whether a person has understood what the procedure is that they’ve agreed to, and whether they’ve understood the potential complications.
 
“The issue is whether informed consent has been achieved,” Dr Macqueen says. “And whether Google Translate can allow that to happen.”
 
She adds that it's a very complicated thing they’re attempting to do.
 
“You've got what the obstetrician says, what Google ‘heard’ the obstetrician say and Google’s translation of that. Then you’ve got what the patient says, what Google ‘heard’ the patient say and Google’s translation of that.”
 
On the research team is fellow ANU linguist Dr Zhengdao Ye, who is evaluating the Google output involving Chinese-background speakers. The obstetricians and simulated patients themselves are also involved in evaluating the interactions. 
 
“We get the obstetrician to look back at what the human said Google Translate said. And then the obstetrician judges whether or not that was what she intended Google Translate to say,” Dr Macqueen says.
 
In one instance, Google Translate did not convey a simulated patient’s repeated efforts to communicate that she had previously had a caesarean. Due to connectivity issues, the patient had to keep repeating herself, but Google Translate didn’t capture the information in the translations. 
 
“Reflecting on it, the obstetrician said, 'Well, no, I didn't get that’,” says Dr Macqueen. 
 
“The obstetrician then reflected that what she said in the consultation would have been different had she known the patient had already had a caesarean.”
 
“That's a safety issue,” adds Associate Professor Phillips.
 
Translations that are plainly ridiculous, or which don’t make sense, can be discounted and worked around. But errors like the one in this instance, which are not obvious, can cause trouble.
 
Dr Macqueen explains that Google Translate is better at translating language pairs that are more frequently translated. 
 
For this reason, the team chose to focus on Chinese-English, a language pair that is likely to be well represented in the Google Translate database, and the Indonesian-English pair, which is less represented. 
 
“In these very complex interactions, Google Translate is still quite limited in what it can do,” Dr Macqueen says. 
 
“But it's getting better.”
 
Findings of this study are expected in mid-2018. The collaboration is part of the new ANU Institute for Communication in Health Care. Funding for the study is courtesy of the RSHA Cross College Collaborative Research Scheme with equipment and technical staff expertise donated by the ANU Medical School. 
 
Updated: 20 February 2018