Google’s Med-PaLM 2, an AI tool designed to answer questions about medical information, has been in testing since April at the Mayo Clinic research hospital, among others, The Wall Street Journal reported this morning. Med-PaLM 2 is a variant of PaLM 2, which was announced at Google I/O in May of this year. PaLM 2 is the language model underlying Google’s Bard.
The newspaper also mentions research that Google made public in May (pdf) showing that Med-PaLM 2 still suffers from some of the accuracy issues we’re already accustomed to seeing in large language models. In the study, physicians found more inaccuracies and irrelevant information in answers provided by Google’s Med-PaLM and Med-PaLM 2 than in those written by other doctors.
Yet Med-PaLM 2 performed about as well as the actual doctors on almost every other metric, such as showing evidence of reasoning, giving consensus-supported answers, and showing no signs of misunderstanding.
WSJ reports that customers testing Med-PaLM 2 have control over their data, which is encrypted, and Google cannot access it.
Google senior research director Greg Corrado told the WSJ that Med-PaLM 2 is still in its infancy. Corrado said that while he wouldn’t want it to be part of his own family’s “health journey,” he believes Med-PaLM 2 “takes the places in healthcare where AI can be useful and expands them by a factor of 10.”
We’ve reached out to Google and Mayo Clinic for more information.