News Summary
- A new study co-led by Omiye and conducted by Stanford researchers cautions that popular chatbots are perpetuating racist, debunked medical ideas, prompting concerns that the tools could worsen health disparities for Black patients.
- Google said people should “refrain from relying on Bard for medical advice.”
- Earlier testing of GPT-4 by physicians at Beth Israel Deaconess Medical Center in Boston found generative AI could serve as a “promising adjunct” in helping human doctors diagnose challenging cases.
- Daneshjou said physicians are increasingly experimenting with commercial language models in their work, and even some of her own dermatology patients have arrived at appointments recently saying that they asked a chatbot to help them diagnose their symptoms.
- Those beliefs are known to have caused medical providers to rate Black patients’ pain lower, misdiagnose health concerns and recommend less relief.
- “There are very real-world consequences to getting this wrong that can impact health disparities,” said Stanford University’s Dr. Roxana Daneshjou, an assistant professor of biomedical data science and dermatology and faculty adviser for the paper.
SAN FRANCISCO (AP) - As hospitals and health care systems turn to artificial intelligence to help summarize doctors’ notes and analyze health records, a new study led by Stanford School of Medicine researchers …