
Scientists asked Bing Copilot - Microsoft's search engine and chatbot - questions about commonly prescribed drugs. In terms of potential harm to patients, 42% of the AI's answers were considered to lead to moderate or mild harm, and 22% to death or severe harm.


Don't ditch your human GP for Dr Chatbot quite yet

We shouldn't rely on artificial intelligence (AI) for accurate and safe information about medications, because some of the information AI provides can be wrong or potentially harmful, according to German and Belgian researchers. They asked Bing Copilot questions about commonly prescribed drugs.

Patients shouldn't rely on AI-powered search engines and chatbots to always give them accurate and safe information on drugs, conclude researchers in the journal BMJ Quality & Safety, after finding that a considerable number of answers were wrong or potentially harmful. While these chatbots can be trained on extensive datasets from the entire internet, enabling them to converse on any topic, including healthcare-related queries, they are also capable of generating disinformation and nonsensical or harmful content, the researchers add. The readability of the chatbot's answers was assessed by calculating the Flesch Reading Ease Score, which estimates the educational level required to understand a particular text.
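The Flesch Reading Ease formula itself is simple: 206.835 minus 1.015 times the average sentence length (words per sentence), minus 84.6 times the average number of syllables per word, with higher scores meaning easier text. The sketch below is a minimal illustration of how such a score could be computed for a chatbot answer; it is not the researchers' actual analysis code, and the syllable count uses a rough vowel-group heuristic rather than a dictionary.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels, drop a trailing
    # silent 'e', and never return fewer than one syllable per word.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    # Higher scores indicate text that is easier to read.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

if __name__ == "__main__":
    # Hypothetical chatbot-style answer used only to demonstrate the calculation.
    answer = ("Ibuprofen is a nonsteroidal anti-inflammatory drug. "
              "It reduces pain, fever and inflammation.")
    print(round(flesch_reading_ease(answer), 1))
```

On this scale, scores below about 30 correspond to text that is generally considered very difficult and suited to university-level readers, which is why the metric is a useful proxy for how accessible a drug-information answer is to patients.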




Related news:

US scientists shatter high-power uranium beam record, unlock new isotopes | A uranium beam striking a target undergoes fragmentation or fission, producing a range of rare, unstable isotopes with different numbers of neutrons.

Temu gets more questions from the EU about illegal product risks

US scientists turn waste streams into jet fuel that reduces carbon emissions by up to 70% | Volatile fatty acids can play a critical role in decarbonizing the aviation industry.