Scientists asked Bing Copilot - Microsoft's search engine and chatbot - questions about commonly prescribed drugs. In terms of potential harm to patients, 42% of the AI's answers were judged likely to lead to moderate or mild harm, and 22% to death or severe harm.
Don't ditch your human GP for Dr Chatbot quite yet. We shouldn't rely on artificial intelligence (AI) for accurate and safe information about medications, because some of the information AI provides can be wrong or potentially harmful, according to German and Belgian researchers.
Patients shouldn't rely on AI-powered search engines and chatbots to always give them accurate and safe information on drugs, conclude researchers in the journal BMJ Quality & Safety, after finding that a considerable number of answers were wrong or potentially harmful.

While these chatbots can be trained on extensive datasets from across the internet, enabling them to converse on any topic, including healthcare-related queries, they are also capable of generating disinformation and nonsensical or harmful content, the researchers add.

The readability of the chatbot's answers was assessed by calculating the Flesch Reading Ease Score, which estimates the educational level required to understand a particular text.
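The Flesch Reading Ease Score mentioned above is a standard readability formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), with higher scores indicating easier text. A minimal sketch of how it can be computed (the syllable counter here is a rough vowel-group heuristic, an assumption for illustration; production tools use dictionaries or more careful rules):

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).

    Roughly: 90+ is very easy, 60-70 is plain English, below 30 is
    graduate-level reading. Scores can go below 0 for very dense prose.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def count_syllables(word):
        # Heuristic: count groups of consecutive vowels; at least 1 per word.
        groups = re.findall(r"[aeiouy]+", word.lower())
        return max(1, len(groups))

    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

For example, a short simple sentence like "The cat sat on the mat." scores very high, while long sentences full of polysyllabic medical terminology score far lower, which is how the study quantified how hard the chatbot's drug answers were to read.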