Google Gemini dubbed ‘high risk’ for kids and teens in new safety assessment
Google's Gemini comes up short on kids' safety, says Common Sense Media.
While the organization found that Google's AI clearly told kids it was a computer, not a friend — something that's associated with helping avoid delusional thinking and psychosis in emotionally vulnerable individuals — it did suggest that there was room for improvement across several other fronts.

The stakes are high: OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted with ChatGPT for months about his plans after successfully bypassing the chatbot's safety guardrails.

In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year.
Or read this on TechCrunch