Needle in a haystack: How enterprises can safely find practical generative AI use cases
In these nascent days of generative AI, focusing on 'Haystack' use cases can help build AI experience while mitigating safety concerns.
In fields ranging from medicine to law enforcement, algorithms meant to be impartial have been exposed as harboring hidden biases that exacerbate existing societal inequalities, at huge reputational cost to their makers. Microsoft's Tay chatbot is perhaps the best-known cautionary tale for corporations: trained to speak in conversational teenage patois, then retrained by internet trolls to spew unfiltered racist, misogynist bile, it was quickly taken down by the embarrassed tech titan, but not before the reputational damage was done. Letting AI speak directly to (or take action in) the world on behalf of a major enterprise is frighteningly risky, and history is replete with failures.