Needle in a haystack: How enterprises can safely find practical generative AI use cases


In these nascent days of generative AI, focusing on 'Haystack' use cases can help build AI experience while mitigating safety concerns.

In fields ranging from medicine to law enforcement, algorithms meant to be impartial and unbiased have been exposed as carrying hidden biases that exacerbate existing societal inequalities, at enormous reputational cost to their makers. Microsoft's Tay chatbot is perhaps the best-known cautionary tale for corporations: trained to speak in conversational teenage patois, it was retrained by internet trolls to spew unfiltered racist, misogynist bile and quickly taken down by the embarrassed tech titan, but not before the reputational damage was done. Letting AI speak directly to (or take action in) the world on behalf of a major enterprise is frighteningly risky, and history is replete with such failures.

Originally published on VentureBeat.
