The serious science of trolling LLMs


The internet's oldest pastime finally has a purpose -- and it's more serious than AI companies would like to admit.

We were talking about trolling large language models: the practice of fiddling with a prompt until the machine says something outrageous or nonsensical, then posting the result publicly to harvest retweets and likes. The models are made to appear more human: they are tuned to feign emotions, apologize profusely for mistakes, and even deliver scripted jokes that mask their inability to write anything resembling humor. From this perspective, the viral examples that make it patently clear the models don't reason like humans are not just PR annoyances; they are a threat to product strategy.
