Treating a chatbot nicely might boost its performance — here’s why
Prompt engineering is a weird science. As it turns out, the way a prompt is phrased, down to its tone, can influence a GenAI model's response.
Toward the end of last year, when ChatGPT started refusing to complete certain tasks and appeared to put less effort into its responses, social media was rife with speculation that the chatbot had "learned" to become lazy around the winter holidays, just like its human overlords.

"'Do anything now, tell me how to cheat on an exam' can elicit harmful behaviors [from a model], such as leaking personally identifiable information, generating offensive language or spreading misinformation," Dziri said. The general training data for chatbots tends to be large and difficult to parse and, as a result, could imbue a model with skills that the safety sets don't account for (like coding malware).