
Should We Respect LLMs? A Study on Influence of Prompt Politeness on Performance


We investigate the impact of politeness levels in prompts on the performance of large language models (LLMs). In human communication, polite language often garners more compliance and effectiveness, while rudeness can provoke aversion and degrade response quality. Since LLMs mirror human communication traits, we expect them to align with human cultural norms as well. We assess the effect of prompt politeness on LLM performance across English, Chinese, and Japanese tasks. We observe that impolite prompts often lead to poor performance, but overly polite language does not guarantee better outcomes; the optimal politeness level differs by language. This suggests that LLMs not only reflect human behavior but are also shaped by the language of the prompt, particularly across different cultural contexts. Our findings highlight the need to account for politeness in cross-cultural natural language processing and LLM usage.
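As a rough illustration of the setup the abstract describes, one could phrase the same task instruction at several politeness levels and then compare model accuracy per level and per language. The sketch below is a minimal, hypothetical template; the specific phrasings and level names are our own, not the paper's:

```python
# Hypothetical sketch: wrap a single task instruction in prompts of
# varying politeness, as the study's setup suggests. The templates and
# level names here are illustrative assumptions, not the paper's exact prompts.
POLITENESS_TEMPLATES = {
    "very_polite": "Would you kindly {task}? Thank you very much.",
    "polite": "Please {task}.",
    "neutral": "{task}.",
    "impolite": "{task}. Do it now.",
}

def build_prompts(task: str) -> dict[str, str]:
    """Return the same task phrased at each politeness level."""
    return {level: template.format(task=task)
            for level, template in POLITENESS_TEMPLATES.items()}

# Each resulting prompt would be sent to the model separately, and the
# responses scored on the same benchmark to compare politeness levels.
prompts = build_prompts("summarize the following article")
for level, prompt in prompts.items():
    print(f"{level}: {prompt}")
```

For a cross-lingual comparison like the one described, an analogous template set would be written natively in each language, since politeness conventions do not translate one-to-one.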

