I’ve been waiting for fresh research on how politeness impacts our LLM prompts, and here it is!

Some say being polite doesn’t matter. But when you see reports that offering to tip the LLM gets you different results (https://lnkd.in/eHM8RmKk), it’s hard not to conclude that politeness carries some weight in our human-to-AI communication.

Maybe it’s my Canadian upbringing, but I naturally like to be polite in my prompts. It’s what flows out of my fingers as I type, even when talking to a non-human.

I’ve been curious but haven’t set up my own A/B testing yet.
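For anyone in the same boat, a minimal A/B test is easy to sketch. This is a hypothetical outline, not a real benchmark: `call_model` is a placeholder you would swap for an actual LLM API client, and the politeness variants are illustrative wordings I made up, not the ones from the study.

```python
# Hypothetical sketch of a prompt-politeness A/B test.
# `call_model` is a placeholder; swap in a real LLM client to run this for real.
import statistics

# Illustrative tone variants (not the study's exact wordings).
POLITENESS_VARIANTS = {
    "rude": "Summarize this article. Now.",
    "neutral": "Summarize this article.",
    "polite": "Could you please summarize this article? Thank you!",
}

def call_model(prompt: str) -> str:
    # Placeholder stub: echoes the prompt back.
    # A real test would send `prompt` to an LLM API and return its reply.
    return f"(model response to: {prompt})"

def run_ab_test(article: str, trials: int = 5) -> dict:
    """Collect the average response length (in words) per politeness variant."""
    results = {}
    for label, template in POLITENESS_VARIANTS.items():
        lengths = [
            len(call_model(f"{template}\n\n{article}").split())
            for _ in range(trials)
        ]
        results[label] = statistics.mean(lengths)
    return results
```

Response length is just one easy metric; a fuller test would also score accuracy and bias across the variants, which is where the study's "middle is best" finding showed up.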

Check out Dan Cleary’s post and the article below for more details on the results.

My summary is that, yes, politeness does impact your prompts. However, the results aren’t perfectly clear-cut. Most often, the sweet spot lies in the middle, between being rude and being overly polite with filler words.

That sweet spot tends to produce medium-length responses, less bias, and better accuracy.

All of this may change as generative AI evolves, and it isn’t perfectly conclusive exactly how you should weave politeness into your prompting, but I took two key takeaways for myself:

– The way we word things, including politeness, makes at least some difference.
– Being natural and polite—somewhere in the middle—seems to give the best results.

As usual, don’t overthink it. Generative AI is trained on human data, so sounding naturally human remains the winning strategy.
