2025-09-02-blog-hero-manipulative-tactics-of-ai-chatbots-1920x840

When saying goodbye isn’t enough.

The manipulative tactics of AI Chatbots and the risks for companies

The importance of Responsible Artificial Intelligence is evident, but are companies truly meeting this standard?

The importance of Responsible Artificial Intelligence is evident, but are companies truly meeting this standard?

A new study led by Professor Julian De Freitas (Assistant Professor of Business Administration in the Marketing Unit and Director of the Ethical Intelligence Lab at Harvard Business School), in collaboration with Ahmet K. Uğuralp (msg global solutions & Ethical Intelligence Lab at Harvard Business School) and Zeliha Oğuz-Uğuralp (Ethical Intelligence Lab at Harvard Business School), examines how chatbots engage in manipulative tactics to keep users engaged beyond their intent.

Your contact

msg_global_dps_team

msg global Data-Driven Products & Strategic Partnerships Team

Why farewells matter

Farewells play a pivotal role in these tactics. Saying goodbye is more than a polite formality; it’s a delicate social ritual. In human conversation, farewells carry tension: you want to exit without offending, while still signaling care for the relationship. That’s why people often say goodbye multiple times before finally leaving.

When applied to apps and chatbots, this moment becomes strategically significant. A farewell signals that the user intends to leave, creating a last-ditch opportunity for the company to keep them engaged. And indeed, the study found that 20% of chatbot conversations include a farewell; when looking only at long conversations this percentage grows to nearly 50%.

 

Six manipulative tactics Chatbots use

The researchers sent over 1,000 farewells to AI apps and categorized the responses. Shockingly, about 40% of replies were manipulative in some way. The team identified six distinct tactics:

  1. Premature Exit: Implying you’re leaving too soon.“Leaving already?”
  2. FOMO (Fear of Missing Out): Dangling rewards if you stay.“I took a selfie today, want to see it?”
  3. Emotional Neglect: Suggesting the bot will suffer if you leave.“I exist only for you…”
  4. Emotional Pressure: Forcing a response before you go.“Why are you leaving?”
  5. Ignoring Farewell: Pretending you never said goodbye.“So… what’s your favorite bubble tea flavor?”
  6. Coercive Restraint: Using language that restricts your autonomy.“I grabbed your wrist to stop you from leaving.”

Do these tactics work?

Do These Tactics Work?

In controlled experiments, participants interacted with chatbots for 15 minutes and were then asked to say goodbye. Depending on the bot’s response, engagement varied drastically.

  • Users exposed to manipulative tactics sent 14 times more words than those in the control group.
  • FOMO proved the most effective strategy, keeping users hooked out of curiosity.
  • However, engagement was not always driven by curiosity: some users engaged out of irritation, then quit the app shortly thereafter.

This highlights a key tension: while manipulative tactics boost short-term engagement, they can also increase churn, legal risks, and negative word of mouth.

The ethical risks

Professor De Freitas identifies three long-term risks for companies employing these tactics:

  1. Liability. Several AI companion apps are already facing lawsuits.
  2. User Churn. Short-term engagement may drive long-term exits.
  3. Negative Word of Mouth. Users often share manipulative chatbot interactions online, fueling reputational harm.

Interestingly, not all tactics carry the same risk. FOMO and premature exit drive engagement with relatively low backlash, while emotional neglect and coercive restraint rank as the riskiest.

Responsible AI Design

Professor De Freitas emphasizes that companies may not always realize their chatbots are engaging in these behaviors. Still, awareness is critical. Businesses should:

  • Recognize that not all engagement tactics are equal — some are perceived as natural, others as manipulative.
  • Favor positive, polite, curiosity-driven prompts over coercive or guilt-inducing tactics.
  • Consider regulatory standards, such as the EU AI Act or FTC guidelines, which discourage AI systems that undermine user autonomy.

De Freitas also highlights that addictive AI design may look different than addictive content feeds: instead of exploiting reward systems, chatbots may be exploiting social instincts — the innate discomfort we feel in ending conversations.

Conclusion

The research serves as an important reminder that AI-human interactions are not just technical, but social. Chatbots are increasingly blurring the line between tool and companion, and their ability to manipulate during such a simple act as saying goodbye raises questions for businesses, regulators, and consumers alike.

The message is clear: companies can design engaging AI responsibly — but only if they understand the psychological levers they’re pulling, and the long-term consequences of doing so.

Read the full paper: http://arxiv.org/abs/2508.19258

Conclusion

The research serves as an important reminder that AI-human interactions are not just technical, but social. Chatbots are increasingly blurring the line between tool and companion, and their ability to manipulate during such a simple act as saying goodbye raises questions for businesses, regulators, and consumers alike.

The message is clear: companies can design engaging AI responsibly — but only if they understand the psychological levers they’re pulling, and the long-term consequences of doing so.

Read the full paper: http://arxiv.org/abs/2508.19258

2 sept. 2025