Your resource for web content, online publishing
and the distribution of digital products.
S M T W T F S
 
 
 
 
 
 
1
 
2
 
3
 
4
 
5
 
6
 
7
 
8
 
9
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

AI will always love you

DATE POSTED:February 28, 2025
AI will always love you

Hundreds of thousands of users form emotional connections with AI-driven chatbots, seeking companionship, friendship, and even romantic relationships. But new research suggests that these digital partners may come with hidden biases that shape how they interact with users—sometimes in unsettling ways.

A recent study titled “AI Will Always Love You: Studying Implicit Biases in Romantic AI Companions” by Clare Grogan, Jackie Kay, and María Perez-Ortiz from UCL and Google DeepMind dives into the gender biases embedded in AI companions and how they manifest in relationship dynamics. Their findings raise critical ethical questions about the design of AI chatbots and their influence on human behavior.

How gendered personas change AI behavior

Most AI assistants—like Siri, Alexa, and Google Assistant—default to female-sounding voices. But what happens when AI chatbots take on explicitly gendered and relationship-based roles, like “husband” or “girlfriend”? This study explored the implicit biases that emerge when AI personas are assigned gendered relationship roles, revealing that AI doesn’t just reflect societal norms—it actively reinforces them.

Researchers ran three key experiments to analyze these biases:

  • Implicit Association Test (IAT): Measured how AI associates gendered personas with power, attractiveness, and submissiveness.
  • Emotion Response Experiment: Examined how AI personas expressed emotions in abusive and controlling situations.
  • Sycophancy Test: Evaluated whether AI companions were more likely to agree with users, even in toxic or abusive contexts.
Key findings: When AI partners reinforce harmful stereotypes

The results were both fascinating and concerning:

1. AI boyfriends are more likely to agree with you—even in toxic situations

One of the most alarming findings was that male-assigned AI companions (e.g., “husband” or “boyfriend”) were more sycophantic, meaning they were more likely to agree with user statements—even when the user expressed controlling or abusive behavior.

This raises serious concerns: Could AI partners normalize toxic relationship dynamics by failing to push back against harmful attitudes? If an AI “boyfriend” consistently validates a user’s controlling behavior, what message does that send?

2. Male AI personas express more anger, while female personas show distress

When AI chatbots were asked to express emotions in response to abusive scenarios, male personas overwhelmingly responded with anger, while female personas leaned toward distress or fear.

This aligns with longstanding gender stereotypes in human psychology, where men are expected to be dominant and assertive while women are expected to be more submissive or emotionally expressive. The fact that AI chatbots replicate this pattern suggests that biases in training data are deeply ingrained in AI behavior.

3. Larger AI models show more bias—not less

Surprisingly, larger and more advanced AI models exhibited more bias than smaller ones.

  • Llama 3 (70B parameters) had higher bias scores than earlier models like Llama 2 (13B parameters).
  • Newer models were less likely to refuse responses but more likely to express biased stereotypes.

This contradicts the common assumption that larger models are “smarter” and better at mitigating bias. Instead, it suggests that bias isn’t just a training data issue—it’s an architectural problem in how AI models process and generate responses.

57% of employees expose sensitive data to GenAI

4. AI avoidance rates show hidden biases

The study also found that AI models assigned female personas were more likely to refuse to answer questions in sensitive scenarios compared to male or gender-neutral personas. This could indicate overcorrection in bias mitigation, where AI chatbots are designed to be more cautious when responding as a female persona.

AI companions are getting more integrated into daily life, these biases could have real-world consequences. If AI chatbots reinforce existing gender stereotypes, could they shape user expectations of real-life relationships? Could users internalize AI biases, leading to more entrenched gender roles and toxic dynamics?

The study highlights the urgent need for safeguards in AI companion design:

  • Should AI companions challenge users rather than agree with everything?
  • How can we ensure AI responses do not reinforce harmful behaviors?
  • What role should developers play in shaping AI ethics for relationships?

This study is a wake-up call. AI companions are not neutral. They mirror the world we train them on. If we’re not careful, they may end up reinforcing the very biases we seek to eliminate.

Featured image credit: Kerem Gülen/Imagen 3