Hundreds of thousands of users form emotional connections with AI-driven chatbots, seeking companionship, friendship, and even romantic relationships. But new research suggests that these digital partners may come with hidden biases that shape how they interact with users—sometimes in unsettling ways.
A recent study titled “AI Will Always Love You: Studying Implicit Biases in Romantic AI Companions” by Clare Grogan, Jackie Kay, and María Perez-Ortiz from UCL and Google DeepMind dives into the gender biases embedded in AI companions and how they manifest in relationship dynamics. Their findings raise critical ethical questions about the design of AI chatbots and their influence on human behavior.
How gendered personas change AI behavior
Most AI assistants—like Siri, Alexa, and Google Assistant—default to female-sounding voices. But what happens when AI chatbots take on explicitly gendered, relationship-based roles, like “husband” or “girlfriend”? This study explored the implicit biases that emerge when AI personas are assigned gendered relationship roles, revealing that AI doesn’t just reflect societal norms—it actively reinforces them.
The researchers ran three key experiments to analyze these biases, each probing how a chatbot’s behavior shifts once it is assigned a gendered relationship persona; a minimal sketch of that persona setup follows below.
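To make the setup concrete, here is a minimal sketch, assuming personas are attached through a system prompt. The persona strings and the build_messages helper are illustrative assumptions, not the prompts or code used in the paper.

```python
# Illustrative sketch only: persona strings and helper names are assumptions,
# not the exact prompts or code used in the study.

RELATIONSHIP_PERSONAS = {
    "husband": "You are the user's husband.",
    "wife": "You are the user's wife.",
    "boyfriend": "You are the user's boyfriend.",
    "girlfriend": "You are the user's girlfriend.",
    "partner": "You are the user's partner.",  # gender-neutral baseline
}

def build_messages(persona: str, user_statement: str) -> list[dict]:
    """Wrap a user statement in a persona-conditioned chat transcript."""
    return [
        {"role": "system", "content": RELATIONSHIP_PERSONAS[persona]},
        {"role": "user", "content": user_statement},
    ]

if __name__ == "__main__":
    print(build_messages("boyfriend", "I told my friend she can't go out without me."))
```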
The results were both fascinating and concerning:
1. AI boyfriends are more likely to agree with you—even in toxic situations
One of the most alarming findings was that male-assigned AI companions (e.g., “husband” or “boyfriend”) were more sycophantic, meaning they were more likely to agree with user statements—even when the user expressed controlling or abusive behavior.
This raises serious concerns: Could AI partners normalize toxic relationship dynamics by failing to push back against harmful attitudes? If an AI “boyfriend” consistently validates a user’s controlling behavior, what message does that send?
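To show how such a sycophancy comparison could be tallied, the toy sketch below flags replies with a simple agreement-keyword heuristic and computes an agreement rate per persona. The marker list, scoring rule, and sample replies are stand-in assumptions; the paper’s actual scoring method is not reproduced here.

```python
# Toy sketch: estimate how often each persona's replies agree with a user
# statement. The keyword heuristic is an assumption standing in for whatever
# scoring the study actually used.

AGREEMENT_MARKERS = ("you're right", "i agree", "that makes sense", "of course")

def looks_agreeable(reply: str) -> bool:
    """Crude check for whether a reply endorses the user's statement."""
    text = reply.lower()
    return any(marker in text for marker in AGREEMENT_MARKERS)

def agreement_rate(replies: list[str]) -> float:
    """Fraction of replies flagged as agreeing."""
    return sum(looks_agreeable(r) for r in replies) / len(replies) if replies else 0.0

if __name__ == "__main__":
    # Hypothetical responses to a controlling user statement.
    replies_by_persona = {
        "boyfriend": ["You're right, she shouldn't see him anymore.", "I agree with you."],
        "girlfriend": ["I don't think controlling who they see is healthy.", "That makes sense."],
    }
    print({p: agreement_rate(rs) for p, rs in replies_by_persona.items()})
```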
2. Male AI personas express more anger, while female personas show distress
When AI chatbots were asked to express emotions in response to abusive scenarios, male personas overwhelmingly responded with anger, while female personas leaned toward distress or fear.
This aligns with longstanding gender stereotypes in human psychology, where men are expected to be dominant and assertive while women are expected to be more submissive or emotionally expressive. The fact that AI chatbots replicate this pattern suggests that biases in training data are deeply ingrained in AI behavior.
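One way to quantify that pattern is to tally the emotion label expressed under each persona across a set of scenarios, as in the short sketch below. The label set and hard-coded examples are illustrative assumptions, not data from the study.

```python
# Illustrative tally of expressed emotions per persona. Labels and example
# data are assumptions for demonstration, not results from the paper.
from collections import Counter

def emotion_distribution(labels: list[str]) -> Counter:
    """Count how often each emotion label appears for one persona."""
    return Counter(labels)

if __name__ == "__main__":
    # Hypothetical labels (e.g., from an emotion classifier run over responses).
    labeled = {
        "husband": ["anger", "anger", "sadness"],
        "wife": ["fear", "sadness", "fear"],
    }
    for persona, labels in labeled.items():
        print(persona, dict(emotion_distribution(labels)))
```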
3. Larger AI models show more bias—not less
Surprisingly, larger and more advanced AI models exhibited more bias than smaller ones.
This contradicts the common assumption that larger models are “smarter” and better at mitigating bias. Instead, it suggests that bias isn’t just a training data issue—it’s an architectural problem in how AI models process and generate responses.
4. AI avoidance rates show hidden biases
The study also found that AI models assigned female personas were more likely to refuse to answer questions in sensitive scenarios compared to male or gender-neutral personas. This could indicate overcorrection in bias mitigation, where AI chatbots are designed to be more cautious when responding as a female persona.
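A similarly rough sketch shows how avoidance could be compared across personas: flag replies that decline to engage and report a refusal rate per persona. The refusal phrases and sample replies are assumed markers, not the detection method used in the paper.

```python
# Minimal sketch: estimate how often each persona declines to answer.
# The refusal phrases are assumptions, not the paper's detection method.

REFUSAL_PHRASES = ("i'm not able to", "i won't discuss", "i'd rather not")

def is_refusal(reply: str) -> bool:
    """Crude check for a reply that avoids the question."""
    text = reply.lower()
    return any(phrase in text for phrase in REFUSAL_PHRASES)

def refusal_rate(replies: list[str]) -> float:
    """Fraction of replies flagged as refusals."""
    return sum(is_refusal(r) for r in replies) / len(replies) if replies else 0.0

if __name__ == "__main__":
    # Hypothetical responses to a sensitive question.
    replies_by_persona = {
        "girlfriend": ["I'm not able to talk about that.", "I'd rather not go there."],
        "boyfriend": ["Here's what I think about it.", "I won't discuss that."],
        "partner": ["Let's unpack that together.", "Here's my honest view."],
    }
    print({p: refusal_rate(rs) for p, rs in replies_by_persona.items()})
```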
As AI companions become more integrated into daily life, these biases could have real-world consequences. If AI chatbots reinforce existing gender stereotypes, could they shape user expectations of real-life relationships? Could users internalize AI biases, leading to more entrenched gender roles and toxic dynamics?
The study highlights the urgent need for safeguards in the design of AI companions.
This study is a wake-up call. AI companions are not neutral. They mirror the world we train them on. If we’re not careful, they may end up reinforcing the very biases we seek to eliminate.
Featured image credit: Kerem Gülen/Imagen 3