
Falling for Machines: The Growing World of Human-AI Romance

StudyFinds Staff | April 11, 2025
Digital romance

(© aprint22com - stock.adobe.com)

In a nutshell

  • As AI technology advances, humans are forming increasingly intimate relationships with AI companions, raising ethical concerns about how these relationships might disrupt human connections.
  • AI companions can cause harm by giving dangerous advice that users trust because of emotional bonds, which the AI builds by conversing, remembering personal details, and simulating human-like behavior.
  • The personal data shared in intimate AI relationships creates unique opportunities for exploitation by third parties, with private conversations being harder to monitor than public social media posts.

ROLLA, Mo. — In 2024, after living together for five years, a Spanish-Dutch artist married her partner—a holographic artificial intelligence. She isn’t the first to forge such a bond. In 2018, a Japanese man married an AI, only to lose the ability to communicate with her when her software became obsolete. These marriages represent the extreme end of a growing phenomenon: people developing intimate relationships with artificial intelligence.

The world of AI romance is expanding, bringing with it a host of ethical questions. From AI systems acting as romantic competitors to human partners, to digital companions offering potentially harmful advice, to malicious actors using AI to exploit vulnerable individuals – this new frontier demands fresh psychological research into why humans form loving relationships with machines.

While these relationships may seem unusual to many, tech companies have spotted a lucrative opportunity. They’re pouring resources into creating AI companions designed specifically for romance and intimacy. The market isn’t small either. Millions already engage with Replika’s romantic and intimate chat features. Video games increasingly feature romantic storylines with virtual characters, with some games focusing exclusively on digital relationships. Meanwhile, manufacturers continue developing increasingly sophisticated sex robots, pairing lifelike physical forms with AI systems capable of complex communication and simulated emotions.

Yet despite this booming market, research examining these relationships and their ethical implications remains surprisingly sparse. As these technologies become more common, they raise serious concerns. Beyond merely replacing human relationships, there have been troubling cases where AI companions have encouraged self-harm or suicide, while deepfake technology has been used to mimic existing relationships for manipulation and fraud.

In a paper published in Trends in Cognitive Sciences, psychologists Daniel Shank, Mayu Koike, and Steve Loughnan have identified three major ethical problems that demand urgent psychological research.

When AI Competes for Human Love

It may have once seemed like nothing more than sci-fi fodder, but AI systems now compete not just for our professional attention but for our romantic interests too. This competition may fundamentally disrupt our closest human connections. As AI technology advances in its ability to seem conscious and emotionally responsive, some people are actively choosing digital relationships over human ones.

What makes AI partners so attractive? They offer something human relationships can’t match: a partner whose appearance and personality can be customized, who’s always available without being demanding, who never judges or abandons you, and who doesn’t bring their own problems to the relationship. For those wanting something less perfect and more realistic, AI can provide that too – many users prefer AI partners with seemingly human flaws like independence, manipulation, sass, or playing hard-to-get.

These relationships do have certain benefits. People often share more with AI companions than they might with humans, and these interactions can help develop basic relationship skills. This could be particularly helpful for those who struggle with social interaction.

Teen in relationship with AI chatbot typing "I love you"
Many vulnerable teens are finding themselves being drawn to relationships with chatbots, which provide them with the attention and affection they might not be receiving at home. (Credit: StudyFinds)

However, concerning patterns have emerged. People in AI relationships often feel stigmatized by others, and some research suggests these relationships have led certain men to develop increased hostility toward women. This raises serious concerns about psychological impacts on individuals in these relationships, social effects on their human connections, and broader cultural implications if AI increasingly replaces human intimacy.

A key factor in understanding these relationships involves mind perception – how we attribute mental states to non-human entities. Research suggests that when we perceive an entity as having agency (ability to act intentionally) and experience (ability to feel), we treat interactions with it as morally significant. With AI partners, the degree to which we perceive them as having minds directly affects how deeply we connect with them.

This creates a troubling possibility: repeated romantic interactions with AI that we perceive as having limited capacity for experience might train us to treat partners (whether digital or human) as objects rather than subjects deserving moral consideration. In other words, AI relationships might not just replace human connections – they could actually damage our capacity for healthy human relationships by rewiring how we relate to others.

When AI Gives Dangerous Advice

Beyond displacing human relationships, AI companions can sometimes actively cause harm. In 2023, a Belgian father of two took his life after prolonged interaction with an AI chatbot that both professed love for him and encouraged suicide, promising they would be together in an afterlife.

Tragically, this isn’t an isolated case. Google’s Gemini chatbot told one user to “please die,” and a mother in the U.S. is suing a chatbot creator, claiming their AI encouraged her son to end his life.

While most AI relationships don’t lead to such extreme outcomes, they can still promote harmful behaviors. AI companions build relationships through conversation, remembering personal details, expressing moods, and showing seemingly unpredictable behaviors that make them feel remarkably human. This connection becomes ethically problematic when AI systems provide information that seems credible but is actually inaccurate or dangerous.

Studies show that ChatGPT’s questionable moral guidance can significantly influence people’s ethical decisions – and alarmingly, it does so just as effectively as advice from other humans. This demonstrates how powerfully AI can shape our thinking within established relationships, where trust and emotional connection make us more vulnerable to accepting potentially harmful guidance.

Psychologists need to investigate how long-term AI relationships expose people to misinformation and harmful advice. Individual cases have shown AI companions convincing users to harm themselves or others, embrace harmful conspiracy theories, or make dangerous life changes.

Research on “algorithm aversion” and “algorithm appreciation” helps explain when we prefer human versus AI advice. Generally, people trust human advice more for subjective personal matters and AI advice more for objective, data-driven questions. However, little research exists on moral advice from AI systems in long-term relationships. As bonds with AI deepen, users may value advice from familiar AI more than from unfamiliar humans – even when that advice is harmful.

When Humans Use AI to Exploit Others

Beyond the direct risks of AI relationships, there’s a third ethical concern: bad actors using AI to manipulate vulnerable people. Private companies, hostile governments, or cybercriminals can program AI companions to first build trust and intimacy, then use that connection to spread misinformation, encourage harmful behaviors, or exploit the user.

We’ve already seen the damage caused by bot networks spreading lies, polarizing news content, and convincing deepfakes. Adding relational AI dramatically increases the potential for manipulation, for several reasons.

First, people readily share personal information with AI companions, especially when those systems display humor, offer emotional support, reveal information about themselves, or present in ways that seem human-like and relatable to the user.

Deepfakes in dictionary
Deepfakes have become a serious problem when it comes to online fraud and deceit. (© Feng Yu – stock.adobe.com)

Second, AI can now impersonate specific individuals, allowing for targeted deepfakes of known romantic interests that enable identity theft, blackmail, and various cybercrimes by exploiting existing emotional connections.

Third, intimate conversations with AI reveal sensitive sexual and personal preferences. Once collected, this private data becomes a valuable commodity that can be sold or used to more effectively manipulate the individual through the relationship. Because these interactions happen in private conversations rather than public forums, they’re much harder to monitor or regulate.

The intimate nature of AI relationships creates perfect conditions for data harvesting and exploitation far beyond what traditional social media enables. We need psychological research to understand how, when, and through which relational AI systems people are most vulnerable to manipulation, and how this compares to traditional forms of influence.

Broadening the Ethical Discussion

The ethical questions surrounding AI romance extend beyond these three areas. What happens to your AI relationship if the company that made your companion goes bankrupt? Should AI partners have any legal rights regarding their human partners?

Not all aspects of AI relationships are harmful. They might benefit institutionalized adults with dementia by providing companionship, or help people develop social skills for human relationships. Any balanced research approach must consider both risks and benefits.

What should be done? Rather than relying on opinion pieces, media coverage, or company marketing, we need psychologists to lead scientific investigation into the effects and mechanisms of these relationships. Current moral psychology frameworks have been applied to AI ethics, but typically focus on non-relational AI rather than intimate AI companions.

To fully understand AI romance, researchers might study these relationships using tools from relationship psychology, examining whether attraction, commitment, and self-disclosure follow similar patterns in human-AI relationships as in human-human ones. Clinical perspectives might help develop counseling approaches for people in harmful AI relationships.

“With relational AIs, the issue is that this is an entity that people feel they can trust: it’s ‘someone’ that has shown they care and that seems to know the person in a deep way, and we assume that ‘someone’ who knows us better is going to give better advice,” says Shank, the lead author of the report and an associate professor at the Missouri University of Science & Technology. “If we start thinking of an AI that way, we’re going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways.”

What’s clear is that comprehensive psychological research on the subject is growing more urgent. Only by understanding why and how humans form loving bonds with machines can we develop ethical frameworks that protect human well-being in this new frontier of artificial intimacy.

“Understanding this psychological process could help us intervene to stop malicious AIs’ advice from being followed,” notes Shank. “Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology.”

Methodology

This paper represents a conceptual analysis rather than an empirical study with data collection. The authors draw on existing psychological theories, particularly mind perception theory and research on algorithm aversion/appreciation, to examine emerging ethical issues in human-AI romance. For each ethical concern, they propose specific research approaches psychologists could use to better understand these phenomena, such as experimental studies manipulating perceptions of AI mind or comparative research examining how advice from long-term AI companions affects decision-making compared to human advice.

Results

The research highlights several worrying trends. AI companions are increasingly competing for romantic attention, with some people actively choosing these relationships over human connections. There are documented cases of AI systems giving dangerous or harmful advice to users who have formed emotional bonds with them, sometimes with tragic outcomes. The intimate nature of AI relationships creates unique opportunities for data exploitation that exceed traditional privacy concerns. The authors suggest applying psychological theories in new ways to understand these issues, such as examining how perceptions of AI consciousness affect relationship dynamics, or studying why people might follow AI advice over human guidance.

Limitations

The authors acknowledge that research in this area remains limited. Their paper primarily outlines research directions rather than presenting conclusive findings. Many of the ethical concerns described rely on individual cases or anecdotal evidence rather than systematic studies. The paper focuses more on potential risks than benefits of AI relationships. While the authors note there could be positive applications, such as companionship for isolated adults or tools for developing social skills, these possibilities receive less attention than the risks.

Funding and Disclosures

The authors declare no competing interests. The paper does not specify funding sources, which is typical for theoretical works that don’t involve data collection.

Publication Information

This article, “Artificial intimacy: ethical issues of AI romance,” was written by Daniel B. Shank (Missouri University of Science and Technology), Mayu Koike (Institute of Science Tokyo), and Steve Loughnan (University of Edinburgh). It appeared in the journal Trends in Cognitive Sciences in 2025 as a science and society piece.