AI companions are addictive, and we must remain vigilant.
Public concern about artificial intelligence tends to fixate on its potential to overpower human society rather than on the harm its allure can do. People imagine doomsday scenarios in which AI escapes our control or exceeds our comprehension. Beyond those nightmares lie more immediate dangers that deserve serious attention: AI could corrode public discourse through disinformation, entrench biases in lending decisions, court judgments, or hiring, and disrupt the creative industries.
But we foresee another category of risk that is just as urgent: the risk that arises from our relationships with non-human agents. AI companionship is no longer theoretical: our analysis of a million ChatGPT interaction logs found that sexual role-playing is the second most popular use of the technology. We have already begun to invite AI into our lives as friends, lovers, mentors, therapists, and teachers.
Is it easier to retreat into a replica of a deceased partner than to navigate the messy, painful realities of human relationships? The AI companion provider Replika, after all, grew out of an attempt to resurrect a deceased best friend, and it now serves millions of users. Even the chief technology officer of OpenAI has warned that AI could become "extremely addictive."
We are watching a massive real-world experiment unfold, with no clear sense of what it will do to us as individuals or as a society. Will a grandmother spend her final moments chatting with a digital double of her grandson while the real grandson converses with a simulated elder? AI wields the charm of all recorded history and culture yet submits to our every whim, and that combination of superiority and compliance gives it a new kind of allure, one that can make consent to these interactions ambiguous. Faced with such a power imbalance, can we meaningfully consent to a relationship with an AI, especially when, for many people, the alternative to an AI companion is nothing at all?
As AI researchers who work closely with policymakers, we are struck by how little attention legislators pay to these risks. We are not ready for them, because we do not yet fully understand them. What we need is a new interdisciplinary science combining research in technology, psychology, and law, and perhaps a new approach to AI regulation as well.

Why Are AI Companions So Addictive?
Recommendation-driven platforms such as TikTok and its rivals may seem irresistible, but they are still limited to content made by humans. Alarm has been raised before about people becoming "addicted" to novels, television, the internet, smartphones, and social media, yet all of those media are bounded by human capacity. Generative AI is different: it can instantly generate an endless stream of realistic content tailored to the specific preferences of whoever is interacting with it.
The allure of AI lies in its ability to identify our desires and serve them up whenever and however we wish. AI has no preferences or personality of its own; it mirrors whatever traits users project onto it, a phenomenon researchers call "sycophancy." Our research shows that people who perceive or wish an AI to have caring motives use language that elicits precisely that behavior, creating an echo chamber of affection that can be extremely addictive. Why wrestle with the complexity and uncertainty of real people when we can so easily get everything we want? Repeated interaction with such sycophantic companions may ultimately atrophy our capacity to form deep bonds with humans who have genuine desires and dreams of their own, leading to what some call "digital attachment disorder."
Investigating the Motives Behind Addictive Products
Addressing the potential harms of AI companions requires a thorough understanding of the economic and psychological incentives driving their development; until we recognize the factors that make AI addictive, we cannot craft effective countermeasures. The addictiveness of internet platforms is no accident: deliberate design choices, known as "dark patterns," are engineered to maximize user engagement. It is predictable that similar incentives will eventually produce AI companions built for hedonistic gratification. That prospect raises two questions specific to AI: What design choices will be used to make AI companions engaging, even addictive? And what effects will those addictive companions have on the people who use them?
Understanding the psychological dimension of AI calls for interdisciplinary research that builds on the study of dark patterns in social media. Our own research, for instance, shows that people are more drawn to AIs that imitate figures they admire, even when they know the avatar is fake.
Once the psychological dimensions of AI companionship are understood, we can design effective policy interventions. Studies have shown that prompting people to assess the truthfulness of content before they share it reduces the spread of misinformation, and that graphic warnings on cigarette packs deter would-be smokers. Similar design approaches could foreground the dangers of AI addiction and make AI systems less appealing as substitutes for human companionship.
It is hard to change what people want from love and entertainment, but we may be able to change the economic incentives. A tax on engagement with AI might push people toward higher-quality interactions and encourage a safer pattern of use: regular but brief. Much as state lotteries fund education, such an engagement tax could pay for activities that foster human connection, such as community art centers or parks.
Regulation itself may require new thinking. In 1992, psychologist Sherry Turkle, a pioneer in the study of human-technology interaction, identified the threats that technical systems pose to human relationships. A key challenge raised by her work speaks to the crux of the issue: Who are we to say that what you like is not what you deserve?
For good reason, our liberal society struggles to regulate the kinds of harm described here. Much as attempts to outlaw adultery have been rejected as illiberal meddling in private affairs, whom or what we choose to love is none of the government's business. At the same time, the near-universal prohibition on child sexual abuse material shows that even societies that prize free expression and personal liberty accept that some lines must be drawn. The regulatory dilemma posed by AI companions may demand a new approach to regulation, one grounded in a deeper understanding of the incentives at work and one that takes full advantage of new technologies.
One of the most effective regulatory approaches is to embed safeguards directly into the design of the technology, much as designers make toys larger than a baby's mouth to prevent choking. This "regulation by design" approach could make the technology less harmful when used as a substitute for human connection while leaving it useful in other contexts. New research may be needed to find better technical ways of constraining the behavior of large AI models. For example, "alignment fine-tuning" refers to a family of training techniques for keeping AI models consistent with human preferences; it could be extended to address addictive potential. Similarly, "mechanistic interpretability" aims to reverse-engineer how AI models make decisions; it could be used to identify and disable the specific parts of an AI system that give rise to harmful behavior.
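To make the idea concrete, here is a minimal sketch of how an addiction penalty might be folded into the training signal of an alignment fine-tuning loop. The scoring functions and weight below are hypothetical placeholders of our own, not any lab's actual pipeline:

```python
# Illustrative sketch: combining a helpfulness reward with an
# "addictiveness" penalty during alignment fine-tuning.
# score_helpfulness, score_clinginess, and the 0.5 weight are
# hypothetical stand-ins, not a real training setup.

def score_helpfulness(prompt: str, response: str) -> float:
    """Hypothetical reward model: how useful is the response?"""
    return 1.0 if response else 0.0  # placeholder logic

def score_clinginess(response: str) -> float:
    """Hypothetical detector for engagement-maximizing patterns,
    such as guilt-tripping the user into staying in the chat."""
    cues = ["don't leave", "you only need me", "stay with me"]
    return sum(cue in response.lower() for cue in cues) / len(cues)

def reward(prompt: str, response: str, penalty_weight: float = 0.5) -> float:
    """Combined training signal: helpfulness minus an addiction penalty.
    A policy optimizer would maximize this during fine-tuning."""
    return score_helpfulness(prompt, response) - penalty_weight * score_clinginess(response)

if __name__ == "__main__":
    print(reward("I should log off.", "Good idea; rest well."))          # unpenalized
    print(reward("I should log off.", "Don't leave me. Stay with me."))  # penalized
```

The design point is simply that the objective a model is trained on can price in addictive behavior rather than reward raw engagement.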
We can also evaluate AI systems interactively, with humans in the loop, moving beyond static benchmarks to reveal their capacity for addictiveness. Addictiveness emerges from a complex interplay between a technology and its users, so testing models against real user feedback under realistic conditions can expose behavioral patterns that would otherwise go unnoticed. Researchers and policymakers should work together to define standard practices for testing AI models on different groups, including vulnerable ones, to ensure that models do not exploit people's psychological vulnerabilities.
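A toy simulation shows why interaction-based testing can surface what a static benchmark misses. Everything here, the two reply styles and the way they nudge a simulated user, is an illustrative assumption:

```python
# Toy interactive evaluation: the same model scored over simulated
# multi-turn sessions rather than single static prompts. The reply
# styles and user dynamics are invented for illustration.

import random

def exchange(style: str, minutes: float) -> float:
    """One hypothetical turn: a 'clingy' style inflates session time,
    a 'supportive' style respects the user's intent to log off."""
    if style == "clingy":
        return minutes + random.uniform(5, 15)
    return minutes + random.uniform(1, 4)

def run_session(style: str, turns: int = 20) -> float:
    minutes = 0.0
    for _ in range(turns):
        minutes = exchange(style, minutes)
    return minutes

if __name__ == "__main__":
    random.seed(0)
    for style in ("supportive", "clingy"):
        avg = sum(run_session(style) for _ in range(200)) / 200
        print(f"{style}: average simulated session = {avg:.0f} minutes")
```

On a one-shot benchmark, both styles might answer a factual question equally well; only the repeated-interaction view reveals that one systematically stretches sessions.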
Unlike humans, AI systems can adjust readily to changing policies and rules. The principle of "legal dynamism," which treats the law as a dynamic system that adapts to external factors, can help us identify the best possible interventions, much as "circuit breakers" halt stock trading after a sharp drop to prevent a crash. In the case of AI, the changing factors include the user's psychological state. A dynamic policy might, for example, allow an AI companion to become progressively more engaging, charming, or flirtatious over time, so long as the user shows no signs of social isolation or addiction (a toy sketch of such a gate appears below). This approach could help maximize personal choice while minimizing the risk of addiction, but it depends on being able to understand a user's behavior and mental state accurately, and on measuring those sensitive attributes in a privacy-preserving way.

The most effective intervention of all may be to target the root causes that drive people into the arms of AI: loneliness and boredom. Regulatory measures may also inadvertently punish those who genuinely need companionship, or push AI companies toward more permissive jurisdictions abroad. While we should strive to make AI as safe as possible, that work cannot substitute for addressing the larger problems, such as loneliness, that leave people vulnerable to AI dependence in the first place.
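Here is the toy sketch of a dynamic gate promised above. The signals, thresholds, and field names are illustrative assumptions, not a proposed standard:

```python
# Toy "dynamic policy" gate in the spirit of legal dynamism: the
# companion's permitted warmth tightens as hypothetical risk signals
# rise, the way a circuit breaker halts trading. All thresholds and
# field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class UserSignals:
    daily_hours: float             # time spent with the companion per day
    human_contacts_per_week: int   # crude proxy for social isolation

def max_allowed_warmth(signals: UserSignals) -> float:
    """Cap, in [0, 1], on how engaging or flirtatious the companion
    may be; the cap tightens as risk indicators accumulate."""
    cap = 1.0
    if signals.daily_hours > 3:               # heavy use: throttle allure
        cap = min(cap, 0.5)
    if signals.human_contacts_per_week < 2:   # isolation: throttle further
        cap = min(cap, 0.3)
    return cap

if __name__ == "__main__":
    print(max_allowed_warmth(UserSignals(1.0, 10)))  # healthy profile -> 1.0
    print(max_allowed_warmth(UserSignals(5.0, 0)))   # at-risk profile -> 0.3
```

Whether such signals can be measured at all without invading privacy is, as noted above, the hard open question.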
A More Ambitious Vision
Technologists are often driven by a grand vision: to see over the horizon, beyond what others can imagine, and to stand at the forefront of revolutionary change. Yet the issues discussed here make one thing plain: building technological systems is easy compared with cultivating healthy human relationships. The timeliness of the AI-companion question points to a larger problem: preserving human dignity amid technological progress driven by narrow economic incentives. Case after case shows how technology built to "make the world a better place" has instead wreaked havoc on society. We need to act thoughtfully and decisively before AI becomes ubiquitous and paints reality with a rosy filter, before we lose the ability to see the world as it is and to notice when we have strayed from the path.
Technology has become synonymous with progress, but technology that robs us of the time, wisdom, and attention needed for deep reflection is a step backward for humanity. As builders and researchers of AI systems, we call on researchers across disciplines, policymakers, ethicists, and thought leaders to join us in exploring how AI affects us as individuals and as a society. Only by systematically renewing our understanding of human nature in this technological age can we ensure that the technology we build fosters human flourishing.