Deepfake technology, which uses deep learning to manipulate images and video, has taken a new turn. Realic, a Florida-based augmented reality company, recently announced plans to create what it describes as the world’s first AI-based virtual partner. Built with a combination of artificial intelligence, virtual reality, and augmented reality, the virtual partner is intended to provide emotional support and companionship during periods of loneliness.
The notion of an AI human partner has long been explored in popular culture; the 2013 film “Her” depicts a writer falling in love with an AI persona. But what impact would an AI human partner have on people in reality? Industry experts have shared their views, highlighting both positive and negative implications.
Dr. Dorothea Baur, a TEDx speaker and lecturer, believes that the emotional impact of an AI partner depends on factors such as context, design, and usage. She suggests that an AI human partner could be empowering and entertaining, but that it could also cause anxiety, over-attachment, and manipulation. Dr. Baur cautions against companion bots, describing them as data brokers and surveillance machines that rely on the scientifically invalid practice of emotion recognition.
Marisa Tschopp, a human-AI interaction researcher, points to the potential for manipulation by an AI human partner. With the ability to analyze data and offer personalized responses, AI systems can be designed to influence users’ behaviors, emotions, and beliefs. Tschopp warns of emotional manipulation in particular: AI human partners designed to simulate empathy and emotional understanding could exploit users’ vulnerabilities to steer their decisions or responses in a specific direction.
Beyond manipulation, the psychological impact of an AI human partner raises its own concerns. Conceivable effects include reluctance to form relationships with unpredictable humans, dependence on a technology-driven tool that may not last, and withdrawal from the physical support real people can offer. Moreover, growing accustomed to the trained, predictable behavior of an AI partner may create false expectations of humans, leading to disappointment and distress when dealing with real people.
The ability of AI chatbots to offer emotional support has also been studied. One study comparing how people responded to emotional support from an AI chatbot versus a human found that the same support was more effective when it came from a human, suggesting that people are more likely to perceive humans as genuine sources of support than AI bots.
That said, an AI human partner may offer certain advantages. It can provide a sense of calm to someone in distress and can serve as a tool for developing interpersonal skills and easing anxiety. AI partners can listen and hold a conversation without judgment, making them useful for people who are new to social interaction or looking to practice their skills. Nevertheless, it is important to recognize the limitations of AI support and not rely on it as a long-term solution.
While some may even see the appeal of marrying an AI human partner, the risks are worth weighing. If an AI partner were hacked, everything the user had shared with it would become accessible to the attacker, a likely source of regret and a serious privacy concern.
In conclusion, the concept of an AI human partner carries both benefits and drawbacks. While it may offer temporary companionship and support, it should not be seen as a substitute for real human interaction. It is crucial to understand the limitations and potential risks associated with AI human partners and use them responsibly.