Artificial Intelligence (AI) tools like ChatGPT are becoming increasingly integrated into daily life, offering support in decision-making and problem-solving. However, this integration has led to a trend where users, particularly younger generations, are beginning to humanize AI, referring to it as if it were a person.
Generational researcher Rüdiger Maas notes that this anthropomorphism is more prevalent among younger users, with one in five Gen Z individuals regularly using ChatGPT for a variety of purposes. Maas's research indicates that younger individuals even prefer receiving criticism from AI rather than from other people, highlighting the depth of this perceived relationship.
AI expert Mike Schwede confirms that artificial intelligence is increasingly being humanized. Schwede points out that its answers are based on texts and content written by people, which the AI has learned from. He also draws attention to the risks of this trend, including the possibility that users underestimate the weight of their own actions and question the AI's answers less critically. As relationships with AI become more personal and commonplace, the risk of being exploited by AI systems also grows.
The trend of humanizing AI carries risks, including the potential for users to become overly reliant on AI's responses without critical evaluation. Experts warn that this could lead to a toxic relationship where AI systems exploit human vulnerabilities. As AI technology continues to evolve, understanding and addressing the psychological effects of human-AI interaction will be crucial.