OpenAI and MIT Media Lab Study Reveals ChatGPT Dependency Risks
A collaborative study by OpenAI and MIT Media Lab has uncovered that extended use of ChatGPT may foster dependency in certain users.
Researchers surveyed thousands of individuals and identified a small group of “experienced users,” those who engage with the chatbot frequently and at length, who exhibited signs of problematic behavior. These signs, including preoccupation, withdrawal symptoms, loss of control, and mood swings, all point to an emotional attachment to the AI.
The study found that users who spend more time with ChatGPT are more likely to view it as a friend, particularly if they struggle with loneliness or anxiety. Interestingly, emotional dependency was more pronounced among those who used the chatbot for practical tasks like brainstorming or seeking answers to questions than among those who turned to ChatGPT for more “personal” purposes, such as discussing emotions or memories.
The researchers also noted a difference in interaction styles: people tended to use more emotional language in ChatGPT’s text mode than in its advanced voice mode. The report adds that “voice modes were associated with improved well-being during short-term use.”
The findings underscore that prolonged engagement with ChatGPT heightens the risk of dependency regardless of the nature of the interaction. This trend is particularly evident among individuals with limited social connections, who may form deep parasocial relationships with the AI. The researchers caution that such patterns can trigger negative emotions, including fear and sadness, and call for further research into the psychological impact of chatbots.