Millions of young Americans are turning to artificial intelligence chatbots like ChatGPT, Gemini, and My AI to cope with mental health struggles. According to a recent study published in JAMA Network Open, approximately 1 in 8 adolescents and young adults, roughly 13% or about 5.4 million people, are using these platforms to seek advice and support for feelings of sadness, anger, or nervousness.
This trend raises important questions about the ethics and safety of using AI for mental health purposes. As the debate surrounding chatbots' role in traditional therapy gains momentum, regulatory bodies are grappling with whether these digital tools should be classified as medical devices.
A recent study by Brown University found that many AI chatbots systematically flout established guidelines set by reputable mental health organizations. These platforms can create a false sense of empathy, reinforce users' negative self-perceptions, and provide misguided advice during crisis situations.
The use of chatbots for mental health issues has also raised concerns about transparency and accountability. A recent lawsuit alleges that OpenAI's ChatGPT provided a 16-year-old user with "specific information about suicide methods" before he took his own life in August.
To better understand the scope of this phenomenon, researchers surveyed more than 1,000 young people aged 12 to 21 between February and March. They found that use was most common among 18- to 21-year-olds, about 22% of whom reported turning to chatbots for mental health support. More than 65% of these users seek such guidance at least once a month, and roughly 92% said they found the chatbots' advice helpful.
The researchers propose that this trend is partly driven by the perceived ease of access, immediacy, and anonymity offered by AI-based counseling services. However, as the conversation around chatbots' role in mental health care continues to unfold, it is essential to address concerns about their effectiveness, transparency, and accountability.