OpenAI has pushed back on claims that its chatbot, ChatGPT, is no longer allowed to provide legal and medical advice. In a statement on X, Karan Singhal, OpenAI's head of health AI, said the reports are "not true." According to Singhal, the behavior of ChatGPT "remains unchanged."
Singhal noted that ChatGPT was never meant to be a substitute for professional advice but would continue to serve as a resource for people seeking to understand legal and medical information. He also clarified that the policy update on October 29th lists activities that are not allowed, including providing tailored advice that requires a license, such as legal or medical advice.
Singhal stated that this is not a new change to OpenAI's terms but rather an updated list of rules that reflect a universal set of policies across OpenAI products and services. The company previously had three separate policies, and with the new update, it has streamlined its rules into one unified list.
The claims in question appear to have originated from a now-deleted post on the betting platform Kalshi, which falsely stated that ChatGPT would no longer provide health or legal advice. The misinformation spread across social media, causing confusion among users about ChatGPT's capabilities.