OpenAI Says It's YOUR FAULT!
which is just what you want to hear if your child takes the wrong advice from ChatGPT
OpenAI has quietly updated its Usage Policies, adding a new clause that prohibits:
"Provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional."
— OpenAI Usage Policies, November 2025
At first glance, it looks like progress: OpenAI doing something good. But look closer. The model hasn't changed. ChatGPT still produces medical and legal advice, and it still gets much of it wrong.
Where does the risk sit?
OpenAI has effectively redefined responsibility.
It hasn't limited what the model says, only who can be blamed for listening.
Now, if you act on it, it's YOUR fault.
If you follow its advice without consulting a licensed professional, you've breached the terms. The liability becomes yours, not OpenAI's.
This is not a safeguard. It's a disclaimer. And a horrible one.
A single line that converts systemic risk into individual negligence.
It is a familiar legal manoeuvre: when systems cannot be made safe, make users responsible for using them unsafely.
The policy doesn't stop ChatGPT from producing potentially harmful output. It simply limits who can claim to be harmed by it.


