● Twitter user Raven Morgoth broke the news, later confirmed by NEXTA: as of October 29, OpenAI officially barred ChatGPT from offering medical, legal, or financial advice. The company says the chatbot simply isn't "qualified" and that letting it play doctor or lawyer creates serious liability risks. ChatGPT is now labeled an "educational tool" — it can explain how things work, but it has to tell you to go find a real human expert for anything serious.
● The shift comes amid mounting regulatory pressure on AI companies. According to NEXTA's reporting, the updated policy explicitly forbids ChatGPT from naming medications, suggesting dosages, drafting legal documents, or giving investment recommendations. It's part of a broader Big Tech retreat — companies are scrambling to dodge lawsuits and comply with incoming AI liability laws. What used to feel like having a pocket lawyer or therapist is now carefully hedged by corporate legal teams.
● But here's where it gets messy. Raven Morgoth points out a glaring contradiction: in the same breath that OpenAI calls ChatGPT "too risky and unqualified" for medical use, the company reportedly ran the same model across 1.2 million users to flag signals like "suicidal ideation" and "high emotional attachment." In other words, automated psychological profiling.
● This controversy cuts to a core policy question: can AI companies balance legal responsibility, user privacy, and ethical transparency when they control both what AI says and what it secretly watches?
Usman Salis