AI therapy bots fuel delusions and give dangerous advice, Stanford study finds

7 points by olyellybelly | 1 comment | 7/12/2025, 6:22:33 AM | arstechnica.com ↗

Comments (1)

davydm · 9h ago
Color me unsurprised. It should be common knowledge that they hallucinate and are not suitable for fields requiring accuracy. This is unlikely to change until we drop the llm and work on real agi, like Carmack is doing. Neural nets may not be the problem, but certainly this model-stuffing, combined with the mechanism it works by (token prediction, not understanding) doesn't work in any field requiring accuracy.