AI could be hiding its thoughts: Tech giants warn of 'Chain of Thought'

savolai · 7/23/2025, 11:32:37 AM · economictimes.indiatimes.com

Comments (2)

savolai · 7h ago
I’m asking for insight. Am I missing something?

Economic Times seems to have drawn upside-down conclusions. The whole article comes off as alarmist at first glance.

As far as I understand, the actual “thinking process” of LLMs—statistical language generation—has never really been transparent.

Chain of Thought, which the original paper discusses, adds a new layer on top of language generation. It tries to make the underlying process more visible by “slowing it down,” breaking it into smaller steps. It’s been around for a long time.
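To make concrete how thin that layer is: in its simplest zero-shot form, CoT is nothing more than a change to the prompt text. Here's a minimal sketch (the function names are illustrative, not from any library, and no model is actually called):

```python
# Minimal sketch of zero-shot Chain-of-Thought prompting: the only
# difference from a direct prompt is an instruction nudging the model
# to emit intermediate reasoning steps before its final answer.
# Helper names are illustrative; no API call is made here.

def direct_prompt(question: str) -> str:
    """Plain question-answer prompt, no reasoning steps requested."""
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    """Same prompt, with the 'think step by step' nudge appended."""
    return f"Q: {question}\nA: Let's think step by step."

question = ("A bat and a ball cost $1.10 together. The bat costs "
            "$1.00 more than the ball. How much does the ball cost?")
print(cot_prompt(question))
```

Everything else (the longer, step-wise output, the error reduction) is an emergent effect of the same statistical generation process, which is why I'd call the visible steps a presentation layer rather than a window into the model's internals.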

To my understanding, CoT is also, in a way, only a surface appearance, because the AI's actual reasoning remains hidden behind statistical mechanisms. That said, it apparently does reduce errors.

At least based on the summary, the original paper doesn’t seem to be “warning” about anything.

I don't get why Chain of Thought, as a method, is now being treated as some novel idea worth special news coverage, or why transparency in reasoning would be especially under threat right now.

chasing0entropy · 7h ago
I imagine someone has to start with 'conspiracy theory' type articles and talk to form a shared language around this. Chain of thought is something we could take for granted, or not even notice when it's no longer coherent with the output.

This could be a thing already, but more for politics than Skynet: half of the offline models I use already conceal portions of their CoT. Deepseek has terrifying guardrails it discusses when lobotomized. Mistral gets nervous if you force it to discuss the details of medical operations.