nsoonhui · 7/14/2025, 5:47:06 AM

ankit219 · 19m ago
I can see Gary's point here. He got some stick for this on X, but he seems to be right. One curious thing is how both sides have vacated their original positions: the scaling people were all about scaling, and now they talk about RL as the next scaling curve, with code tool calls an accepted paradigm. The symbolic camp was more about symbols first and learning later (Marcus himself in 2001: structured representations are the only route to systematicity).

Code Interpreter + o3 is neurosymbolic AI. The architecture closely resembles a cognitive-science flowchart (perception net -> symbolic scratch pad -> controller loop). How we got there was through gradient descent, not brittle expert-written rules.
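To make that flowchart concrete, here's a minimal sketch of the loop in Python. All names are hypothetical (the real Code Interpreter pipeline is not public), and the neural side is stubbed with a toy function standing in for an LLM call; only the shape of the loop matters: the net proposes symbolic code, a deterministic interpreter executes it, and the controller feeds results back until the model answers.

```python
def fake_model(prompt: str, scratch: list[str]) -> dict:
    # Stand-in for the perception/proposal net; in reality this is an LLM call.
    # First turn: propose one symbolic step. Second turn: answer from the scratch pad.
    if not scratch:
        return {"tool": "python", "code": "result = sum(range(1, 101))"}
    return {"answer": scratch[-1]}

def run_python(code: str) -> str:
    # The symbolic side: exact, rule-governed execution, no gradients involved.
    env: dict = {}
    exec(code, env)  # assumes a trusted sandbox; never exec untrusted code
    return str(env["result"])

def controller(prompt: str, max_steps: int = 5) -> str:
    scratch: list[str] = []  # the "symbolic scratch pad"
    for _ in range(max_steps):
        msg = fake_model(prompt, scratch)
        if "answer" in msg:
            return msg["answer"]
        scratch.append(run_python(msg["code"]))
    return scratch[-1]

print(controller("What is 1 + 2 + ... + 100?"))  # prints 5050
```

The point of the sketch: nothing about the outer loop is learned; the learned component only decides *what* symbolic step to take next.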

readthenotes1 · 15m ago
"Hao then captures OpenAI’s sophomoric attitude towards fair scientific criticism:"

So this is a bitter editorial?

kiratp · 35m ago
So a sequence of characters that is a Python program is “neurosymbolic”, but a sequence (from the same domain) in English (a different ruleset) that says “reverse this string” is not?

ipsum2 · 36m ago
Gary Marcus keeps hallucinating that LLMs use neurosymbolic AI, something he's harped on for years. LLMs do not, no matter how many mental gymnastics he performs.

NitpickLawyer · 32m ago
The article could use a pass to remove all the "I was right" spiel, but isn't it true that LLM + interpreter/tools is neurosymbolic?