Interpretability: Understanding how AI models think – Anthropic [video]
1 point by Topfi on 8/16/2025, 4:59:04 PM (youtube.com) | 1 comment
I especially appreciated the honesty regarding what is currently not fully understood about how LLMs arrive at certain output. Makes sense, considering Anthropic appears to expend some of the most (public) effort among the frontier LLM labs on in-depth understanding rather than chasing performance goals. Would hope this discussion from top-level experts may help finally put to rest a common delusion I've encountered both at university (among students and lecturers alike, whether focused on model training or on applications in medicine) and in industry, where some claim to fully understand how LLMs work at every level, which unfortunately no one currently can. Not holding my breath though, even less for social media comments. "LLMs must work like brains because some output is similar to what humans could produce" is akin to "This artifact looks like a modern thing (if you ignore a significant number of details not serving your interpretation), therefore we had hyperdiffusion/ancient aliens/power plant pyramids"...
On another note, there are few things more nerdy, in the traditional meaning of the term, than a VC-backed multi-billion-dollar company still relying on a Brother HL-L2400DW for its modest printing needs.