Graph Continuous Thought Machines

Sai-dewa · 7/15/2025, 9:10:16 PM
We propose a method by which the connections of a graph continuous thought machine's dispositional nodes may be designed to be faithful to a human brain. A graph continuous thought machine (GCTM) replaces the synapse-level and neuron-level models with a graph convolutional network (GCNN). In some sense, the nodes of the graph at any one time are instantiations of the nodes of the dispositional neural model it is part of: only the nodes that are currently firing are instantiated. The GCNN then outputs the next graph as the system searches graph space for solutions, guided by learnt property vectors. The outputs of its neural synchronization matrix then modulate the attention given to the inputs as well as to the nodes of the dispositional network. In this way the system designs the dispositional neural model's connections (the disposition for particular graphs to follow others).

We then employ neural training modules: spiking neural networks whose nodes are mapped to keys of a musical keyboard. When exposed to the state of a teacher system, the nodes are trained to harmonize musically; when exposed to the state of the untrained agent, they are dissonant. The agent then tries to maximise consonance in the spiking network, using it as a reward signal, and is thereby trained to perform like the teacher system. We introduce text-conditioned neural training modules, which condition this signal on text, and we show a method to modulate not just the behavior of the system but also the connectivity of the dispositional network of a GCTM.

https://www.researchgate.net/publication/392733228_Text_Conditioned_Self_Architecture_Search_for_Building_Brain_Like_Connectivity_by_Describing_It
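
As a rough illustration of the core loop, here is a minimal PyTorch sketch of a single thought step with a hand-rolled graph convolution; every name in it (`GCTMStep`, `prop`, `score`) is a placeholder of ours, not the implementation from the paper:

```python
# Hypothetical sketch of one GCTM "thought step": a graph-convolution pass
# proposes the next graph's node activations guided by a learnt property
# vector, and a synchronization matrix gates attention over the nodes.
import torch
import torch.nn.functional as F

N, D = 16, 32                                   # active nodes, feature width

class GCTMStep(torch.nn.Module):
    def __init__(self, dim, prop_dim=8):
        super().__init__()
        self.w = torch.nn.Linear(dim, dim)      # graph-conv weight
        self.prop = torch.nn.Parameter(torch.randn(prop_dim))  # learnt property vector
        self.prop_proj = torch.nn.Linear(prop_dim, dim)
        self.score = torch.nn.Linear(dim, 1)    # scores nodes for the next graph

    def forward(self, x, adj):
        # one graph-convolution pass: mean-aggregate neighbours, then transform
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h = F.relu(self.w(adj @ x / deg))
        # bias the search through graph space with the property vector
        h = h + self.prop_proj(self.prop)
        # synchronization matrix: pairwise co-activation of node traces
        sync = h @ h.T                          # (N, N)
        attn = torch.softmax(sync.mean(-1), dim=0)  # attention over nodes
        h = attn.unsqueeze(-1) * h
        # decide which dispositional nodes fire in the next graph
        next_active = torch.sigmoid(self.score(h)).squeeze(-1) > 0.5
        return h, next_active, attn

x = torch.randn(N, D)
adj = (torch.rand(N, N) < 0.2).float()
h, next_active, attn = GCTMStep(D)(x, adj)
```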
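
The consonance reward can likewise be sketched. Below, nodes are mapped to MIDI keys and pairwise interval classes score how consonant the currently spiking keys sound together; the interval table is a standard music-theory heuristic standing in for whatever measure the paper actually uses:

```python
# Sketch of the musical reward signal: each node of a small spiking network
# is mapped to a piano key, and the reward is the fraction of consonant
# intervals among the currently spiking keys.
import torch

PITCHES = torch.arange(60, 76)            # one node per MIDI key, C4..D#5
CONSONANT = {0, 3, 4, 5, 7, 8, 9}         # interval classes treated as consonant

def consonance_reward(spikes: torch.Tensor) -> float:
    """spikes: bool tensor over nodes; score every pair of sounding keys."""
    notes = PITCHES[spikes]
    if len(notes) < 2:
        return 0.0
    good, total = 0, 0
    for i in range(len(notes)):
        for j in range(i + 1, len(notes)):
            interval = int(abs(notes[i] - notes[j])) % 12
            good += interval in CONSONANT
            total += 1
    return good / total

# toy spiking rule: a node fires when its membrane potential crosses threshold
potential = torch.randn(len(PITCHES))
spikes = potential > 0.5
print(consonance_reward(spikes))          # the reward the agent then maximises
```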

Comments (2)

Sai-dewa · 8h ago
I have a paper on graph continuous thought machines, which replace the synapse and neuron models with a graph convolutional network.

The GCNN outputs the next graph in the thought process, guided by learnt property vectors.

What's interesting is that the synchronization matrix regulates the attention given to the nodes as well as to the input.

So these nodes may be seen as neurons in their own right, and consecutive graphs have connections between them that send virtual signals and cause them to spike.

The nodes and potential nodes exist in a dispositional neural network, and only the nodes that are currently activated are instantiated in the GCNN.
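
A minimal sketch of that instantiation step, under the assumption that the dispositional network is just a weighted adjacency over all potential nodes:

```python
# Assumed representation: the dispositional network is a weighted adjacency
# over all potential nodes. Only the currently firing subset is instantiated
# as the GCNN's working graph, and the weights from firing nodes deliver the
# "virtual signals" that decide which nodes fire in the next graph.
import torch

TOTAL = 100                                    # all dispositional nodes
disp_adj = torch.rand(TOTAL, TOTAL) * (torch.rand(TOTAL, TOTAL) < 0.05)

def instantiate(active: torch.Tensor):
    """Extract the subgraph over currently firing nodes."""
    idx = active.nonzero(as_tuple=True)[0]
    return idx, disp_adj[idx][:, idx]

def propagate(active: torch.Tensor, threshold: float = 0.5):
    """Sum the dispositional weights out of firing nodes (rows = senders);
    a node spikes in the next graph if its total drive crosses threshold."""
    drive = disp_adj[active].sum(0)            # input each node receives
    return drive > threshold

active = torch.zeros(TOTAL, dtype=torch.bool)
active[:10] = True
idx, sub_adj = instantiate(active)             # working graph for the GCNN
next_active = propagate(active)                # who fires at the next step
```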

So as the outputs from the synchronization matrix modulate attention, a subset of the attended dispositional neurons will represent memory, while other parts of the dispositional network and parts of the input act as keys that index the next presentation of memory.
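
One way to read that, sketched with content-based attention as the (assumed) indexing mechanism:

```python
# Hypothetical reading of the memory mechanism: some attended dispositional
# neurons hold stored states, while a key (from key nodes plus input) selects
# which memory is presented next via content-based attention.
import torch

D, SLOTS = 32, 8
memory = torch.randn(SLOTS, D)                 # states of the "memory" nodes

def read_memory(key: torch.Tensor) -> torch.Tensor:
    """Soft lookup: the key indexes the next presentation of memory."""
    weights = torch.softmax(memory @ key / D ** 0.5, dim=0)
    return weights @ memory                    # weighted recall

key = torch.randn(D)                           # from key nodes + input
recalled = read_memory(key)
```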

In fact, only the prefrontal cortex dispositional nodes will contribute to the synchronization matrix.

So the PFC performs read and write operations on memory this way.
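
Sketched under the same assumptions as above, with a boolean mask standing in for the PFC tag:

```python
# Sketch of the claim above (names assumed): only nodes tagged as
# prefrontal cortex (PFC) contribute to the synchronization matrix,
# so the attention used for memory reads and writes is gated by them.
import torch

N, D = 16, 32
h = torch.randn(N, D)                          # current node traces
pfc = torch.zeros(N, dtype=torch.bool)
pfc[:4] = True                                 # which nodes count as PFC

h_pfc = h * pfc.unsqueeze(-1)                  # non-PFC nodes contribute zero
sync = h_pfc @ h_pfc.T                         # PFC-only synchronization matrix
attn = torch.softmax(sync.mean(-1), dim=0)     # attention used for read/write
```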

Sai-dewa · 8h ago
So the actual connections between dispositional neurons change as the property vectors are learnt.