> Another example for all you computer folks out there: ultimately, all software engineering is just moving electrons around. But imagine how hard your job would be if you could only talk about electrons moving around. No arrays, stacks, nodes, graphs, algorithms—just those lil negatively charged bois and their comings and goings.
I think this too easily skips over the fact that the abstractions are based on knowledge of how things actually work - known with certainty. Nobody in CS is approaching the computer as an entirely black box and making up how they think or hope it works. When people don't know how the computer actually works, their code is wrong - they get bugs and vulnerabilities they don't understand and can't explain.
pimlottc · 1h ago
> Nobody in CS is approaching the computer as an entirely black box and making up how they think or hope it works.
Haven't you heard about vibe coding?
manugo4 · 2h ago
No, red is an abstraction that is not based on knowledge of how colors work.
ysofunny · 1h ago
it is an abstraction based on how our biological eyes work (this implies "knowledge" of physics)
so it is indirectly based on knowledge of how color works, it's simply not physics as we understand it but it's "physics" as the biology of the eye "understands" it.
red is an abstraction whose connection to how colors work is itself another abstraction, but of a much deeper complexity than 'red' which is a rather direct abstraction as far as abstraction can go nowadays
xboxnolifes · 1h ago
There is absolutely no knowledge needed for someone to point to something that is red and say "this is red" and then for you to associate things that roughly resemble that color with being red.
Understanding the underlying concepts is irrelevant.
lo_zamoyski · 1h ago
"How colors work" is dubious.
In physics, color has been redefined as a surface reflectance property with an experiential artefact as a mental correlate. But this understanding is the result of the assumptions made by Cartesian dualism. That is, Cartesian dualism doesn't prove that color as we commonly understand it doesn't exist in the world, only in the mind. No, it defines it to be the case. Res extensa is defined as colorless; the res cogitans then functions like a rug under which we can sweep the inexplicable phenomenon of color as we commonly understand it. We have a res cogitans of the gaps!
Of course, materialists deny the existence of spooky res cogitans, admitting the existence of only res extensa. This puts them in a rather embarrassing situation, more awkward than that of the Cartesian dualist, because now they cannot explain how the color they've defined as an artefact of consciousness can exist in a universe of pure res extensa. It's not supposed to be there! This is an example of the problem of qualia.
So you are faced with either revising your view of matter to allow for it to possess properties like color as we commonly understand them, or insanity. The eliminativists have chosen the latter.
gchamonlive · 2h ago
> When people don't know how the computer actually works, their code is wrong - they get bugs and vulnerabilities they don't understand and can't explain.
While this is true, we're usually targeting a platform, either x86 or arm64, both incredibly complex pieces of engineering. Unless you are in IoT or your application requires you to optimize at the hardware level, we're so distant from the hardware when programming in Python, for instance, that the level of awareness required about the hardware isn't much more complicated than a basic Turing machine.
jemmyw · 2h ago
It doesn't skip over it. First, this is an example, not the primary thing the article is talking about. Second, just above, the article states that some lower-level knowledge is necessary, in the transit example. If you map those things, as written by someone who, as they say, isn't that knowledgeable about programming, then they make sense without diving into the specifics.
growlNark · 2h ago
> I think this too easily skips over the fact that the abstractions are based on a knowledge of how things actually work - known with certainty.
models ≠ knowledge, and a high degree of certainty is not certainty. This is tiring.
achierius · 1h ago
But doesn't this argument defeat itself? We cannot, a priori, know very much at all about the world. There is very, very little we can "know" with certainty -- that's the whole reason Descartes resorted to the whole cogito argument in the first place. You and GP just choose different lines to draw.
jtbayly · 2h ago
Plenty of programmers know nothing about electrons. Think: kids.
Most programmers never think once about electrons. They know how things work at a much higher level than that.
bluGill · 1h ago
That only works because some EE has ensured the abstractions we care about work. You don't need to know everything, you just need to ensure that everything is known well enough by someone all the way down.
DontchaKnowit · 2h ago
Yeah? So what. They're still using abstractions that were created by people who know about electrons.
inglor_cz · 2h ago
For me, a computer is at best semi-transparent.
I can rely on a TCP socket guaranteeing delivery, but I am not very well versed in the algorithms that guarantee that, and I would be completely out of my depth if I had to explain the inner workings of the silicon underneath.
lo_zamoyski · 1h ago
Computer science has nothing to do with physical computing devices. Or rather, it has as much to do with computers as astronomy has to do with telescopes. You can do it all on paper. The computing device doesn't afford you anything new but scale and speed compared with simulating the mechanical work on paper. Electrons are irrelevant. They are as relevant to computer science as the species of tree from which the wood in your pencil comes is relevant to math.
Obviously, being able to use a computer is useful, just as using a telescope is useful or being able to use a pencil is useful, but it's not what CS or software engineering are about. Software is not a phenomenon of the physical device. The device merely simulates the software.
This "centering" of the computing device is a disease that plagues many people.
ImHereToVote · 1h ago
"Nobody in CS is approaching the computer as an entirely black box and making up how they think or hope it works."
That is literally how we approach transformers.
dogleash · 2h ago
The control loop metaphor was an interesting idea until I started worrying, halfway through, that it would be operating on something with all the same categorical problems as the impressionistic style derided at the start.
The sensations we tie to urination or breathing have quick cycle times, making it easy to test the causal loop. Thus confounding factors such as a UTI causing a bladder-full sensation, or nitrogen asphyxiation without any feeling of suffocation, are things we understand well.
The "Make Sure You Spend Time with Other People System" is a good example for a blog post, but it's already a fair bit looser. And when you consider that they want to investigate things we don't understand anywhere near as well as we understand loneliness, it smells like sneaking back towards tautologically defined systems like "zest for life."
kelseyfrog · 2h ago
The cybernetic psychology idea of comparing sensed vs. desired states maps cleanly onto a PID controller: P reacts to the present error, I accumulates past errors, and D anticipates future ones.
It's not just a comparator, it’s how long I’ve been off (I), how fast I’m drifting (D), and how far I am right now (P).
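As a toy sketch with made-up gains (nothing here is tuned, just to make the three terms concrete):

    # Toy PID step: turns the error history into a corrective "effort".
    def pid_step(setpoint, measured, state, kp=1.0, ki=0.1, kd=0.5, dt=1.0):
        error = setpoint - measured                  # P: how far off right now
        state["integral"] += error * dt              # I: how long I've been off
        derivative = (error - state["prev"]) / dt    # D: how fast I'm drifting
        state["prev"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    state = {"integral": 0.0, "prev": 0.0}
    # e.g. desired vs. sensed "social contact" on some arbitrary scale
    print(pid_step(setpoint=8.0, measured=5.0, state=state))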
In this framework, emotional regulation looks like control theory. Anxiety isn't just a feeling; it's high D-gain, i.e. a system overreacting to projected errors.
Depression? Low P (blunted response), high I (burden of unresolved past errors), and broken D (no expected future improvement).
Mania? Cranked P and D, and I disabled.
In addition to personality being setpoints, our perceptions of the past, present, and future might just be PID parameters. What we call "disorders" are oscillations, deadzones, or gain mismatch. But like the article pointed out, it's not really a scientific theory unless it's falsifiable.
agos · 43m ago
> Imagine how hopeless we would be if we approached medicine this way, lumping together both Black Lung and the common cold as “coughing diseases”, even though the treatment for one of them is “bed rest and fluids” and the treatment for the other one is “get a different job”.
Well, this is definitely happening for some parts of medicine, like IBS or many forms of chronic pain.
> If you feel like you’re drowning, your Oxygen Governor is like “I GIVE THIS A -1000!”. When you can breathe again, though, maybe you only get the -1000 to go away, and you don’t get any happiness on top of that. You feel much better than you did before, but you don’t feel good.
Anecdote but: you absolutely feel good. At least, I did.
ebolyen · 1h ago
If anyone is interested in more formal descriptions of these control loops, with more testable mechanisms, check out the concept of reward-taxis. Here are two neat papers that I think are more closely related than might initially appear:
"Is Human Behavior Just Running and Tumbling?": https://osf.io/preprints/psyarxiv/wzvn9_v1
(This used to be a blog post, but it's down, so here's an essentially identical preprint.)
A scale-invariant control loop such as chemotaxis may still be the root algorithm we use, just adjusted for a dopamine gradient mediated by the prefrontal cortex.
"Give-up-itis: Neuropathology of extremis": https://www.sciencedirect.com/science/article/abs/pii/S03069... What happens when that dopamine gradient shuts down?
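For a feel of how simple the run-and-tumble loop is, here's a toy sketch (my own illustration, not from either paper; the reward gradient is made up):

    import random

    # Toy run-and-tumble on a 1-D reward gradient: keep going while things
    # improve, reorient at random when they get worse.
    def run_and_tumble(steps=200, seed=0):
        rng = random.Random(seed)
        x, direction, prev_reward = 0.0, 1, float("-inf")
        for _ in range(steps):
            reward = -abs(x - 10.0)        # made-up gradient peaking at x = 10
            if reward < prev_reward:       # worse than before: tumble
                direction = rng.choice([-1, 1])
            prev_reward = reward
            x += direction * 0.5           # run
        return x

    print(run_and_tumble())  # drifts toward the peak at x = 10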
Interesting concepts there, as applied to psychology. And kudos for making it available freely. Definitely worth a read imho!
Skimming through its chapter on AI made me think of Dave of EEVblog fame. In some of his videos he wears a T-shirt saying "always give negative feedback!". Which is correct - for those who understand electronics (specifically: opamps).
In short: design the circuit such that when the output is above target, the circuit works to lower it (the voltage, in this context); when below, it works to raise it. Output stability requires a feedback loop constructed to that effect.
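The same idea as a toy loop in Python (arbitrary numbers, just to show the sign convention doing the work):

    # Toy negative feedback: output is repeatedly nudged toward a target.
    def settle(target=5.0, gain=0.3, steps=20):
        out = 0.0
        for _ in range(steps):
            out += gain * (target - out)   # above target: pushed down; below: pushed up
        return out

    print(settle())  # converges near 5.0

Flip the correction to gain * (out - target) and the same loop runs away instead of settling - that's positive feedback.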
There are analogies in many fields of technology (logistics, data buffering, manufacturing, etc., and yes, thermostats).
I'll leave it there; other sites like Wikipedia (or EEVblog!) explain opamp-related design principles better.
From what I've read, current AI systems appear like opamp circuitry with no (or poor) feedback loop: a minor swing in input causes the output to go crazy. Or even positive feedback: same thing, but self-reinforcing. Guardrails are not the fix: they just clip the output to, ehm, 'politically correct' or whatever. Proper fix = better designed feedback loops. So yes, the authors of this book may well be onto something.
jotux · 1h ago
An emotional analogy I've often made, related to EE, is automatic gain control. When you experience long periods of emotional stability without significant highs or lows, your brain applies a form of gain control. This makes the threshold for something amazing very low and something awful very high. As a result, people can feel overwhelmed by relatively trivial issues. Many self-help books, religions, and philosophies emphasize appreciating past experiences or considering how situations could have been worse. I see this as a way to counteract the brain's natural tendency to adjust its gain control.
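That analogy maps neatly onto a toy sketch (all numbers arbitrary; just to show the envelope-tracking idea):

    # Toy automatic gain control: gain adapts so perceived intensity stays
    # near a comfort level; after a long calm stretch the gain creeps up,
    # so a small bump lands much harder than its raw size.
    def perceive(signal, state, comfort=1.0, rate=0.1):
        state["env"] = (1 - rate) * state["env"] + rate * abs(signal)  # running envelope
        gain = comfort / max(state["env"], 1e-6)
        return gain * signal

    state = {"env": 1.0}
    for s in [0.1] * 50 + [0.5]:   # long calm stretch, then a modest bump
        felt = perceive(s, state)
    print(felt)  # the 0.5 bump comes out amplified several-fold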
ikesau · 1h ago
what's unclear to me is how you identify the Real Units.
needing-to-breathe-ness is (probably) a gimme, but what are the units that will explain which route i take on my walk today? and how do you avoid defining units that aren't impressionistic once you need to rely on language and testimony to understand your research subject's mental state?
my understanding of psychological constructs is that they're earnest attempts to try and resolve this problem, even if they've led us to the tautological confusion we're in now.
perching_aix · 2h ago
For those looking to skip the yap, an overview of the titular "new paradigm" starts a third of the way in, at the section "A GOOD WAY NOT TO DIE".
procaryote · 1h ago
Thank you
It's still just fluff though. So the author thinks the mind is a control system... Sure, that's a model.
Does it explain observations better? What predictions does this model let us make that differ from other models?
The article was needlessly wordy, so I might have missed it if this was hiding somewhere.
vl · 1h ago
I put text of the article into ChatGPT 4.5 and used various questions to extract relevant and interesting bits and pieces. Great use case for LLMs.
jondlm · 2h ago
The topic of this article felt familiar to me. It's similar to ideas in IFS: internal family systems. IFS also uses control systems to describe our internal landscape.
If the concept of multiplicity (we humans being a system of smaller systems) resonates with you, consider reading No Bad Parts by Richard C. Schwartz. I've personally found it immensely helpful.
hybrid_study · 2h ago
From a layman's perspective I've yet to find a better understanding of psychology than what Steven Pinker posits in his https://en.wikipedia.org/wiki/How_the_Mind_Works (1997), and even today it doesn't seem like the basic paradigm (the evolutionary constraints) has changed much.
floxy · 2h ago
I always thought someone should make a second Matrix movie.
The monopoly analogy is brilliant and I appreciate the author's focus on the importance of language and terms. Plus, the gentle but brutal takedown of academic orthodoxy, such as...
> Those divisions are given by the dean, not by nature.
ravenstine · 2h ago
Maybe I'm missing the point of the article, but the application of cybernetics to psychology was already proposed (albeit not by a psychologist) at least as far back as 1960, in the book Psycho-Cybernetics. This "new paradigm" doesn't sound particularly new.
This sentence also puzzled me:
> Lots of people agree that psychology is stuck because it doesn’t have a paradigm
Psychology might not have a grand unifying paradigm, but it's been highly paradigm-driven since its inception.
floxy · 2h ago
Someone needs to get the author a copy of GEB.
huijzer · 3h ago
I have done an MSc in CS and a PhD in a psych department. Sometimes I had the feeling that the system didn't want me to get anything done.
staunton · 3h ago
Can you elaborate? It's hard to take much away from your comment without any more details.
huijzer · 2h ago
Well, peer review is a very old system where nobody seems to care about speeding up the process. Also, why are journals allowed to make huge amounts of money without paying the academics and without providing much service either?
almosthere · 3h ago
Just do stuff. To fix the mind, you have to keep busy. Don't focus on yourself, focus on projects. For the long history of humanity, until recently, the last thing people did was think about themselves. Stop. Do things... basically touch grass.
perching_aix · 1h ago
Nothing quite like treating the symptoms instead of the root cause, am I right?
yeahsure · 2h ago
This comment comes across as quite naive. If the solution were truly that simple, we wouldn't see so many people suffering from depression, anxiety, addiction, etc. around the world.
Trasmatta · 2h ago
Just "doing things" will not heal trauma. Doing things can be a way to try to suppress the pain of trauma, but it will still be there, just obscured and affecting the person's life in ways they can't seem to understand. And alternatively, trauma can also be a major blocker for actually being ABLE to do things.
bbor · 3h ago
In the interest of staying within the rules of the site, I'll just say that this is a clueless, offensive, harmful comment. No, you don't know more than 125+ years of psychologists studying mental health disorders because you make decent money right now and feel ok.
d3ckard · 2h ago
I would like to provide a piece of anecdotal evidence: what he suggests is actually what worked best for me.
kayodelycaon · 1h ago
I truly hope it does, but I have seen plenty of people use that strategy until something broke.
As I see it, you can either deal with a problem on your own terms or you can let it eventually deal with you on its terms.
yeahsure · 2h ago
Just another N=1 anecdote: It didn’t work for me when I was dealing with depression a few years ago.
I also lost a dear friend to suicide, and he was very successful and active in his field when it happened. Nobody saw it coming.
It's just not that simple.