Animats 1 day ago

That's a book review. Read the actual book.[1]

Notes:

- Prologue:

(Behaviorism) ended up being a terrible way to do psychology, but it was admirable for being an attempt at describing the whole business in terms of a few simple entities and rules. It was precise enough to be wrong, rather than vague to the point of being unassailable, which has been the rule in most of psychology.

- Thermostat:

An intro to control theory, but one which ignores stability. Maxwell's original paper, "On Governors" (1868), is still worth reading. He didn't just unify electromagnetism; he also founded control theory. The chapter has the usual problems with applying this to emotions, and the author realizes this.

OK, so living things have a lot of feedback control systems. This is not a new observation. The biological term is "homeostasis", a concept apparently first described in 1849 and named in 1926. (There are claims that this concept dates from Aristotle, who wrote about "habit", but Aristotle didn't really get feedback control. Too early.)

- Motivation:

Pick the goal with the highest need level, but add some hysteresis to avoid toggling between behaviors too fast.
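A toy sketch of that selection rule (the margin, names, and numbers are my own, not from the book): a competing drive has to beat the current one by some margin before the goal switches, so two drives hovering near each other don't cause rapid toggling.

```python
# Goal selection with hysteresis (illustrative sketch, not from the book).

def select_goal(errors, current, margin=0.2):
    """Pick the drive with the largest error, but require a challenger
    to beat the current goal's error by `margin` before switching."""
    best = max(errors, key=errors.get)
    if current is not None and errors[best] < errors.get(current, 0.0) + margin:
        return current  # stick with the current goal (hysteresis)
    return best

# Hunger and thirst near each other: no toggling, we keep eating.
goal = select_goal({"eat": 0.50, "drink": 0.55}, current="eat")
```

Without the margin, errors of 0.50 and 0.55 drifting past each other would flip the behavior back and forth on every update.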

- Conflict and oscillation:

Author discovers oscillation and stability in feedback systems.

- What is going on?

Author tries to derive control theory.

- Interlude

Norbert Wiener and cybernetics, which was peak fascination with feedback in the 1950s.

- Artificial intelligence

"But humans and all other biological intelligences are cybernetic minimizers, not reward maximizers. We track multiple error signals and try to reduce them to zero. If all our errors are at zero — if you’re on the beach in Tahiti, a drink in your hand, air and water both the perfect temperature — we are mostly comfortable to lounge around on our chaise. As a result, it’s not actually clear if it’s possible to build a maximizing intelligence. The only intelligences that exist are minimizing. There has never been a truly intelligent reward maximizer (if there had, we would likely all be dead), so there is no proof of concept. The main reason to suspect AI is possible is that natural intelligence already exists — us."

Hm. That's worth some thought. An argument against it is that there are clearly people driven by the desire for "more", with no visible upper bound.

- Animal welfare

Finally, "consciousness". It speaks well of the author that it took this long to bring that up. It's brought up in the context of whether animals are conscious, and, if so, which animals.

- Dynamic methods

Failure modes of multiple feedback systems, plus some pop psychology.

- Other methods

Much like the previous chapter.

- Help wanted

"If the proposal is more or less right, then this is the start of a scientific revolution."

Not seeing the revolution here. Most of the ideas here have been seen before. Did I miss something?

Feedback is important, but the author doesn't seem to have done enough of it to have a good understanding.

If you want an intuitive grasp of feedback, play with some op amps set up as an analog computer and watch the output on a scope. Or find a simulator. If The Analog Thing came with a scope (which, at its price point, it should), that would be ideal. Watch control loops with feedback and delay stabilize, oscillate, or limit. There are browser-based tools which do this, but they assume basic electrical engineering knowledge.
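Short of op amps, the stabilize-vs-oscillate behavior also shows up in a few lines of simulation. A minimal discrete-time sketch (the function and numbers are mine, purely illustrative): proportional control where the controller only sees a delayed measurement.

```python
# Proportional feedback with measurement delay: modest gain settles,
# too much gain with the same delay oscillates and diverges.

def simulate(gain, delay, steps=200, setpoint=1.0):
    """Drive x toward setpoint; the controller sees x `delay` steps late."""
    xs = [0.0]
    for _ in range(steps):
        observed = xs[-delay - 1] if len(xs) > delay else 0.0
        xs.append(xs[-1] + gain * (setpoint - observed))
    return xs

stable = simulate(gain=0.2, delay=3)    # settles near the setpoint
unstable = simulate(gain=1.5, delay=3)  # swings grow without bound
```

Sweeping the gain with a fixed delay shows the transition from smooth settling, to ringing, to runaway oscillation, which is the same phenomenon the op-amp setup makes visible on a scope.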

[1] https://slimemoldtimemold.com/2025/02/06/the-mind-in-the-whe...

pointlessone 1 day ago

> An argument against it is that there are clearly people driven by the desire for "more", with no visible upper bound.

At the moment it's unknown whether this is a normal function of the system or a result of miscalibration/malfunction (assuming the theory is correct). The existence of such people is only evidence of the system's capacity, not a description of its nominal function.

amatic 1 day ago

> Not seeing the revolution here. Most of the ideas here have been seen before. Did I miss something?

The reviewer is a psychologist with some interesting opinions and criticisms of psychology. My impression is that applying control theory to the study of human behavior is supposed to be the revolutionary thing, for psychology.

Animats 18 hours ago

This is not new ground. See Cybernetics: Or Control and Communication in the Animal and the Machine (1948) by Norbert Wiener.[1] Wiener also wrote a popular version, "The Human Use of Human Beings".[2] There's a whole history of cybernetics as a field; this Wikipedia article has a good summary.[3] The beginnings of neural network work came from cybernetics. As with much of philosophy, areas in which someone got results split off to become fields of their own.

[1] https://en.wikipedia.org/wiki/Cybernetics:_Or_Control_and_Co...

[2] https://en.wikipedia.org/wiki/The_Human_Use_of_Human_Beings

[3] https://en.wikipedia.org/wiki/Cybernetics

amatic 17 hours ago

> This is not new ground. See Cybernetics

Control theory and cybernetics were supposed to transform psychology in a much more dramatic and all-encompassing way, as argued by W.T. Powers, for example.[1] In modern psychology, the concept of negative feedback control is treated as a metaphor, a vague connection between machines and living things (with the possible exception of the field of motor control). If psychology took the concept seriously, most research methods in the field would need to change: less null-hypothesis testing, more experiments that apply disturbances to selected variables to see whether a participant controls them. That is the meaning I'm getting from the call to revolution.

[1] https://www.iapct.org/wp-content/uploads/2022/12/Powers1978....

Animats 15 hours ago

Ah. The linked paper goes into that in more detail.

This was a hot idea right after WWII because servomechanisms were finally working. In movies of early-WWII naval gunnery, you see people turning cranks to get two arrows on a dial to match. By late WWII, that had become automatic, and anti-aircraft guns were hitting the target more of the time. Early-war air gunner training.[1] Late-war air gunner training - the computer does the hard part.[2] Never before had that much mechanized feedback smarts been applied to tough real-world problems.

This sort of thing generated early AI enthusiasm. Machines can think! AGI Real Soon Now! Hence the "cybernetics" movement. That lasted about a decade. They needed another nine orders of magnitude of compute power. Psychology picked up on this concept, but didn't do much with it.

Looks like it's coming around again.

[1] https://www.youtube.com/watch?v=DWYqu1Il9Ps

[2] https://www.youtube.com/watch?v=mJExsIp4yO8

MarkusQ 18 hours ago

<snark>It has been described as such every time it's tried.</snark>

https://en.wikipedia.org/wiki/Psycho-Cybernetics

MarkusQ 18 hours ago

> An argument against it is that there are clearly people driven by the desire for "more", with no visible upper bound.

The real problem is that the distinction is meaningless. These people could just be described as "minimizing the risk of running out of money" (or paperclips, or whatever). Any "maximizing system" is isomorphic with a "minimizing system" using the inverse of the metric.
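Concretely, a toy sketch of that isomorphism (the example is mine, not from the thread): a maximizer of f and a minimizer of -f pick the same action.

```python
# Any "maximizing system" is a "minimizing system" under the inverse metric.

def argmax(options, f):
    return max(options, key=f)

def argmin(options, g):
    return min(options, key=g)

options = range(-5, 6)
reward = lambda x: -(x - 2) ** 2  # "reward" peaks at x = 2

best_max = argmax(options, reward)                # maximize reward
best_min = argmin(options, lambda x: -reward(x))  # minimize the inverse metric
# best_max == best_min == 2
```

So whether a given agent "is" a maximizer or a minimizer depends entirely on which sign convention you put on the metric, not on its behavior.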