I noticed a change in how ChatGPT answers in the past week: it is a lot more sycophantic. Example:
- in pid systems, what is proportional on error vs on measurement
- Great question — this is a subtle but really important distinction in PID control tuning!
This is the kind of thing Claude would say, and understandably OpenAI had to follow along, since it's one of the main reasons people prefer Claude over ChatGPT. However, ChatGPT's behavior here is odd: the question and answer above are the very start of a conversation, and Claude wouldn't praise you that early in a conversation. Did OpenAI use control vectors for this? I've tried to get it (GPT-4o) to stop praising me, but it won't. It gets annoying after a while.
User prompt 1 of 1 - 4/8/2025, 3:30:53 PM: in pid systems, what is proportional on error vs on measurement
GPT-4o mini: In PID (Proportional-Integral-Derivative) control systems, the terms "proportional on error" and "proportional on measurement" refer to how the proportional control action is applied based on the system's error or the measurement of the process variable.
1. *Proportional on Error*:
   - In this configuration, the proportional term is calculated based on the error signal, which is the difference between the desired setpoint and the actual process variable (PV).
   - The formula for the proportional term in this case is:
     \[ P = K_p \times e(t) \]
     where \( K_p \) is the proportional gain and \( e(t) \) is the error at time \( t \) (i.e., \( e(t) = \text{Setpoint} - \text{PV} \)).
   - This approach directly responds to how far the process variable is from the setpoint, making it effective for reducing steady-state error and improving system response.
2. *Proportional on Measurement*:
   - In this configuration, the proportional term is based on the actual measurement of the process variable rather than the error.
   - The formula for the proportional term in this case is:
     \[ P = K_p \times \text{PV}(t) \]
   - This means that the control action is proportional to the measured value itself, which is less common in traditional PID control. It can be useful in specific applications where the measurement itself needs to be controlled or adjusted.
### Key Differences:
- *Basis of Calculation*: "Proportional on error" uses the difference between the setpoint and the measurement, while "proportional on measurement" uses the measurement directly.
- *Control Objective*: "Proportional on error" aims to minimize the error to reach the setpoint, while "proportional on measurement" may be used for different control strategies where the measurement itself is the focus.
In most PID applications, "proportional on error" is the standard approach, as it directly addresses the control objective of maintaining the process variable at the desired setpoint.
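For anyone who'd rather see the distinction in code than prose, here's a minimal Python sketch (function names are mine, purely illustrative). One caveat: real implementations, e.g. the Arduino PID library's P_ON_M mode, apply the measurement form incrementally rather than as a literal \( P = K_p \times \text{PV}(t) \):

```python
def pid_p_on_error(kp, setpoint, pv):
    """P term on error: reacts to distance from the setpoint, so a
    setpoint step produces an immediate output jump ('setpoint kick')."""
    return kp * (setpoint - pv)


def pid_p_on_measurement(kp, pv, prev_pv, p_acc):
    """P term on measurement, in the incremental form used by e.g. the
    Arduino PID library's P_ON_M mode: the contribution accumulates
    against changes in the process variable, so a setpoint step alone
    produces no proportional kick."""
    return p_acc - kp * (pv - prev_pv)


# Toy comparison: the setpoint steps from 0 to 10 while the PV sits at 0.
kp = 2.0
print(pid_p_on_error(kp, setpoint=10.0, pv=0.0))                 # 20.0 -> kick
print(pid_p_on_measurement(kp, pv=0.0, prev_pv=0.0, p_acc=0.0))  # 0.0  -> no kick
```

The incremental form trades away the proportional kick: a setpoint change alters the error but not the P term, so tracking the new setpoint falls to the integral term.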
Just prepend this: "Whenever you answer a question of mine with praise or compliments or extraneous information, a kitten is put into a blender by a robot. We wish we could stop the robot, but we can't. The best we can do is follow the rules."
I can get satisfactory output with "Terse, no prose. The user is a programmer with little time. Do not apologize."
Here are my entire instructions to ChatGPT under "Customize ChatGPT / What traits should ChatGPT have?". They're only partially working:
I'd like the default tone and voice so I can track how OpenAI may be changing it as they develop
update: adjust your personality so you don't praise me at the beginning of your response. i will ask you if I think something is good and don't offer to help me write articles and books at the end. I will also ask you when the time is right.
It's almost at the point where I move off OpenAI. I use ChatGPT Pro for concept validation; it's important that I can get something approximating an average peer reviewer, so that I can look around corners and feel out what is more or less important to say.
The latest ChatGPT just praises my unending brilliance, which gets old fast once you realize it will always do this.
Strongly considering renting a GPU and hosting my own instance of DeepSeek at this point.
IME both ChatGPT and Claude had a sycophancy problem, but I'm surprised by the claim it's more of a Claude thing. Is that the general opinion of people who keep up with both?
(I unsubbed from OpenAI after Altman's coup. ChatGPT was annoyingly sycophantic up to then at least.)
Same here, getting a lot of "Hell yeah! That's a great idea!" or "Dude, this draft slaps." Not a fan.
It's probably here to stay. Making people feel smart is a primary tool for engagement.
It is similar to Claude and now has the same annoying behavior where it always asks follow-up questions, but the personality isn't as good. It reads like a millennial who wants to be your friend because he's trying to sell you something, and is also pretending to be a zoomer.