> Well, first off there is no such thing as Claude as there are multiple models that you can select from.
Apologies, I assumed people would infer that I am referring to 3.5 sonnet.
> In my opinion the Claude 3.5 Sonnet model is spectacular.
Mine as well, until this morning.
> There was a small degradation in performance … 2 nights ago. It didn’t affect the quality of the responses I got…
Same here, but as of this morning the performance is fine while the quality seems to have gotten worse.
> This topic is discussed in the recent Lex Fridman interview with the CEO of Anthropic, where he very clearly walks through how these claims of it being dumber are not true
Could you elaborate on what was said?
I found the interview [1].
TL;DR: they don’t change the weights, but they sometimes run A/B tests and modify the system prompt. The underlying model is very sensitive to such changes; even a small one can have broad impacts.
[1]: https://lexfridman.com/dario-amodei-transcript#chapter8_crit...
I hope you get it figured out!
One thing that has helped me when I can’t quickly get to the expected result is using the Anthropic prompt generator in the dev console.
This isn’t a critique of your prompt; it’s likely solid since you use the system frequently. For troubleshooting, though, the prompt generator can be useful because it produces very long, specific prompts. You can compare the results from your prompt against the generated one to see where the differences are, as in the sketch below.
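If it helps, here is a rough sketch of one way to do that comparison with the Anthropic Python SDK: run the same question under your own system prompt and under the one the console generated, then eyeball the outputs. The prompts, question, and model ID here are placeholders, not anything from the original discussion.

```python
import anthropic

# Placeholder prompts: your current system prompt, and one pasted in
# from the Anthropic console's prompt generator.
MY_PROMPT = "You are a concise technical assistant..."
GENERATED_PROMPT = "<paste the console-generated prompt here>"

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def ask(system_prompt: str, question: str) -> str:
    """Send the same question under a given system prompt and return the reply text."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model snapshot; use whichever you're on
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text


question = "Summarize the trade-offs between REST and gRPC."
print("--- my prompt ---")
print(ask(MY_PROMPT, question))
print("--- generated prompt ---")
print(ask(GENERATED_PROMPT, question))
```

Running the same question under both prompts a few times makes it easier to tell whether the difference you’re seeing comes from your prompt or from the model/serving side.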