Vetch 5 hours ago

I think it's exploring in-context. Bringing up related ideas without getting confused by them is pivotal if these models are eventually going to contribute as productive reasoners. These traces will be immediately helpful in a real-world iterative loop where you don't already know the answers or how to phrase the questions correctly.

int_19h 3 hours ago

This model seems to be really good at this. It's decently smart for an LM of this size, but more importantly, it can reliably catch its own bullshit and course-correct. And it keeps hammering at the problem until it actually has a working solution, even if that takes many tries. It's like a not particularly bright but very persistent intern. Which, honestly, is probably what we want these models to be.