swatcoder 2 days ago

> The "small" benefits you list are in fact unprecedented and periodically improving (in my experience).

It's only the mechanism that's unprecedented, cementing these new approaches as a state-of-the-art evolution for code completion, automatic summarizing/transcription/translation, image analysis, music generation, etc -- all of which were already commercialized and making regular forward strides for a long while. You may not have been aware of the state of those things before, but that doesn't make them unprecedented.

We actually haven't seen many radical or unprecedented achievements at commercial scale at all yet, with reliability proving to be the biggest impediment to commercializing anything that can't rely on immediate human supervision.

Even if we get stuck here, where human engagement remains needed, there's a lot of fun engineering to do and a number of industries we can expect to see reconfigured. So it's not nothing. But there's really no evidence towards revolution or catastrophe just yet.

Nevermark 2 days ago

> It's only the mechanism that's unprecedented

I think this is correct and also the point.

Neural networks, deep learning models, have been reliably improving year to year for a very long time. Even in the 90's on CPUs, the combination of CPU improvements and training algorithm improvements translated into a noticeable upward arc.

However, they were not yet suitable for anything but small boutique problems. The computing power, speed, RAM, etc. just wasn't there until GPU computing took off.

Since then, compounding GPU power and relatively simple changes in architecture have let deep learning rapidly become relevant in ... well, every field where data is relevant. And progress has not just been reliable, but has noticeably accelerated every few years over the last two decades.

So while you are right that today's AI varies from interesting to moderately helpful, and is not Earth-shattering in many areas, that is what happens when a new wave of technology crosses the threshold of usability.

Past example: "Cars are really not much better than horses, and very finicky." But the cars were on a completely different arc of improvement.

The limitations of current AI models aside, their generality of expertise (flawed as it might be) is unprecedented. Multi-modal systems, longer context windows, and systems for mitigating glitchy behavior are a given, and will make big quality differences. Those are obvious requirements with relatively obvious means.

We are going to get more than that going forward, just as these models have often been surprisingly useful (at much lower levels and in narrower contexts) in both the distant and recent past.

This train has been accelerating for over three and a half decades. It isn't going to suddenly slow down because it just passed "Go". The opposite.