You are describing the current state of AI as if it were a stable point.
AI today is far ahead of where it was two years ago, and for many years before that breakout, deep learning models broke benchmark after benchmark.
There is no indication of any slowdown. Quite the reverse: we are seeing dramatic acceleration of an already fast-moving field.
Both research effort and resources are pouring into major improvements in multi-modal learning and into learning by means other than human data, such as reinforcement learning, competitive learning, and interacting with the systems models need to understand, both in simulated environments and directly.
> You are describing the current state of AI as if it were a stable point.
No I’m not; I’m just not assuming the S-curve doesn’t exist. There’s no guarantee that research will yield the orders-of-magnitude improvement needed for AI to understand humanity better than humans do within a 5-to-15-year timeframe. There’s no guarantee that compute will keep growing in volume and falling in price, and there are a few geopolitical reasons it might become scarce and prohibitively expensive for some time. There’s no reason to assume capital will remain as available for advancing both AI techniques and compute resources should there be any sign that the investments might not eventually pay off. There’s also no reason to assume the global regulatory environment will stay amenable to rapid AI development. Maybe the industry threads all these needles, but there’s good reason to predict it won’t.
They are not using AI correctly to create such models. I'm not sure I want AGI right away, or even at all, so I'm keeping my epiphany close for now, but in the current field of AI nothing will come of this, because it's not the right way.
As soon as that incredibly obvious, far too obvious realization is had, AI will make tremendous leaps essentially overnight. Until then, these are just machine-like software: the best we've ever made, but nothing more than that.