Two things can both be true. I keep arguing both sides because:
1. Unless you’re aware of near-term limits, you think AI is going to the stars next year.
2. Architectures change. The only thing that doesn’t change is that we generally push on; temporary limits are usually overcome, and there’s a lot riding on this. It’s not a smart move to bet against progress over the medium term. This is also where the real benefits and risks lie.
Is AI in general more like going to space, or string theory? One is hard but doable; the other is a tar pit for money and talent. We are all currently placing our bets.
point 2 is the thing that i think is most important to point out:
"architectures change"
sure, that's a fact. let me apply this to other fields:
"there could be a battery breakthrough that gives electric cars a 2,000 mile range." "researchers could discover a new way to build nanorobots that attacks cancer directly and effectively cures all versions of it." "we could invent a new sort of aviation engine that is 1,000x more fuel efficient than the current generation."
i mean, yeah, sure. i guess.
the current hype is built on LLMs, and being charitable, "LLMs built with current architectures." there are other things in the works, but most of the current generation of AI hype rests on a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try to corral them into behaving as we hope. it is much more realistic to expect that we are in a period of diminishing returns on these approaches than to believe we'll continue to see earth-shattering growth. nothing has appeared with the initial "wow" factor of the early versions of suno, or gpt, or dall-e, or sora, or whatever else.
this is clearly and plainly a tech bubble. it's so transparently one, it's hard to understand how folks aren't seeing it. all these tools have been in the mainstream for a pretty substantial period of time (relatively), and the honest truth is they're just not moving the needle in many industries. their most frequent practical application has been summarization, editing, and rewriting, which is a neat little parlor trick - but all the same, it's indicative of the fact that they largely model language, so that's primarily what they're good at.
you can bet on something entirely new being discovered... but what? there just isn't anything inching closer to that general AI hype we're all hearing about that exists in the real world. i'm sure folks are cooking on things, but that doesn't mean they're near production-ready. saying "this isn't a bubble because one day someone might invent something that's actually good" is kind of giving away the game - the current generation isn't that good, and we can't point to the thing that's going to overtake it.
> but most of the current generation of AI hype rests on a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try to corral them into behaving as we hope. it is much more realistic to expect that we are in a period of diminishing returns on these approaches than to believe we'll continue to see earth-shattering growth.
100% agree, but I think those who disagree with that are failing on point 1. I absolutely think we'll need something different, but I'm also sure there's a solid chance we get there, with a lot of bracketing around "eventually".
When something has been done once before, we have a directional map and we can often copy fairly quickly. See OpenAI to Claude.
We know animals are smarter than LLMs in the important, day-to-day learning ways, so we have a directional compass. We know the fundamentals are relatively simple, because randomness found them before we did. We know it’s possible; we’re just figuring out whether it’s possible with anything like the hardware we have now.
We don’t know if a battery like that is possible - there are no comparisons to make, no steer that says “it’s there, keep looking”.
This is also the time in history with the most compute capacity coming online and the most people trying to solve it. Superpowers, hyperscalers, all the universities, and many people across areas as diverse as neuroscience and psychology who wouldn’t have looked at the domain 5 years ago are now very motivated to be relevant, to study or build in related areas. We’ve tasted success. So my opinion is based on us having made progress, the emerging understanding of what it means for individuals and countries in terms of the competitive landscape, and the desire to be a part of shaping that future rather than having it happen to us. ~Everyone is very motivated.
Betting against that just seems like a risky choice. Honestly, what would you bet, over what timeframe? How strongly would you say you’re certain of your position? I’m not challenging you, I just think it’s a good frame for grounding opinions. In the end, we really are making those bets.
My bands are pretty wide. I can make a case for 5 years to AGI, or 100 years. Off the top of my head, without thinking: I’d put a small amount on 5 years, all my money on within 100, and 50% or more within 20-30.
In 100 years the air may not be breathable, much less have enough CO2-carrying capacity for silicon-based AGI.
So I’d lose the bet? I’m not sure I’d be any worse off than those winning it!
The bet itself would make Earth less hospitable on a long shot. It's like shredding a winning lottery ticket in the hopes the shreds will win an even bigger lottery someday in the future.
There is another level to AI and how we fundamentally structure them that nobody is doing yet, to my knowledge. This next round of innovation is fundamentally different from the innovation that is the focus now - nobody is looking to the next stage because this one hasn't achieved what we expected - because it won't.
I suspect that future iterations of AI will do much better, though.
> There is another level to AI and how we fundamentally structure them that nobody is doing yet to my knowledge
And that is...?
Another reply, different thought. I’d be keen to see what, e.g., Carmack is up to. Someone outside of the usual suspects. There is a fashion to everything, and right now LLMs are a distraction on an S-curve. The map is not the territory, and language is a map.