We've been through this song and dance before. AI researchers make legitimately impressive breakthroughs on specific tasks, people extrapolate linear growth, and the air comes out of the balloon after a couple of years when it turns out we couldn't just throw progressively larger models at the problem to emulate human cognition.
I'm surprised that tech workers, who should be the most skeptical about this kind of stuff, end up being the most breathlessly hyperbolic. Everyone is so eager to get rich off the trend that they discard any skepticism.
This is confusing. We've never had a ChatGPT-like innovation to compare to. Yes, there have been AI hype cycles for decades, but the difference is that the current cycle has already produced permanent, invaluable, society-changing tools, combined with hundreds of billions of dollars being thrown at it, a level of investment we've never seen before. Unless you're on the bleeding edge of AI research yourself, or one of the people investing billions of dollars, it is really unclear to me how anyone can have confidence about where AI is not going.
Because the hype will always outdistance the utility, on average.
Yes, you'll get peaks where innovation takes everyone by surprise.
Then the salesbots will pivot, catch up, and ingest the innovation into the pitch machine as per usual.
So yes, there is genuine innovation and surprise. That's not what is being discussed. It's the hype that inevitably overwhelms the innovation, and also inevitably pollutes the pool with increasing noise. That's just human nature: trying to make a quick buck from the new hotness.
I don't agree with this.
There's a big difference between something that benefits productivity versus something that benefits humanity.
I think a good test of whether it has genuinely changed society is what would happen if all gen AI were to disappear overnight. I would argue that nothing would really fundamentally change.
Contrast that with the sudden disappearance of the internet, or the combustion engine.
Work doesn't benefit humanity; work is the chains that keep us living the same day over and over until we die.
Your idea of benefit to humanity clearly doesn't involve the end of work, mine does.
AI can end work for most of us, but that has to be what we want. We can't keep limiting it for stupid reasons and then expect it to have all the answers as if it weren't limited; that's silly.
If AI disappeared tonight, so too would the future where nobody works in a call center, or doing data entry, or making button graphics to a client's exact specifications for a website nobody will ever see.
This is the Old World we live in right now - I don't want it to stay.
work is what gives us purpose and meaning. or do you want to live in WALL-E world?
there is no long-term happiness without struggle and mastery.
it sounds like what you want is an end to menial labor that is treated poorly. why confuse that with work?
> I would argue that nothing would really fundamentally change.
I argue that there would be a huge collective sigh of relief from a large number of people. Not everybody, maybe not even a majority, but a large number nonetheless.
So I think it has changed society -- but perhaps not for the better overall.
It will take time, though. If the internet had completely disappeared in the mid-90s, nothing would have fundamentally changed.
Wow. Just the fact that the Internet existed at the library was enough for me to know, as a child, that I could know anything. Once we got the Internet in '95 and a Win 95 PC, everything changed for me. I was completely at home in the online world by Win 98.
My entire worldview and daily life habits would have changed.
You must be older than me.
I don't mean it would have had no impact; it's just that we hadn't reorganized society around it yet.
Two things can both be true. I keep arguing both sides because:
1. Unless you're aware of near-term limits, you think AI is going to the stars next year.
2. Architectures change. The only thing that doesn't change is that we generally push on: temporary limits are usually overcome, and there's a lot riding on this. It's not a smart move to bet against progress over the medium term. This is also where the real benefits and risks lie.
Is AI in general more like going to space, or string theory? One is hard but doable. The other is a tar pit for money and talent. We are all currently placing our bets.
point 2 is the thing that i think is most important to point out:
"architectures change"
sure, that's a fact. let me apply this to other fields:
"there could be a battery breakthrough that gives electric cars a 2,000 mile range." "researchers could discover a new way to build nanorobots that attacks cancer directly and effectively cures all versions of it." "we could invent a new sort of aviation engine that is 1,000x more fuel efficient than the current generation."
i mean, yeah, sure. i guess.
the current hype is built on LLMs, and being charitable, "LLMs built with current architectures." there are other things in the works, but most of the current generation of AI hype rests on a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try to corral them into behaving as we hope. it is much more realistic to expect that we are in the period of diminishing returns on these approaches than it is to believe we'll continue to see earth-shattering growth. nothing has appeared that had the initial "wow" factor of the early versions of suno, or gpt, or dall-e, or sora, or whatever else.
this is clearly and plainly a tech bubble. it's so transparently one that it's hard to understand how folks aren't seeing it. all these tools have been in the mainstream for a pretty substantial period of time (relatively speaking), and the honest truth is they're just not moving the needle in many industries. their most frequent practical application has been summarization, editing, and rewriting, which is a neat little parlor trick - but all the same, it's indicative of the fact that they largely model language, so that's primarily what they're good at.
you can bet on something entirely new being discovered... but what? there just isn't anything in the real world inching closer to the general AI we keep hearing about. i'm sure folks are cooking on things, but that doesn't mean they're near production-ready. saying "this isn't a bubble because one day someone might invent something that's actually good" is kind of giving away the game - the current generation isn't that good, and we can't point to the thing that's going to overtake it.
> but most of the current generation of AI hype rests on a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try to corral them into behaving as we hope. it is much more realistic to expect that we are in the period of diminishing returns on these approaches than it is to believe we'll continue to see earth-shattering growth.
100% agree, but I think those who disagree with that are failing on point 1. I absolutely think we'll need something different, but I'm also sure that there's a solid chance we get there, with a lot of bracketing around "eventually".
When something has been done once before, we have a directional map and we can often copy fairly quickly. See OpenAI to Claude.
We know animals are smarter than LLMs in the important, day-to-day learning ways, so we have a directional compass. We know the fundamentals are relatively simple, because randomness found them before we did. We know it's possible; we're just figuring out whether it's possible with anything like the hardware we have now.
We don’t know if a battery like that is possible - there are no comparisons to make, no steer that says “it’s there, keep looking”.
This is also the time in history with the most compute capacity coming online and the most people trying to solve it. Superpowers, hyperscalers, all the universities, and many people across areas as diverse as neuro and psych who wouldn't have looked at the domain 5 years ago are now very motivated to be relevant, to study or build in related areas. We've tasted success. So my opinion is based on us having made progress, the emerging understanding of what it means for individuals and countries in terms of the competitive landscape, and the desire to be a part of shaping that future rather than having it happen to us. ~Everyone is very motivated.
Betting against that just seems like a risky choice. Honestly, what would you bet, and over what timeframe? How certain would you say you are of your position? I'm not challenging you; I just think it's a good frame for grounding opinions. In the end, we really are making those bets.
My bands are pretty wide. I can make a case for 5 years to AGI, or 100. Off the top of my head, I'd put a small amount on 5 years, all my money on within 100, and 50% or more on within 20 to 30.
In 100 years the air may not be breathable, much less have enough CO2-carrying capacity for silicon-based AGI.
So I’d lose the bet? I’m not sure I’d be any worse off than those winning it!
The bet itself would make Earth less hospitable on a long shot. It's like shredding a winning lottery ticket in the hopes the shreds will win an even bigger lottery someday in the future.
There is another level to AI and how we fundamentally structure these systems that nobody is pursuing yet, to my knowledge. That next round of innovation is fundamentally different from the innovation that is the focus now. Nobody is looking to the next stage because this one hasn't achieved what we expected - because it won't.
I suspect that future iterations of AI will do much better, though.
> There is another level to AI and how we fundamentally structure them that nobody is doing yet to my knowledge
And that is...?
Another reply, different thought. I'd be keen to see what, e.g., Carmack is up to. Someone outside of the usual suspects. There is a fashion to everything, and right now LLMs are a distraction on an S-curve. The map is not the territory, and language is a map.
One problem is that people assume the end goal is to create a human-cognition-capable AI. I think it's pretty obvious by this point that that's not going to happen. But there is no need for that at all to still cause a huge disruption. Let's say that for most current workers in roles that benefit from AI (copilot, writing, throwaway clipart, repetitive tasks, summarizing, looking up stuff, etc.), it leads not even to job loss but to fewer future jobs being created - what does that mean for the incoming juniors? What does that mean for the people looking for that kind of work? It's not obvious at all how big of a problem that will create.
> human-cognition-capable AI. I think it's pretty obvious by this point that that's not going to happen
It's obvious to some people but that's not what many investors and company operators are saying. I think the prevailing message in the valley is "AGI is going to happen" for different values of when, not if. So I think you'd be forgiven for taking them at face value.
Just like nuclear fusion, right? "When" will always be some time after the next fundraising round.
Right, but the breathless technobabble about the future of our AI-driven world crowds out actual consideration of these important topics.
It's like con artists and management consultants: they are the most susceptible because they drink the Kool-Aid.
I think the mistake is that the media extrapolates linear growth, but in practice it is a wobbly path. And this wobbly path allows anyone to create whatever narrative they want.
It reminds me of seeing headlines last week that NVDA was down because investors were losing faith after the latest earnings. Then you look at the graph, and NVDA is only about 10% off its all-time high and still moving in and out of the top spot as the most valuable company in the world.
Advancement is never linear. But I believe AI trends will continue up and to the right, and even in 20 years, when AI can do remarkably advanced things that we can barely comprehend, there will be internet commentary about how it's all just hype.
You articulated my view perfectly. I just don't get the buy-in from people who should know better than to trust VC-funded talking heads.
> people extrapolate linear growth
You mean exponential! No one gets out of bed for linear these days.
>> I'm surprised that tech workers ... end up being the most breathlessly hyperbolic.
We're not.
There's a reason why so many of the people on the crypto grift in 2020-2022 have jumped to the AI grift. It's the same logic of "revolution is just around the corner," with the added mix of AGI millenarianism, which hits a lot of nerds' soft spots.