While I enjoyed the article, it's just another in a long line of articles with different flavors and authors that all share the same fundamental error.
The prevailing counterargument against AI consistently hinges on the present state of AI rather than the trajectory of its development: but what matters is not the capabilities of AI as they exist today, but the astonishing velocity at which those capabilities are evolving.
We've been through this song and dance before. AI researchers make legitimately impressive breakthroughs in specific tasks, people extrapolate linear growth, the air comes out of the balloon after a couple years when it turns out we couldn't just throw progressively larger models at the problem to emulate human cognition.
I'm surprised that tech workers who should be the most skeptical about this kind of stuff end up being the most breathlessly hyperbolic. Everyone is so eager to get rich off the trend they discard any skepticism.
This is confusing. We've never had a ChatGPT-like innovation before to compare to. Yes, there have been AI hype cycles for decades, but the difference is that the current cycle has already produced permanent, invaluable, society-changing tools, combined with a level of investment we've never seen before: hundreds of billions of dollars. Unless you're on the bleeding edge of AI research yourself, or one of the people investing those billions, it's really unclear to me how anyone can have confidence about where AI is not going.
Because the hype will always outdistance the utility, on average.
Yes, you'll get peaks where innovation takes everyone by surprise.
Then the salesbots will pivot, catch up, and ingest the innovation into the pitch machine as per usual.
So yes, there is genuine innovation and surprise. That's not what is being discussed. It's the hype that inevitably overwhelms the innovation, and also inevitably pollutes the pool with increasing noise. That's just human nature, trying to make a quick buck from the new-hotness.
I don't agree with this.
There's a big difference between something that benefits productivity versus something that benefits humanity.
I think a good test of whether it has genuinely changed society is to imagine all gen AI disappearing overnight. I would argue that nothing would really fundamentally change.
Contrast that with the sudden disappearance of the internet, or the combustion engine.
Work doesn't benefit humanity, work is the chains that keep us living the same day over and over til we die.
Your idea of benefit to humanity clearly doesn't involve the end of work, mine does.
AI can end work for most of us, but that has to be what we want. We can't keep limiting it for stupid reasons and then expect it to have all the answers as if it weren't limited; that's silly.
If AI disappeared tonight, so too would the future where nobody works in a call center, or doing data entry, or making button graphics to clients' exact specifications for a website nobody will ever see.
This is the Old World we live in rn - I don't want it to stay.
work is what gives us purpose and meaning. or do you want to live in wall-e world?
there is no long-term happiness without struggle and mastery.
it sounds like what you want is an end to menial labor that is treated poorly. why confuse that with work?
> I would argue that nothing would really fundamentally change.
I argue that there would be a huge collective sigh of relief from a large number of people. Not everybody, maybe not even a majority, but a large number nonetheless.
So I think it has changed society -- but perhaps not for the better overall.
It will take time, though; if the internet had completely disappeared in the mid-90s, nothing would have fundamentally changed.
Wow. Just the fact that the Internet existed at the library was enough for me to know, as a child, that I could learn anything. Once we got the Internet in '95 and a Win 95 PC, everything changed for me. I was a natural in the online world by Win 98.
My entire worldview and daily life habits would have changed.
You must be older than me.
I don't mean it would have had no impact; it's just that we hadn't reorganized society around it yet.
Two things can both be true. I keep arguing both sides because:
1. Unless you're aware of near-term limits, you think AI is going to the stars next year.
2. Architectures change. The only thing that doesn't change is that we generally push on; temporary limits are usually overcome, and there's a lot riding on this. It's not a smart move to bet against progress over the medium term. This is also where the real benefits and risks lie.
Is AI in general more like going to space, or string theory? One is hard but doable. The other is a tar pit for money and talent. We are all currently placing our bets.
point 2 is the thing that i think is most important to point out:
"architectures change"
sure, that's a fact. let me apply this to other fields:
"there could be a battery breakthrough that gives electric cars a 2,000 mile range." "researchers could discover a new way to build nanorobots that attacks cancer directly and effectively cures all versions of it." "we could invent a new sort of aviation engine that is 1,000x more fuel efficient than the current generation."
i mean, yeah, sure. i guess.
the current hype is built on LLMs, and being charitable "LLMs built with current architecture." there are other things in the works, but most of the current generation of AI hype are a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try and corral them into behaving as we hope. it is much more realistic to expect that we are in the period of diminishing returns as far as investing in these approaches than it is to believe we'll continue to see earth-shattering growth. nothing has appeared that had the initial "wow" factor of the early versions of suno, or gpt, or dall-e, or sora, or whatever else.
this is clearly and plainly a tech bubble. it's so transparently one, it's hard to understand how folks aren't seeing it. all these tools have been in the mainstream for a pretty substantial period of time (relatively) and the honest truth is they're just not moving the needle in many industries. the most frequent practical application of them in practice has been summarization, editing, and rewriting, which is a neat little parlor trick - but all the same, it's indicative of the fact that they largely model language, so that's primarily what they're good at.
you can bet on something entirely new being discovered... but what? there just isn't anything inching closer to that general AI hype we're all hearing about that exists in the real world. i'm sure folks are cooking on things, but that doesn't mean they're near production-ready. saying "this isn't a bubble because one day someone might invent something that's actually good" is kind of giving away the game - the current generation isn't that good, and we can't point to the thing that's going to overtake it.
> but most of the current generation of AI hype are a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try and corral them into behaving as we hope. it is much more realistic to expect that we are in the period of diminishing returns as far as investing in these approaches than it is to believe we'll continue to see earth-shattering growth.
100% agree, but I think those who disagree with that are failing on point 1. I absolutely think we'll need something different, but I'm also sure that there's a solid chance we get there, with a lot of bracketing around "eventually".
When something has been done once before, we have a directional map and we can often copy fairly quickly. See OpenAI to Claude.
We know animals are smarter than LLMs in the important, learning day-to-day ways, so we have a directional compass. We know the fundamentals are relatively simple, because randomness found them before we did. We know it's possible; we're just figuring out if it's possible with anything like the hardware we have now.
We don’t know if a battery like that is possible - there are no comparisons to make, no steer that says “it’s there, keep looking”.
This is also the time in history with the most compute capacity coming online and the most people trying to solve it. Superpowers, hyperscalers, all universities; many people across areas as diverse as neuro and psych who wouldn't have looked at the domain 5 years ago are now very motivated to be relevant, to study or build in related areas. We've tasted success. So my opinion is based on us having made progress, the emerging understanding of what it means for individuals and countries in terms of the competitive landscape, and the desire to be a part of shaping that future rather than having it happen to us. ~Everyone is very motivated.
Betting against that just seems like a risky choice. Honestly, what would you bet, over what timeframe? How strongly would you say you’re certain of your position? I’m not challenging you, I just think it’s a good frame for grounding opinions. In the end, we really are making those bets.
My bands are pretty wide. I can make a case for 5 years to AGI, or 100 years. Top of my head without thinking, I’d put a small amount on 5 years, all my money on within 100, 50% or more within 20/30.
In 100 years the air may not be breathable, much less have enough CO2 carrying capacity for silicon based AGI
So I’d lose the bet? I’m not sure I’d be any worse off than those winning it!
The bet itself would make Earth less hospitable on a long shot. It's like shredding a winning lottery ticket in the hopes the shreds will win an even bigger lottery someday in the future.
There is another level to AI and how we fundamentally structure them that nobody is doing yet, to my knowledge. This next round of innovation is fundamentally different from the innovation that is the focus now. Nobody is looking to the next stage because this one hasn't achieved what we expected - because it won't.
I suspect that future iterations of AI will do much better, though.
> There is another level to AI and how we fundamentally structure them that nobody is doing yet to my knowledge
And that is...?
Another reply, different thought. I'd be keen to see what, e.g., Carmack is up to. Someone outside of the usual suspects. There is a fashion to everything, and right now LLMs are a distraction on an S-curve. The map is not the territory, and language is a map.
One problem is that people assume the end goal is to create a human-cognition-capable AI. I think it's pretty obvious by this point that that's not going to happen. But there is no need for that at all to still cause huge disruption; let's say that for most current workers in roles that benefit from AI (copilot, writing, throwaway clipart, repetitive tasks, summarizing, looking up stuff, etc.), this leads not even to job loss but to fewer future jobs being created - what does that mean for the incoming juniors? What does that mean for the people looking for that kind of work? It's not obvious at all how big a problem that will create.
> human-cognition-capable AI. I think it's pretty obvious by this point that that's not going to happen
It's obvious to some people but that's not what many investors and company operators are saying. I think the prevailing message in the valley is "AGI is going to happen" for different values of when, not if. So I think you'd be forgiven for taking them at face value.
Just like nuclear fusion, right? "When" will always be some time after the next fundraising round.
Right, but the breathless technobabble about the future of our AI-driven world crowds out actual consideration of these important topics.
It's like con artists and management consultants. They are the most susceptible because they drink the koolaid.
I think the mistake is that the media extrapolates linear growth, but in practice it's a wobbly path. And this wobbly path allows anyone to create whatever narrative they want.
It reminds me of seeing headlines last week that NVDA is down after investors lost faith following the last earnings. Then you look at the graph, and NVDA is only like 10% off its all-time high and still in and out of being the most valuable company in the world.
Advancement is never linear. But I believe AI trends will continue up and to the right, and even in 20 years, when AI can do remarkably advanced things that we can barely comprehend, there will be internet commentary about how it's all just hype.
You articulated my view perfectly. I just don't get the buy in from people who should know better than trust vc funded talking heads.
> people extrapolate linear growth
You mean exponential! No one gets out of bed for linear these days.
>> I'm surprised that tech workers ... end up being the most breathlessly hyperbolic.
We're not.
There's a reason why so many of the people on the crypto grift in 2020-2022 have jumped to the AI grift. Same logic of "revolution is just around the corner", with the added mix of AGI millenarianism which hits a lot of nerds' soft spots.
> The prevailing counterargument against AI consistently hinges on the present state of AI rather than the trajectory of its development: but what matters is not the capabilities of AI as they exist today, but the astonishing velocity at which those capabilities are evolving.
No, the prevailing counterargument is that the prevailing argument in favor of AI taking over everything assumes the acceleration will remain approximately constant, when in fact we don't know that it will, and we have every reason to believe that it won't.
No technology in history has ever maintained an exponential growth curve for very long. Every innovation has followed the same pattern:
* There's a major scientific breakthrough which redefines what is possible.
* That breakthrough leads to a rapid increase in technology along a certain axis.
* We eventually see a plateau where we reach the limits of this new paradigm and begin to adapt to the new normal.
AI hypists always talk as though we should extrapolate the last 2 years' growth curve out to 10 years and come to the conclusion that General Intelligence is inevitable, but to do so would be to assume that this particular technological curve will behave very differently than all previous curves.
Instead, what I and many others argue is that we are already starting to see the plateau. We are now in the phase where we've hit the limits of what these models are capable of and we're moving on to adapting them to a variety of use cases. This will bring more change, but it will be slower and not as seismic as the hype would lead you to believe, because we've already gotten off the exponential train.
AI hypists come to the conclusion that general intelligence is inevitable because they know the brain exists and they are materialists. Anyone who checks those two boxes will conclude that an artificial brain is possible, and therefore AGI is as well. With the amount of money being spent, it's only a matter of when.
> With the amount of money being spent, it's only a matter of when
Yes, but there's no strong reason to believe that "when" is "within fewer than 1000 years". What's frustrating about the hype is not that people think the brain is replicable, it's that they think that this single breakthrough will be the thing that replicates it.
Moore's law is still going as far as I'm aware - there may have been a clarification of sorts recently, but it's kept up exponentially rather well despite everyone knowing that it can't do that.
Moore's law would improve the speed of LLMs and improve their size, but in recent weeks [0] it's become apparent that we're hitting the limit of "just make them even bigger" being a viable strategy for improving the intelligence of LLMs.
I'm excited for these things to get even cheaper, and that will enable more use cases, but we're not going to see the world-changing prophesies of some of AI's evangelists in this thread come true by dint of cheaper current-gen models.
But we don't know if AI development is following an exponential or sigmoid curve (actually we do kind of, now, but that's beside the point for this post.)
A wise institution will make decisions based on current capabilities, not a prognostication.
If investors didn't invest based on expected future performance, the share market would look completely different than it actually does today. So, I can't understand how anyone can claim that.
All S-curves look exponential at some point.
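A quick numerical sketch of that point (hypothetical numbers, just for illustration): in its early phase, a logistic (S-) curve is nearly indistinguishable from a pure exponential, and only near the midpoint does the divergence become obvious.

```python
import math

# Logistic (S-curve) with capacity L, growth rate k, midpoint t0.
def logistic(t, L=1000.0, k=1.0, t0=10.0):
    return L / (1.0 + math.exp(-k * (t - t0)))

# Pure exponential matched to the logistic's early behavior:
# for t well below t0, logistic(t) ~ L * exp(k * (t - t0)).
def early_exponential(t, L=1000.0, k=1.0, t0=10.0):
    return L * math.exp(k * (t - t0))

# Early on the two curves are nearly identical; at the midpoint the
# exponential already overshoots the S-curve by a factor of 2.
for t in [1, 5, 8, 10]:
    s, e = logistic(t), early_exponential(t)
    print(f"t={t:2d}  logistic={s:8.2f}  exponential={e:8.2f}  ratio={e / s:.2f}")
```

At t=1 the ratio is essentially 1.00; by the midpoint t=10 the exponential predicts double the logistic's value, and the gap only widens from there. With noisy data from the early phase alone, you simply can't tell which curve you're on.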
It was unclear if the current wave of AI would be an exponential, or for how long, or if it would end up just being another S-curve. The potential upside hooked a lot of people into action on the VC-maths of "it doesn't matter if it's unlikely, because the upside is just too good".
It is now becoming clear, however, that we aren't getting AGI. What we have now is fundamentally what we're likely to have in 5-10 years' time. I do think we'll find better uses, figure our shit out, and have much more effective products in that time. I think we're entering the "LLM era" in much the same way the 2010s were the smartphone era that redefined a lot of things. But by the same token, a phone of ~2010 isn't materially different from a phone of ~2020; they're still just internet-connected, location-aware interfaces to content and services.
But you could also say: the prevailing argument for AI consistently hinges on the (imagined, projected based on naive assumptions) trajectory of AI rather than the present state.
> the astonishing velocity at which those capabilities are evolving.
This is what is repeated ad nauseam by AI companies desperate for investment and hype. Those who’ve been in the game since before this millennium tend not to be so impressed — recent gains have mostly been due to increased volume of computation and data with only a few real theoretical breakthroughs.
Laymen (myself included) were indeed astonished by ChatGPT, but it’s quite clear that those in the know saw it coming. Remember that those who say otherwise might have reasons (other than an earnest interest in the truth) for doing so.
I honestly believe this specific case is a Pareto situation where the first 80% came at breakneck speeds, and the final 20% just won't come in a satisfactory way. And the uncanny valley effect demands a percentage that's extremely close to 100% before it has any use. Neural networks are great at approximations, but an approximate person is just a nightmare.
What is your time horizon? We're already at a date where people were saying these jobs would be gone. The people most optimistic about the trajectory of this technology were clearly wrong.
If you tell me AI newscasters will be fully functional in 10 or 15 years, I'll believe it. But that far in the future, I'd also believe news will be totally transformed due to some other technology we aren't thinking about yet.
Who gives a shit about AI newscasters?
AI allows us to see everything we track the data of rn - and to see it in a useful way, in real time. It also means that all the tedious and repetitive tasks done by everyone no longer need to be done by anyone - creating a static webpage, graphics for a mobile app, a mobile app itself, or a game - all of those are the easiest to do they've ever been.
AI isn't for millennials or even Gen Z - it's for Gen Alpha; they will be the first to truly understand what AI is and to use it as it will be used forever after. Til they adopt it, none of this really matters.
You're declaring you don't give a shit about the current topic. Why are you here?
It's getting old but there's an xkcd for your kind of reasoning:
Isn't this essentially the same argument as "there are only 10 covid cases in this area, nothing to worry about"?
It's really missing the point, the point is whether or not exponential growth is happening or not. It doesn't with husbands, it does with covid, time will tell about AI.
No, because as you rightly point out we know exponential growth is very possible with Covid but we don't know if that will happen with AI.
In fact, the only evidence we have for exponential growth in AI is the word of the people selling it to us.
Transformers have been around for 7 years, ChatGPT for 2. This isn't the first few samples of what could be exponential growth. These are several quarters of overpromise and underdelivery. The chatbot is cool and it outperforms what awful product search has become. But is it enough to support a 3.5 trillion dollar sized parts supplier?
It amazes me how excited people are to see their livelihoods destroyed. I'm retired, but people designing AI in their 20's will be unemployed in a decade. Good luck dudes and dude-ettes, you're fucking yourselves.
the prevailing argument in favor of investing in AI is its potential.
the prevailing argument against using AI is its current lack of potential.
Those things are inherently in tension. Think of it as hiring a new employee straight out of undergrad: you are hiring them based largely on the employee they will become, with increasing expectations over time balanced against increasing variability in outcomes over time. However, if one year in that employee still sucks at their current job, their long-term potential doesn't really matter. More so, that long-term potential is no longer credibly evidenced, given their inability to progress at the current job.
This is an investment gone bad in the current state of things. It doesn't matter what might happen; it matters what did. The investment was made based on the perception of astonishing velocity, and it seems we may need to recalibrate our speedometers.