jonahss 1 day ago

These companies are not trying to be companies that sell an LLM to summarize text or write emails. They're trying to make a full Artificial General Intelligence. The LLMs pull in some money today, but are just a step towards what they're actually trying to build. If they can build such a thing (which may or may not be possible, or may not happen soon), then they can immediately use it to make itself better. At that point they won't need nearly as many people working for them, and can begin building products, making money, or making scientific discoveries in any field they choose. In which case, they're in essence, the last company to ever exist, and are building the last product we'll ever need (or the first instance of the last product we'll ever produce). And that's why investors think they're worth so much money.

Some people don't believe this because it seems crazy.

Anyway, yes, they're trying to make their own chips so they're not beholden to Nvidia, and they're investing in other chip startups. Meanwhile, Nvidia figures that if it can make an AI itself, why should it ever even sell its chips, so it's working on that too.

visarga 1 day ago

> they're in essence, the last company to ever exist, and are building the last product we'll ever need

Physical reality is the ultimate rate-limiter. You can train on all of humanity's past experiences, but you can't parallelize new discoveries the same way.

Think about why we still run physical experiments in science. Even with our most advanced simulation capabilities, we need to actually build the fusion reactor, test the drug molecule, or observe the distant galaxy. Each of these requires stepping into genuinely unknown territory where your training data ends.

The bottleneck isn't computational - it's experimental. No matter how powerful your AGI becomes, it still has to interact with reality sequentially. You can't parallelize reality itself. NASA can run millions of simulations of Mars missions, but ultimately needs to actually land rovers on Mars to make real discoveries.
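The sequential-experiment bottleneck is essentially Amdahl's law. Here's a toy sketch (all numbers are made-up assumptions, just to show the shape of the argument):

```python
# Toy Amdahl's-law sketch: no matter how many parallel simulations you
# run, wall-clock progress is bounded by the fraction of work that must
# touch physical reality sequentially.

def speedup(serial_fraction: float, n_parallel: int) -> float:
    """Overall speedup when only the non-serial fraction parallelizes."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_parallel)

# Suppose (arbitrarily) that 10% of a discovery loop is a physical
# experiment that can't be parallelized. Even with a million simulated
# workers, the loop can never run more than 10x faster.
for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} workers -> {speedup(0.10, n):.2f}x")
```

Throwing compute at the simulated 90% asymptotically hits the 1/0.10 = 10x ceiling; only shrinking the real-world serial fraction moves that ceiling.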

This is why the "last company" thesis breaks down. Knowledge of the past can be centralized, but exploration of the future is inherently distributed and social. Even if you built the most powerful AGI system imaginable, it would still benefit from having millions of sensors, experiments, and interaction points running in parallel across the world.

It's the difference between having a really good map vs. actually exploring new territory. The map can be centralized and copied infinitely. But new exploration is bounded by physics and time.

Teever 1 day ago

To conquer the physical world, the idea of AGI must merge with the idea of a self-replicating machine.

The magnum opus of this notion is the Von Neumann probe.

With the entire galaxy, and eventually the universe, in which to run these experiments, the map will become as close to the territory as it can get.

pixelsort 1 day ago

It seems that anyone who has ever played games like Factorio or Satisfactory can readily extrapolate similar real-world conclusions. Physical inefficiencies are merely an interface issue that erodes over time with intelligent modularizations and staging of form factors at various scales.

optimalsolver 22 hours ago

This might come as a surprise to some people, but the real world is infinitely more complex than a sim game.

visarga 1 day ago

Fully agree, self-replication is key. But we can't automate GPU production yet.

Current GPU manufacturing is probably one of the most complex endeavors humanity has ever undertaken. You need incredibly precise photolithography, ultra-pure materials, clean rooms, and specialized equipment that itself requires other specialized equipment to make... It's a massive tree of interdependent technologies and processes.

This supply chain can only exist if it is economically viable, so it needs large demand to pay for the cost of development. Plus you need the accumulated knowledge and skills of millions of educated workers - engineers, scientists, technicians, operators - who themselves require schools, universities, research institutions. And those people need functioning societies with healthcare, food production, infrastructure...

Getting an AI to replicate autonomously would be like asking it to bootstrap the modern economy from scratch.

mrandish 1 day ago

> They're trying to make a full Artificial General Intelligence.

> then they can immediately use it to make itself better.

"AGI" is a notoriously ill-defined term. While a lot of people use the "immediately make itself better" framing, many expert definitions of AGI don't assume it will be able to iteratively self-improve at exponentially increasing speed. After all, even the "smartest" humans ever (on whatever dimensions you want to assess) haven't been able to sustain self-improving at even linear rates.

I agree with you that AGI may not even be possible or may not be possible for several decades. However, I think it's worth highlighting there are many scenarios where AI could become dramatically more capable than it currently is, including substantially exceeding the abilities of groups of top expert humans on literally hundreds of dimensions and across broad domains - yet still remain light years short of iteratively self-improving at exponential rates.

Yet I hear a lot of people discussing the first scenario and the second scenario as if they're neighbors on a linear difficulty scale (I'm not saying you necessarily believe that. I think you were just stating the common 'foom' scenario without necessarily endorsing it). Personally, I think the difficulty scaling between them may be akin to the difference between inter-planetary and inter-stellar travel. There's a strong chance that last huge leap may remain sci-fi.

RedNihilism133 1 day ago

I pretty much agree with this article. It seems like LLM companies are just riding the hype, and the idea that LLMs will lead on to general AI feels like quite a stretch. They're simply too imprecise and unreliable for most technical tasks. There's no way to clearly specify your requirements, so you can never guarantee you'll get what you actually need. Plus, their behaviour is constantly changing, which only makes them even more unreliable.

This is why our team developing The Ac28R has taken a completely new approach. It's a new kind of AI which can write complex, accurate code, handling everything from databases to complex financial models. The AI is based on visual specifications which let you specify exactly what you want; The Ac28R's analytical engine builds all the code you need. No guesswork involved.

horns4lyfe 18 hours ago

Please keep your ads to Twitter or LinkedIn or whatever

Mistletoe 1 day ago

>If they can build such a thing (which may or may not be possible, or may not happen soon), then they can immediately use it to make itself better.

This sounds like a perpetual motion machine, or like what we heard over and over during the 3D printing fad.

We have natural general intelligence in 8 billion people on Earth, and it hasn't solved all of these problems in this sort of instant way. I don't see how a synthetic one without rights, arms, legs, eyes, or the ability to move around, start companies, etc. changes that.

sysmax 1 day ago

LLMs are a very good tool for a particular class of problems. They can sift through endless amounts of data and follow reasonably ambiguous instructions to extract the relevant parts without getting bored. So, if you use them well, you can dramatically cut down the routine part of your work and focus on the more creative parts.

So if you had that great idea that would take a full day to prototype, hence you never bothered, an LLM can whip up something reasonably usable in under an hour. So it will make idea-driven people more productive. The problem is, you don't become a high-level thinker without doing some monkey work first, and if we delegate it all to LLMs, where will the next generation of big thinkers come from?

kbenson 16 hours ago

> This sounds like a perpetual motion machine or what we heard over and over in the 3d printing fad.

Except that this is actually what humanity and these 8 billion people are doing: making each successive generation "better", for some definition of better that is constantly in flux based on what is believed at the time.

It's not guaranteed, though; it's possible to regress. Also, it's not humanity as a whole but a bunch of subgroups that have slightly differing ideas of what better means at the edges, yet also share the results of candidate changes (whether explicitly, through the international scientific community, or implicitly, through memes and propaganda at a national or group level).

It took a long time to hit on strategies that worked well, but we've found a bunch over time, from centralized government (we used to be small tribes on plains and in caves) to the scientific method to capitalism (whether or not we'll consider it the best choice in the future, it's been invaluable for the last several centuries). They've all moved us forward, which is easy to see if you sample every 100 years or so going into the past.

The difference between what we've actually got in reality with the human race and what's being promised with AGI is speed of iteration. If a real AGI can indeed emulate what we currently have with the advancement of the human race, but at a faster cycle, then it makes sense that it would surpass us at some point, whether very quickly or eventually. That's a big if, though, so who knows.
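The iteration-speed point can be made concrete with a toy compounding model. Every number below is an arbitrary assumption, chosen only to show the shape of the argument:

```python
# Toy model of the "speed of iteration" argument: two improvers with the
# same small gain per cycle, but very different cycle times. All numbers
# are arbitrary assumptions for illustration.

gain_per_cycle = 1.01      # assume 1% improvement each iteration
horizon_years = 100.0

human_cycle_years = 25.0   # roughly one human generation per iteration
ai_cycle_years = 0.25      # hypothetical much faster feedback loop

human = gain_per_cycle ** (horizon_years / human_cycle_years)
ai = gain_per_cycle ** (horizon_years / ai_cycle_years)

print(f"human: {human:.2f}x, ai: {ai:.1f}x")
```

Identical per-cycle gains, but the 100x faster cycle compounds to roughly 50x more capability over the same century. Whether any real system actually gets such a loop is exactly the "big if" above.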

AtlasBarfed 1 day ago

AGI is only coming with huge amounts of good data.

Unfortunately for AI in general, LLMs are forcing data moats, either passively or through aggressive legal attack, or are generating so much crud data that the good data will get drowned out.

In fact, I'm not sure why I continue to uh, contribute, my OBVIOUSLY BRILLIANT commentary on this site knowing it is fodder for AI training.

The internet has always been bad news for the "subject expert", and I think AI will start forcing people to create secret data or libraries.

tim333 1 day ago

Current LLMs need huge amounts of data, but before we get AGI we'll probably get better algorithms that are less limited by that.

neural_thing 1 day ago

i wrote about the prospect of financial returns from AGI here if anyone's interested - https://sergey.substack.com/p/will-all-this-ai-investment-pa...