lukev 4 days ago

This kind of news should be a death-knell for OpenAI.

If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.

ChuckMcM 4 days ago

Alternative is that OpenAI is being quickly locked out of sources of human interactions because of competition; one way to "fix" that is to build your own meadow for data cows.

xAI isn't allowing people to use the Twitter feed to train AI

Google is keeping its properties for Gemini

Microsoft, who presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.

So you plant a meadow of tasty human interaction morsels to get humans to sit around and munch on them while you hook up your milking machine to their data teats and start sucking data.

lucianbr 4 days ago

The assumption that you can just build a successful social network as an aside because you need access to data seems wildly optimistic. Next will be Netflix announcing it's working on AGI because show writers have not been very imaginative lately, and they need fresh content to keep subscribers.

DJHenk 4 days ago

'Wildly optimistic' is a very fitting categorization for Altman/OpenAI.

pjc50 4 days ago

Netflix will almost certainly try to push a vizslop or at least AI-written show at some point to see how bad the backlash is.

bn-l 3 days ago

There’s a guy who does these weird alien talking head / vox pop videos on YouTube. They’re pretty funny. I can see the potential.

littlestymaar 3 days ago

Yes, but this misses the point made by the comment above. Using AI in your workflow ≠ attempting to build AGI.

pjc50 3 days ago

If you have AGI, what do you do with it in your workflow? Or is the question what does it do with you?

everdrive 3 days ago

I came across a quote in a forum discussion about why corporate messaging and pandering have gotten so crazy lately. One comment stuck out as especially interesting, and I'll quote it in full below:

---

C suites are usually made up of really out-of-touch and weird workaholics, because that is what it takes to make it to the C suite of a large company. They buy DSS (decision support services/software) from vendors, usually marketing groups, that basically tell them what is in and what isn't. Many marketing companies are now getting that data from Twitter and Reddit, and portraying it as the broad social trend. This is a problem because Twitter and Reddit are both extremely curated and censored, and the tone of the conversation there is really artificial, which can lead to really bad conclusions.

---

This is only somewhat related, but if OpenAI did actually succeed in building their own successful social media platform (doubtful), they would be basing a lot of their models on whatever subset of people wanted to be part of the OpenAI social media platform. The opportunity for both mundane and malicious bias in models there seems huge.

Somewhat related, apparently a lot of English spellings were standardized by the invention of the printing press. This isn't surprising; it was one of the first technologies to really democratize written materials, and so it had a very outsized power to set standards. LLMs feel like they could be a bit like this, particularly if everyone continues with their current trends of intentionally building reliance on them into their products / companies / workflows. As a real-life example, someone at work realized you could ask Copilot to rate the professionalism of your communication during a meeting. This seems quite chilling, since you're not really rating your professionalism, but measuring yourself against whatever weird bell curve exists in Copilot.

I'm absolutely baffled that LLMs are seeing broad adoption, and absolutely baffled that people are intentionally adopting and integrating them into their lives. I'm in my early 40s now. I'm not sure I can get out of the tech field at this point, but I'm seriously considering my options.

Workaccount2 3 days ago

I lowkey believe that the twitterification of journalists is probably one of the worst things to happen to the country in the last 25 years.

The social harm inflicted by journalists thinking "Damn, I can just go on twitter to find out what is going on and how people feel about it!"

Magma7404 3 days ago

I'm 40 too and used to laugh at all those programmers who were switching to woodworking for a living. Now that vibe coding is being advertised all over the place and will most likely be bought into by the CEOs, and given the trend of stealing open-source software without attribution while making fun of people who are proud of their knowledge and craft, I'm starting to think that all those future woodworkers may not be wrong.

Whatever happens, it will be the end of programming as I know it, but it cannot end well.

ryanjamurphy 3 days ago

> They buy DSS (decision support service / software) from vendors, usually marketing groups

What are these platforms, exactly? I've heard about them but have never come across them.

m_fayer 4 days ago

I would just like to appreciate your imagery and wordplay here, it’s spot on and I think should be our standard for conceptualizing this corporate behavior.

Springtime 4 days ago

They also have a contract with Reddit to train on user data (a common go-to source for finding non-spam search results). Unsure how many other official agreements they have vs just scraping.

safety1st 4 days ago

Good heavens, I'd think that if anything could turn an AI model into a misanthrope, it would be this.

Springtime 4 days ago

One distinctive quality I've observed with OpenAI's models (at least with the cheapest tiers of 3, 4, and o3) is their human-like face-saving when confronted with things they've answered incorrectly.

Rather than directly admit fault, they'll regularly respond in subtle (more so o3) to not-so-subtle roundabout ways that deflect blame, even when it's an inarguable factual error about conceptually non-heated things like API methods.

It's an annoying behavior of their models and in complete contrast to, say, Anthropic's Claude, which IME will immediately and directly admit to things it had responded incorrectly about when the user mentions them (perhaps too eagerly).

I have wondered if this is something it's learned from training on places like Reddit, or if OpenAI deliberately taught it or instructed it via system prompts to seem more infallible, or if models like Claude were made to deliberately reduce that tendency.

wodenokoto 4 days ago

> It's an annoying behavior of their models and in complete contrast to, say, Anthropic's Claude, which IME will immediately and directly admit to things it had responded incorrectly about when the user mentions them

I don't know what's better here. ChatGPT did have a tendency to reply with things like "Oh, I'm sorry, you are right that x is wrong because of y. Instead of x, you should do x"

cameronh90 3 days ago

> Rather than directly admit fault, they'll regularly respond in subtle (more so o3) to not-so-subtle roundabout ways that deflect blame

Human-level AI is closer than I'd realised... at this rate it'll have a seat in the senate by 2030.

spencerflem 3 days ago

They are already passing ChatGPT-written laws:

https://hellgatenyc.com/andrew-cuomo-chatgpt-housing-plan/

silisili 3 days ago

I cannot think of a worse future for AI than parroting Reddit comments.

amy214 3 days ago

The upside of Reddit data is you have updoots and downdoots, so you can positively train your AI model on what people would typically upvote and negatively train it against what they might downvote.

Now, that's the upside. The downside is you end up with an AI catering to the typical redditor. Since many claims there are formed on the basis of "confident, sounds reasonable, speaks with authority, gets angry when people disagree", hallucinations happen. What we'd rather have is something like "produces evidence-based claims with unbiased data sources".
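A minimal sketch of how those vote signals could become training data (toy code; the comment dump and any downstream reward-model trainer are assumptions on my part):

    # Sketch: turn net vote scores into preference pairs for reward-model training.
    # `comments` is a hypothetical dump mapping a parent post to scored replies.
    from itertools import combinations

    comments = {
        "t3_example": [("confident hot take", 412), ("sourced correction", -5)],
    }

    pairs = []  # (context, preferred, rejected) triples
    for parent, replies in comments.items():
        for (a, score_a), (b, score_b) in combinations(replies, 2):
            if score_a == score_b:
                continue  # no preference signal
            chosen, rejected = (a, b) if score_a > score_b else (b, a)
            pairs.append((parent, chosen, rejected))

    # Feed `pairs` to any RLHF-style reward-model trainer. Note what it optimizes:
    # "what the typical redditor upvotes", not "what is true" -- exactly the downside above.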

bayindirh 4 days ago

They also have a syndication agreement with The Guardian [0]. I lost all my respect for The Guardian after seeing this.

[0]: https://www.theguardian.com/gnm-press-office/2025/feb/14/gua...

teruakohatu 3 days ago

Why shouldn’t they license their content? It’s theirs and it’s a non-profit that needs revenue.

bayindirh 3 days ago

It's not that they shouldn't license their content. I'm not a fan of OpenAI and their "fair use" of things, to be honest.

freedomben 3 days ago

But this is the opposite of fair use. They're licensing the content, which means they're paying for it in some fashion, not just scraping it and calling it fair use.

If you don't like the fair use of open information, I would expect you to be cheering this rather than losing respect for those involved.

KoolKat23 3 days ago

Why? I can't see any issues with this at all, whatsoever.

bayindirh 3 days ago

Everybody is free to have their own opinions. I don't like how AI companies, mostly OpenAI, "fair-use" the whole internet, so there's that.

dylan604 3 days ago

Again, as others have pointed out to you, how is this The Guardian's fault? They convinced ClosedAI to give them money in a licensing deal to use their content as training data instead of having it scraped for free.

Your sense of injustice or whatevs you want to call it is aimed in the opposite direction.

KoolKat23 3 days ago

This isn't even a fair-use defence; they're paying to use it expressly for this purpose.

scyzoryk_xyz 4 days ago

Mmmm nice try hooking me up.

Instead, I’m just going to hang out here in this hacker meadow and on FOSS social networks where something like that would never happen!

dylan604 3 days ago

Why oh why did I take the red pill? Plug me back in.

scyzoryk_xyz 3 days ago

Please Mr Smith, please plug me back into those sweet sweet udder-suckling attention extractors.

jtwoodhouse 3 days ago

This metaphor nails it. In this day and age, the way around walled gardens is to build your own walled garden apparently.

baq 4 days ago

> Microsoft, who presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.

sama probably would like to take Satya's seat for what he no doubt sees as unblocking the path to utopia. The slight problem is he's becoming a bit lonely in that thinking.

freehorse 4 days ago

I've always wondered whether such sociopaths actually believe they are doing something for noble reasons, or whether they consciously use it as a trick in their quest for power.

bravetraveler 4 days ago

Must consider the illusory truth effect

fullshark 3 days ago

Sociopathy isn't enough, you have to mix in some megalomania and that's the output

everythingisfin 3 days ago

If this were their plan, they’d be discounting that some of their users would be controlled by their own AI.

My guess is that they’re trying other things to diversify themselves and/or to try to keep investors interested. Whether or not it works is irrelevant as long as they can convince others it will increase their usage.

anshumankmr 4 days ago

But don't they have ChatGPT, the fifth or whatever most popular website on the planet? And deals with Reddit. Sure, that can't touch the treasure trove Google is sitting on, xAI sure won't give them access, and GitHub could perhaps sell their data (but that's a maybe).

Nuzzerino 4 days ago

> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.

I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.

Death-knell? Maybe… but I wouldn’t read too much into it. I’d be looking more at their key employees leaving. That’s what kills companies.

latexr 4 days ago

> I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.

Even if you believe all that to be true, it in no way contradicts what you quoted or makes it unfair. Having a kick ass product and good brand awareness in no way correlates to being close to AGI.

smt88 4 days ago

- Product is not kickass. Hallucinations and cost limit its usefulness, and it's incinerating money. Prices are too high and need to go much higher to turn a profit.

- Their brand value is terrible. Many people loathe AI for what it's going to do to jobs, and the people who like it are just as happy to use Copilot or Cursor or Gemini. Frontier models are mostly fungible to consumers. No one is brand-loyal to OpenAI.

- Many key employees have already left or been forced out.

tokioyoyo 4 days ago

My dad uses ChatGPT for some Excel macros. He’s ~70, and not really into tech news. Same with my mom, but for more casual stuff. You’re underestimating how prevalent the usage is across “normies” who really don’t care about second-order effects on employment and the like.

OccamsMirror 3 days ago

I loathe AI for what it's doing to the job market.

But I'd be stupid not to use it. It has made boilerplate work so much easier. It's even added an interesting element where I use it to brainstorm.

Even most haters will end up using it.

I think eventually people will realize it's not replacing anyone directly and is really just a productivity multiplier. People were worried that email would remove jobs from the economy too.

I'm not convinced our general AIs are going to get much better than they are now. It feels like where we are with mobile phones.

frm88 3 days ago

> People were worried that email would remove jobs from the economy too.

And it did. Together with other technology, but yes:

https://archive.is/20200118212150/https://www.wsj.com/articl...

https://www.wsj.com/articles/the-vanishing-executive-assista...

MisterSandman 3 days ago

> I loathe AI for what it's doing to the job market.

To be fair, it's not doing anything to the job market; it's just being used as an excuse. Very few tech jobs have truly been replaced by AI. It's just an easy excuse for layoffs due to recession, mismanagement, etc.

tokioyoyo 3 days ago

It has affected graphic design and copywriting jobs quite a lot. Software engineering is still a high-barrier-to-entry job, so it'll take some time. But the pressure is already here.

player1234 3 days ago

Sure sounds like the trillion dollar killer app needed for ROI.

Topfi 3 days ago

Counterpoint: ChatGPT as a brand has insane mindshare and buy-in. It is synonymous with LLMs/“AI” in the minds of many and has broken through like few brands before it. That ain’t nothing.

Counter-counterpoint: I still feel investors priced in a bit more than that. Yahoo! had major buy-in as well, and AGI believers were selling investors not just on the next unicorn but on the next industry: AGI being not merely a Google in the '90s, but all of the internet and what that would become over the decades to this day. Anything less than delivering that is not exactly what a large part of investors bought. But then again, any bubble has to burst someday.

spacebanana7 4 days ago

> No one is brand-loyal to OpenAI.

Sam Altman is incredibly popular with young people on TikTok. He cured homework - mostly for free - and has a nice haircut. Videos of him driving his McLaren have comment sections in near total agreement that he deserves it.

pjc50 3 days ago

> He cured homework - mostly for free

People argue about the damage COVID lockdowns did to education, but surely we're staring down the barrel of a bigger problem where people can go through school without ever having to think or work for themselves.

Duralias 3 days ago

Not entirely sure what they meant, but ChatGPT and the like have forced schools to stop relying on homework for grades and instead shift to assignments done more like a mini exam. That's more work for the school, but you can't substitute ChatGPT for your own knowledge in such cases; you actually need to know the material to succeed.

dminik 3 days ago

Well, that may be, but it's entirely possible that this outlook might change.

In the worst case he's poisoned an entire generation. If ChatGPT doctors, architects and engineers are anything like ChatGPT "vibe-coders" we're fucked.

stef25 4 days ago

> Product is not kickass

It might not be the best, but people you'd never have thought would use it are using it. Many non-technical people in my circle are all over it, and they have never even heard of Claude. They also wouldn't understand why people would loathe AI, because they simply don't understand those reasons.

DJHenk 4 days ago

Still, it burns money like nothing has in history.

jappgar 3 days ago

Consumers may not be brand loyal, but companies are.

If you're doing deep deals with MSFT you're going to be strongly discouraged from using Gemini.

smt88 3 days ago

Microsoft isn't even loyal to OpenAI! You can use multiple models in Azure and Copilot.

weatherlite 3 days ago

> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.

Or even if you did come up with AGI, so would everyone else. Gemini is arguably better than ChatGPT now.

klabb3 3 days ago

Bingo. The secret sauce is never a sustainable long-term moat. Things leak, competitors copy or make even better things, employees switch jobs. AI looks less vulnerable, since it’s academically difficult and expertise is still limited. But time and time again, new secret-sauce ingredients last for months at most before they are exceeded, oftentimes by hobbyists or other small actors.

I remember at Google they said that if source code leaks, nobody is actually worried about stealing tech; the vast majority of code is open to all employees, with some exceptions like spam and ranking. It’s still protected, but not considered moat.

The moat comes from other things, such as datacenter & global networking infrastructure, marketing new products by pushing them through existing products (put Gemini in search, add chrome to Android etc). Most importantly you can use data you already have to bootstrap new products, say Gmail and calendar integrated with personalized assistants.

If you play your cards right, yes, there are some first-mover advantages, but they are more superficial than your average Twitter hype thread makes you think. They can give you the ability to set unofficial standards and APIs, like Kubernetes or S3 (maybe OpenAI APIs?). And you can set certain agendas and market your name for recognition and trust. But all that can slip through your fingers if a behemoth picks up where you left off. They have so many advantages, except for being the fastest.

Topfi 3 days ago

In fairness, the AGI definition predicted by doomsday safety experts (who don’t have time for such unimportant concerns as copyright or misinformation via this tech), by everyone who is most certainly not merely hyping to get investor cash, and by the utterly serious and scientifically grounded research happening at MIRI, is essentially that one company will achieve AGI; shortly thereafter that will lead to a singularity, and that’s that. No one else could create a second, because that scary AGI is so powerful it would prevent anyone else from shutting it down, including other AGIs. And no, this is totally not a sci-fi plot, but rigorous research. Incidentally, I’m looking for someone to help me prevent OAI from killing my own grandfather, because that scenario is also incredibly likely and should be taken seriously.

If it’s not obvious already: I believe we are far away from that with LLMs, and that those working in model safety who give their attention to AGI over current-day concerns, like Meta just torrenting for model data, are not very serious people. I have accepted, though, that this isn’t a popular opinion amongst industry professionals.

Not least because letting laws get in the way of model training is bad according to them, either for the same weird logic Musk uses to justify testing FSD betas on an unwilling public (the potential to prevent future deaths), or because they genuinely took RB seriously. No idea which is worse for serious adults…

ActorNightly 3 days ago

The funny thing is that nobody stops to ask whether that version of AGI is even possible. For example, it would have to run a simulation of reality with enough accuracy, faster than reality happens, and multiple simulations in parallel. For physical-world interaction this is doable, but when it comes to predicting human behavior with enough accuracy, it's probably not.

So even if we manage to come up with an algorithm that self learns, self expands, and so on, it likely won't be without major flaws.

robotresearcher 4 days ago

AGI is a technology or a feature, not a product. ChatGPT is a product. They need some more products to pay for one of the most expensive technologies ever (and one yet to be delivered).

leptons 4 days ago

It's a shame that $1 trillion is being poured into AI so quickly, but fusion research has only seen a fraction of that over many decades.

imafish 4 days ago

Unfortunately VC investments rely on FOMO, and there is no current huge breakthrough in fusion research that they are afraid of missing out on.

andsoitis 4 days ago

Isn’t that a reflection, to some (major?) extent, of the general sense of likelihood of breakthrough?

xmprt 4 days ago

I used to think like this but after seeing the amount of money invested into crypto companies which most average people could have quickly dismissed as irrelevant, I'm not sure VCs are a good judge of value.

9rx 3 days ago

That's not an entirely fair take. Once upon a time it was believed that cryptocurrencies would be able to supplant payment processors (e.g. Visa, Mastercard). We even had a rollout of crypto ATMs to capitalize on the belief in that network. That is a money-printing business, so it wasn't unreasonable for VC money to chase it.

Sure, those intimately familiar with the technology may have always known that it wouldn't work, but the average person absolutely did not. They only came to learn that it wouldn't work as the cryptocurrency ventures proved that it wouldn't work.

The pivot to being some kind of tool to store wealth, or a way to trade cartoon monkeys, or whatever as a last ditch effort to save the companies that failed to turn it into payment processing technology was more obviously irrelevant, but that came much, much later. The VCs were desperately seeking an exit by that point.

leptons 3 days ago

We already know fusion exists and happens and has been happening for billions of years - it's well understood. It may not be simple to reproduce and sustain on earth, but there is a fairly clear path to get there.

There has never been an AGI, and there's no clear path to get there. And no, LLMs are not bringing us any closer to a real AGI, no matter how much Sam Altman wants to try to redefine what "AGI" means so that he can claim he has it.

The difference between AGI funding and fusion funding is the people hawking AGI are better liars and the people funding are far more gullible than the people trying to make fusion power a reality.

Nevermark 3 days ago

We already know fusion has been a grind. For decades.

We already know that computation has grown exponentially in scale and cost, for decades. With breakthrough software tech showing up reliably as it harnesses greater computational power, even if unpredictably and sparsely.

If we view AI across decades, it has reliably improved. Even through its “winters”. It just took time to hit thresholds of usefulness making relatively smooth progress look very random from a consumers point of view.

As computational power grows, and current models become more efficient, the hardware overhang for the next major step is growing rapidly.

That just hasn’t been the case for fusion.

leptons 3 days ago

> We already know fusion has been a grind. For decades.

Let's also not forget that "AI" has been grinding on for at least half a century, with nothing even coming close to AGI. No, LLMs are not AGI and cannot be AGI. They may be a component of an eventual AGI in the future, but we're still far, far away from artificial general intelligence. LLMs are not intelligent, but they do fool a lot of people.

With the paltry amount of money spent on fusion, it's no wonder it's been delayed for so long. Maybe try putting in the $1 trillion that "AI" has received in its short windfall, and let's see what happens with fusion.

> If we view AI across decades, it has reliably improved

That's your opinion. It's still not much better than Eliza. And LLMs lie half the time, just to make the user happy hearing what they want to hear.

> As computational power grows, and current models become more efficient, the hardware overhang for the next major step is growing rapidly.

And it's still going to "hallucinate" and give wrong answers a lot of the time because the fundamental tech behind LLMs is a bit flawed and is in no way even close to AGI.

> That just hasn’t been the case for fusion.

And yet we have actual fusion reactions happening now, sustained for 22 minutes. We're practically on the threshold of having fusion power, but still there is nowhere near the money being spent on it compared to LLMs, which won't be capable of AGI in spite of the marketing line you've been told.

Nevermark 3 days ago

> Let's also not forget that "AI" has been grinding on for at least half a century, with nothing even coming close to AGI.

"Not there yet" isn't an argument against achieving anything. AGI and fusion included. It just means we are not there yet.

That argument is a classic fallacy.

Second, neural network capabilities have made steady gains since the '80s, in lockstep with compute power, with additional benefits from architecture and algorithm improvements.

From a consumer's, VC's, or other technologists point of view, nothing might have seemed to happen. Successful applications were much smaller, and infrequently sexy. But problem sizes grew exponentially nevertheless.

You are arguing that unrelenting neural network progress for decades, highly coupled to the exponential growth in computing for 3/4 of a century, will suddenly stop or pause, because ... you didn't track the field until recently?

Or was the recent step so big (models that can converse about virtually any topic known to humankind, in novel combinations, with hardly a mid-step) that it is easy to overlook the steady progress that enabled it?

Gains don't always look the same, because of threshold effects. And thresholds of tech vs. solution are very hard to predict. But increases in compute capacity are (relatively) easy to predict.

It may be that for a few years, efficiency gains and specialized capabilities are consolidated before another big step in general capability. Or another big step is just around the corner.

The only thing that has changed in terms of steady progress, is the intensity and number of minds working on improvements now is 1000 times what it was even a few years ago.

leptons 3 days ago

> That isn't a useful lens

Yet that is the exact "lens" you chose to view fusion through. It applies just as much to AGI.

> you didn't track the field until recently?

Okay, now you're trolling. You don't know me and are making assumptions based on what exactly??

This conversation is over. I'm not going back and forth when you're going to make attacks like this.

Nevermark 3 days ago

My apologies if pressing my point came over too strong. I will take that as a lesson for me.

I have not made any claims that fusion is unachievable. Just pointed out that the history of the two technologies couldn’t be more different.

There is no reason to believe fusion isn’t making progress, or won’t be successful, despite the challenges.

> you didn't track the field until recently?

That was a question not an assumption.

If you are aware of the progression of actual neural network use (not just research) from the '80s on, or for any shorter significant timescale, great. Then you know its first successes were on toy problems, but it's been a steady drumbeat of larger, more difficult, more general problems getting solved with higher-quality solutions every year since. Often quirky problems in random fields. Then significant acceleration with Nvidia's intro of general compute on graphics cards, and researcher-hosted leaderboards tracking progress more visibly. Only recently has it solved problems of a magnitude that is relevant to the general public.

Either you were not aware of that, which is no crime. Or dismissing it, or perhaps simply not addressing it directly.

If you have a credible reason that continued compute and architecture exploration is going to stop what has been exponential progress, for 40 years, I want to hear it.

(Not being aggressive, I mean I really would be interested in that. Even if it’s novel and/or conjectural.

Chalk up any steam on my part to being highly motivated to consider alternative reasoning. Not an attempt to change your mind, but to understand.)

j_maffe 4 days ago

No, it's the perception of likelihood of breakthrough.

rchaud 3 days ago

Capital gains on AI investments don't need a breakthrough. I'm sure pharma companies and others are building their own custom models to assist R&D, but those aren't the ones sinking the truly huge sums into consumer-facing LLMs.

caseyy 4 days ago

It’s why we have warnings to investors saying past performance is not an indicator of future returns.

plorg 3 days ago

Maybe it means investors think it will happen, maybe it means investors simply think it will be profitable. It could even be investors seeking a market that has juice long after ZIRP dried up.

player1234 2 days ago

Luddite! Didn't you read in this very thread that someone's dad is using it for Excel macros? That is worth at least 500 trillion!

csharpminor 4 days ago

If you believe the AGI thesis, this is a stepping stone to unlocking fusion and other scientific advances.

BeFlatXIII 3 days ago

It'll be fun to point and laugh at everyone when the bubble bursts.

ActorNightly 3 days ago

The bubble won't burst. LLMs are very useful. There just hasn't been enough optimization for the general use case, which is kinda how all tech starts. Eventually someone will make a more optimized lower-parameter model that has decent accuracy and can run on lower-grade hardware, and that will be a pretty good assistant with manual wrappers around automation.

BeFlatXIII 1 day ago

LLMs themselves won't go away, but they may not last as a sustainable business.

leptons 3 days ago

This is the same LLM-splaining that practically every enthusiast does. It's always that "more hardware" or "more optimization" will solve all the problems and bring us into a new era of blah blah blah. But it won't. It won't solve LLM slop or hallucinations, which are inherent to how LLMs work, and which are why that bubble is already looking pretty unstable.

ActorNightly 3 days ago

The goal isn't to build perfect assistants with LLMs. The goal is to build something that is just good enough to be useful. It doesn't need to be even a full-fledged conversational LLM. It could just be an LM.

Lots of tasks don't require knowledge to be encoded into the model. For example, "summarize my emails" is a task that can be done with a fairly small model trained on just basic text.

There are also unexplored avenues. For example, if I had the hardware to do this, I would take an early version of GPT, start training it on additional data, and when the training run completes, diff the model with the original version and use that as the training set of another model. Basically, build a model on top of GPT that can automatically adjust parameter weights and encode new information into itself, giving it persistent memory.
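Concretely, the diff step alone is straightforward; it's the second half that's speculative. A sketch (checkpoint paths are made up, and it assumes both files hold PyTorch state dicts):

    # Diff a further-trained model against its base to isolate what the new data changed.
    import torch

    base = torch.load("gpt_base.pt")    # weights before the extra training run
    tuned = torch.load("gpt_tuned.pt")  # weights after it

    # Per-parameter delta: the "encoding" of the new data into weight space.
    delta = {name: tuned[name] - base[name] for name in base}
    torch.save(delta, "gpt_delta.pt")

    # The speculative part of the idea: train a second model to map new data
    # directly to such deltas, so memories could be written without a full run.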

Tenoke 4 days ago

Much, much less was being poured into AI until it started to have some returns.

leptons 3 days ago

What returns? OpenAI is still not profitable.

immibis 4 days ago

Once we unlock fusion, humanity starts down a millennia-long path to making the planet permanently uninhabitable (not just temporarily, like with climate change) by turning all the water into bitcoins.

leptons 3 days ago

"There are 1,335,000,000,000,000,000,000 liters of water in all the oceans."

"A large fusion power plant, generating 1,500 megawatts of electricity, would consume approximately 600 grams of tritium and 400 grams of deuterium each day. A 1,000 MW coal-fired power plant requires 2.7 million tonnes of coal per year, while a similar fusion plant would need only 250 kg of fuel per year (half deuterium, half tritium)."

You do the math. I'm personally not worried about running out of water due to fusion.
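For anyone who wants the math actually done, a back-of-envelope sketch (the deuterium abundance figure and plant count are my assumptions; the per-plant consumption is from the quote above):

    # How long would ocean deuterium last? (Tritium is bred from lithium,
    # so only the deuterium half of the fuel draws on the ocean.)
    OCEAN_LITERS = 1.335e21        # from the quote above
    D_GRAMS_PER_M3 = 33.0          # ~33 g deuterium per m^3 of seawater (commonly cited estimate)
    PLANT_D_GRAMS_PER_DAY = 400.0  # per 1,500 MW plant, from the quote above
    N_PLANTS = 10_000              # assume an aggressive global build-out

    total_d_grams = (OCEAN_LITERS / 1000) * D_GRAMS_PER_M3  # liters -> m^3
    years = total_d_grams / (PLANT_D_GRAMS_PER_DAY * 365 * N_PLANTS)
    print(f"{years:.1e} years")  # ~3e10 years, roughly twice the age of the universe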

orbifold 3 days ago

I don’t think that is true; there is only so much heavy water on the planet, and doing hydrogen fusion is not all that economical.

bookofjoe 3 days ago

Gemini 10.0

Wait for it

pjc50 4 days ago

If its proponents are taken at face value, AGI is a threat.

jug 3 days ago

AI as we know it (GPT-based LLMs) has peaked. OpenAI noticed this sometime in autumn last year, when the would-be GPT-5 was unimpressive despite its huge size. I still think GPT-4.5 was GPT-5, just rebranded to set expectations.

Google Gemini 2.5 Pro was remarkably good and I’m not sure how they did it. It’s like an elite athlete doing a jump forward despite harsh competition. They probably have excellent training methodology and data quality.

DeepSeek made huge inroads in affordability…

But even with those, intelligence itself is seeing diminishing returns while training costs are not.

So OpenAI _needs_ to diversify - somehow. If they rely on intelligence alone, then they’re toast. So they can’t.

ActorNightly 3 days ago

> I’m not sure how they did it

TPUs absolutely dumpster Nvidia cards, for the same reason that mining bitcoin is done with ASICs instead of cards.

So yeah, just more training, more data, and so on.

If Google weren't so cloud-focused, they could take over the AI chip market lead from Nvidia.

Topfi 3 days ago

I tentatively agree that LLMs have reached somewhat of a ceiling, and in any other industry diversifying would make sense at this stage. But as others pointed out, OAI and others have attached their valuation directly to their definition of achieving “AGI”. Any pivot away from that would be foolhardy and go against investors if AGI were realistic in the coming years (my opinion: it isn’t), so in turn this is clearly an admission that even sama doesn’t see AGI as possible in the near term.

sevensor 3 days ago

Adding social media to your thing is so 2018. Is the next big thing really just a warmed over version of the last big thing? Is sama just completely out of ideas to save his money-burner?

rchaud 3 days ago

It's an easy thing to slap on to a service with lots of users. Back in the day this would be called a 'message board'. "Social media" requires the use of iframes that can be embedded on 3rd party sites. OpenAI is a login-only environment so I can see them going for a Discord-type of platform rather than something that spreads to the open web.

kromem 4 days ago

Don't underestimate the importance of multi-user human/AI interactions.

Right now OAI's synthetic data pipeline is very heavily weighted to 1-on-1 conversations.

But models are being deployed into multi-user spaces that OAI doesn't have access to.

If you look at where their products are headed right now, this is very much the right move.

Expect it to be TikTok-style media formats.

ben_w 4 days ago

OpenAI's idea of "shortly" offering AGI is "thousands" of days; 2,000 days is just under 5.5 years.

westoncb 4 days ago

I think it might just be about distribution. Grok gets a lot of interesting opportunities for it over X; then throw in the way people reacted to the new 4o image-gen capabilities.

9rx 3 days ago

On the other hand, if you knew AGI was on the near horizon, you'd know that AGI will want to have friends to remain happy. You can give AGI a physical form so it can walk down to the bar – or you can, much more simply, give it an online social network.

saltysalt 4 days ago

Indeed! Ultimately, all online business models end at ad click revenue.

jacobsenscott 3 days ago

Everything devolves to ad sales. Do you know the minute details of their lives that people type into ChatGPT prompts? It's a gold mine for ads.

pjc50 4 days ago

Someone down below mentioned ads, and I think that might well be the route they're going to try: charging advertisers to influence the output of the AI.

As for whether it will work, I don't know how they're possibly going to get the "seed community" which will encourage others to join up. Maybe they're hoping that all the people making slop posts on other social networks want to cut out the middleman and have communities of people who actually enjoy that. As always, the sfw/nsfw censorship line will be an important definer, and I can't imagine them choosing NSFW.

NewUser76312 3 days ago

I think it was a strategic mistake for Sam et al to talk about "AGI".

You don't need some mythical AI to be a great company. You need great products, which OpenAI has, and they keep improving them.

Now they've hamstrung themselves into this AGI nonsense to try and entice investors further, I guess.

diggan 3 days ago

> Now they've hamstrung themselves into this AGI nonsense to try

AFAIK, they've been on the AGI hype train for a very long time, certainly since before they reached mainstream popularity. From their own blog (2020 - https://openai.com/index/organizational-update/), here is a mention of their "mission":

> We’re proud of these and other research breakthroughs by our team, all made as part of our mission to achieve general-purpose AI that is safe and reliable, and which benefits all humanity.

I'm not sure OpenAI trying to reach AGI is a "strategic mistake" as much as "the basis for the business" (which, to be fair, was a non-profit organization initially).

make3 4 days ago

this might just be a way to generate data

pyfon 4 days ago

It is a Threads. How is that doing?

parhamn 4 days ago

There could be a too-many-cooks problem in the AI research part of their work.

Also, I don't think Sama thinks like a typical large-org manager. OpenAI has enough money to run all sorts of products/labs that are startup-like. No reason to stand by waiting for the research work.

smt88 4 days ago

Altman doesn't think like a typical large org manager because he's never successfully built or run one. He failed upward into this role.

OpenAI doesn't have enough money to even run ChatGPT in perpetuity, so building internal moonshots is an irresponsible waste of investor funds.