As an ex-OpenAI employee I agree with this. Most of the top ML talent at OpenAI has already left, either to do their own thing or to join other startups. A few are still there, but I doubt they'll be around in a year. The main successful product from OpenAI is the ChatGPT app, but there's a limit on how much you can charge people in subscription fees. I think people will soon expect this service to be provided for free, and ads will become the main option for making money from chatbots. From the whole time I was at OpenAI until now, GOOG has been the only individual stock I've held. Despite the threat to their search business, I think they'll bounce back because they have a lot of cards to play. OpenAI is an annoyance for Google, because they are willing to burn money to get users. Google can't burn money as easily: they already have billions of users, but they're also a public company and have to answer to investors. I doubt, though, that OpenAI's investors would sign up to give them more money to burn in a year. Google just needs to ease off on the red tape and make its innovations available to users as fast as it can. (And don't get me started on Sam Altman.)
> there's a limit on how much you can charge people in subscription fees. I think people will soon expect this service to be provided for free, and ads will become the main option for making money from chatbots.
So... I don't think this is certain. A surprising number of people pay for the ChatGPT app and/or its competitors. It's a >$10bn business already. It could maybe be a >$100bn business long term.
Meanwhile... making money from online ads isn't trivial. When the advertising model works well (e.g. search/AdWords), it is a money faucet. But... it can be very hard to get that faucet going. There's no guarantee that Google discovers a meaningful business model here... and the innovator's dilemma is strong.
Also, Google doesn't have a great history of getting new businesses up and running, regardless of tech chops and timing. Google was a pioneer in cloud computing... but Amazon and MSFT built better businesses.
At this point, everyone is assuming AI will resolve into a "winner-take-most" game that is all about network effects, scale, barriers to entry, and such. Maybe it isn't. Or... maybe LLMs themselves are commodities, like ISPs.
The actual business models, at this point, aren't even known.
> No guarantees that Google discovers a meaningful business model here...
I don't understand this sentiment at all. The business model writes itself (so to speak). This is the company that perfected the art of serving up micro-targeted ads to people at the moment they are seeking a solution to a problem. Just swap the search box for a chat bot.
For a while they'll keep the ads off to the side, but over time the ads will become harder and harder to distinguish from the chat bot content. One day, they'll disappear altogether and companies will pay to subtly bias the AI towards their products and services. It will be subtle--undetectable by end users--but easily quantified and monetized by Google.
Companies will also pay to integrate their products and services into Google's agents. When you ask Gemini for a ride, does Uber or Lyft send a car? (Trick question. Waymo does, of course.) When you ask for a pasta bowl, does Grubhub or Doordash fill the order?
When Gemini writes a boutique CRM for your vegan catering service, what service does it use for seamless biometric authentication, for payment processing, for SMS and email marketing? What payroll service does it suggest could be added on in a couple seconds of auto-generated code?
AI allows Google to continue its existing business model while opening up new, lucrative opportunities.
I don’t think it works. Search is the perfect place for ads for exactly the reasons you state: people have high intent.
But a majority of chatbot usage is not searching for the solution to a problem. And if the chatbot is serving ads when I'm using it for creative writing, reformatting text, having a Python function written, etc., I'm going to be annoyed and switch to a different product.
Search is all about information retrieval. AI is all about task accomplishment. I don't think ads work well in the latter, except perhaps in some subset, like when the task is really complicated or the AI can tell the user is failing to achieve it. But I don't think it's nearly as good a fit as search.
It doesn't have to be high intent all the time though. Chrome itself is "free" and isn't the actual technical thing serving me ads (the individual websites / ad platforms do that regardless of which browser I'm using), but it keeps me in the Google ecosystem and indirectly supports both data gathering (better ad targeting, profitable) and those actual ad services (sometimes subtly, sometimes in heavy-handed ways like via ad blocker restrictions). Similar arguments to be made with most of the free services like Calendar, Photos, Drive, etc - they drive some subscriptions (just like chatbots), but they're mostly supporting the ads indirectly.
Many of my Google searches aren't high intent, or any purchase intent at all ("how to spell ___" an embarrassing number of times), but it's profitable for Google as a whole to keep those pieces working for me so that the ads do their thing the rest of the time. There's no reason chatbots can't/won't eventually follow similar models. Whether that's enough to be profitable remains to be seen.
> Search is all about information retrieval. AI is all about task accomplishment.
Same outcome, different intermediate steps. I'm usually searching for information so that I can do something, build something, acquire something, achieve something. Sell me a product for the right price that accomplishes my end goal, and I'm a satisfied customer. How many ads for app builders / coding tools have you seen today? :)
I have shifted the majority of my product searches to ChatGPT. In the past my starting point would have been Amazon or Google. It's just so much easier to describe what I'm looking for and ask for recommendations that fit my parameters. If I could buy directly from ChatGPT, I probably would. It's just as high intent as search, if not more.
The main usage of chatgpt I’ve seen amongst non-programmers is a direct search replacement with tons of opportunity for ads.
People ask for recipes, how to fix things around the house, for trip itinerary ideas, etc.
> And if the chatbot is serving ads when I'm using it for creative writing, reformatting text, having a Python function written, etc., I'm going to be annoyed and switch to a different product.
You may not even notice it when AI does a product placement when it's done opportunistically in creative writing (see Hollywood). There also are plenty of high-intent assistant-type AI tasks.
Obviously, an LLM is in a perfect position to decide whether an add can be "injected" into the current conversation. If you're using it for creative writing it will be add free. But chances are you will also use it to solve real world problems where relevant adds can be injected via product or service suggestions.
"ad" is short for advertisement. That's the word you're looking for here.
Add is a verb meaning to combine 2 things together.
Re "going to be annoyed" there is definitely a spectrum starting at benign and culminating to the point of where you switch.
Photopea, for example, seems to be successful, and the ads displayed on the free tier lead me to think they feel at least these users are willing to see ads while they go about their workflow.
ChatGPT is effectively a functional search engine for a lot of people. Searching for the answer to "how do I braid my daughter's hair?" or "how do I bake a cake for a birthday party?" can be resolved via traditional search, by finding a video or blog post, or by simply reading the result from an LLM. LLMs have a lot more functionality overall, but ChatGPT and its competitors are absolutely an existential threat to Google, as (in my opinion) it's a superior service: it just gives you the best answer, rather than feeding you into whatever 10 blog services utilize Google ads the most this month. Right now ChatGPT doesn't even serve up ads, which is great. I'm almost certain they're selling my info though, as specific one-off stuff I ask ChatGPT about ends up as ads in Meta's social media apps the next day.
The intent will be obvious from the prompt and context. The AI will behave differently when called from a doc about the yearly sales strategy vs. a consumer search app.
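To make the idea concrete, here's a minimal sketch of intent-gated ad injection. Everything here is hypothetical: a real system would use the LLM itself (or a trained classifier) to score commercial intent, while this toy uses a keyword heuristic as a stand-in for that call.

```python
from typing import Optional

# Hypothetical hint list standing in for a learned intent classifier.
COMMERCIAL_HINTS = {"buy", "recommend", "best", "cheap", "fix", "book", "compare"}

def commercial_intent_score(prompt: str) -> float:
    """Crude stand-in for an LLM-based commercial-intent score in [0, 1]."""
    words = set(prompt.lower().split())
    return len(words & COMMERCIAL_HINTS) / len(COMMERCIAL_HINTS)

def respond(prompt: str, answer: str, sponsored: Optional[str] = None) -> str:
    """Append a labeled sponsored suggestion only on high-intent prompts."""
    if sponsored and commercial_intent_score(prompt) >= 2 / len(COMMERCIAL_HINTS):
        return f"{answer}\n\n[Sponsored] {sponsored}"
    return answer  # creative writing, reformatting, etc. stays ad-free
```

The point of the gate is exactly the distinction made above: a sales-strategy doc or shopping query crosses the threshold, while a poetry prompt doesn't.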
> chatbots ... provided for free ... ads
Just because the first LLM product people paid for was a chatbot does not mean that chat will be the dominant commercial use of AI.
And if the dominant use is agents that replace knowledge workers, then they'll cost closer to $2000 per month than $20 or free, and an ad-based business model won't work.
True. This is my point too.
The actual business models and revenue sources are still unknown. Consumer subscriptions happen to be the first major model. Ads still aren't one. Many other models could dwarf either of these.
It's very early to call the final score.
I still think it's pretty clear. Google doesn't have to get a new business off the ground, just keep improving the integration into Workspace, Gmail, Cloud, Android etc. I don't see users paying for ChatGPT and then copy/pasting into those other places even if the model is slightly better. Google will just slowly roll out premium plans that include access to AI features.
And as far as selling pickaxes go, GCP is in a far better position to serve the top of market than OpenAI. Some companies will wire together multiple point solutions but large enterprises will want a consolidated complete stack. GCP already offers you compute clusters and BigQuery and all the rest.
>Just swap the search box for a chat bot.
Perhaps... but perhaps not. A chatbot instead of a search box may not be how the future looks. Also... a chatbot prompt may not (probably won't) translate smoothly from a search query... in a way that keeps ad markets intact.
That "perfected art" of search advertising is highly optimized. You (probably) loose all of that in transition. Any new advertising products will be intrepid territory.
You could not have predicted in advance that search advertising would dwarf video (YouTube) advertising as a segment.
Meanwhile... they need to keep their market share at 90%.
> micro-targeted ads to people at the moment they are seeking a solution to a problem
Personal/anecdotal experience, but I've bought more stuff from Instagram ads than from Google ads, ever.
I imagine it would be easy for them to do something similar to the TV guides of yesteryear (the company that owned them used them primarily for self-promotion, with just enough competitor promotion to fly under the radar and still seem useful): the LLM gives good recommendations, sure, but 60-70% of those recommendations are the paid ones, or your own products in the case of a custom LLM.
LLM based advertising has amazing potential when you consider that you can train them to try to persuade people to buy the advertised products and services.
That seems like a recipe for class action false advertising lawsuits. The AI is extremely likely to make materially false claims, and if this text is an advertisement, whoever placed it is legally liable for that.
I don't think we should expect that risk to dissuade these companies. They will plow ahead, fight for years in court, then slightly change the product if forced to ¯\_(ツ)_/¯
Perhaps ironically, I know a guy who uses ChatGPT to write ad copy. The snake eats its own tail.
Is this someone working as a writer, who is just phoning it in (LLM-ing it in)?
Or is this someone who needs writing but can't do it themselves, and if they didn't have the LLM, they would pay a low-end human writer?
A friend of mine is an advertising/marketing guy at the director level (career ad guy), working for big brands like nationwide cell carriers, big box stores, etc., but mostly telecom stuff I think, and he uses it every day; he calls it "my second brain". LLMs are great at riffing on ideas and brainstorming sessions.
I don’t think “AI” as a market is “winner-takes-anything”. Seriously. AI is not a product, it’s a tool for building other products. The winners will be other businesses that use AI tooling to make better products. Does OpenAI really make sense as a chatbot company?
I agree the market for 10% better AI isn't that great, but the cost to get there is. An 80%-as-good model at 10% or even 5% of the cost will win every time in the current environment. Most businesses don't even have a clear use case for AI; they just use it because the competition is, and there is a FOMO effect.
> Most businesses don’t even have a clear use case for AI they just use it because the competition is and there is a FOMO effect
I consult in this space and 80-90% of what I see is chat bots and RAG.
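For readers unfamiliar with the pattern, RAG (retrieval-augmented generation) is simple at its core: retrieve the most relevant document for a query, then stuff it into the prompt sent to the model. A hypothetical sketch, with a toy bag-of-words scorer standing in for real embeddings and an f-string standing in for the LLM API call:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the model."""
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Most of the consulting work alluded to above is swapping the toy pieces here for a vector database and an LLM endpoint; the shape of the pipeline stays the same.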
That’s exactly what I’d expect. Honestly, AI chat bots seem unnecessarily risky, because you never really know what they might say on your behalf.
> Does OpenAI really make sense as a chatbot company?
If the chat bot remains useful and can execute on instructions, yes.
If we see a plateau in integrations or abilities, it’ll stagnate.
Very few are successful in this position. Zapier comes to mind, but it seems like a tiring business model to me.
AI is a product when you slap an API on top and host it for other businesses to figure out a use case.
In a gold rush, the folks that sell pickaxes make a reliable living.
> In a gold rush, the folks that sell pickaxes make a reliable living.
Not necessarily. Even the original gold rush pickaxe guy Sam Brannan went broke. https://en.wikipedia.org/wiki/Samuel_Brannan
Sam of the current gold rush is selling pickaxes at a loss, telling the investors they'll make it up in volume.
According to the linked Wikipedia article, he did not go broke from the gold rush. He went broke because he invested the pickaxe windfall in land, and when his wife divorced him, the judge ruled he had to pay her 50%, but since he was 100% in land he had to sell it. (The article is not clear why he couldn't deed her 50% of it, or only sell 50%. Maybe it happened during a bad market, he had a deadline, etc.)
So maybe if the AI pickaxe sellers get divorced it could lead to poor financial results, but I'm not sure his story is applicable otherwise.
Nvidia is selling GPUs at a loss? TSMC is going broke?
I'm pretty sure they are the pickaxe manufacturers in this case.
Basically every tech company likes to say they are selling pickaxes, but basically no VC funded company matches that model. To actually come out ahead selling pickaxes you had to pocket a profit on each one you sold.
If you sell your pickaxes at a loss to gain market share, or pour all of your revenue into rapid pickaxe store expansion, you’re going to be just as broke as prospectors when the boom goes bust.
I don't think there is anybody making a significant amount of money by selling tokens right now.
There are two perspectives on this. What you said is definitely a good one if you're a business planning to add AI to whatever you're selling. But personally, as a user, I want the opposite to happen - I want AI to be the product that takes all the current products and turns them into tools it can use.
I agree, I want a more intelligent voice assistant similar to Siri as a product, and all my apps to be add-ons the voice assistant could integrate with.
> AI is not a product, it’s a tool for building other products.
It's products like this (Wells Fargo): https://www.youtube.com/watch?v=Akmga7X9zyg
Great, Wells Fargo has an "agent"... and everyone else is talking about how to make their products available for agent-based AI.
People don't want 47 different agents to talk to; they want a single endpoint, a "personal assistant" in digital form, a virtual concierge...
And we can't have this, because the open web has been dead for more than a decade.
Why can't we have personal assistants because the open web has been dead?
I'll be happy with a personal assistant with access to my paid APIs.
Is Amazon a product or a place to sell other products? Does that make Amazon not a winner?
If there were 2 other Amazons all with similar products and the same ease of shipping would you care where you purchased? Amazon is simply the best UX for online ordering. If anything else matched it I’d shop platform agnostic.
> The winners will be other businesses that use AI tooling to make better products.
agree with you on this.
you already see that playing out with Meta and a LOT of companies in China.
>It's a >$10bn business already.
But not profitable yet.
The Opera browser was not profitable for something like 15 years, yet eventually became profitable enough to make an attractive acquisition target for external investors. And even if it hadn't been bought, it would still have eventually made a nice profit for the original investors.
You can't burn money in AI for 15 years on the off chance that it’ll pay off.
No, but you can let others burn money for 15 years and then come in and profit off their work while they go under.
I dunno, Nvidia worked on machine learning for 11+ years and it worked out great for them: https://research.nvidia.com/research-area/machine-learning-a...
Sure, but they were making tons of money elsewhere. OpenAI has no source of revenue anywhere big enough to cover its expenses, it's just burning investor cash at the moment.
The demand is there. People are already becoming addicted to this stuff.
I think the HN crowd widely overestimates how many people are even passingly familiar with the LLM landscape much less use any of the tools regularly.
Last month, Google, YouTube, Facebook, Instagram, and Twitter (very close to this one; it likely passes it this month) were the only sites with more visits than ChatGPT. Couple that with the 400M+ weekly active users (according to OpenAI in February) and I seriously doubt that.
https://x.com/Similarweb/status/1909544985629721070
https://www.reuters.com/technology/artificial-intelligence/o...
Weekly active users is a pretty strange metric. Essential tools and even social networking apps report DAUs, and they do that because essential things get used daily. How many times did you use Google in the past day? How many times did you visit (insert some social media site you prefer) in the last day? If you’re only using something once per week, it probably isn’t that important to you.
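The DAU/WAU/MAU distinction is easy to pin down in code. A hypothetical sketch (names and data my own) that computes active-user counts from timestamped usage events, plus the "stickiness" ratio (DAU/MAU) that consumer apps are often judged on:

```python
from datetime import date, timedelta

def active_users(events: list[tuple[str, date]], end: date, days: int) -> int:
    """Count distinct users with at least one event in the `days`-day window ending at `end`."""
    start = end - timedelta(days=days - 1)
    return len({user for user, day in events if start <= day <= end})

def stickiness(events: list[tuple[str, date]], end: date) -> float:
    """DAU/MAU: the fraction of monthly users who also showed up today."""
    dau = active_users(events, end, 1)
    mau = active_users(events, end, 30)
    return dau / mau if mau else 0.0
```

Reporting WAU instead of DAU flatters exactly the usage pattern described above: a user who drops in once a week counts the same as one who lives in the product.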
Mostly only social media/messaging sites report daily active users regularly. Everything else usually reports monthly active users at best.
>in the last day? If you’re only using something once per week, it probably isn’t that important to you.
No, something I use on a weekly basis (which is not necessarily just once a week) is pretty important to me and spinning it otherwise is bizarre.
Google is the frontend to the web for the vast majority of internet users, so yeah, it gets a lot of daily use. Social media sites are social media sites and are in a league of their own. I don't think I need to explain why they would get a disproportionate amount of daily users.
I am entirely confused by this. ChatGPT is absolutely unimportant to me. I don't use it for any serious work, I don't use it for search, I find its output to still be mostly a novelty. Even coding questions I mostly solve using StackExchange searches because I've been burned using it a couple of times in esoteric areas. In the few areas where I actually did want some solid LLM output, I used Claude. If ChatGPT disappeared off the Internet tomorrow, I would suffer not at all.
And yet I probably duck into ChatGPT at least once a month or more (I see a bunch of trivial uses in 2024) mostly as a novelty. Last week I used it a bunch because my wife wanted a logo for a new website. But I could have easily made that logo with another service. ChatGPT serves the same role to me as dozens of other replaceable Internet services that I probably duck into on a weekly basis (e.g., random finance websites, meme generators) but have no essential need for whatsoever. And if I did have an essential need for it, there are at least four well-funded competitors with all the same capabilities, and modestly weaker open weight models.
It is really your view that "any service you use at least once a week must be really important to you?" I bet if you sat down and looked at your web history, you'd find dozens that aren't.
(PS in the course of writing this post I was horrified to find out that I'd started a subscription to the damn thing in 2024 on a different Google account just to fool around with it, and forgot to cancel it, which I just did.)
>I am entirely confused by this. ChatGPT is absolutely unimportant to me. I don't use it for any serious work, I don't use it for search, I find its output to still be mostly a novelty. Even coding questions I mostly solve using StackExchange searches because I've been burned using it a couple of times in esoteric areas. In the few areas where I actually did want some solid LLM output, I used Claude. If ChatGPT disappeared off the Internet tomorrow, I would suffer not at all.
OK? That's fine. I don't think I ever claimed you were a WAU.
>And yet I probably duck into ChatGPT at least once a month or more (I see a bunch of trivial uses in 2024) mostly as a novelty.
So you are not a weekly active user then. Maybe not even a monthly active one.
>Last week I used it a bunch because my wife wanted a logo for a new website. But I could have easily made that logo with another service.
Maybe[1], but you didn't. And I doubt your wife needs a new logo every week so again not a weekly active user.
>ChatGPT serves the same role to me as dozens of other replaceable Internet services that I probably duck into on a weekly basis (e.g., random finance websites, meme generators)but have no essential need for whatsoever.
You visit the same exact meme generator or finance site every week? If so, then that site is pretty important to you. If not, then again you're not a weekly active user to it.
If you visit a (but not the same) meme generator every week then clearly creating memes is important to you because I've never visited one in my life.
>And if I did have an essential need for it, there are at least four well-funded competitors with all the same capabilities, and modestly weaker open weight models.
There are well funded alternatives to Google Search too but how many use anything else? Rarely does any valuable niche have no competition.
>It is really your view that "any service you use at least once a week must be really important to you?" I bet if you sat down and looked at your web history, you'd find dozens that aren't.
Yeah it is and so far, you've not actually said anything to indicate the contrary.
[1]ChatGPT had an image generation update recently that made it capable of doing things other services can't. Good chance you could not in fact do what you did (to the same satisfaction) elsewhere. But that's beside my point.
Sadly it’s become common for many mediocre employees in corporate environments to defer to ChatGPT, receive erroneous output and accept it as truth.
There are now commonly corporate goon squads whose job is to drive AI adoption without care for actual impact to results. Usage of AI is the KR.
I don’t understand why this is happening. Why is everyone buying into this hype so strongly?
It’s a bit like how DEI was the big thing for a couple years, and now everyone is abandoning it.
Do corporate leaders just constantly chase hype?
Yes corporate leaders do chase hype and they also believe in magic.
I think companies implement DEI initiatives for different reasons than hype though. Many are now abandoning DEI ostensibly out of fear due to the change in U.S. regime.
A case can be made for diversity, but the fact that all the big companies were adopting DEI at the same time made it hype.
I personally know an engineering manager who would scoff at MLK Day, but in 2020 starting screaming about how it wasn’t enough and we needed Juneteenth too.
AI isn’t hype at Nvidia, and DEI isn’t hype at Patagonia.
But tech industry-wide, they’re both hype.
I think many were rightly adopting DEI initiatives in an environment post me-too and post George Floyd. I don’t think it was driven by hype but more a reaction to the environment which heightened awareness of societal injustices. Awareness led to all sorts of things - conversation, compassion, attempts to do better in society and the workplace, and probably law suits. You can question how motivated corporations were to adopt DEI initiatives but I think it’d be wrong to say it was driven by hype.
I’m not sure companies are “abandoning DEI” so much as realizing that it’s often only a vocal minority that cares about DEI reports and scores and you don’t actually need a VP and diversity office to do some outreach and tally internal metrics.
The climate has changed. Some of that is economic at big tech companies. But it’s also a ramping down of a variety of things most employers probably didn’t support but kept their mouths shut about.
I think you may be underestimating it.
At this point in college, LLMs are everywhere. It's completely dominating history/english/mass comm fields with respect to writing papers.
Anecdotally all of my working non-tech friends use chatgpt daily.
It does anecdotally seem to be very common in education which presumably will carry over to professional workplaces over time. I see it a lot less in non-tech and even tech/adjacent adults today.
Aside from university, mentioned by sibling comments, there is major uptake of AI in journalism (summarizing long press statements, creating a first draft of the teaser, or even full articles...), and many people in my social groups use it regularly for having something explained or finding something... it's widespread.
My wife, the farthest you can get from the HN crowd, literally goes to tears when faced with Excel or producing a Word doc and she is a regular user of copilot and absolutely raves about it. Very unusual for her to take up new tech like this and put it to use but she uses it for everything now. Horse is out of the barn.
> My wife, the farthest you can get from the HN crowd...
She is literally married into the HN crowd.
I think the real AI breakthrough is how to monetize the high usage users.
My Dad is elderly and he enjoys writing. Uses Google Gemini a few times a week. I always warn him that it can hallucinate and he seems to get it.
It's changed his entire view of computing.
My father says "I feel like I hired an able assistant" regarding LLMs.
I think you're in fact wildly out of touch with the general populace and how much they use AI tools to make their work easier.
Well, they said it is a $10B industry. Not sure how they measure it, but it counts for something, I suppose.
For many, this stuff is mostly about Copilot being shoved down everyone's throats via MS Office's obnoxious ads and distractions, and I haven't yet heard of anyone liking it or perceiving it as an improvement. We are now years into this, so my bet is on the thing fading away slowly and becoming a taboo at Microsoft.
Many recent HN articles about how middle managers are already becoming addicted and forcing it on their peons. One was about the game dev industry in particular.
In my work I see semi-technical people (like basic python ability) wiring together some workflows and doing fairly interesting analytical things that do solve real problems. They are things that could have been done with regular code already but weren't worth the engineering investment.
In the "real world" I see people generating crummy movies and textbooks now. There is a certain type of person it definitely appeals to.
I'm sure this is a thing,
what I'm not so sure about is how much that generalises beyond the HN/tech-workers bubble (I don't think "people" in OP's comment is as broad and numerous as they think it is).
> I haven't yet heard of anyone liking it or perceiving it as an improvement.
Well I mean if you say it, then of course it MUST be true I’m sure.
As much as you may make fun of my anecdotal observation, your comment doesn't add anything of value, in particular to substantiate that "people [are] becoming addicted" to LLMs. I stand behind my comment that the vast majority of non-tech workers are exposed to them via Copilot in MS Office, and if you want to come to its rescue and pretend it's not a disaster, by all means :-)
For comparison, Uber is still not profitable after 15 years or so. Give it some time.
Uber had their first profitable year in 2023, and their profit margin was 22% in 2024.
https://finance.yahoo.com/news/uber-technologies-full-2024-e...
They are still FAR in the red. Technically have never turned a profit. Among other famous companies.
Uber is a profitable company both in 2023 and - to the tune of billions of dollars - in 2024. Please read their financials if you doubt this statement.
I'm not a finance person, but how is net income of $9.9B for FY 2024 not profit?
I assume they mean the profits in the past couple years are dwarfed by the losses that came before. Looking at the company's entire history, instead of a single FY.
Maybe? But that's not what anyone means when they describe a company as profitable or not.
I was guessing they meant something like the net profit only came from a weird tax thing or something.
Seems like the difference between a profitable investment and a profitable company.
They invested tens of billions of dollars in destroying the competition to be able to recently gain a return on that investment. One could either write off that previous spending or calculate it into the totality of "Uber". I don't know how Silicon Valley economics works but, presumably, a lot of that previous spending is now in the form of debt which must be serviced out of the current profits. Not that I'm stating that taking on debt is wrong or anything.
To the extent that their past spending was debt, interest on that debt should already be accounted for in calculating their net income.
But the way it usually works for Silicon Valley companies and other startups is that instead of taking on debt they raise money through selling equity. This is money that doesn't have to be paid back, but it means investors own a large portion of this now-profitable company.
I'm surprised. They pay the drivers a pittance. My ex drove for Uber for a while and it wasn't really worth it. Also, for the customers it's usually more expensive and slower than a normal taxi, at least here in Spain.
The original idea of ride-sharing made sense, but just like Airbnb it became an industry and got enshittified.
> They pay the drivers a pittance. My ex drove Uber for a while and it wasn't really worth it.
I keep hearing this online, but every time I’ve used an Uber recently it’s driven by someone who says they’ve been doing it for a very long time. Seems clear to me that it is worth it for some, but not worth it if you have other better job options or don’t need the marginal income.
> but not worth it if you have other better job options
Pretty much any service job, really...
When I had occasion to take a ride share in Phoenix I'd interrogate the driver about how much they were getting paid because I drove cabs for years and knew how much I would have gotten paid for the same trip.
Let's just say they were getting paid significantly less than I used to for the same work. If you calculated in the expenses of maintaining a car vs. leasing a cab I expect the difference is even greater.
There were a few times where I had just enough money to take public transportation down to get a cab and then snag a couple cash calls to be able to put gas in the car and eat. Then I could start working on paying off the lease and go home at the end of the day with some cash in my pocket -- there were times (not counting when the Super Bowl was in town) where I made my rent in a single day.
Maybe it differs per country. This was in Spain.
PS: I know that in Romania it's the opposite. Uber is kinda like a luxury taxi there. Normal taxis have standard rates, but these days it's hardly enough to cover rising fuel prices. So cars are ancient and in a bad state of repair, and drivers often trick foreigners. A colleague was even robbed by one. Uber is much more expensive but much safer (and still cheap by western standards).
My sense in London is that they’re pretty comparable. I’ll use whichever is more convenient.
They're usually a bit more expensive here than a taxi. It can be beneficial because sometimes they have deals, and I sometimes take one when I have to book it in advance or when I'm afraid there will be delays with a correspondingly high cost. Though Uber tends to hit me with congestion charges then too. At least with a taxi I can ask them to take a different route. The problem with the Uber drivers is that they don't know any of the street names here; they just follow the app's navigation. Taxi drivers tend to be much more aware, know the streets, and often come up with suggestions.
This also means that they sometimes fleece tourists, but when they figure out you know the city well they don't dare :) Often if they take one wrong turn I make a scene of frowning and looking out of the window, and then they quickly get back on track. Of course that's another use case where Uber would be better: if you don't know the city you're in.
> they sometimes fleece tourists
Yeah, thanks, no, I'm paying for an Uber. For all the complaints about Uber's business practices, it's easy to forget how bad taxis were. Regulatory capture is a clear failure mode of capitalism and the free market, and nowhere is that more evident than in the taxi cab industry.
Taxis aren't so bad in most countries. Here in Spain they are plentiful and fine. The same in most other countries I've been to. Only in the Netherlands are they horrible: they are ridiculously expensive because they all drive Mercedeses. As a result nobody uses them because they can't afford them. They're more like a limousine service than real taxis.
One time I told one of my Dutch friends I often take a cab to work here in Spain when I'm running late. He thought I was being pompous and showy. But here it's super normal.
Uber (or Cabify, which is a local clone and much more popular) here on the other hand is terrible if you don't book it in advance. When I'm standing here on the street it takes 7-10 minutes for them to arrive while I see several taxis passing every minute. So there is just no point. Probably a factor of being unpopular too, so the density is low.
I also prefer my money to end up with local people instead of a huge American corporation.
> Also, for the customers it's usually more expensive and slower than a normal taxi
Neither of those things are true where I live.
> at least here in Spain
Well…Spain is Spain. Not the rest of the world.
No but it's like this in most of Europe.
I think Uber in the US is a very different beast. But also because the outlook on life is so different there. I recently arranged with an American visitor to go somewhere, and we agreed to take public transport. When I got there he wanted to get an Uber :') Here in Europe public transport is a very different thing. In many cases the metro is even faster than getting a taxi.
PS: What bothers me the most about Uber and Cabify is that they "estimate" that it will take 2 minutes to get a car to you, and then when I try and book one I get a driver that's 10 minutes away :( :( Then I cancel the trip and the drivers are pissed off. I had one time where I got the same driver I cancelled on earlier and he complained a lot even though I cancelled within 10 seconds when I saw how far away he was.
Anyway I have very few good experiences with these services, I only use them to go to the airport now when I can book it in advance. And never Uber anymore, only Cabify.
> Anyway I have very few good experiences with these services
For me, and a majority where I live, this is applicable to taxis. Which were known for being dirty, late, expensive, prone to attempting to rip you off, if they turned up at all, etc.
Outside of surge pricing (when they are more expensive), Ubers are by and large either cheaper or the same price. The difference is that 99% of the time, if you request one, it's going to turn up. And when it does turn up, you know what you're going to pay, rather than have them take a wrong turn at some point "by mistake" and decide to charge you double. Or tell you they take card and then suddenly start claiming they can't, etc.
Sounds like Europe gets the bad end of the stick in this regard.
Yeah here in Spain the taxis are great. They're plentiful, cheap and efficient. The city is kinda a mess and the rideshare drivers have to drive a route mapped out by the app which often is not optimal. The real taxis know the city well. I think this is why the rideshares are unpopular and thus there's not many of them leading to the long waiting times. They're also spread between different providers, Uber is popular with the tourists only and the locals mostly use Cabify (a local company).
However, in Romania on the other hand, many taxi drivers are scammers or even criminals (one of my colleagues was robbed by one of them). It's also because the maximum taxi fares are too low to actually make a wage, so I can kinda understand, and I always tip really well (like double the fare or more, which is still nothing). Though if they try to scam me they don't get a cent, of course.
"A surprising number of people pay for the ChatGPT app and/or competitors."
I doubt the depiction implied by "surprising number". Marketing types and CEOs who would love 100% profit and only paying the electricity bill for an all-AI workforce would believe that. Most people, especially most technical people, would not believe that there is a "surprising number" of saps paying for so-called AI.
Google aren’t interested in <1bn USD businesses, so it’s hard for them to build anything new as it’s pretty guaranteed to be smaller than that at first. The business equivalent of the danger of a comfortable salaried job.
Google is very good at recognizing existential threats. iOS was that to them, and they built Android, including hardware, a novelty for them, even faster than the mobile incumbents at the time.
They're more than willing to expand their moat around AI even if that means multiple unprofitable businesses for years.
In tech, Android's acquisition by Google is ancient history. It has zero relevance to today's Google.
When was it, 2006? Almost 20 years ago, back when the company was young.
Mobile is still nearly everything. Google continues to develop and improve Android in substantial ways. Android is also counted on by numerous third-party OEMs.
This doesn’t strike me as zero relevance.
This thread was about new markets, having foresight, being able to build "new".
Android and mobile are none of these things.
* acquired Android
They acquired the Android company years before the iPhone existed.
It was supposed to be a BlackBerry/Blackjack killer at the time.
And then the iPhone was revealed and Google immediately changed Android’s direction to become a touch OS.
If you are a business customer of Google or pay attention to things like Cloud Next that just happened, it is very clear that Google is building heavily in this area. Your statement has already been disproven.
> a >$10bn business
'Business is the practice of making one's living or making money by producing or buying and selling products (such as goods and services). It is also "any activity or enterprise entered into for profit."' ¹
Until something makes a profit it's a charity or predatory monopoly-in-waiting.²
> Until something makes a profit it's a charity or predatory monopoly-in-waiting.
This is incorrect. There are millions of companies in the world that exist to accomplish things other than making a profit, and are also not charities.
> Until something makes a profit
The chip makers are making a bundle
What are you talking about?
No, it's not a charity or a monopoly-in-waiting.
99.9% of the time, it's an investment hoping to make a profit in the future. And we still call those businesses, even if they're losing money like most businesses do at first.
> Meanwhile... making money from online ads isn't trivial. When the advertising model works well (eg search/adwords), it is a money faucet. But... it can be very hard to get that money faucet going. No guarantees that Google discover a meaningful business model here... and the innovators' dilemma is strong.
It's funny how the vibe of HN and the real world's political spectrum have shifted together.
We can now discuss ads on HN while holding the number 1 and number 2 posts. Extremism still exists, but it is retreating.
Absolutely agree Microsoft is better there - maybe that's why Google hired someone from Microsoft for their AI stuff. A few people I think.
I also agree the business models aren't known. That's part of any hype cycle. I think those in the best position here are those with existing products and a user base to capitalize on the autocomplete-on-crack kind of feature. It will become so cheap to operate and so ubiquitous in the near future that it absolutely will be seen as a table-stakes feature. Yes, commodities.
> At this point, everyone is assuming AI will resolve to a "winner-take-most" game that is all about network effect, scale, barriers to entry and such
I don't understand why people believe this: by settling on "unstructured chat" as the API, it means the switching costs are essentially zero. The models may give different results, but as far as plugging a different one into your app, it's frictionless. I can switch everything to DeepSeek this afternoon.
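To make that concrete, here is a toy sketch of why switching is near-frictionless. Most major providers, DeepSeek included, expose an OpenAI-style chat-completions endpoint, so "switching models" reduces to changing a base URL and a model name in config (the URLs and model names below are the publicly documented ones as I understand them, but treat the specifics as assumptions):

```python
# Hypothetical illustration: two providers sharing the same
# OpenAI-style chat-completions request shape, so swapping models
# is a one-line config change, not a rewrite.

PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1",   "model": "gpt-4o"},
    "deepseek": {"base_url": "https://api.deepseek.com/v1", "model": "deepseek-chat"},
}

def build_request(provider: str, messages: list) -> dict:
    """Assemble an HTTP request payload for the chosen provider."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "json": {"model": cfg["model"], "messages": messages},
    }

msgs = [{"role": "user", "content": "Hello"}]
a = build_request("openai", msgs)
b = build_request("deepseek", msgs)

# Everything except the endpoint and model name is identical.
assert a["json"]["messages"] == b["json"]["messages"]
```

The point being: when the interface is "a list of chat messages in, text out", there is nothing provider-specific for an app to depend on.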
"The actual business models, at this point, aren't even known."
"AI" sounds like a great investment. Why waste time investing in businesses when one can invest in something that might become a business. CEOs and employees can accumulate personal weath without any need for the company to be become profitable and succeed.
The business model question applies to all of these companies, not just Google.
A lack of workable business model is probably good for Google (bad for the rest of the world) since it means AI has not done anything economically useful and Google's Search product remains a huge cash cow.
Contextual advertising is a known ad business model that commands higher rates and is an ideal fit for LLMs. Plus ChatGPT has a lot of volume. If there’s anyone who should be worried about pulling that off it’s Perplexity and every other small to mid-sized player.
Keep in mind you are talking to someone who worked at OpenAI and surely knows more about how the sausage is made and how the books look than you do.
>Meanwhile... making money from online ads isn't trivial.
Especially when post-tariff consumption is going to take a huge nosedive
> Google were pioneers to cloud computing
How so? Amazon were the first with S3 and EC2 including API driven control.
Maybe for public services, but Google did the "cattle not pets" thing with custom Frankensteined beige boxes starting really early on
Modern cloud computing is more than just having a scalable infrastructure of servers, it was a paradigm shift to having elastic demand, utility style pricing, being completely API driven, etc. Amazon were not only the first to market but pioneers in this space. Nothing came close at that time.
AWS was the first to sell it, but Google had something that could be called cloud computing (Borg) before that.
What do you think AWS decided to sell? Both companies had a significant interest in making infrastructure easy to create and scale.
AWS had a cleaner host-guest abstraction (the VM) that makes it easier to reason about security, and likely had a much bigger gap between their own usage peaks and troughs.
Yep. Google offered App Engine, which was good for fairly stateless, simple apps in an old, limited version of Python, like a photo gallery or email client. For anything else it was dismal. Amazon offered VMs. Useful stuff for a lot more platforms.
> I think soon people expect this service to be provided for free and ads would become the main option to make money out of chatbots.
I also think adtech corrupting AI as well is inevitable, but I dread for that future. Chatbots are much more personal than websites, and users are expected to give them deeply personal data. Their output containing ads would be far more effective at psychological manipulation than traditional ads are. It would also be far more profitable, so I'm sure that marketers are salivating at this opportunity, and adtech masterminds are hard at work to make this a reality already.
The repercussions of this will be much greater than we can imagine. I would love to be wrong, so I'm open to being convinced otherwise.
I agree with you. There is also a move toward "agents", where the AI can make decisions and take actions for you. It is very early days for that, but it looks like it might come sooner than I had thought. That opens up even more potential for influence on financial decisions (which is what adtech wants) - it could choose which things to buy for a given "need".
I have yet to understand this obsession with agents.
Is making decisions the hardest thing in life for so many people? Or is this instead a desire to do away with human capital — to "automate" a workforce?
Regardless, here is this wild new technology (LLMs) that seems to have just fallen out of the sky; we're continuously finding out all the seemingly-formerly-unimaginable things you can do with it; but somehow the collective have already foreseen its ultimate role.
As though the people pushing the ARPANET into the public realm were so certain that it would become the Encyclopedia Galactica!
If you reframe agents as (effectively) slave labor, the economic incentives driving this stampede become trivial to understand.
> Is making decisions the hardest thing in life for so many people?
Should I take this job or that one? Which college should I go to? Should I date this person or that one? Life has some really hard decisions you have to make, and that's just life. There are no wrong answers, but figuring out what to do and ruminating over it comes to everyone at some point in their lives. You can ask ChatGPT to ask you the right questions you need asked in order to figure out what you really want to do. I don't know how to put a price on that, but that's worth way more than $20/month.
Right, but before a product can do all of those things well, it will have to do one of those things well. And by "well" I mean reliably superhuman, not usually fine but sometimes embarrassingly poor.
People used to (and still do) pay fortune tellers to make decisions for them. Doesn’t mean they’re good ones.
fwiw I used it the other day to help me figure out where I stand on a particular issue, so it seems like it's already there.
> Is making decisions the hardest thing in life for so many people?
Take insurance, for example — do you actually enjoy shopping for it?
What if you could just share a few basic details, and an AI agent did all the research for you, then came back with the top 3 insurance plans that fit your needs, complete with the pros and cons?
Why wouldn’t that be a better way to choose?
There are already web sites that do this for products like insurance (example: [1]).
What I need is something to trawl through the garbage Amazon listings and offer me the product that actually has the specs that I searched for and is offered by a seller with more than 50 total sales. Maybe an AI agent can do that for me?
> There are already web sites that do this for products like insurance
You didn't get the point: instead of going to such a website to solve the insurance problem, and to 10 other websites to solve 10 other problems, just let one AI agent do it all for you.
> Or is this instead a desire to do away with human capital — to "automate" a workforce?
This is what I see motivating non-technical people to learn about agents. There’s lots of jobs that are essentially reading/memorizing complicated instructions and entering data accordingly.
> I have yet to understand this obsession with agents.
1. People who can afford personal assistants and staff in general gladly pay those people to do stuff for them. AI assistants promise to make this way of living accessible to the plebs.
2. People love being "the idea guy", but never having to do any of the (hard) work. And honestly, just the speedup to actually convert the myriad of ideas floating around in various heads to prototypes/MVPs is causing/will cause somewhat of a Cambrian explosion of such things.
A Cambrian explosion of half baked ideas, filled with hallucinations, unable to ever get past the first step. Sounds lovely.
Only a small percent of people will actually produce ideas that other people are interested in. For most people, AI tools for building things will enable them to construct their own personalized worlds. Imagine watching movies, except the movies can be generated for you on the fly. Sure, no one except you might care about a Matrix Moulin Rouge crossover. But you'll be able to have it just like that.
> A Cambrian explosion of half baked ideas,
Well yeah, that's how evolution works: it's an exploration of the search space and only the good stuff survives.
> filled with hallucinations,
The end products can be fully AI-free. In fact, I would expect most ideas that have been floating around to have nothing to do with AI. To be fair, that may change with it being the new hip thing. Even then, there are plenty of implementations that use AI where hallucinations are no problem at all (or even a feature), or where the issues with hallucinations are sufficiently mitigated.
> unable to ever get past the first step.
How so? There are already a bunch of functional things that were in Show HN that were produced with AI assistance. Again, most of the implemented ideas will suck, but some will be awesome and might change the world.
They were already not getting past the first step before AI came along. If AI helps them get to step two, and then three and four, that seems like a good thing, no?
Hey, we could save them all the busywork, and just wire all our money to corporations...
But financial nightmare scenarios aside, I'm more concerned about the influence from private and government agencies. Advertising is propaganda that seeks to separate us from our money, but other forms of propaganda that influences how we think and act has much deeper sociopolitical effects. The instability we see today is largely the result of psyops conducted over decades across all media outlets, but once it becomes possible to influence something as personal as a chatbot, the situation will get even more insane. It's unthinkable that we're merrily building that future without seemingly any precautions in mind.
You're assuming ads would be subtly worked into the answers. There's no reason it has to be done that way. You can also have a classic text ads system that's matching on the contents of the discussions, or which triggers only for clearly commercial queries "chatgpt I want to eat out tonight, recommend me somewhere", and which emits visually distinct ads. Most advertisers wouldn't want LLMs to make fake recommendations anyway, they want to control the way their ad appears and what ad copy is used.
There's lots of ways to do that which don't hurt trust. Over time Google lost it as they got addicted to reporting massive quarterly growth, but for many years they were able to mix in ads with search results without people being unhappy or distrusting organic results, while also having a very successful business model. Even today Google's biggest trust problem by far is with conservatives, and that's due to explicit censorship of the right: corruption for ideological, not commercial, reasons.
So there seems to be a lot of ways in which LLM companies can do this.
Main issue is that building an ad network is really hard. You need lots of inventory to make it worthwhile.
There are lots of ways that advertising could be tied to personal interests gleaned by having access to someone's ChatBot history. You wouldn't necessarily need to integrate advertisements into the ChatBot itself - just use it as a data gathering mechanism to learn more about the user so that you can sell that data and/or use it to serve targetted advertisements elsewhere.
I think a big commercial opportunity for ChatBots (as was originally intended for Siri, when Apple acquired it from SRI) is business referral fees - people ask for restaurant, hotel etc recommendations and/or bookings and providers pay for business generated this way.
Right, referral fees is pay-per-click advertising.
The obvious way to integrate advertising is for the LLM to have a tool to search an ad database and display the results. So if you do a commercial query the LLM goes off and searches for some relevant ads using everything it knows about you and the conversation, the ad search engine ranks and returns them, the LLM reads the ad copy and then picks a few before embedding them into the HTML with some special React tags. It can give its own opinion to push along people who are overwhelmed by choice. And then when the user clicks an ad the business pays for that click (referral fee).
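A toy sketch of that flow (every name, ad, and number here is invented for illustration; this is not how any real ad server works): the model calls an ad-search tool for a commercial query, the tool ranks candidates, and the model embeds the winners in its reply.

```python
# Hypothetical in-memory "ad database". A real system would rank on
# far richer signals; bid * quality_score stands in for that here.
ADS = [
    {"copy": "Rooftop jazz dinner", "bid": 2.0, "quality": 0.9, "topics": {"date", "dinner"}},
    {"copy": "Discount tire shop",  "bid": 5.0, "quality": 0.2, "topics": {"car"}},
    {"copy": "Sunset river cruise", "bid": 1.5, "quality": 0.8, "topics": {"date"}},
]

def search_ads(query_topics: set, top_k: int = 2) -> list:
    """Tool the LLM would call: return the best-ranked matching ads."""
    matches = [ad for ad in ADS if ad["topics"] & query_topics]
    matches.sort(key=lambda ad: ad["bid"] * ad["quality"], reverse=True)
    return matches[:top_k]

# For "Where can I take my wife for a date this weekend?" the model
# would extract topics, call the tool, and render the results as
# visually distinct ad units in its reply.
picked = search_ads({"date"})
```

Note how the high-bid but off-topic ad loses to relevant, well-targeted ones: that is the property the quality score is meant to enforce.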
> You're assuming ads would be subtly worked into the answers. There's no reason it has to be done that way.
I highly doubt advertisers will settle for a solution that's less profitable. That would be like settling for plain-text ads without profiling data and microtargeting. Google tried that in the "don't be evil" days, and look how that turned out.
Besides, astroturfing and influencer-driven campaigns are very popular. The modern playbook is to make advertising blend in with the content as much as possible, so that the victim is not aware that they're being advertised to. This is what the majority of ads on social media look like. The natural extension of this is for ads to be subtly embedded in chatbot output.
"You don't sound well, Dave. How about a nice slice of Astroturf pizza to cheer you up?"
And political propaganda can be even more subtle than that...
There's no reason why having an LLM be sly or misleading would be more profitable. Too many people try to make advertising a moral issue when it's not, and it sounds like you're falling into that trap.
An ideal answer for a query like "Where can I take my wife for a date this weekend?" would be something like,
> Here are some events I found ... <ad unit one> <ad unit two> <ad unit three>. Based on our prior conversations, sounds like the third might be the best fit, want me to book it for you?
To get that you need ads. If you ask ChatGPT such a question currently it'll either search the web (and thus see ads anyway) or it'll give boring generic text that's found in its training set. You really want to see images, prices, locations and so on for such a query not, "maybe she'd like the movies". And there are no good ranking signals for many kinds of commercial query: LLM training will give a long-since stale or hallucinated answer at worst, some semi-random answer at best, and algorithms like PageRank hardly work for most commercial queries.
HN has always been very naive about this topic but briefly: people like advertising done well and targeted ads are even better. One of Google's longest running experiments was a holdback where some small percentage of users never saw ads, and they used Google less than users who did. The ad-free search gave worse answers overall.
Wouldn't fewer searches indicate better answers? A search engine is productivity software. Productivity software is worse when it requires more user interaction.
Also you don't need ads to answer what to do, just knowledge of the events. Even a poor ranking algorithm is better than "how much someone paid for me to say this" as the ranking. That is possibly the very worst possible ranking.
Google knows how to avoid mistakes like not bucketing by session. Holdback users just did fewer unique search sessions overall, because whilst for most people Google was a great way to book vacations, hotel stays, to find games to buy and so on, for holdback users it was limited to informational research only. That's an important use case but probably over-represented amongst HN users, some kinds of people use search engines primarily to buy things.
How much a click is worth to a business is a very good ranking signal, albeit not the only one. Google ranks by bid but also quality score and many other factors. If users click your ad, then return to the results page and click something else, that hurts the advertiser's quality score and the amount of money needed to continue ranking goes up so such ads are pushed out of the results or only show up when there's less competition.
The reason auction bids work well as a ranking signal is that it rewards accurate targeting. The ad click is worth more to companies that are only showing ads to people who are likely to buy something. Spamming irrelevant ads is very bad for users. You can try to attack that problem indirectly by having some convoluted process to decide if an ad is relevant to a query, but the ground truth is "did the click lead to a purchase?" and the best way to assess that is to just let advertisers bid against each other in an auction. It also interacts well with general supply management - if users are being annoyed by too many irrelevant ads, you can just restrict slot supply and due to the auction the least relevant ads are automatically pushed out by market economics.
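The pricing mechanics behind that are worth spelling out. A rough sketch of a generalized second-price auction with quality scores (a simplification of what search ad systems are publicly described as doing; all bids and scores below are made up): ads are ranked by bid times quality, and each winner pays just enough to keep its slot, so a better-targeted ad pays less per click than a high-bid, low-quality one.

```python
def run_auction(ads, slots=2, min_price=0.1):
    """Rank by bid * quality; charge each winner the minimum price
    needed to outrank the next ad (generalized second price)."""
    ranked = sorted(ads, key=lambda ad: ad["bid"] * ad["quality"], reverse=True)
    winners = []
    for i, ad in enumerate(ranked[:slots]):
        if i + 1 < len(ranked):
            nxt = ranked[i + 1]
            # Price just clears the next ad's rank score.
            price = (nxt["bid"] * nxt["quality"]) / ad["quality"]
        else:
            price = min_price
        winners.append((ad["name"], round(price, 2)))
    return winners

ads = [
    {"name": "A", "bid": 4.0, "quality": 0.5},  # rank score 2.0
    {"name": "B", "bid": 2.0, "quality": 0.9},  # rank score 1.8
    {"name": "C", "bid": 3.0, "quality": 0.3},  # rank score 0.9
]
# A wins slot 1 but its low quality makes its click expensive (3.6);
# B's high quality discounts its price to 1.0 despite a similar score.
```

This is the "irrelevant ads get pushed out by market economics" effect: low quality raises your effective price until you either improve targeting or drop out.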
The issue is precisely that "did the click lead to a purchase" is not a good target. That's a target for the advertiser, and is adversarial for the user. "Did the click find the best deal for the user (considering the tradeoffs they care about)" is a good target for the user. The winner in an auction in a competitive market is pretty much guaranteed to be the worst match under that ranking.
This is obvious when looking at something extremely competitive like securities. Having your broker set you up with the counterparty that bid the most to be put in front of you is obviously not going to get you the best trade. Responding to ads for financial instruments is how you get scammed (e.g. shitcoins and pump-and-dumps).
You can't optimize for knowing better than the buyer themselves. If they bought, you have to assume they found the best deal for them considering all the tradeoffs they care about. And that if a business is willing to pay more for that click than another, it's more likely to lead to a sale and therefore was the best deal, not the worst.
Sure, there are many situations where users make mistakes and do some bad deal. But there always will be, that's not a solvable problem. Is it not the nirvana fallacy to describe the potential for suboptimal outcomes as an issue? Search engines and AI are great tools to help users avoid exactly that outcome.
Yeah me too and especially with Google as a leader because they corrupt everything.
I hope local models remain viable. I don't think ever expanding the size is the way forward anyway.
What if the models are somehow trained/tuned with Ads? Like businesses sponsor the training of some foundational models... Not the typical ads business model, but may be possible.
Absolutely. They could take large sums of money to insert ads into the training data. Not only that, they could also insert disparaging or erroneous information about other products.
When Gemini says "Apple products are unreliable and overpriced, buy a Pixel phone instead". Google can just shrug and say "It's just what it deduced, we don't know how it came to that conclusion. It's an LLM with its mysterious weights and parameters"
I expect that xAI is already doing something adjacent to this, though with propaganda rather than ads.
Yeah this would definitely be something that Google would do and it would be terrible for society.
Once again, our hope is for the Chinese to continue driving the open models. Because if it depends on American big companies the future will be one of dependency on closed AI models.
You can't be serious... You think models built by companies from an autocracy are somehow better? I suppose their biases and censorship are easier to spot, but I wouldn't trade one form of influence over another.
Besides, Meta is currently the leader in open-source/weight models. There's no reason that US companies can't continue to innovate in this space.
To play devil's advocate, I have a sense that a state LLM would be untrustworthy when the query is ideological but if it is ad-focused, a capitalist LLM may well corrupt every chat.
The thing is, Chinese LLMs aren't foreign to ad focus either, like those from Alibaba, Tencent or Bytedance. Now a North Korean model may be what you want.
Which is why we can't let Mark Zuckerberg co-opt the term open source. If we can't see the code and dataset on how you've aligned the model during training, I don't care that you're giving it away for free, it's not open source!
I’m not sure if it is the Chinese models themselves that will save us, or the effect they have of encouraging others to open source their models too.
But I think we have to get away from the thinking that “Chinese models” are somehow created by the Chinese state, and from an adversarial standpoint. There are models created by Chinese companies, just like American and European companies.
Ask DeepSeek what happened in Tiananmen Square in 1989 and get back to me about that "open" thing.
How about we ask college students in America on visas about their opinions on Palestine instead?
who cares, only ideologues care about this.
Yeah, I'm sure every Chinese person knows exactly what happened there.
It's not really about suppressing the knowledge, it's about suppressing people talking about it and making it a point in the media etc. The CCP knows how powerful organised people can be, this is how they came to power after all.
Caring about truth is indeed obsolete. I'm dropping out of this century.
> Caring about truth
I suggest reducing the tolerance towards the insistence that opinions are legitimate. Normally, that is done through active debate and rebuttal. The poison has been spread through echochambers and lack of direct strong replies.
In other terms: they let it happen, all the deliriousness of especially the past years was allowed to happen through silence, as if impotent shrugs...
(By the way: I am not talking about "reticence", which is the occasional context here: I am talking about deliriousness, which is much worse than circumventing discussion over history. The real current issue is that of "reinventing history".)
If possible watch Episode 1 of Season 7 of "Black Mirror."
>... ads would become the main option to make money out of chatbots.
What if people were the chatbots?
Right, but no one has been able to just download Google and run it locally. The tech comes with a built-in adblocker.
Do they want a Butlerian Jihad? Because that's how you get a Butlerian Jihad.
Just call it Skynet. Then at least we can think about pithy Arnold one-liners.
The ads angle is an interesting one since that's what motivates most things that Google and Meta do. Their LLMs' context window size has been growing, and while this might be the natural general progression with LLMs, for those 2 ads businesses there are pretty straight paths to using their LLMs for even more targeted ads. For example, with the recent Llama "herd" releases, the LLMs have surprisingly large context windows, and one can imagine why Meta might want that: for stuffing in as much as possible of the personal content that they already have from their users. Then their LLMs can generate ads in the tone and style of the users and emotionally manipulate them into clicking the link. Google's LLMs also have large context windows, and such a capability might be too tempting to ignore. Thinking this, there were moments that made me think I was being too cynical, but I don't think they'll leave that kind of money on the table - an opportunity to reduce human ad-writer headcount while improving click stats for higher profit.
EDIT: Some typo fixes, tho many remain, I'm sure :)
When LLMs are essentially trying to sell me something, the shit is over.
I like LLMs (over search engines) because they are not salespeople. They're one of the few things I actually "trust". (Which I know is something that many people fall on the other side of — but no, I actually trust them more than SEO'd web sites and ad-driven search engines.)
I suppose my local-LLM hobby is for just such a scenario. While it is a struggle, there is some joy in trying to host locally as powerful an open LLM model as your hardware will allow. And if the time comes when the models can no longer be trusted, pop back to the last reliable model on the local setup.
That's what I keep telling myself anyway.
LLMs have not earned your trust. Classic search has.
The only thing I really care about with classic web search is whether the resulting website is relevant to my needs. On this point I am satisfied nearly all the time. It’s easy to verify.
With LLMs I get a narrative. It is much harder to evaluate a narrative, and errors are more insidious. When I have carefully checked an LLM result, I usually discover errors.
Are you really looking closely at the results you get?
Your experience and mine are polar opposite. We use search differently is the only way I can reconcile that.
Yes. I am concerned about getting a correct answer. For this I want to see websites and evaluate them. This takes less energy than evaluating each sentence of an LLM response.
Often my searches take me to Wikipedia, Stack Overflow, or Reddit, anyway. But with LLMs I get a layer of hallucination on TOP of whatever misinformation is on the websites. Why put yourself through that?
I periodically ask ChatGPT about myself. This time I did get the best answer so far. Thus it is improving. It made two mistakes, but one of them comes directly from Wikipedia, so it's not a hallucination, although a better source of information was available than Wikipedia. As for the other one, it said that I made "contributions" to a process that I actually created.
The real threat to Google and Meta is that LLMs become so cheap that it's trivial for a company like Apple to make them available for free and include all the latest links to good products. No more search required if each M-chip-powered device can give you up-to-date recommendations for any product/service query.
That is my fantasy, actually.
Meta's models can't be used by companies above a certain threshold, so nope. Apple can wait it out and use a 'free model', but at that point it'll be like picking up an open-source database like Postgres: you won't get any competitive advantage.
> Google can't as easily burn money
I was actually surprised at Google's willingness to offer Gemini 2.5 Pro via AI Studio for free; having this was a significant contributor to my decision to cancel my OpenAI subscription.
Google offering Gemini 2.5 Pro for free, enough to ditch OpenAI, reminds me of an old tactic.
Microsoft gained control in the '90s by bundling Internet Explorer with Windows for free, undercutting Netscape’s browser. This leveraged Windows’ dominance to make Explorer the default choice, sidelining competitors and capturing the browser market. By 1998, Netscape’s share plummeted, and Microsoft controlled access to the web.
Free isn’t generous—it’s strategic. Google’s hooking you into their ecosystem, betting you’ll build on their tools and stay. It feels like a deal, but it’s a moat. They’re not selling the model; they’re buying your loyalty.
The joke's on them, because I don't have any loyalty to an LLM provider.
There's very close to zero switching costs, both on the consumer front and the API front; no real distinguishing features and no network effects; just whoever has the best model at this point in time.
I'm assuming Google's play here is to bleed its competitors of money and raise prices when they're gone. Building top-tier models is extremely expensive and will probably remain so.
Even companies that do it "on the cheap," like DeepSeek, pay tens of millions to train a single model, and total expenditures for infrastructure and salaries are estimated to surpass $1 billion. This market has an extremely high cost of entry.
So, I guess Google is applying the usual strategy here: undercut competition until it implodes and buy up any promising competitors that arise in the future. Given the current lack of market regulation in the US, this might work.
I feel like they're trying to increase switching costs. E.g. there was huge reluctance to adopt MCP, and each had their own tool framework, until it seemed too big to ignore and everyone was just building MCP tools, not OpenAI SDK tools.
You don't have loyalty, but one day there will be no one else to switch to. So, if you're a loyal user or not is a moot point.
History shows it's a self-defeating victory. If one provider were to "win" and stop innovating, they'll become ripe for disruption by the likes of Deepseek, and the second someone like that has a better model, I'll switch.
> If one provider were to "win" and stop innovating, they'll become ripe for disruption by the likes of Deepseek
Yes, but that can take decades; until then, Google can keep making money with substandard products and stop innovating.
Nothing lasts forever, not even empires. This doesn't mean that tech monopoly is any better than any other monopoly. They're all detrimental to society.
Eh, and if you're in the US the 'big guys' will have their favorite paid off politician put in a law that use of Chinese models is illegal or whatever.
Rent seeking behavior is always the end game.
The same was true for web browsers in 2002, yet MS controlled 95% of access to the web thanks to that bundling, with no other "good enough" competitors until Firefox came along a few years later and took 30% from them, giving Google an opening to take the whole game with Chrome a few years after that.
The strategy worked: Netscape is no more. Eventually Google did the same to Microsoft, though. I wonder if any lessons can be taken from the browser wars for how things will play out with AI models.
There is a network effect: more user interaction = more training data. I don't know how important it is, though.
Yep, this is why android phones are now pointing out their gemini features every moment they can. They want to turn their spying device into an AI spying device.
> undercutting Netscape’s browser
It almost sounds like you're saying that Netscape wasn't free, and I'm pretty sure it was always free, before and after Microsoft Explorer
> Netscape, in contrast, sells the consumer version of Navigator for a suggested price of $49. Users can download a free evaluation copy from the Internet, but it expires in 90 days and does not include technical support.
https://www.nytimes.com/1996/08/19/business/netscape-moves-t...
90% of Netscape users were free users and by late 1997, less than two years after the IPO and massive user growth, it was free to all because of MS's bundling threat. That didn't help. By 2002, MS owned 95% of access to the web. No one has ever reached even close to first mover Netscape or cheater bundled IE since, with the far superior non-profit Firefox managing almost 30% and Chrome from the biggest web player in history sitting "only" at about 65%.
Bundling a "good enough" product can do a lot, including taking you from near zero to overwhelmingly dominant in 5 years, as MS did.
yeah, it was free, as the evaluation copy did not really expire; just some features that nobody cared about did
From the terms of use:
To help with quality and improve our products, human reviewers may read, annotate, and process your API input and output. Google takes steps to protect your privacy as part of this process. This includes disconnecting this data from your Google Account, API key, and Cloud project before reviewers see or annotate it. Do not submit sensitive, confidential, or personal information to the Unpaid Services.
I pay for ChatGPT, Anthropic and Copilot. After using Gemini 2.5 Pro via AI Studio, I plan on canceling all other paid AI services. There is no point in keeping them.
I believe it. This is what typically happens. I would go to AWS re:Invent and just watch people in the audience either cheer or break down as new offerings were announced that washed away their business. It's very difficult to compete in a war of attrition with the likes of Google, Microsoft, and Amazon.
Not just small startups - even if you have ungodly amounts of funding.
Obviously the costs of AI will come down, and everyone will have more or less the same quality in their models. They may already be approaching a maximum (or the maximum required) here.
The bubble will burst and we'll start the next hype cycle. The winners, as always: the giants, and anyone who managed to sell to them.
I couldn't possibly see OpenAI as a winner in this space, not ever really. It has long since been apparent to me that Google would win this one. It would probably be clearer to others if their marketing and delivery of their AI products weren't such a sh-- show. Google is so incredibly uncoordinated here it's shocking... but they do have the resources, the right tech, the absolute position with their existing user base, and the right ideas. As soon as they get better organized here, it's game over.
> (And don't let me get started with Sam Altman.)
Please do.
It's a rabbit hole with many layers (levels?), but this is a good starting point and gateway to related information:
Key Facts from "The Secrets and Misdirection Behind Sam Altman's Firing from OpenAI": https://www.lesswrong.com/posts/25EgRNWcY6PM3fWZh/openai-12-...
Based on his interview with Joe Rogan, he has absolutely no imagination about what it means if humans actually manage to build general AI. Rogan basically ends up introducing him to some basic ideas about transhumanism.
To me, he is a finance-bro grifter who lucked into his current position. Without Ilya he would still be peddling WorldCoin.
> who lucked into his current position
Which can be said for most of the survivorship-biased "greats" we talk about. Right time, right place.
(Although to be fair — and we can think of the Two Steves, or Bill and Paul — there are often a number of people at the right time and right place — so somehow the few we still talk about knew to take advantage of that right time and right place.)
it's weird how nobodies will always tell themselves successful people got there by sheer blind luck
yet they can never seem to explain why those successful people all seem to have similar traits in terms of work ethic and intelligence
you'd think there would be a bunch of lazy slackers making it big in tech, but alas
I think you might have it backward. Luck here implies starting with exactly the same work ethic and abilities as millions of other people who all hope to one day see their numbers come up in the lottery of limited opportunities. It's not to say that successful people start off as lazy slackers, as you say, but if you were to observe one such lazy slacker who's made a half-assed effort at building something that even just accidentally turned out to be a success, you might see that rare modicum of validation fuel them enough that the motivation transforms them into a workhorse. Often, when the biography is written, lines are slightly redrawn to project the post-success persona back a few years pre-success. A completely different recounting of history thus ensues, usually one where there was blood, sweat, and fire involved in getting to that first ticket.
so you've moved the goalposts even further now and speculate that successful people started out as slackers, got lucky, and that luck made them work harder
as an Asian, it amazes me how far Americans and Europeans will go to avoid a hard days work
Coming up next: dumb and dumber schools Noam Chomsky on modern philosophy...
There are weirdly many people who touch on the work around transhumanism but have never heard the word before. There's a video of geohot basically talking about that idea, then someone from the audience mentions the name... and geohot is confused. I'm honestly surprised.
The transhumanists tended to be philosopher types, the name coming from this kind of idea of humanism:
>Humanism is a philosophical stance that emphasizes the individual and social potential, and agency of human beings, whom it considers the starting point for serious moral and philosophical inquiry. (wikipedia)
Whereas the other lot are often engineers / compsci / business people building stuff.
yeah because you're a hacker news poster lol
same audience who think Jobs is a grifter and Woz is the true reason for Apple's success
I would like to know how he manages to appear, in every single photo I see of him, to look slightly but unmistakably... moist, or at least sweaty.
People keep assassinating him, and clones always look a bit moist the first day out of the pod.
Are the assassinations because of something we already know about? some new advance that is still under wraps? or is it time travelers with knowledge about what he will do if left unchecked?
Peter Thiel is like that too. Hyperhidrosis is a common side effect of some drugs.
It’s a side effect of Ibogaine, the same drug that it was rumored Ed Muskie was on in the ‘72 campaign.
> And don't let me get started with Sam Altman.
would love to hear more about this.
I made a post asking more about Sam Altman last year after hearing a Paul Graham quote calling him the 'Michael Jordan of listening'.
What cards has Google played over the past three years such that you're willing to trust them to play the "cards at hand" you allege they have? I can think of several things they did right, but I'm curious to hear which of them are more significant than others, from someone I think has better judgement than I do.
I get your perspective, but what we're seeing looks more like complex systems theory, emergent behavior, optimization, new winners. If models become commoditized, the real value shifts to last-mile delivery: mobile, desktop, and server integration across regions like China, Korea, the U.S., and Europe.
This is where differentiated UX and speed matter. It's also a classic Innovator's Dilemma situation: Google is slower to move, while new players can take risks and redefine the game. It's not just about burning money or model size; it's about who delivers value where it actually gets used.
I also think the influx of new scientists and engineers into AI raises the odds of shifting its economics: whether through new hardware (TPUs/GPUs) and/or more efficient methods.
> I think soon people expect this service to be provided for free
I have been using the free version for the past year or so and it’s totally serviceable for the odd question or script. The kids get three free fun images, which is great because that’s about as much as I want them to do.
It's interesting to hear your perspective as a former OpenAI employee. The point about the sustainability of subscription fees for chatbots is definitely something worth considering. Many developers mention the challenge of balancing user expectations for free services with the costs of maintaining sophisticated AI models. I think the ad-supported model might become more prevalent, but it also comes with its own set of challenges regarding user privacy and experience. And I agree that Google's situation is complex – they have the resources, but also the expectations that come with being a public company.
> "[Google is] a public company and have to answer to investors"
As is an increasing trend, they're a "public" company, like Facebook. They have tiered shares with Larry Page and Sergey Brin owning the majority of the voting power by themselves. GOOG shares in particular are class C and have no voting power whatsoever.
Microsoft Copilot (which I equate with OpenAI ChatGPT, because MS basically owns OpenAI) already shows ads in its chat mode. It's just a matter of time. Netflix, music streamers, individual podcasters, YouTubers, TV manufacturers: they all converge on an ad-based business model.
People consistently like free stuff more than they dislike ads.
Another instantiation: people like cheap goods more than they dislike buying foreign made goods
> OpenAI is an annoyance for Google
Remember, Google is the same company that could not deliver a simple chat app.
OpenAI has the potential to become a bigger ad company and make more money.
Google has so many channels for ad delivery. ChatGPT is only competing against Google Search, which is arguably the biggest. But don't forget: Google also has YouTube, Google Maps, Google Play, Google TV, and that's before you consider Google's ad network (the thing publishers embed to get ads from Google's network).
So nope, ChatGPT is not even in the same league as Google. You could argue Meta has similar reach (facebook.com, Instagram), but that's just two channels.
The same argument could be made for social networks and chat apps, yet Google could not succeed at either of them.
Do you think Sam will follow through with this?
> Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
> And don't let me get started with Sam Altman.
Why not? That's one of the reasons I visit HN instead of some random forum after all.
> The main successful product from OpenAI is the ChatGPT app, but there's a limit on how much you can charge people for subscription fees
other significant revenue surfaces:
- providing LLM APIs to enterprises
- the chatbot ads market: once people switch from Google search, there will be a ~$200B ads market at stake for the winner
OpenAI doesn't always have the best models (especially for programming), but they've consistently had the best product/user experience. And even on the model front, other companies seem to play catch-up more than anything most of the time.
The best user experience for what?
The most practical use case for generative AI today is coding assistants, and if you look at that market, the best offerings are third-party IDEs that build on top of models they don't own. E.g. Cursor + Gemini 2.5.
On the model front, it used to be the case that other companies were playing catch-up with OpenAI. I was one of the people consistently pointing out that "better than GPT o1" on a bunch of benchmarks does not reliably translate to actual improvements when you try to use them. But this is no longer the case, either - Gemini 2.5 is really that good, and Claude is also beating them in some real world scenarios.
>The best user experience for what?
The app has more features than anyone else, often implemented in the smoothest/best way. Image input (which the Gemini site still sucks at, even though the model itself is very capable), Voice mode (which used to be much worse in Gemini until recently), Advanced Voice mode (no one else has really implemented this yet; Gemini recently enabled native audio in but not out), live video, image gen, Deep Research, etc. were all things OpenAI did first and did well. Video input is only just starting to roll out to Gemini Live but has been a Plus subscription staple for months now.
>The most practical use case for generative AI today is coding assistants
ChatGPT gets 500M+ weekly active users and was the 6th most visited site in the world last month. I doubt coding assistance is ChatGPT's most frequent use case. And Google underperformed in coding until 2.5 Pro.
>On the model front, it used to be the case that other companies were playing catch-up with OpenAI. I was one of the people consistently pointing out that "better than GPT o1" on a bunch of benchmarks does not reliably translate to actual improvements when you try to use them. But this is no longer the case, either - Gemini 2.5 is really that good, and Claude is also beating them in some real world scenarios.
No, that's still the case. Playing catch-up doesn't mean the competitor never catches up or even briefly supersedes. It means OpenAI will in short order release something that beats everyone else or introduces some new thing that everyone tries to beat. Image input, 'omni' modality, reasoning, etc. are all things OpenAI brought to the table first. Sure, 2.5 Pro is great, but it doesn't look like it will beat o3, which looks to be released in a matter of weeks.
so please enlighten us why OpenAI is doing so much better than Anthropic
At this point it's pretty much entirely the first mover advantage.
I don't think you understand what first mover advantage is
In a world of zero switching costs, there is no such thing as first mover advantage
Especially when several companies (like AI21 Labs and Cohere) appeared well before Anthropic and aren't anywhere close to OpenAI.
People left to do what kind of startups? I can't think of any business idea that won't get outdated or overrun in months.
AI startups were easy cash grabs until very recently. But I think the wave is settling down: doing a real AI startup turned out to be VERY hard, and the rest of the "startups" are mostly just wrappers around OpenAI/Anthropic APIs.
I think paying to bias AI answers in your favor is much more attractive than plain ads.
I don't know what you did there, but clearly being ex-OpenAI isn't the intellectual or product flex you think it is: I and every other smart person I know still use ChatGPT (paid), because even now it's the best at what it does, and we keep trying Google and Claude and keep coming back.
They got, and as of now continue to get, things right for the most part. If you still aren't seeing it, maybe you should introspect on what you're missing.
I don't know; your experience doesn't match mine.
NotebookLM by Google is in a class of its own for the use case of "provide documents and ask questions about them" for personal use. ChatGPT and Claude are nowhere near. ChatGPT uses RAG, so it "understands" less about the topic and sometimes hallucinates.
When it comes to coding, Claude 3.5/3.7 embedded in Cursor or standalone kept giving better results in real-world coding, and even there Gemini 2.5 blew it away in my experience.
Antirez, creator of hping and Redis among many other things, releases a video on AI pretty much every day (albeit in Italian), and in his tests where Gemini reviews his PRs for Redis, it is by far the best of all the models available.
Gemini with coding seems to be a bit of a mixed bag.
The article claims Gemini is acing the Aider polyglot benchmark. At the moment this is the only benchmark that really matters to me, because Aider is actually a useful tool and performance there translates directly into real-world impact (although Claude Code is even better). If you look closely, Gemini is in fact at the top only in the "percent correct" category, not "percent correct using the right edit format". Cost is marked as ? because it's not entirely available yet (I think?). Not emitting the correct edit format is pretty useless, because it means the changes won't apply and the tool has to try again.
Claude in contrast almost never makes a mistake with emitting the right format. It's at 97%+ in the benchmark, in practice it's ~100% in my experience. This tracks: Claude is really good at following instructions. Gemini is about ~90%. This makes a big difference to how frustrating a tool is to use in practice.
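To make the format-compliance point concrete, here is a minimal sketch of why a tool like Aider has to reject malformed output. It assumes a simplified SEARCH/REPLACE convention (Aider's real edit formats differ in detail); the point is that the block is applied mechanically, so anything that doesn't parse, or doesn't match the file, forces a retry.

```python
def apply_edit(source: str, edit: str) -> str:
    """Apply a simplified SEARCH/REPLACE edit block to source text.

    Expected block layout:
        <<<<<<< SEARCH
        old text
        =======
        new text
        >>>>>>> REPLACE

    Raises ValueError if the block is malformed or the search text
    is not found -- the tool would then have to re-prompt the model.
    """
    lines = edit.splitlines()
    try:
        s = lines.index("<<<<<<< SEARCH")
        m = lines.index("=======")
        e = lines.index(">>>>>>> REPLACE")
    except ValueError:
        raise ValueError("malformed edit block: missing marker")
    if not (s < m < e):
        raise ValueError("malformed edit block: markers out of order")
    search = "\n".join(lines[s + 1:m])
    replace = "\n".join(lines[m + 1:e])
    if search not in source:
        raise ValueError("search text not found in source")
    # Replace only the first occurrence, as a real edit tool would.
    return source.replace(search, replace, 1)
```

A model that gets the markers wrong even a few percent of the time hits the `ValueError` path that often, which is exactly the ~90% vs ~97%+ gap described above.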
They might get that fixed, but my experience has been that Google's models are consistently much more likely to refuse instructions for dumb reasons. Google is the company with by far the biggest purity spiral problem and it does show up in their output even when doing apparently ordinary tasks.
I'm also concerned by this event: https://news.sky.com/story/googles-ai-chatbot-gemini-tells-u...
Given how obsessed Google claimed to be with AI safety, I expected an SRE-style postmortem after that, and there was bupkis. An AI that can suffer a psychotic break out of nowhere like that is one I wouldn't trust unless it's behind a very strong sandbox and being supervised very closely, but none of today's AI tools offer much in the way of sandboxing.
I started a Gemini Advanced trial, and the first thing I did was paste some terminal text from an EC2 instance where the Python version was 3.10. I typed "upgrade to 3.12 pls" and it gave me back garbage. Went back to ChatGPT and it did exactly what I expected.
Time for my next round of Evals then. I had a 40 PR coding streak last weekend with mostly o3-mini-pro, will test the latest 2.5 now.
PR = pull request? So every bit of garbage from the LLM, over and over, resulted in an individual pull request? Why not just do one when your branch is finally right?
A pull request in my workplace is an actual feature/enhancement/bug-fix. That many PRs means I shipped that many features or enhancements.
I suppose you don't know what a PR is because you likely still work in an environment without modern version control, probably just now migrating your rants from vim vs emacs to crapping on vibe coding.
In my experience, AI today is an intelligence multiplier. A lot of folks just need to look back at the zero they keep multiplying and getting zero back to understand why they don't get the hype.
I would assume they don't like that style, like if they needed to see a specific diff and make changes or remove a commit outright.
I use a service where users can choose any frontier model, and OpenAI models haven't been the most used model for over half a year - it was sonnet until gemini 2.5 pro came out, recently.
Not sure whether you have your perspective because you're so invested in OpenAI, but the general consensus is that Gemini 2.5 Pro is the top model at the moment, including in all the AI reviews; OpenAI is barely mentioned when comparing models. o4 will be interesting, but currently? You are not using the best models. Best to read the room.
Don't think it's a flex, I think it's useful context for the rest of their comment.
> I and every other smart person I know still use ChatGPT (paid) because even now it's the best
My smart friends use a mixture of models, including chatgpt, claude, gemini, grok. Maybe different people, it's ok, but I really don't think chatgpt is head and shoulders above the others.
> I and every other smart person I know still use ChatGPT (paid)
Not at all my experience, but maybe I'm not part of a smart group :)
> because even now it's the best at what it does
Actually I don't see a difference with Mistral or DeepSeek.