sva_ 8 days ago

It is sort of funny to me how the sentiment about whoever seems to be leading in ML changes so frequently (in particular here on HN). A couple of months ago it felt like people were sure that Google had completely fucked it up for themselves (especially since they invented the transformer but didn't productize it themselves at first).

For a short while, Claude was the best thing since sliced cheese, then Deepseek was the shit, and now OpenAI seems to be falling out of favor. It kinda feels to me like people cast their judgement too early (perhaps again in this case). I guess these are the hype cycles...

Google is killing it right now, I agree. But the world might appear completely different in three months.

7
patrickhogan1 8 days ago

It’s not just sentiment though. It’s reality. Before the December 2024 timeframe, Google’s models were awful. Now, with 2.5, they are awesome.

There is no clear winner. The pace is fast.

h2zizzle 8 days ago

You could also be seeing waves of various astroturf campaigns.

joenot443 8 days ago

Personally, I don't really think there's a team at Google, nor at OpenAI, paying for "astroturfing" on sites like HN.

What are the rough steps through which you see this working? I see people talking about "astroturfing" all the time without much explanation of the mechanisms. So, roughly, there are employees paid solely to post on social media like HN, trying to move the needle in one direction or another?

light_triad 8 days ago

You sound like you're from the Google team ;)

Rough steps:

1. Monitor keywords

2. Jump in to sway conversation

3. Profit

I'm not saying this is happening. Purely hypothetical.

joenot443 7 days ago

Right, so who's doing the jumping in to sway conversation?

Full disclosure, I am a Xoogler, but if anything I think that makes my skepticism even more justified. If there were people there paid to post nice things about Google on HN and Twitter, then I'd love to apply for that team!

sandspar 8 days ago

There doesn't need to be a team. Individuals can act according to personal incentives and still create coordinated behavior. Look at flocks of birds. Each bird acts for itself; together they move in unison.

joenot443 7 days ago

Right, but isn't that just fans being fans?

Usually when I read "astroturfed" I assume there's some higher-level coordination involved. I think the flock of birds metaphor is probably a reasonable comparison to the behavior we see on social media all the time - members acting individually in their own self-interest in a manner that appears coordinated when you zoom out.

okdood64 7 days ago

Sure, but that's not what 'astroturf campaigns' implies.

sandspar 7 days ago

The top level comment was questioning why sentiment changes so frequently.

bigyabai 7 days ago

I think this is how you induce schizophrenia in yourself, not how you identify secret psyop campaigns organized by private sponsors.

h2zizzle 7 days ago

This is a great example of a strawman argument. I didn't say anything about teams, or "employees paid solely to post on social media". You injected those details, because you think that they make the idea of an astroturf campaign seem farfetched. But we know that such campaigns happen in other contexts, sponsored by entities with less money to throw around. Why not here? And why do we need to know the mechanics, if all we care about is whether or not it's happening (and maybe, if it's not self-evident, what the goal of such a campaign is)? We don't, really.

joenot443 7 days ago

Sure, so it sounds like we've got a different idea in mind for what this sort of work would look like. Totally understandable!

In your opinion then, what would a Google-run astroturfing campaign roughly look like? Sounds like this article is an example, right? I'm not asking for insider info, I'm just curious about your mental model on the basic mechanics.

Personally, the case "other entities with comparable resources do this, so Google probably does too" isn't super convincing to me. IMO, the null hypothesis "Google has lots of nerdy fans who'll happily post positively about it for free" is a lot more reasonable, but perhaps there's something I'm missing.

sva_ 8 days ago

Yeah... I wish there were laws that would require disclosure of such behavior. Might be tricky to implement though, and probably contradicts the interests of politicians.

light_triad 8 days ago

AI is changing fast! And to be fair to the model companies, they have been releasing products of (mostly) increasing quality.

It really depends on what your use case is. Averaged over the range of all possible use cases, that's been the narrative.

I tried Google's model for coding but it kept giving me wrong code. Currently Claude for coding and ChatGPT for more general questions is working for me. The more exotic your use case, the more hit or miss it's going to be.

ZeroTalent 8 days ago

Claude was only ever good for coding, in my opinion. It had nothing on OpenAI pro models for multimodal use.

int_19h 8 days ago

The sentiment changes this fast because SOTA changes this fast. E.g., Google models were objectively crappy compared to OpenAI's, but Gemini 2.5 really turned the tables (and I'm not talking about synthetic benchmarks here, but real-world coding).

The state of affairs with local models is similarly very much in flux, by the way.

uncomplexity_ 7 days ago

yes yes and it should be like this, this is healthy competition!

uncomplexity_ 7 days ago

and the consistent all time winner? the goddamn consumers!

googlehater 8 days ago

> A couple months ago it felt like people were sure that Google completely fucked it up for themselves

Hey it's me!