firejake308 1 day ago

I'm confused why they are working on their own frontier models if they are going to be bought by OpenAI anyway. I guess this is something they were working on before the announcement?

6
allenleein 1 day ago

It seems OpenAI acquired Windsurf but is letting it operate independently, keeping its own brand and developing its own coding models. That way, if Windsurf runs into technical problems, the backlash lands on Windsurf—not OpenAI. It’s a smart way to innovate while keeping the main brand safe.

sunshinekitty 16 hours ago

This is an incredibly premature statement to make. The acquisition announcement is days old.

riffraff 1 day ago

But doesn't this mean they have twice the training costs? I was under the impression that training was still the biggest item on these companies' balance sheets.

kcorbitt 1 day ago

It's very unlikely that they're doing their own pre-training, which is the longest and most expensive part of creating a frontier model (if they were, they'd likely brag about it).

Most likely they built this by post-training an open model that is already strong on coding, like Qwen 2.5.

rfoo 22 hours ago

Mid-/post-training does not cost that much, except maybe large-scale RL, and even that is more of an infra problem. If anything, the cost is mostly in running various experiments (i.e. the process of doing research).

It is very puzzling why "wrapper" companies don't (and religiously say they won't ever) do something on this front. The only barrier is talent.

anshumankmr 22 hours ago

You might be underestimating the barrier to hiring the really smart people. OpenAI/Google etc. would be hiring and poaching people like crazy, offering cushy bonuses and TCs that would blow your mind (like, say, Noam Brown at OpenAI). And some of the more ambitious ones would start their own ventures (like, say, Ilya).

That being said, I am sure a lot of the so-called wrapper companies are paying insanely well too, but competing with FAANGMULA might be trickier for them.

whywhywhywhy 20 hours ago

Any half-decent and methodical software engineer can fine-tune/repurpose a model if you have the data and the money to burn on compute and experiment runs, which they do.

anshumankmr 19 hours ago

Fine-tuning/distilling etc. is fine. I was speaking to the original commenter's question about research, which is where things are trickier. Fine-tuning is something even I have managed, and Unsloth has removed many of the barriers to training some of the more commonly used open-source models.
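To make the "barriers have come down" point concrete, here is a toy NumPy sketch of the LoRA trick that adapter-based fine-tuning tools like Unsloth build on (this is not Unsloth's actual API, and the sizes are illustrative): the pretrained weight stays frozen, and only a small low-rank update receives gradients, so the trainable parameter count collapses.

```python
import numpy as np

# Toy illustration of LoRA (Low-Rank Adaptation):
# the pretrained weight W is frozen; only the low-rank
# matrices A and B are trained.
rng = np.random.default_rng(0)
d, r = 1024, 8                       # hidden size and adapter rank (illustrative)
W = rng.standard_normal((d, d))      # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01
B = np.zeros((r, d))                 # B starts at zero: the adapter is a no-op at init

def adapted_layer(x):
    """Frozen base layer plus trainable low-rank delta."""
    return x @ W + x @ A @ B

x = rng.standard_normal((1, d))
assert np.allclose(adapted_layer(x), x @ W)   # identical to the base model at init

full = d * d                 # params touched by full fine-tuning of this layer
lora = d * r + r * d         # params LoRA actually trains
print(f"trainable fraction: {lora / full:.4%}")
```

With these sizes the adapter trains about 1.6% of the layer's weights, which is the main reason post-training a strong open model is so much cheaper than pre-training one.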

brookst 19 hours ago

They can absolutely do it, but they will get poorer results than someone who really understands LLMs. There is still a huge amount of taste and art in the sourcing and curation of data for fine tuning.

NitpickLawyer 21 hours ago

FAANGMULA ... Microsoft, Uber?, L??, Anthropic? Who's the L?

riffraff 16 hours ago

A is Airbnb, afair.

Archonical 21 hours ago

Lyft.

ActionHank 17 hours ago

Windsurf is a hedge against MS + VSCode and GH + copilot.

OAI is trying frantically to build a moat without doing any digging.

OtherShrezzing 21 hours ago

This is effectively how Microsoft is treating OpenAI.

jstummbillig 20 hours ago

Why would OpenAI not let smart people work on models? That seems to be what they do. The point is: They are no longer "their own" models. They are now OpenAI models. If they suck, if they are redundant, if there is no idea there that makes sense, that effort will not continue indefinitely.

seunosewa 22 hours ago

They were working on the model before the acquisition. It makes sense to test it and see how it does instead of throwing the work away. Their data will probably be used to improve GPT-4.1, o4-mini-high, and other OpenAI coding models.

kristopolous 1 day ago

Must have been. These things take months.

dyl000 1 day ago

OpenAI models have an issue where they are pretty good at everything but not incredible at anything. They're too well rounded.

For coding you use Anthropic or Google models; I haven't found anyone who swears by OpenAI models for coding. Their reasoning models are either too expensive or hallucinate massively to the point of being useless. I would assume the GPT-4.1 family will be popular for SWEs.

Having a smaller-scope model (agentic coding only) allows for much cheaper inference and lets Windsurf build its own moat (so far agentic IDEs haven't had a moat).

jjani 23 hours ago

> OpenAI models have an issue where they are pretty good at everything but not incredible at anything. They're too well rounded.

This suggests OpenAI models do have tasks they're better at than the "less rounded" competition, which in turn has tasks it's weaker in. Could you name a single such task (except for image generation, which is an entirely different use case) that OpenAI models are better at than Gemini 2.5 and Claude 3.7 without costing at least 5x as much?

anshumankmr 1 day ago

Perhaps also to get more money, if they believed their model to be good and had amassed some good training data OpenAI can leverage, apart from the user base.