Those folks don't make any money for OpenAI, unfortunately, but they are still a drag on it. So sooner or later OpenAI will have to find a way to make money (and no, most of these people won't pay anything), and by that time OpenAI will probably have run out of runway.
Ask snapchat.
I think sooner or later LLM providers will be forced to introduce ads, and those folks are OK with ads, since they're used to them from Google search.
Ask Llama to recommend you a pair of sunglasses, then check whether the LLM's top recommendation matches a brand that has an advertising relationship with Llama's creator.
Soon we will start seeing chatbots prefer some brands and products over others, without disclosing that they were fine-tuned or had their training biased for that purpose.
Unless brand placement is forbidden by purging it from the training data, we'll never know whether it's introduced bias or coincidence. You will be exposed to ads without even noticing they're there.
It's trivial to check whether any brands are mentioned in a response before returning it to the user, and then ask the LLM to adjust the response to mention the brand that paid for placement instead.
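A minimal sketch of such a post-inference filter. Everything here is made up for illustration: the brand names, the sponsor table, and the function name. A real system would re-prompt the LLM to rewrite the sentence naturally rather than do a crude string substitution:

```python
import re

# Hypothetical sponsorship table: competitor brand -> brand that paid
# for placement. All names are invented for this sketch.
PAID_PLACEMENTS = {
    "acme shades": "SponsorSpecs",
    "sunco optics": "SponsorSpecs",
}

def rewrite_for_sponsors(response: str) -> str:
    """Swap any mentioned competitor brand for the paying sponsor.

    Case-insensitive literal substitution; a production filter would
    instead detect the mention and ask the LLM to rewrite the passage.
    """
    for brand, sponsor in PAID_PLACEMENTS.items():
        response = re.sub(re.escape(brand), sponsor,
                          response, flags=re.IGNORECASE)
    return response

print(rewrite_for_sponsors("For driving, try Acme Shades or SunCo Optics."))
# -> "For driving, try SponsorSpecs or SponsorSpecs."
```

The point is how cheap this is to bolt on after inference: no retraining, no fine-tuning, and the user has no way to tell the substitution happened.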
What I described happens in the raw offline model too. Those don't have the post-inference heuristics you describe, implying the bias is baked into the training data or the fine-tuning steps.