LLMs should be legally required to act in the interest of their users (not their creators).
This is a standard that already applies to advisory professions such as medical professionals, lawyers, and financial advisors.
I haven't seen this discussed much by regulators, but I have made a couple of submissions here and there expressing this opinion.
AIs will get better, and they will become more trusted. They cannot be allowed to sell the answer to the question "Who should I vote for?" to the highest bidder.
Who decides what's in the interest of the user?
The same as for the human professions: a set of agreed-upon guidelines on acting in service of the client, and enforcement of penalties against identifiable instances of prioritizing another party's interests over the client's.
There will always be grey areas, just as there are when human responsibilities are defined, and there will be those who skirt the edges. But the matters of most concern are quite easily identifiable.
> LLMs should be legally required to act in the interest of their users (not their creators).
A lofty ideal... I don't see this ever happening, any more than I see humanity flat-out abandoning the very concept of "money".
I am not a fan of fatalism. Instead of saying it won't ever happen, we need to be asking for these rights.
At the very least, you force people to make the case for the opposing opinion, and we learn who they are and why they think that.
Lawyers cannot act against their clients; do you think we have irreparably lost the ability, as a society, to create similar protections in the future?
But that would kill monetization, no?
Of course not. You’d have to pay for the product, just like we do with every other product in existence, other than software.
Software is the only type of product where this is even an issue. And we're stuck with this model because VCs need to see hockey-stick growth, and that generally doesn't happen with paid products.