timewizard 5 days ago

LLMs don't and cannot want things. Human beings also like it when the future is mostly like the past. They just call that "predictability."

Human data is bias. You literally cannot separate one from the other.

There are some people who want to erase humanity's will and replace it with an anthropomorphized algorithm. These people concern me.

itishappy 5 days ago

Can humans want things? Our reward structures sure seem aligned in a manner that encourages anthropomorphization.

Biases are symptoms of imperfect data, but that's hardly a human-specific problem.

timewizard 4 days ago

> Can humans want things?

Yes. Do I have to prompt you? Or do you exist on your own?

> Our reward structures sure seem aligned in a manner that encourages anthropomorphization.

You do understand what that word /means/?

> are symptoms of imperfect data

Which means humans cannot generate perfect data. So good luck with all that high-priced "training" you're doing. Mathematically, errors compound.
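
A toy sketch of what "compound" means here (my illustration; the fixed per-step error rate eps and the 1% figure are hypothetical, not from any real training run): if each training or generation step preserves a fraction (1 - eps) of fidelity, then n steps preserve (1 - eps)^n, which decays toward zero.

    # Toy model of compounding error (hypothetical per-step rate eps):
    # each step preserves fidelity (1 - eps), so n steps preserve (1 - eps) ** n.
    def remaining_fidelity(eps: float, n: int) -> float:
        return (1.0 - eps) ** n

    for n in (1, 10, 100, 1000):
        print(f"eps=1%, steps={n:4d}: fidelity ~ {remaining_fidelity(0.01, n):.3f}")
    # eps=1%, steps=   1: fidelity ~ 0.990
    # eps=1%, steps=  10: fidelity ~ 0.904
    # eps=1%, steps= 100: fidelity ~ 0.366
    # eps=1%, steps=1000: fidelity ~ 0.000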

itishappy 4 days ago

> Yes. Do I have to prompt you? Or do you exist on your own?

I've gone through a significant amount of prompting and training, much of which has been explicitly aimed at understanding and addressing my biases. We all have; we certainly don't exist in isolation!

> You do understand what that word /means/?

Yes, what's the confusion? Analogy is a very powerful tool.

> Which means humans cannot generate perfect data.

Totally agree: nothing can possibly access perfect data, but surely that makes training all the more important?

balamatom 5 days ago

The most concerning people are -- as ever -- those who only think that they are thinking. Those who keep trying to fit square pegs into triangular holes without, you know, stopping to reflect: who gave them those pegs in the first place, and to what end?

Why be obtuse? There is no "anthropomorphic fallacy" here to dispel. You know very well that "LLMs want" is simply a way of speaking about teleology without antagonizing people who are taught that they should be afraid of precise notions ("big words"). But accepting that bias can lead to some pretty funny conflations.

For example, humanity as a whole doesn't have this "will" you speak of any more than LLMs can "want"; will is an aspect of the consciousness of the individual. So you seem to be uncritically anthropomorphizing social processes!

If we assume those to be chaotic, then in that sense any sort of algorithm is slightly more anthropomorphic: at least it works towards a human-given and therefore human-comprehensible purpose. On the other hand, whether there is some particular "destination of history" towards which humanity is moving is a question that can only ever be speculated upon, never definitively perceived.

timewizard 4 days ago

> Why be obtuse?

In the context of the quote, precision is called for. You cite fear, but that's attempting to have it both ways.

> humanity as a whole doesn't have this "will" you speak of

Why not?

> will is an aspect of the consciousness of the individual.

I can't measure your will. I can measure the impact of your will through your actions in reality. See the problem? See why we can say "the will of humanity?"

> So you seem to be be uncritically anthropomorphizing social processes!

It's called "an aggregate."

> is a question that can only ever be speculated upon, but not definitively perceived.

The original point was that LLMs want the future to be like the past. You've way overshot the mark here.

balamatom 4 days ago

> You've way overshot the mark here.

Nah, I'm just having fun.

>You cite fear but that's attempting to have it both ways.

Huh?

>In the context of the quote precision is called for.

Because we must make it explicit that AI is not conscious? But why?

Since you can only ever measure impacts on reality -- what difference does it make to you whether there's a consciousness causing them or not?

>It's called "an aggregate."

An individual is conscious. Does it follow from this that the set of all individuals is itself conscious? I.e. do you say that it's appropriate to model humanity as sort of one giant human?

sapphicsnail 4 days ago

Humans anthropomorphize all sorts of things, but there are way bigger consequences for treating current AI like a human than for someone anthropomorphizing their dog.

I know plenty of people who believe LLMs think and reason the same way humans do, and it leads them to make bad choices. I'm really careful about the language I use around such people, because we understand expressions like "the AI thought this" very differently.

balamatom 4 days ago

>Humans anthropomorphize all sorts of things, but there are way bigger consequences for treating current AI like a human than for someone anthropomorphizing their dog.

AI is less human-like than a dog, in the sense that an AI (hopefully!) is not capable of experiencing suffering.

AI is also more human-like than a dog, in the sense that, unlike a dog, an AI can apply political power.

I agree that there are considerable consequences for misconstruing the nature of things, especially when there's power involved.

>I know plenty of people that believe LLMs think and reason the same way as humans do and it leads them to make bad choices.

They're not completely wrong in their belief. It's just that you are able, thanks to your specialized training, to automatically make a particular distinction for which most people simply have no basis for comparison. I agree that it's a very important distinction; I could also guess that even when you do your best to explain it to people, they often prove unable to grasp its nature, or its importance. Right?

See, everyone's trying to make sense of what's going on in their lives on the basis of whatever knowledge and conditioning they might have. Everyone gets it right some of the time and wrong most of the time. For example, humans also make bad choices as a result of misinterpreting other humans. Or by correctly interpreting and trusting other humans who happen to be wrong. There's nothing new about that. Nor is there a particular difference between suffering the consequences of an AI-driven bad choice and those of a human-driven one. In both cases, you're a human experiencing negative consequences.

AI stupidity is simply human stupidity distilled. If humans were to only ever speak logically correct statements in an unambiguous language, that's what an LLM's training data would contain, and in turn the acceptance criterion ("Turing test") for LLMs would be outputting other unambiguously correct statements.

However, it's 2025 and most humans don't actually reason; they vibe with the pulsations of the information medium. Give us something that looks remotely plausible and authoritative, and we'll readily consider it more valid than our own immediate thoughts and perceptions - or those of another human being.

That's what media did to us, not AI. It's been working its magic for at least a century, because humans aren't anywhere near rational creatures; we're sloppy. We don't have to be; we are able to teach ourselves a tiny bit of pure thought. Thankfully, we have a tool for when we want to constrain ourselves to only thinking in logically correct statements, and only expressing those things which unambiguously make sense: it's called programming.

Up to this point, learning how to reason has been economically necessary in order to command computers. With LLMs becoming better, I fear thinking might be relegated to an entirely academic pursuit.

verisimi 5 days ago

> If we assume those to be chaotic, then in that sense any sort of algorithm is slightly more anthropomorphic: at least it works towards a human-given and therefore human-comprehensible purpose. On the other hand, whether there is some particular "destination of history" towards which humanity is moving is a question that can only ever be speculated upon, never definitively perceived.

Do you not think that if you anthropomorphise things that aren't actually anthropic, you then insert a bias towards those things? The bias will actually discriminate at the expense of people.

If that is so, the destination of history will inevitably be misanthropic.

Misplaced anthropomorphism is a genuine, present concern.

balamatom 4 days ago

I'd say anthropomorphizing humans is already deeply misplaced!

Each one of us is totally unlike any other -- that's what's so cool about us! Long ago, my neighbor Diogenes proved, by means of a certain piece of poultry, that no universal Platonic ideal of human-ness can be reasonably established. (We've largely got the toxic fandom of my colleague Jesus to thank for having to even explain this nearly 2500 years after the fact.)

There is no universal "human shape" which we all fit, or are obliged to aspire to fit. It's precisely the mass delusions of there ever being such a thing which are fundamentally misanthropic. All they ever do is invoke a local Maxwellian process which heats shit up until it all blows the fuck up out of the orbit of the local attractor.

Look at history. Consider the epic fails that are fascism, communism, capitalism. Though they define it differently, they are all about this pernicious idea of "the correct way to human"; which implicitly requires the complementary category of "subhuman" for all featherless bipeds whose existence happens to defy the dominant delusion. In practice, all this can ever accomplish is to collapse under the weight of its own idiocy. But not without destroying innumerable individual humans first -- in the name of "all that is human", you see.

Materialists say the universe doesn't care about us puny humans anyway. But one only ever perceives the universe through one's own human senses, and ascribes meanings to it through one's own cogitations! Both are tragicomically imperfect, but they're all we've ever got to work with. Therefore, rather than try to convince myself I'm able to grasp the destination of the history of my species, I prefer to seek knowledge of those things which enable me to do right by myself and others in the present.

But one's gotta believe in something! Metaphysics is not only entertaining, it's also a primary source of motivation! So my belief is that if each one of us trusted one's own senses more -- and gave up on trying to delegate the answer of "how should I be?" to unaccountable authorities which are themselves not a human (but mere concepts, or else machinic assemblages of human behaviors which we can only ever grasp through concepts: such as "society", "morality", "humanity") -- then it'd all turn out fine!

It simplifies things considerably. Lets me focus on figuring out how they work. Were I to believe in the existence of some universal definition of what constitutes a human, I'd just end up not noticing that I was paying for a faulty dataset.