ggreer 2 days ago

Is there any specific mental task that an average human is capable of that you believe computers will not be able to do?

Also does this also mean that you believe that brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?

gloosx 2 days ago

1. Computers cannot self-rewire the way neurons do, which means a human can adapt to pretty much any specific mental task (an "unknown", new task) without explicit retraining, which current computers need in order to learn something new

2. Computers can't do continuous and unsupervised learning, which means computers require structured input, labeled data, and predefined objectives to learn anything (see the sketch after this list). Humans learn passively all the time, just by existing in the environment
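
To be concrete, here is a toy sketch of the kind of loop I mean (plain PyTorch, everything in it made up for illustration): a fixed architecture, labeled data, and a predefined objective, all chosen up front by a human.

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)                    # architecture fixed in advance
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()             # predefined objective

    x = torch.randn(32, 10)                     # structured input
    y = torch.randint(0, 2, (32,))              # labeled data

    for _ in range(100):                        # explicit (re)training loop
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()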

imtringued 2 days ago

Minor nitpicks. I think your points are pretty good.

1. Self-rewiring is just a matter of hardware design. Neuromorphic hardware is a thing.

2. LLM foundation models are actually unsupervised in a way, since they simply take arbitrary text and try to complete it. It's the instruction fine-tuning that is supervised (Q/A pairs). Rough sketch of the pretraining objective below.
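
The point being that the "label" is just the next token of the same raw text, so no human annotation is involved (a sketch with a stand-in model, not a real transformer):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab = 128
    model = nn.Sequential(nn.Embedding(vocab, 32),   # stand-in for a causal LM
                          nn.Linear(32, vocab))

    tokens = torch.randint(0, vocab, (1, 16))        # any tokenized text
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position
    logits = model(inputs)
    loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))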

gloosx 1 day ago

Neuromorphic chips look cool, and they do simulate plasticity, but the circuits are fixed: you can't sprout a new synaptic route or regrow a broken connection. To self-rewire is not merely to change your internal state or connection weights. It means to physically grow or prune neurons, synapses, and pathways, with the system itself driving the change from within. That does not look realistic with current silicon design.
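
To illustrate the distinction, here is a toy pair-based STDP rule (all numbers and names made up): the weight value adapts, but the connection it lives on belongs to a fixed wiring diagram; nothing here grows a new synapse.

    import numpy as np

    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
        # dt_ms = t_post - t_pre: pre-before-post potentiates,
        # post-before-pre depresses
        if dt_ms > 0:
            return a_plus * np.exp(-dt_ms / tau_ms)
        return -a_minus * np.exp(dt_ms / tau_ms)

    w = 0.5
    w += stdp_dw(+5.0)   # an existing connection gets stronger
    w += stdp_dw(-5.0)   # or weaker -- but the topology never changes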

The point was about continuous, unsupervised learning. Once an LLM is trained, its weights are frozen; it won't update itself during a chat. Prompt-driven inference is immediate, not persistent: you can define a term or concept mid-chat and it will behave as if it learned it, but only until the context window ends. If it were otherwise, every deployed model would drift very quickly.
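
You can see the idea with the Hugging Face transformers pipeline (model name is just an example, "florb" is a made-up word):

    from transformers import pipeline

    generate = pipeline("text-generation", model="gpt2")

    primed = "A florb is a red triangle. Question: what is a florb? Answer:"
    print(generate(primed)[0]["generated_text"])  # definition is in the context window

    fresh = "Question: what is a florb? Answer:"
    print(generate(fresh)[0]["generated_text"])   # definition is gone: nothing
    # was written back to the weights between the two calls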

missingrib 2 days ago

Yes, they can't have understanding or intentionality.

recursive 2 days ago

Coincidentally, there is no falsifiable/empirical test for understanding or intentionality.

WXLCKNO 2 days ago

Right now, or do you mean ever?

It's such a small leap to see how an artificial intelligence could become capable of understanding and of having intentionality.