cf. the philosophical zombie:
https://en.wikipedia.org/wiki/Philosophical_zombie
We really have no idea whether consciousness is something that can arise from computation, or whether it is somehow dependent on the physical substrate of the universe. Maybe we can create a virtual brain that, from the outside, is indistinguishable from a physical brain, and which will argue vociferously that it is a real person, and yet experiences no more conscious qualia than an equation written on a piece of paper.
> We really have no idea whether consciousness is something that can arise from computation, or whether it is somehow dependent on the physical substrate of the universe.
I don't understand this argument. How is the computer running the computation not part of the "physical substrate of the universe"? _Everything_ is part of the universe almost by definition.
The computer is physical, but the computation is (at least) a level of abstraction above the physical layer. The physical process may be the important part, not just the (apparent) algorithm that the physical process executes.
>but the computation is (at least) a level of abstraction above the physical layer
I think you're kinda right, but the tricky thing here is that the computation itself is physical too. The abstraction may just be whatever it is that the computation has in common with the thing it's modeling in brains, which could mean it, too, has consciousness, or is 'doing' consciousness in some sense.
If the physical process is "the important part" then that can be modeled in an abstract way, too.
We can run any algorithm using billiard balls:
https://en.wikipedia.org/wiki/Billiard-ball_computer
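To make that substrate-independence point concrete: the billiard-ball computer works because elastic collisions implement conservative logic (Fredkin gates), and the Fredkin gate is universal. A toy sketch in Python, with the collision physics abstracted away into a truth table; this just shows the gate really does general-purpose logic:

```python
def fredkin(c, x, y):
    """Conservative-logic (billiard-ball) gate: swap x and y iff control c is 1.
    Conserves the number of 1s ("balls") between inputs and outputs."""
    return (c, y, x) if c else (c, x, y)

def AND(a, b):
    # Route b through a Fredkin gate controlled by a; the third output
    # line then carries a AND b (the 0 is a constant "no ball" input).
    return fredkin(a, b, 0)[2]

def NOT(a):
    # With constant inputs 0 and 1, the third output is NOT a.
    return fredkin(a, 0, 1)[2]

def OR(a, b):
    # De Morgan: a OR b == NOT(NOT a AND NOT b)
    return NOT(AND(NOT(a), NOT(b)))
```

AND, NOT, and OR together are enough to build any Boolean circuit, which is the sense in which "any algorithm" can run on colliding billiard balls.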
What I’m saying is that we don’t know that consciousness is just an algorithm. If the physical implementation matters, then modelling it might be useful or interesting, but it wouldn’t actually create the real thing. Maybe consciousness arises from a specific pattern of movement of electrons, but not from any pattern of billiard ball movements.
>What I’m saying is that we don’t know that consciousness is just an algorithm.
I caught that, I think(?). I would flag that the upshot can go two ways: (1) something outside of physics altogether, which, while romantic, is at the extreme end of tenuous, inviting bad metaphysics, bad notions of emergentism, etc.; or (2) something about how consciousness is "embodied", which, as you note, is still billiard-ball-style simulation at the end of the day, but raises interesting questions about what kinds of simulations work.
I also do wonder if there's some kind of physicalist essentialism working its way in there. If there's something about electrons that's importantly different, that something (hopefully) is a physical property and as such able to be modeled. If consciousness is intrinsically and preferentially tied to a certain kind of matter, e.g. atoms, or brain-stuff, that starts to sound a little woo-ey.
What you really mean is: is there any meaningful difference in what can be processed by biological computing and non-biological computing?
The answer to that would appear to be no.
What makes you claim that? I have not seen proof of it (on the contrary, we don't have smooth emulation of animal-like movement yet, which brains figure out pretty fast).
The question is, what evidence is there that the most simple structures we can call brains are doing something which is fundamentally impossible to do with something other than a brain.
Given that brains are fundamentally governed by the same physical laws as everything else, there shouldn't be anything about them which cannot be replicated in some way by something sufficiently capable of emulation of their processes.
That's not to say it's simple. Just that brains obey the laws of physics, and as long as that's true, they should be replicable.
Unless your contention is that brains are somehow able to operate outside the constraints of the laws of physics, in which case we're going to have a fundamental difference of opinion as to the nature of the universe and whether things with brains are particularly special.
This 3 pounds of meat between my ears is certainly special (perhaps the most complicated thing in the known universe), but it is certainly not magical.
Just because something is non-magical doesn't mean it can necessarily be simulated by a computer, especially a practical computer that we can actually build given our level of technology and available resources.
I know you mean today, but what about in 100 years?
There are still physical limits to computation, even if we had godlike powers to rearrange the universe as we like (Bremermann's limit, Landauer's principle, there are probably others but I'm not a computer scientist or physicist). More practically, the mass and energy we have to build and operate computers with is finite. Until we know what principles the brain actually runs on, we can't do the math to determine if it's physically possible to build computers that simulate it.
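To put one of those limits in numbers: Landauer's principle says erasing one bit of information costs at least kT·ln 2 of energy. A back-of-the-envelope sketch (the ~20 W figure for the brain's power draw is a commonly cited ballpark, not a measurement):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # room temperature, K

# Landauer bound: minimum energy to erase one bit at temperature T
e_bit = K_B * T * math.log(2)   # ~2.87e-21 joules

# At a rough ~20 W power budget (the brain's ballpark draw), the
# Landauer limit alone would permit on the order of:
bits_per_sec = 20.0 / e_bit     # ~7e21 bit erasures per second
```

The point isn't the exact numbers; it's that "governed by the same physical laws" sets hard ceilings long before engineering difficulties do.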
That said, if we find it uses some new principle that we don't exploit in our computers, things get very exciting because then we can start trying to do that (you see a faint whisper of this in the excitement around quantum computing).
I mentioned this elsewhere, but encryption is a great example: barring a breakthrough of another sort (like quantum computing), we can easily come up with a key size for which computing solutions is exponentially harder than any increase in computing power we can achieve. In 100, 1,000, or a million years.
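The arithmetic behind that claim, roughly: each added key bit doubles the brute-force search space, so even absurdly optimistic hardware assumptions can't keep up (the rate below is made up for scale):

```python
SECONDS_PER_YEAR = 3.156e7

def brute_force_years(key_bits, keys_per_second):
    """Worst-case time to exhaust a key space by brute force."""
    return (2 ** key_bits) / keys_per_second / SECONDS_PER_YEAR

# Hypothetical: a billion machines each testing a billion keys per second
rate = 1e9 * 1e9

# A 128-bit key space takes on the order of 1e13 years at that rate,
# far beyond the current age of the universe (~1.4e10 years)
t128 = brute_force_years(128, rate)

# Each extra key bit doubles the cost
t129 = brute_force_years(129, rate)
```

So defenders win the arms race just by adding bits, while attackers need exponential gains in hardware; the open question in the thread is whether simulating a brain sits on the wrong side of a similar exponential.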
What if the problems we need to tackle are of a similar complexity? Do we ever get there?
We are all holding our breath for both fusion and quantum computing, and while we know they are theoretically possible, will we ever make them practical?
"Able to be replicable" is a far cry from being practically replicable.
We are unable to get two biologically identical (or at least extremely close) brains of identical twins to develop in the same way, let alone two distinct brains, or a simulated version of a brain.
The claim is potentially equivalent to a claim that since the universe is theoretically computable, we'll eventually be able to simulate it.
Not at all. I'm not saying we actually will simulate it, just that it's got the property of theoretical simulatability. Which means there's not anything magical going on under the hood. Which means consciousness isn't magic.
"What can be processed" to me implied practical utility. If you did not mean that, thanks for the clarification.
Still, to me the fact that it could be theoretically achieved with computers is not very useful if it can't be achieved practically, and that certainly makes "biological" computation different from synthetic computation.
I would rather ask one to think: what evidence is there that we cannot do a brain on non-gooey stuff?
If I take every atom/molecule from one brain (assume a snapshot in time) and replicate it one by one at a different location, and replicate the external I/O (stimulus, glucose...), what evidence do we have that this won't work? Likely not much.
Now instead of replicating ALL the atoms/molecules exactly, I replace one of the higher level entities like a single neuron with a computational equivalent - a tiny computer of sorts that perfectly replaces a neuron within the error bars of the biological neuron. Will this not work? I mean, will it not behave in the same exact way as the original biological brain with consciousness? (We have some evidence that we can replace certain circuits in the brain with man-made equivalents and it continues to work.)
You know where I'm going with this... FindAll, ReplaceAll. Why would it be any different?
---
If i had to argue that it wouldn't be the same, here's a quick braindump off the top of my head:
- some entities like neurons literally cannot be replicated without the goo. physics limitation? but the existence of the goo is a proof of existence. but still, maybe the goo has properties that cannot be replicated with other substances
- our model of the physical world has serious limitations. on the order of pre-knowing-speed-of-light-limitation. maybe putting the building blocks together does not create the full thing. maybe building blocks + magic is needed to create the whole.
- other fun limitation of our physical model
This is actually not true. What exactly are you saying with "replace certain circuits in the brain with man-made equivalents and it continues to work"? I'm certain I have never seen something "man-made" used to "replace circuits in the brain" with it "continuing to work"; in fact, this would probably earn its creator a Nobel if it were really proven.
Also, we don't have evidence that the processes in the brain are replicable at all, if, for example, Penrose's theories are correct (or those of any other non-reductionist who accepts the need for local identity and/or metaphysical properties for consciousness). You need to assume a LOT of things to give this theory any credit, and many things we are literally unable to explain (like consciousness itself) have to be reduced to those assumptions to make it work: for example, you must assume that it is not the gooey stuff that gives rise to consciousness in the first place, that it does not need extremely specific conditions to exist, and so on. This line of thinking is kinda dangerous.
Note that I was responding to a comment claiming:
> What you really mean is is there any meaningful difference in what can be processed by biological computing and non-biological computing.
> The answer to that would appear to be, no.
So, specifically to "appear to be, no":

> I would rather ask one to think: what evidence is there that we cannot do a brain on non-gooey stuff?
Because we haven't practically done it despite decades of trying? I don't think this should stop us from trying, and it's pretty obvious it won't. But there is no proof either way; potentially the problem is so complex that we never get there in practice?
(Also note that proving a general negative statement is pretty tricky and usually avoided — we usually look for counter-examples, evaluate a full finite/countable set of scenarios, etc)
Another point I'd add is that we've already got practical examples of computations that are especially hard for computers to practically do even if they are "obvious" in theory — we use those as a base for encryption, for instance.
I think an even simpler argument can be made: our brain develops in response to the physical stimulus we experience from birth (earlier even).
Basically, even if the brain is a simple computation engine, can we put a simulation through the stimuli our brain experiences (not easily), and will the lack of those turn it into an entirely differently behaving system?
So a Chinese Room?
Tell me you've read Blindsight... and if you haven't, go read it, I'll wait.
... and then read The Freeze-Frame Revolution.
Thanks, I've been looking for some recommendations actually.
(It's been a few months since the last time I rambled aimlessly through my house muttering consciousness is a parasite under my breath.)
There's a sequence of stories by different authors that allude to one another that I find to be an interesting read in that order.
First, there's the BLIT stories by David Langford. Several of these are online.
https://www.infinityplus.co.uk/stories/blit.htm
https://www.nature.com/articles/44964 (did you know that Nature did science fiction short stories? https://www.nature.com/nature/articles?type=futures )
https://www.lightspeedmagazine.com/fiction/different-kinds-o...
Then, you go to Accelerando
https://www.antipope.org/charlie/blog-static/fiction/acceler...
> Luckily, infowar turns out to be more survivable than nuclear war – especially once it is discovered that a simple anti-aliasing filter stops nine out of ten neural-wetware-crashing Langford fractals from causing anything worse than a mild headache.
This is followed by its sequel-ish Glasshouse https://www.goodreads.com/book/show/17866.Glasshouse
It's not technically a sequel, but one can see the universe of Glasshouse following from the ending of Accelerando.
A quick diversion to Vernor Vinge with The Peace War and Marooned in Realtime https://www.goodreads.com/series/57273-across-realtime (there's a short story in there titled "The Ungoverned").
Implied Spaces by Walter Jon Williams https://www.goodreads.com/book/show/2059573.Implied_Spaces , which takes another approach to the unexplored events leading into Glasshouse and a possible path of escalation. The reference passage in this book is:
> “I and my confederates,” Aristide said, “did our best to prevent that degree of autonomy among artificial intelligences. We made the decision to turn away from the Vingean Singularity before most people even knew what it was. But—” He made a gesture with his hands as if dropping a ball. “—I claim no more than the average share of wisdom. We could have made mistakes.”
This then ends with... {spoilers}.