[I originally wrote this post as a section in “Thoughts on the foundations of consciousness,” but decided it was a bit distracting, since I feel less confident in my claims here than in that post. I’ll assume you’ve read that post first for context, though this post might be comprehensible to readers familiar with the philosophy of consciousness regardless.]
In “Thoughts on the foundations of consciousness,” I surveyed different views on how to account for the existence of subjective experiences. That survey doesn’t yet tell us how to diagnose whether a given configuration of stuff has subjective experiences (of some kind). Here’s a drive-by take on the idea that subjective experiences are fundamentally computational.1 I can’t do justice to the topic in a post this brief, and for now I’ll recommend McLean’s “Computational functionalism on trial.” But I figured it was better than nothing to at least share this much for now.
It’s important to distinguish different kinds of computationalism here:
1. “If you know everything about the computations a system is running, you know what subjective experiences that system has, if any.”
2. “Whether a system has any subjective experience at all is (metaphysically, not logically) fully determined by whether the system is running certain kinds of computations.”
3. “The subjective experiences a system has are (metaphysically, not logically) fully determined by facts about the computations the system is running. In particular, whether a system has the kinds of subjective experiences we consider morally relevant, like suffering, is determined by facts about computation.”
Since I reject materialism, I reject (1). But that doesn’t rule out (2) or (3), just as rejecting materialism doesn’t rule out the view “the subjective experiences a system has are (metaphysically, not logically) fully determined by third-person facts” (cf. Pearce’s “non-materialist physicalism”). Panpsychism of the sort derived from dual-aspect monism, as opposed to Tomasik-style illusionist panpsychism, trivially rules out (2) because it says everything has subjective experiences.
A prima facie reason to think type (3) computationalism is on the right track: at least some kinds of human and animal subjective experiences seem to serve the role of efficiently transmitting information. E.g., suppose you turn around and see a snake. Instead of your brain processing this visual information into an explicit model of all the things that make the snake dangerous and worth avoiding, it seems much easier (though prone to false positives!) to just process this information into “scary!”.
This doesn’t logically explain the experience of fear, though. And I’d say it also doesn’t metaphysically explain why that experience has the intensity that it does. Sure, if we reject epiphenomenalism, the negative valence of the experience is causally responsible for you jumping away from the snake, etc. But it’s not clear why that same behavior couldn’t have resulted from an experience that has the same “direction,” and same “strength” relative to all other possible experiences, but weaker absolute “strength.” As a more practical challenge, it seems very underdetermined what aspects of the “efficient information processing” are necessary or sufficient for experiences of happiness or suffering. So I’m wary of claims about, e.g., artificial sentience based on fuzzy intuitions about computation. Overall I’m not that sold on this argument for type (3) computationalism.
The classic “fading qualia” argument2 for computationalism, which could apply to type (2) or (3), goes as follows: Imagine replacing the neurons in Bob’s brain one at a time, each with a silicon chip that does all the same computational work as the neuron it replaces. On one hand, isn’t it implausible that there’s some arbitrary cutoff point at which Bob would stop having subjective experiences? And on the other hand, if the replacement of Bob’s neurons preserves all their computational functions, and hence all his third-person behavior, isn’t it also implausible that his subjective experiences would gradually “fade out” while he nonetheless talks and behaves as if his experiences haven’t changed at all? It seems that if we accepted the fade-out story, we’d have to accept epiphenomenalism, and thereby wonder whether our beliefs about our own subjective experiences could be deluded.
My responses: First, as McLean argues, it seems that “computation” is an abstraction, dependent on the observer/modeler of a system rather than a precise joint-carving property of the system itself. Describing processes as performing the same computations (or functions generally?) is up to interpretation. (Which is not to say that we can reasonably describe any given thing as performing any given computation; I’m not making the “waterfall argument,” if you’re familiar with it. Just that there’s some degree of observer-dependence; see the toy sketch below.) This is a problem for both type (2) and type (3) computationalism, because both the existence of subjective experience in general and the extent to which a subjective experience contains, e.g., happiness are not observer-dependent. So computation doesn’t seem to be the sort of thing that can be 1-to-1-mapped to subjective experience.
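To make the observer-dependence point concrete, here’s a toy sketch (my own illustration, not taken from McLean): one fixed “physical” transition table, read under two equally consistent labelings of its states, comes out as computing AND for one observer and OR for another. Nothing about the device itself privileges either reading.

```python
# Toy illustration: the same physical transition rule can be described as
# computing different functions, depending on how an observer labels the
# physical states as bits.

# A "physical" device: four input states, two output states, with a fixed,
# observer-independent transition rule.
physical_transitions = {"p1": "q0", "p2": "q0", "p3": "q0", "p4": "q1"}

# Observer A's labeling of the physical states as bits.
inputs_A = {"p1": (0, 0), "p2": (0, 1), "p3": (1, 0), "p4": (1, 1)}
outputs_A = {"q0": 0, "q1": 1}

# Observer B's (equally consistent) labeling of the same physical states.
inputs_B = {"p1": (1, 1), "p2": (1, 0), "p3": (0, 1), "p4": (0, 0)}
outputs_B = {"q0": 1, "q1": 0}

def computed_function(input_labels, output_labels):
    """Return the bit-level function the device computes under a given labeling."""
    return {input_labels[p]: output_labels[q]
            for p, q in physical_transitions.items()}

print(computed_function(inputs_A, outputs_A))  # AND: only (1, 1) maps to 1
print(computed_function(inputs_B, outputs_B))  # OR:  only (0, 0) maps to 0
```

Note that this is deliberately much weaker than the waterfall argument: both readings here are simple, non-gerrymandered mappings. The point is only that which function the device “computes” depends on a labeling the observer brings to it.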
Second, I think this argument might conflate two kinds of “belief that Bob has a subjective experience”: (i) patterns in an algorithm that, combined with other dispositions, lead to Bob’s behavior, including uttering “I have subjective experiences”; and (ii) a first-person response of recognition of his subjective experiences. I’d agree it’s totally bizarre (if not incoherent) for Bob to have a mistaken (ii)-belief that he’s having a subjective experience. But in order to resist the replacement argument, we only need Bob to have a mistaken (i)-belief that he’s having a subjective experience. This is because replicating the third-person computational properties of the brain only guarantees that his (i)-beliefs are undisturbed.
But wait! Isn’t this epiphenomenalism? I.e., shouldn’t the change in the first-person experience result in a change in the third-person (i)-beliefs? I don’t think this objection applies, because denying epiphenomenalism doesn’t require us to think that changing the first-person aspect of X always changes the third-person aspect of some Y that X causally influences, only that this sometimes can happen. If we artificially intervene on Bob’s brain so as to replace X with something else designed to have the same third-person effects on Y as the original, it doesn’t follow that the new X has the same first-person aspect! Indeed, according to dual-aspect monism, the whole reason why our beliefs usually reliably track our subjective experiences is that the subjective experience is naturally coupled with some third-person aspect that tends to cause such beliefs.
(Added Jan 5, 2025:) I also worry that the possibility of suddenly disappearing subjective experiences, at least for type (3) computationalism, is dismissed too quickly. Aides (2015) points out that we do observe discontinuous relationships in physical systems, e.g., the relationship between the ejection of electrons and the frequency of electromagnetic radiation, or between the position of an object suspended by a rope and the number of strands of the rope we cut. Or, imagine the following scenario. You and your mischievous friend are having lunch, and your plate has a pile of fries, which you don’t eat until after you’ve finished your entrée. Unfortunately, before you can finish your entrée, you receive a lot of phone calls from telemarketers, so every 30 seconds you have to get up and leave to take a call. Each time you do so, your friend eats one fry. Presumably, you don’t notice the first missing fry. But surely there’s some point at which you notice your pile of fries has been tampered with, before the whole pile has been eaten. So isn’t there some from-our-perspective-“arbitrary” number of fries that discontinuously suffices for you to notice your friend has eaten your fries? (I wouldn’t be too surprised if there are reasonable objections to this, but I haven’t seen anyone directly refute it.)
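For concreteness, the photoelectric case can be written down explicitly (this is just the standard textbook relation, not something I’m taking from Aides’s paper):

$$E_k = h\nu - \phi$$

where $E_k$ is the maximum kinetic energy of an ejected electron, $h$ is Planck’s constant, $\nu$ is the frequency of the incident light, and $\phi$ is the material’s work function. If $h\nu < \phi$, no electrons are ejected at all, no matter how intense the light: a sharp, frequency-based threshold rather than a gradual fade.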
There’s also a lot written about the so-called “binding problem” for computationalism, though I don’t know enough about it to comment yet.
The main practical upshot of all this concerns how we approach the applied ethics of artificial sentience. Compared to many others who are sympathetic in principle to concern for artificially sentient beings, I think computationalist models of subjective experience aren’t that solid a basis for figuring out how to avoid mistreating artificial minds (and how to trade this off against other moral concerns). More developed thoughts on this are for another post.
I thiiink my responses in this post apply to non-computational functionalism as well, but I’m not sure.
(Added Jan 5, 2025:) If I understand the “dancing qualia” argument correctly, my responses in this post apply just as well to that argument too.