Well, except in the sense of this encroaching digital dystopia that colonizes almost every aspect of our lives, of course. John and Cory’s recent emphasis on how the Metaverse marketing fails because we’re in a sense already inhabiting it is illustrative.
This is likely also one of the main reasons for this trope’s persistence.
Yet the recent rehashing of the old brain-in-a-vat skeptical thought experiment is really just sophomoric nonsense. What provoked me to address the issue at all was that this morning’s news harvest included a clickbait Guardian headline quoting ESA astronaut Tim Peake, who states that after death, we’ll find out whether we were living in a simulation.
This is weak and deferential non-thought on so many levels, and would best be laughed off as a mildly amusing and obviously false jest by any half-robust immune system of critical thinking.
The fact that most of us lack anything akin to the latter, however, and that many people seem genuinely tormented by this notion, calls for a more comprehensive response.
+++
The brain-in-a-vat idea is basically just a rephrasing of the Cartesian dream (how do I know I’m not actually dreaming?), which in turn is nothing more than a riff on certain strands of Western skepticism from classical antiquity (which have cognates in East Asian philosophy).
The most straightforward response to the Cartesian hypothesis and its modern reiterations is the affirmation of the existence of God. Since we’re just dealing with the metaphysical framing of the external world here, none of the arguments in support of classical theism are in any way threatened by the epistemic shift of perspective involved. And as these arguments render God’s existence certain, we can in turn infer the validity of sensory experience since skepticism of that sort isn’t compatible with classical theism.
To leave creatures in a world where immediate perceptions could, even in principle, be categorically misleading would be tantamount to lying, given the character of our immediate experiences and the sense in which they command definite assent, even in the pre-reflective perception of a child or an animal.
But the non-theist (by choice or habit) can take heart. There are obvious workarounds to these really quite childish hypotheses that need not involve any acquiescence to the existence of deities.
The brain-in-a-vat hypothesis, as well as its “simulation” iterations, by definition posits a set of readily falsifiable truth-claims. In one way or another, a person’s body is taken to be preserved in a certain state that is inaccessible to that person’s conscious experience.
To this is added the projection of a false set of experiences, or at least a set of experiences that do not reflect the body’s immediate situation and that preclude access to it.
This projection aspect of the situation is the part most vulnerable to an informed scientific critique.
To begin with, the effective and fully controlled “infusion” of experiences into a human brain is not metaphysically possible by material means. We can’t even measure neurotransmitter levels within individual synapses (not, at least, without cutting up the brain and immediately plunging it into liquid nitrogen), let alone accurately modulate brain activity in minute detail in real time. Impossible.
That is, it’s not even possible to perform a non-destructive analysis of brain activity at a level of detail that would let us fully understand the individual workings of this plastic, malleable organ. Nor can you use something like nanobots forcing their way into the synapses, since they would have to interact chemically with the synaptic contents in order to analyze them, thereby interfering with normal neuronal operation.
There’s a metaphysical impossibility in the way here as well. The “plasticity” I just mentioned expresses the fact that there are no identifiable, hard-coded correlates between acts or experiences and particular clusters of brain activity; not even within specific individuals over time.
The above problems mean that there’s no way a machine could, even in principle, map the workings of your brain to such an extent that it would be possible to immediately infuse experiences and coherently interact with your conscious responses to these stimuli.
Even if the brain weren’t plastic in this sense, and even if we assume that a full, non-destructive analysis were possible, there’s really no way to implant phenomenal experiences from the outside without disrupting synaptic communication.
Of course, one can always pile on the skepticism here and move towards entirely speculative modes of intervention, or simply state that what you think you know about the workings of the brain is itself part of the simulation.
Sure. But then all bets are off and we’re back at the Cartesian demon hypothesis of the 17th century.
+++
The Cartesian dream is really the most profound formulation of these perspectives, representing the monomaniacal Western obsession with objective individual perception, which really is a correlate of the supporting ideologies of early capitalism.
That aside, the most straightforward response to Descartes’ model (where a malevolent external mind projects a set of false experiences upon his conscious experience) is, again, the invocation of theism (which was also his own response).
Yet another significant countermeasure is simply to turn the entire issue on its head. The presumed reductive treatment of dreams or their equivalents as illusory and unreal cannot be defended, since everything present to your experience is inevitably, in some sense, real. It has ontological status, even if it’s an “illusion” and thus representative of something else, and it can therefore anchor your experience in objective reality. Indeed, for many cultures, dream-perception offers more profound access to deep reality than the filtered experience of ordinary waking consciousness (cf. e.g. Jung or Aboriginal culture).
So if we just take Descartes’ hypothesis as a given, well, then there’s a malevolent external consciousness that causes our immediate experiences of what seems to be our surroundings. But then at least that much is true: that fact as such becomes a robust and definite point of anchoring, and our reason can keep on inferring all that we’ve concluded from the character of our immediate phenomenal experiences.
+++
... and if one pushes further, assuming the Cartesian demon can actually nullify all of your rational inferences, it’s nonetheless the case that the hypothesis must assume a complex set of truths whose very meaning implies immediate access to the whole edifice of logic and to an external world.
Indeed, the Cartesian demon hypothesis assumes rather too much. Its affirmation of an invincible causal principle and an absolute truth (the demon can always successfully manage your perception and nullify your rational inferences) inevitably means that the truth of the classical cosmological arguments for the existence of God cannot in principle be denied without also immediately undermining the hypothesis itself, whether or not my rational inferences are nullified.
This is because the meanings inherent to the complex set of propositions that actually make up the Cartesian demon hypothesis cannot in principle be untangled from those we find in the conclusions of, e.g., the cosmological arguments or the rationalist proofs; they’re the same.
There are also epistemological issues with the very notion of “undermining” rational inferences, since they’re immediate aspects of our subjective experience. Negating rational inferences seems to erroneously assume that there’s always a mechanism of inference “between” perception and reason’s conclusion, whereas there’s actually no such distance. This phenomenological line of reasoning can be employed against any skeptical challenge of the above types.