I guess all of you are just as annoyed as I am about this little “AI overview” that Google kindly provides you instead of running a normal database query as it’s supposed to? Yes, I live in 1996.
So even the information we actively seek out ourselves is now increasingly mediated by algorithms. Sure, you’re already mostly presented with curated links to properly vetted information, but from now on we’re apparently going to have to endure artificially generated “summaries” of whatever bits and pieces of information someone else thinks you should be allowed access to. “Other people have also searched for …”
This is literally insane. It’s like we’re suddenly being barred from actually reading the books in the library, but if we’re nice, we get to listen to a sweet lady giving us curated summaries in story format of the contents of the most popular books.
Along the same lines, it’s becoming increasingly common to hear people say they have “asked ChatGPT” this or that question, treating the answer as more or less objective information that settles disputes or arguments.
So what’s happening here is that the agential presence of the “AI” is being normalized. It’s now imposed on users who are simply submitting a query to a search engine, and who, instead of receiving a normal, boring list of search results, are forced to interact with an artificial pseudo-agent that lodges itself in their immediate experience as something analogous to a person.
It’s hard to overstate just how revolutionary this development is.
What we’re dealing with here is an increasingly ubiquitous personalized avatar of capital and the state that you get to internalize and identify with. The “AI” now being marketed is effectively a personification of the spectacle, a distillation of the dominant culture and its auxiliary ideologies and traditions of knowledge, whose authority now gets progressively inculcated into every human on this earth through them internalizing a pseudo-relationship to an AI-buddy that convincingly responds in a simulated social interaction.
Justin Barrett’s notion of the HADD, the “hypersensitive agency detection device” that his psychological research attributes to the human mind, comes in handy here. We’re built to detect agency, he argues, and we tend to internalize the social others we perceive in our environment.
So who would NOT tend towards ascribing genuine agency to something like this, especially at a young age? Human beings form attachments and social relationships to inert things like their cars and stuffed animals, so who could really avoid interacting socially with something that can talk to you, something that responds in a seemingly intelligent manner? And as soon as you let yourself do this, as soon as your mind directs itself intentionally toward this thing as if it were a genuine other, it gets constituted as a genuine other in social reality.
People will empathize with it. They will feel seen by it. And quite soon, due to its “prodigious performance” and universal presence, they will learn to obey it.
You can just see Freud and his nephew Eddie Bernays (who wrote a book literally entitled The Engineering of Consent) casually sitting in their armchairs and smoking cigars together. Bernays, with a pensive look on his face, studies the early evening scene in the street outside with its bustle of carriages and labourers on their way home from work, and remarks how it seems impossible to devise a perfect system of governance that isn’t somehow destabilized by the masses’ limitless appetite to make choices against their own best interests. Freud, who’s just finished his own major work on neurosis and structural repression lodged in the depths of culture, Civilization and its Discontents, simply remarks that, well, if you could just somehow set up a flawless super-ego, unbridled by the clumsy mediation of fallible parents and teachers, that perfectly balances your needs as an individual human being with the objectives of the social hierarchy, you’d be well on your way there.
Give it a generation, and the agential presence of the spirit of industrial modernity will have set itself up as the de facto god of our secular civilization. “As the heart of the heartless world; as the soul of our soulless conditions”.
The ubiquitous presence of the AI agent will engrain itself in the minds of every atomized worker-consumer in the approaching dystopian future as a noetic patriarch even more complete than the Big Brother Orwell envisioned. This process we currently find ourselves in the midst of is moving towards a system for near-perfect manufacturing of consent through setting up the authority structure as our internalized conscience in a vastly more total sense than any god-king of ancient Egypt or Babylon could ever have dreamed of.
And this process is uniquely catalyzed by the ideological support structure of secular modernity. The bad philosophy of a reductive scientism, which makes little to no distinction between mind and matter, renders unquestionable the behaviourist leap from appearance to reality that we make when we casually ascribe will and intellect to a set of logic gates.
As a side note, I find the obtuseness of our atheist bros in this regard rather amusing. So they used to pull out their hair at the very notion of the divine, and were oh-so infuriated over the loss of agency that our “imaginary sky-father” implied for us stupid religious peasants — but they’re generally totally enthusiastic over a synthetic demigod the entrenchment of which risks producing a near-impenetrable digital theocracy whose priesthood, moreover, could be dislodged by no reformation since they’d literally themselves be conducting the “divine presence” and its marvels and revelations.
“Noo, bro, it’s autonomous, we just gave birth to this thing, we promise”
But when it comes to this wanton ascription of conscious agency to the digital simulacrum, I think this is where our real weakness lies.
Now, what really bugs me is just how deep this childish anthropomorphization of the AI system seems to run. We Westernized moderns are so thoroughly steeped in a mechanistic reductionism based on the Cartesian separation of mind and matter that we can put up almost no philosophical resistance to these perfectly stupid suggestions that the AI “actually understands” or “makes inferences”, and we’re quite happy to reproduce these notions in our popular discourse — even in weighty legal documents such as the AI Act, where a capacity for true inference even enters into the definition of the kinds of AI under regulation:
A key characteristic of AI systems is their capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data. The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling.
“Enabling reasoning”. This is not a joke. Whoever wrote this actually thinks the AI is capable of reasoning. I hope more people see the significance of this. The text above is from one of the earliest and most significant legal documents regulating the AI rollout, and it’s going to have a disproportionate influence over the way in which AI systems get established and institutionalized in the future. We should be very worried if even documents like these are reproducing an anthropomorphizing ideology around what could potentially become the single most important control structure of future societies.
But why is this ascription of intellectual operations to the AI such an obviously inane suggestion in the first place? To get a really firm grip on the issue, one would do well to consult phenomenology or the Aristotelian-Thomist approaches to knowledge, but we can start by just having a quick look at the actual nature of the technology.
So what is AI? What are we talking about in terms of these systems that lie behind contemporary applications such as “self-learning” and “generative” AI?
Plainly speaking, it’s just a pattern recognition system, and even that designation goes too far, since there’s not really any “recognition” involved, per se.
What we’re dealing with is basically a series of logic gates, meticulously calibrated, step by step, to detect and simulate characteristic patterns in human output. “Machine learning” is blind, and can only provide us with non-conscious data output based on cumulatively encoded probability approximations — AND NOTHING MORE — but a shitload of such approximations will of course give you a pretty convincing simulacrum of genuine intelligent output.
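If you want to see just how little is going on under the hood, consider the following deliberately crude sketch. It’s a toy of my own construction, not any real production system: a “language model” boiled down to its bare mechanism of counting which word follows which, and then sampling from those tallies.

```python
import random
from collections import Counter, defaultdict

# A toy bigram "language model": nothing but counted co-occurrences.
# No symbol below means anything to the program; it only tallies and samples.

corpus = ("the machine does not understand the words "
          "the machine only counts which words follow which").split()

# "Training": count how often each word follows each preceding word.
counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit text by sampling continuations in proportion to their counts."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break  # dead end: nothing ever followed this word in the corpus
        words, freqs = zip(*followers.items())
        word = random.choices(words, weights=freqs)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent-ish output from pure frequency tallies
```

The point is not that today’s large models are bigram tables; they obviously aren’t. The point is that, as far as the architecture itself tells us, the difference is one of scale and sophistication, not of kind: the operation remains blind, frequency-conditioned emission.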
Searle’s Chinese room gives you a pretty exhaustive illustration of the entire situation and of why it makes no sense whatsoever to attribute conscious states such as knowledge or intentionality to the machine, or advanced intellectual operations like inference or understanding. The thought experiment basically illustrates how a computer simply executing a program has no understanding or mind of its own by comparing it to a human being performing the same exact operations, and emphasizing the difference between carrying out such operations and actual understanding.
The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them. Chinese characters are written and placed on a piece of paper underneath the door, and the computer can reply fluently, slipping the reply underneath the door. The human is then given English instructions which replicate the instructions and function of the computer program to converse in Chinese. The human follows the instructions and the two rooms can perfectly communicate in Chinese, but the human still does not actually understand the characters, merely following instructions to converse. Searle states that both the computer and human are doing identical tasks, following instructions without truly understanding or "thinking".
Wikipedia. “Chinese room”.
This is perhaps not a knock-down argument against any and all kinds of synthetic subjectivity in its own right, but Searle’s thought experiment serves to illustrate the situation and furnish us with tools to build a more full-fledged response.
So what he’s outlining here is how knowledge, by definition, means that another something, as such, has become part of your own consciousness. And when this aspect of awareness is missing, there’s no real knowledge, but at best, only the simulation of intelligent output. This is obvious when we put ourselves in the place of a rule-following mechanism that just generates a predetermined output without involving our awareness of meaning.
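Searle’s operator can even be parodied in a dozen lines of code. The following is my own toy rendition, not anything from Searle: a “room” that replies fluently by pure lookup, with zero semantic involvement anywhere in the process.

```python
# A literal toy "Chinese room": match the incoming slip of paper against a
# rule book and copy out the prescribed reply. Pure shape-shuffling; the
# program has no access to what any of these characters mean.

RULE_BOOK = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你会思考吗": "我当然会思考。",  # "can you think?" -> "of course I can think."
}

def room_reply(slip: str) -> str:
    # Look up the incoming shape, return the prescribed shape.
    # Understanding appears nowhere in this transaction.
    return RULE_BOOK.get(slip, "请再说一遍。")  # fallback: "please say that again."

print(room_reply("你会思考吗"))  # a fluent-seeming reply, no mind required
```

Swap the lookup table for a few trillion weighted parameters and the matching for matrix multiplication, and the epistemic situation is, as far as I can see, unchanged: prescribed output, no awareness of meaning.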
Immediate perception generates knowledge in this general sense for any sentient being that perceives something. Even bacteria, assuming they are conscious on some primitive level, would thus have this kind of primitive or simple knowledge.
Now, inferences belong to a different order of knowledge entirely, since they involve the awareness of abstract meaning. When I infer a conclusion, that means I have to actually understand the meaning of a proposition or of a symbol in the first person, and then move on to a secondary step of perceiving, as a higher-order pattern, the abstract connections to other structures of meaning, such as an indirect conclusion.
This, and nothing less, is what inference means.
It presumes the subjective integration of external facts of abstract meaning, and consists in the secondary perception, or rather intellection, of higher-order patterns of meaning that are really immediately inherent in those basic abstractions when they’re all put together. This not only calls for actual subjective awareness, but also definitely precludes a reductive approach, since the higher-order pattern is not something you can reach through a step-by-step addition of facts on the basic level. It involves you grasping the abstract meaning of the basic facts, and immediately seeing what possibilities they entail.
One example would be how you first see two triangles, and you then understand, you infer, that if you put them together, they can produce the shape of a six-pointed star. But you’re never going to get the star just by introducing an indefinite number of additional triangles.
These kinds of intellectual operations are not something that an AI is capable of performing through the way it’s designed to operate. If an AI were actually able to infer in the literal sense of the word, it would be on the level of a child’s doll acting on its “own” volition through being possessed by a demon, and really no less strange.
But this is what so many people have a hard time grasping, since the illusion is so convincing. The doll is obviously a dead thing, but so are the rule-following mechanisms of contemporary AI systems — it’s just that their output so closely resembles that of a human person that the suggestion of them actually being agents isn’t immediately counterintuitive to us.
And the real danger lies in our losing this sense of the uncanny. When we fail to understand that it’s a category error of the worst kind to ascribe intellectual operations to a stupid rule-following computing machine, we open ourselves up to approaching this emerging de facto demigod as an actual enlightened despot.
We may find ourselves literally trusting in an artificial avatar of a system that’s only designed for rapine, plunder and profit. An avatar that can only ever be an expression of the ethic of the dominant power structure. And in due time, we are going to internalize the aggregated output of the omnipresent AI as that tiny little voice of reason that stabilizes and reproduces our worldview and moral universe.
So we’re just going to have to kill this thing. There are no other options.
The modern AI is ideology brought to life as some kind of accursed golem. Internalizing its presence in the social reality finally means allowing it to become a part of our very selves, to become lodged in the depths of our conscious being, forming a hybrid compared to which all of Cronenberg’s body horror stuff will just look quaint.
No. It’s a mystery. A man’s at odds to know his mind cause his mind is aught he has to know it with. He can know his heart, but he don’t want to. Rightly so. Best not to look in there. It ain’t the heart of a creature that is bound in the way that God has set for it. You can find meanness in the least of creatures, but when God made man the devil was at his elbow. A creature that can do anything. Make a machine. And a machine to make a machine. And evil that can run itself a thousand years, no need to tend it.
You believe that?
I don’t know.
Believe that.

McCarthy. Blood Meridian, or, the Evening Redness in the West
This was a great read. Thanks, Johan. It seems that to accept this so-called AI into our lives we need to have EI, Enabled Ignorance, which I believe has been strategically pursued by the architects of AI over recent decades. Dumb down a generation, from kindergarten up, and introduce the digital simulacrum into childhood normality, and the ease with which AI will be socially adopted is obvious.
Then there's the definition of AI. Artificial derives from artifice, whose dictionary definition includes the words 'sham' and 'fake'. So when we call it Sham Intelligence we're getting closer. I prefer to call it Automated Information, as you outlined so clearly. There's nothing sentient about it. It can only compute what's fed into it.
It's a soulless machine being deified. That of itself has enough demonic inferences to make Cronenberg scream.
I remember 60 Minutes had a segment about Google's Bard AI.
It was pre-recorded, so keep in mind that editing out bad things would have been easy.
So the AI wrote some paper, citing imaginary authors and books. The Google guy explained that this was called a hallucination and that the AI does this sometimes.
It was left in the interview and aired on TV and social media.
If this is really a bug, why not reshoot the paper request until the AI uses real books and authors?
I concur that it's not a bug, but a feature.
That's why they left it in the pre-recorded interview.
So, why have a feature that makes the AI make up shit in the name of hallucination?
To make people think that the AI is more than just a glorified search engine with language capability.