How generative AI will ruin science and academic research
On late-stage digitalization and knowledge erosion
Background: the epistemology of modern mass media
I often come back to Neil Postman’s 1985 classic Amusing Ourselves to Death. It’s a penetrating analysis of the cognitive effects of media technology. His focus is mainly on how the format and mode of communication influence the character of content and how that content then trains us, but to a lesser extent also on how the discourse and the “atmosphere” of the information exchange affect these issues.
That was almost forty years ago, and the disruptive medium under scrutiny was television. The entire approach seems almost quaint, the 80s being, in relation to the current period, a comparative golden age of critical thinking, complex exposition and conscious, creative citizens.
Still, the issues Postman emphasizes were already significant way back then. You can summarize his criticism thus: television’s format has complex, detrimental effects on the content and character of public discourse, as well as on the cognitive abilities of human beings, especially their ability to understand complex issues and to parse arguments and evidence with many layers and branching implications. The basic issue, in Postman’s view, is that written exposition and complex oral tradition alike foster and prepare the human mind for the rigors of critical thinking and rational reflection, and that the entertainment media and discourses exemplified by television in particular, geared towards short-term gratification and the communication of sensational experiences, have rather the opposite effect.
His examples are really glaring, even back then:
Contradiction, in short, requires that statements and events be perceived as interrelated aspects of a continuous and coherent context. Disappear the context, or fragment it, and contradiction disappears. This point is nowhere made more clear to me than in conferences with my younger students about their writing. “Look here,” I say. “In this paragraph you have said one thing. And in that you have said the opposite. Which is it to be?” They are polite, and wish to please, but they are as baffled by the question as I am by the response. “I know,” they will say, “but that is there and this is here.” The difference between us is that I assume “there” and “here,” “now” and “then,” one paragraph and the next to be connected, to be continuous, to be part of the same coherent world of thought. That is the way of typographic discourse, and typography is the universe I’m “coming from,” as they say. But they are coming from a different universe of discourse altogether: the “Now ... this” world of television. The fundamental assumption of that world is not coherence but discontinuity. And in a world of discontinuities, contradiction is useless as a test of truth or merit, because contradiction does not exist. My point is that we are by now so thoroughly adjusted to the “Now ... this” world of news—a world of fragments, where events stand alone, stripped of any connection to the past, or to the future, or to other events—that all assumptions of coherence have vanished. And so, perforce, has contradiction. In the context of no context, so to speak, it simply disappears. And in its absence, what possible interest could there be in a list of what the President says now and what he said then? It is merely a rehash of old news, and there is nothing interesting or entertaining in that.
Postman, ibid.
This is not an unfamiliar situation for most of us. I even had a similar experience this morning in an attempted discussion with someone who, incredibly enough, stated four (!) different, mutually incompatible things within the space of a couple of short paragraphs, and who of course still kept insisting I was wrong. His stated position contained the following propositions:
Beliefs are absurd
Beliefs ought to be held only due to acceptable evidence
You can hold beliefs for whatever reasons you like
Beliefs are by definition anchored in emotion and not acceptable evidence
Each of these statements is literally incompatible with every one of the others.
It’s like a convoluted version of the Liar’s Paradox. If I agree with him, I’m necessarily wrong, and if I disagree, I’m wrong too. But in contrast to the old thought experiment, where my error lies in the direct affirmation of a statement’s opposite, here there’s literally no way to make sense of how I would be wrong if I either agree with or reject his position, since it’s internally incoherent in a complex and not only binary sense.
One is almost impressed at the intensity of this discursive dumpster fire.
So as most of us would recognize, this exchange, at least to some extent, reflects the epistemic character of contemporary public discourse in the digital sphere. Communications are often simplistic and disjointed, and if they approach some level of complexity, they almost immediately veer off into contradiction, irrelevance or actual nonsense as above.
What does this mean for knowledge in general, and especially for the quality, retention and reproduction of human cultures’ complex knowledge contained in traditions such as science or the humanities?
First of all, I think a situation such as ours will significantly shrink the pool of available candidates for the difficult and often painstaking work of effectively stewarding such traditions of knowledge, for themselves as individuals as well as for the institutions or organizations that are needed to maintain the traditions. You’re not going to have a lot of people able to write a Doktor Faustus or Common Sense, or to produce any work of art that connects with the fullness of the spirit, character and problems of its age. And you will not find many people who would be able to understand it even if something of that sort could be produced.
The issue here, to a great extent, is that our incoherent age and its media technologies fail to equip people with the ability to fully comprehend basic abstract principles, and transfer the gist of situated experiences and insights to another context. We basically do not get properly familiarized with the universal common-sense abstractions and concepts that make complex discourse and critical thinking possible. Without this familiarity, we are less able to make sense of new and chaotic situations and see the connections and similarities between different experiences and forms of knowledge. Reason as such gets eroded, as Aquinas would probably have put it.
The knowledge we do get is specialized, commodified and instrumentalized, and while it can be quite extensive, it has an insular character and doesn’t effectively enable critical thinking in the abstract and all-embracing sense. It’s sort of like how someone could learn to reliably use a calculator through rote memorization without having more than a rudimentary grasp of mathematics. He can operate the device, but he’s not a sovereign master of the discipline(s) involved. He could never design and build a calculator.
“Man is the measure of all things”, Protagoras once said, and his point was probably not relativism but rather the fact that human beings can (or ought to be able to) make reasoned assessments of any and all situations and states of affairs in the world.
The compounding effects of contemporary AI
Ok, so there’s a good bit more to say about the character and problems of knowledge against this overall background of discourse, media technology and cognitive adaptation we find ourselves in, but let’s focus specifically on the contributions of AI here.
Contemporary AI is a set of pattern recognition systems designed for surveillance and discipline, for controlling the flow of information, and for the mass production of commodity substitutes for cultural content under a neoliberal framework.
It’s not really possible to overstate the damage a set of systems like this could do if it becomes an integrated part of the knowledge production of society. It’s not only that they massively compound the problems of the background I tried to sketch above, i.e. those of a sensationalistic digital media environment almost entirely devoid of exposition and geared towards easily gratifying entertainment. The basic purpose of the self-learning, information-curating algorithm was always to minimize the distance between the product and the consumer through reinforcing desirable patterns of behaviour. In other words, the algorithms will primarily tend towards reinforcing behaviour that effectively supports capital’s extraction of surplus value from the worker-consumer.
This is a point that could be reiterated many times. Google is not a library. It’s a private corporation structured around a system designed to commodify information for profit, and it has neither an obligation nor any real incentive to carry the data you care about or have real use for. What’s more, this and similar corporations are in constant cut-throat competition for dominance within an almost totally centralized information architecture that immediately connects almost every human being on the planet. It’s like how the great white shark cannot stop moving without suffocating.
So we’ve designed a perfect system for rapidly evolving these systems for commodifying information towards maximum profit, which also coincides with a centralized communications architecture that enables a single modification of a Google algorithm to simultaneously impact almost every person in the world.
Postman’s misgivings about television indeed do seem quaint.
What follows from the above is that any other qualities of the informational output of these systems besides those that reinforce profitable behaviours of the consumer in the short term (compare the advertising industry) become irrelevant. They’re secondary priorities at best, and insofar as they interfere with the profit directive, they will be actively suppressed. Moreover, these priorities will have an immediate global impact.
And these digital systems for the commodification of information under a capitalist framework cannot do otherwise, or they will suffocate.
When we add AI to the above mix, especially the “generative” kind, these problematic processes simply get aggravated. AI in its current form acts as a force multiplier to the commodification of information within the established economic order and its digital structures of exchange. Self-learning algorithms streamline the flow of information and enable targeted advertisement, propaganda and narrative control on the individual level with incredible speed and specificity. GenAI, on top of this, enables the automated production of information commodities that inevitably will be tailored to synergize with the surveillance, marketing and behavioural modification objectives.
Or the sharks will suffocate.
So we could just ask ourselves whether the fostering of well-rounded independent thinkers, spiritually mature, capable of satisfying their own needs and with an internal locus of control, is going to support the short term profit incentives of capital or not.
The answer to this question determines the tendency of the proprietary algorithms that are now being designed to achieve full-spectrum information dominance.
Science and academic research
What does this have to do with the future of science?
Well, the implications are staggering, and seemingly all across the board. For one, the narrowly specialized consumers we are mass-training, unfamiliar with universal concepts and abstractions and ill-equipped for critical thinking, are not going to be effective stewards and renewers of complex traditions of knowledge, whether we’re dealing with particle physics or folk medicine. Without the universal concepts, without a familiarity with logic, with the principles of cause and effect, with what constitutes reasonable evidence irrespective of any particular context, people will be less and less able to see the connections between different disciplines and internalize the coherent view of the world that is necessary to navigate it independently. People will be less competent and less self-reliant when it comes to inventing new knowledge in unfamiliar situations and adapting their conceptual schemes and experience to unforeseen circumstances. Just like how the rote-memorization calculator operator above will not be able to do math with pen and paper when the machine breaks down.
To attend school meant to learn to read, for without that capacity, one could not participate in the culture’s conversations. But most people could read and did participate. To these people, reading was both their connection to and their model of the world. The printed page revealed the world, line by line, page by page, to be a serious, coherent place, capable of management by reason, and of improvement by logical and relevant criticism.
Postman, ibid.
But apart from the detrimental effects on our cognitive abilities, as bad as they may be, there’s a significant sense in which genAI and the current iteration of late-stage digitalization may structurally undermine the nature and quality of the information and experiences inherent to our traditions of knowledge, ancient as well as modern.
Recall that the centralized information architecture for inevitable structural reasons will tend towards promoting such information that reinforces behaviour which in turn supports capital’s extraction of surplus value from the worker-consumer. Reliability, objectivity, and a hundred other measures of quality that you or I might want to prioritize are almost entirely irrelevant. The profit motive, and auxiliary objectives such as propaganda, strategic opinion formation and the minimization of disruptive discourses, will be the top priorities.
And as the output of genAI, not least due to the centralized digital architecture, starts dominating channels and modes of communication as well as the content of our information repositories, the character of the AI output will have a significant influence on the quality of the data retained within whatever traditions of knowledge can remain. In other words, the content and modes of operation of science as a tradition of knowledge will tend towards the priorities inherent to the systems designed for the commodification of information. Towards reinforcing profitable consumer behaviour and strategic opinion formation.
One pathway towards this ignominious end is the introduction of generative AI into the publish-or-perish framework of contemporary academia. Let’s not pretend that only students will use ChatGPT or similar systems to cut corners, and that everyone with an MA or above will stay totally clear of such perversities.
Far from it. The current bibliometric model of academic competition will put the minority of us who refuse to touch this stuff at a significant disadvantage. Our output will be much more limited in comparison to that of people who, with the help of tailored genAI tools, can churn out perhaps two viable research articles per day. They might be sub-par and derivative, but this is largely a numbers game.
Several mechanisms will then increase the prominence of AI-generated material within science and academia, not least the fact that people who have padded their resumes with this sort of garbage will make headway in the competition for tenure, scholarships and research positions, which will force others to follow suit. Journals will exploit this production to increase throughput, visibility and market share through brute force, which again will pressure the competition to follow suit. Low-cost journals will flood the market with semi-generated content.
Researchers and scientists will end up as glorified “prompt engineers” in the near future.
One would perhaps somewhat naively imagine that the peer-review process should still be able to operate as a reliable quality control, weeding out the worst excesses of an otherwise downward spiral. One would be wrong.
Peer-reviewers, this weird brand of people who are willing to do painstaking and boring work without compensation, will of course face a torrent of bland, sub-par, AI-generated articles as the volume of production predictably will increase through the use of these kinds of tools. And what’s the inevitable solution to this little conundrum?
Have the AI do the “peer review” as well, of course:
You’ll have AI reviewers checking papers written by algorithms published in journals to be read by nobody. Except perhaps the AI themselves, now generating their own training data in a wicked informational feedback loop that’s going to be replete with structurally integrated hallucinations.
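The feedback-loop worry above can be sketched as a toy simulation (entirely my illustration, not from the source): a “model” whose only training data is the previous model’s output, plus a small systematic bias standing in for hallucination, drifts further from ground truth with every generation.

```python
import random
import statistics

random.seed(0)

true_value = 10.0          # the "reality" no generation looks at after gen 0
estimate = true_value
bias_per_generation = 0.2  # hypothetical hallucination drift per generation
noise = 0.5                # sampling noise in the generated "literature"

drift = []
for generation in range(20):
    # The "training data" is now samples of the previous model's output,
    # shifted by the hallucination bias, rather than observations of reality.
    samples = [random.gauss(estimate + bias_per_generation, noise)
               for _ in range(100)]
    estimate = statistics.mean(samples)
    drift.append(abs(estimate - true_value))

# Later generations sit further from the truth than earlier ones
print(round(drift[0], 2), round(drift[-1], 2))
```

The numbers are arbitrary, but the qualitative point survives any reasonable parameter choice: once the corpus feeds on itself, a systematic error is not averaged away but accumulated, generation after generation.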
So where’s the quality control? How is it even conceivable? Who will do the “fact-checking” of the torrents of AI-generated material? And in reference to what data? AI-generated or curated research articles whose information has been disconnected from reliability, objectivity and validity, and is now being produced towards the end of reinforcing profitable consumer behaviour and strategic opinion formation?
All of this holds the seed to a really wicked problem of epistemology, of the fundamental quality of the evidence which is accessible to the human being. If this goes much further, we’re namely approaching an entirely novel situation of human knowledge where the basic chain of evidential testimony gets broken. You can’t actually trust any piece of information received through the centralized digital infrastructure to be the genuine account of the experiences, conclusions or findings of an actual human person. Everything will be potentially in doubt.
Everything.
So nowadays, when I’m sitting about in the wee hours, providing free peer-review for some obscure philosophy journal, and I come to ask myself whether it’s all worthwhile, then I just think about this AI-generated rat going right through peer review with his giant dick and four distended balls, and I smile to myself and recall that suffering is both fruitful and purifying.
The great march of mental destruction will go on.
Everything will be denied. Everything will become a creed. It is a reasonable position to deny the stones in the street; it will be a religious dogma to assert them. It is a rational thesis that we are all in a dream; it will be a mystical sanity to say that we are all awake. Fires will be kindled to testify that two and two make four. Swords will be drawn to prove that leaves are green in summer.
We shall be left defending, not only the incredible virtues and sanities of human life, but something more incredible still, this huge impossible universe which stares us in the face. We shall fight for visible prodigies as if they were invisible. We shall look on the impossible grass and the skies with a strange courage. We shall be of those who have seen and yet have believed.
Chesterton, Heretics.
I quote from https://en.wikipedia.org/wiki/Iron_cage which in turn is quoting Max Weber.
> In his 1904 book The Protestant Ethic and the Spirit of Capitalism, Weber introduces the metaphor of an "iron cage" [steely shell]:
>
> > The Puritan wanted to work in a calling; we are forced to do so. For when asceticism was carried out of monastic cells into everyday life, and began to dominate worldly morality, it did its part in building the tremendous cosmos of the modern economic order. This order is now bound to the technical and economic conditions of machine production which to-day determine the lives of all the individuals who are born into this mechanism, not only those directly concerned with economic acquisition, with irresistible force. Perhaps it will so determine them until the last ton of fossilized coal is burnt. In Baxter's view the care for external goods should only lie on the shoulders of the "saint like a light cloak, which can be thrown aside at any moment". But fate decreed that the cloak should become an iron cage.
>
> According to Weber, the market-dominated economic order was created by innovative, religiously motivated economic activity. But the individual today can no longer engage in such creative action. Instead, the worker must operate in a narrowly-defined specialization, and economic enterprises must continually strive to maximize profits and rationalize their production for the sake of efficiency.
What concerns me is that I cannot see a way for this cultural modus operandi to be superseded. Something Weber suggests earlier is that the capitalist spirit came to dominate global culture precisely because of a religious devotion to the accumulation of material influence [capital]. In a globalized economy where mass surveillance and behavioral modification are not science fictions but pedestrian technologies, what can possibly challenge a culture which irrationally pursues efficient material accumulation?
Humans knowing stuff may make me feel nice and warm and happy. However, humans who sacrifice material efficiency for the sake of pursuing human understanding will, definitionally, be at a material disadvantage. This was already problematic when material disadvantage meant 'only' that the other guy had more guns and could take your lunch money. Now they can establish a techno-bureaucratic order which essentially criminalizes undermining their material efficiency.
This is what I see happening to contemporary specialists in these technologies. For many of my colleagues, refusing to participate in this mess amounts to a sentence to an unfulfilling life. They have been socialized to be satisfied only with technological problem solving, and offered only one alternative: churn out shit papers that people will cite based on the abstract because they don't have time to think about anything. "Be thankful that you have any time at all to scribble formulas on a whiteboard. If you don't like something about this you can go get a low-paying dead-end job in web development. Don't like that either? Work below 'minimum' wage on a menial job and never think about anything interesting ever again while you marvel at how unaffordable your insulin is. Unhappy with your options? Be homeless. Nobody is gonna give you any insulin though. Just remember while you ruminate: this is the freest man has ever been. You really are quite unreasonable to be so unhappy."
Why wouldn't they submit AI generated trash? It's better than the alternative - right?
Not to worry. Science is already ruined. Sterile AI trash will just bury it in the landfill, thereby mitigating its stink.