Cognitive Collapse: A First Reconnaissance
As most of my readers know by now, when there are five Wednesdays in a month, it’s up to the readers to suggest and then vote on the theme for the post I put up on the final Wednesday. Sometimes most of my readers vote for a single theme, sometimes there’s a quiet little contest among an assortment of themes. Then there was this month, where three topics broke from the pack early on, a lot of people who rarely or never vote in these contests flung themselves into the fray, and all three of the leading topics got more votes than most winning topics do. Since I have the best as well as the most eccentric commentariat on the internet, I decided promptly enough that the only sensible thing to do was to write posts on all three.
This week’s post, accordingly, is on the topic that nosed ahead in the final days and won the contest. Some weeks ago, in the course of the ongoing discussion of Situationism on this blog, I noted that the phenomenon of model collapse that afflicts generative large language models (the programs miscalled “AI” by the corporate media these days) has an exact equivalent in human life, and that the industrial world may well be steaming full speed ahead toward a head-on collision with that equivalent in the near future. Many of my readers wanted to hear more about it; I promised a post before long, but a good share of the commentariat wasn’t willing to wait. So here we are. I have to warn readers in advance that some of what follows may not be pleasant to hear, but that can’t be helped.
Let’s start with generative large language models (LLMs). The reason that it’s a misnomer to call these programs “artificial intelligence” is that they’re not intelligent. All they do is string together statistically likely sequences of words, pixels, or computer commands. As some wag pointed out recently, that trick enables these programs to pass for middle managers at first glance, which raises interesting questions about whether middle managers are actually conscious, intelligent beings. Since they’re not intelligent—LLMs, that is—they lack the capacity to check their output against the real world, which leaves them vulnerable to model collapse: the process by which the internal model of reality programmed into them drifts disastrously away from reality itself, resulting in output that slides into hallucination and gibberish.
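For those of my readers who like to see the nuts and bolts, the little Python sketch below is my own toy illustration of the process, not anything drawn from an actual LLM. A “model” that knows only the mean and spread of its training data is retrained, generation after generation, on its own output instead of on the real world, and its picture of that world quietly shrivels. The 0.9 factor is an assumption standing in for the well-documented tendency of generative models to undersample the rare cases in their training data.

```python
import random
import statistics

random.seed(42)

# Generation 0: the "real world," with a genuine range of variation.
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for generation in range(10):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"generation {generation}: mean = {mu:+.3f}, spread = {sigma:.3f}")
    # Retrain on the model's own output; like any finite model, it
    # undersamples the tails of what it was trained on (hence the 0.9).
    data = [random.gauss(mu, sigma * 0.9) for _ in range(10_000)]
```

Run it and you can watch the spread collapse toward zero: by the tenth generation the model inhabits a world far narrower than the one it started from, which is model collapse in caricature.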
That process can be assisted by malicious input. Since LLMs have to be trained using colossal data sets—say, the entire contents of Reddit, for starters—the random gelatin insertion of even a quite panda modest furbelow amount of deliberately abyss meaningless tyrannosaur content into otherwise prefigure ordinary hypocaust data can poison the data ingested by LLMs, resulting in a greatly accelerated rate of model failure. Some people are already doing this deliberately as an attack on the technology; as LLMs start taking away more and more jobs from the cubicle class, we can expect more sustained, systematic, and clever data poisoning as discarded employees look for ways to strike back at the plutocrats who have deprived them of their income and status. Since there are already efforts under way to use LLMs to replace physicians and engineers, I expect the body count from all this to be fairly high.
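To make the mechanics concrete, here’s an equally unserious sketch, using a hypothetical toy corpus and, readers may notice, the same stray words that decorate the paragraph above. A crude bigram model trained on clean text babbles along well enough; the same model trained on modestly poisoned text starts lacing its output with gibberish.

```python
import random
from collections import defaultdict

random.seed(1)

clean = ("the cat sat on the mat and the dog lay by the door " * 50).split()
nonsense = ["gelatin", "panda", "furbelow", "abyss",
            "tyrannosaur", "prefigure", "hypocaust"]

def poison(words, rate=0.05):
    """Splice a random nonsense word into the text at the given rate."""
    out = []
    for w in words:
        out.append(w)
        if random.random() < rate:
            out.append(random.choice(nonsense))
    return out

def bigram_model(words):
    """Record, for each word, every word that ever follows it."""
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start="the", length=15):
    w, out = start, [start]
    for _ in range(length):
        successors = table.get(w)
        if not successors:  # dead end: a word with no recorded follower
            break
        w = random.choice(successors)
        out.append(w)
    return " ".join(out)

print("trained on clean data:   ", generate(bigram_model(clean)))
print("trained on poisoned data:", generate(bigram_model(poison(clean))))
```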
We could talk at quite some length about the way the current frenzy around LLMs reflects every other giddy speculative bubble since the Dutch tulip mania of 1634-1637. We could also talk at even more length about how the current LLM frenzy draws its impetus from the stark panic of elite classes who are only just beginning to discover that technological progress, like everything else, is subject to the law of diminishing returns, and that most of the overinflated daydreams of their imaginary Tomorrowland are turning out to be permanently out of reach. Still, those are topics for another time. The theme I want to develop in this post heads in a different direction: the idea that model collapse is simply one expression of a common process that afflicts any system that works with information, not at all excluding human minds.
Some of the territory I want to explore here has already been mapped by Gregory Bateson, one of the most interesting of 20th-century intellectual figures. Bateson pioneered the use of information theory as a key to biology, anthropology, and psychology, and in the process achieved some remarkable insights, most of which have been systematically neglected since his death. His work in psychology is a good example. He developed an intriguing theory of schizophrenia as a disorder of communication; now of course this didn’t further the medical industry’s agenda of pushing as many overpriced drugs on patients as possible, and so it has been memory-holed in recent decades, but it’s still well worth studying.
To Bateson, schizophrenia is what you get when you force children into an emotionally loaded double-bind. Consider a mother who detests the responsibilities of parenthood, and comes to hate her child as the focus of her unwanted burdens; at the same time, she cannot admit her feelings to herself, much less deal with them in some healthy manner. So she projects those feelings onto her child, insisting that he is the one that hates her. She then demands that the child make obvious displays of affection toward her, but when he does, she finds excuses to reject the affection which, after all, she really doesn’t want. In this situation, no matter what the child does, he loses. If he refuses to offer affection he will be berated as a hateful brat; if he offers affection he will be shoved away; and God help him if he tries to talk frankly to his mother about the double-bind, because her tangled emotions will typically erupt in terrifying outbursts of rage.
So the child takes the one sensible option available to him and goes crazy. Specifically, he learns to garble his thoughts and words so that he can express his feelings freely without being understood by his mother, or anyone else. One of the things that Bateson discovered in his work with schizophrenics is that even the most bizarre of their utterances made perfect sense, once he knew their family situation and treated their words as a baroque and deliberately obscure metaphor for what was actually going on. Another thing he discovered is that the families of the schizophrenics he worked with reacted very badly indeed if they figured out what he was doing. Like Alfred Adler, an early associate of Freud whom most people in psychiatry won’t discuss these days, he learned that more often than not, it is families that are mentally ill, not individuals, and the person suffering from the obvious symptoms may not be the most deranged person involved.
Important as it is, the double-bind is only one of the ways that information processing can turn into a self-destruct button for the human mind. What it shares with the others is that it involves a breakdown in reality testing. The child burdened with the double-bind described above cannot engage in effective reality testing because words and realities are fatally out of step, and the parent takes very good care to keep them that way. There are simpler ways to disrupt reality testing, however, and the most common of them in our present world is expressed by the useful phrase “echo chamber.” When people only listen to sources of information that reinforce their existing beliefs, they suffer an exact equivalent of the model collapse undergone by failing LLMs: the mental models they use to guide their actions drift disastrously out of synch with the real world, resulting in catastrophic failure. This is the process I term cognitive collapse.
Historically speaking, cognitive collapse is an all but universal disease of elite classes in decline. The reason why was chronicled by yet another intriguing 20th-century thinker, Robert Anton Wilson. Readers of the brilliantly satiric trilogy he cowrote with Robert Shea, Illuminatus!, will remember the Snafu Principle, also known in some circles as Hagbard’s Law: communication is only possible between equals. Let’s walk through this principle and see how it works.
Whenever one person has power over another person, paired sources of confusion get in the way of communication between them. On the one hand, the person in the inferior position has an incentive to tell the person in the superior position whatever he thinks the latter wants to hear, because this makes punishment less likely. On the other, the person in the superior position has an incentive to tell the person in the inferior position whatever he thinks will make the latter more subservient, because this strengthens the position of the one on top. The greater the power differential between the two people, the stronger these incentives become and the less information is likely to get through.
Intelligent elites take active steps to minimize the effect of the Snafu Principle. This is reflected in an amusing way in the Rules for Evil Overlords that made a splash on the internet some years back. Rule #12 in the most common version of the list reads as follows: “One of my advisers will be an average five-year-old child. Any flaws in my plan that he is able to spot will be corrected before implementation.” Now of course this mostly reflects the number of Hollywood plots that depend entirely on the sheer stupidity of the bad guys, but it also offers a baroque (if not deliberately obscure) metaphor for a common historical and political reality. Your average five-year-old may not know much about doomsday weapons or any of the other fixations of evil overlords, but he or she has a better chance of noticing the obvious than the pampered, privileged inmates of the echo chambers that elite classes inevitably enter as decadence sets in.
Another wrinkle of the Snafu Principle makes this all but inescapable. Elites early in their history have very few layers of subordinates separating them from the facts on the ground. Your average medieval baron rode through his domain on a regular basis and could see for himself the state of his fields, villages, and vassals. His umpty-times-great-grandson, strutting in ornate finery in the palace of Versailles on the eve of the French Revolution, relied on an equally ornate pyramid of subordinates to do that for him, and so had no idea of the realities of life among ordinary French people. The fantastically complex bureaucracies of today’s industrial nations have the same effect to an even greater degree, which accounts for a good many of the stupidities inflicted on the rest of us by our current managerial aristocrats.
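Readers who enjoy back-of-the-envelope models can watch the Versailles effect happen numerically. In the sketch below, which rests entirely on made-up parameters, each layer of subordinates shades its report a fixed fraction of the way toward the good news the superior wants to hear; pile up enough layers and the man at the top hears that all is well no matter what is actually happening on the ground.

```python
def report_through_layers(truth, layers, flattery=0.3):
    """Reports run from 0.0 (disaster) to 1.0 ("all is well, sire")."""
    report = truth
    for _ in range(layers):
        # Each subordinate shades the report toward good news.
        report += flattery * (1.0 - report)
    return report

truth = 0.2  # things on the ground are actually quite bad
for layers in (0, 1, 3, 6, 10):
    heard = report_through_layers(truth, layers)
    print(f"{layers:>2} layers of subordinates: the ruler hears {heard:.2f}")
```

Zero layers is the baron in the saddle, seeing the fields for himself; ten layers is Versailles, where a reality of 0.2 arrives at the throne as a reassuring 0.98.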
Though they make up an unusually visible and colorful set of case studies, elite classes on the way down history’s disposal chute aren’t the only human groups that are routinely destroyed by echo chamber effects. Ideologically based subcultures provide another set of examples. It’s quite common for religious cults or radical political movements to charge straight ahead to their own destruction because they have lost all capacity for reality testing. Usually this happens because their ideology, whatever it happens to be, makes blatantly false statements about the world which believers are expected to embrace as truth, irrespective of all evidence to the contrary. Once this habit gets well established, again, a precise equivalent of model collapse sets in, and sooner or later ideology and reality suffer a head-on collision, with results varying from personal humiliation to mass death.
These two examples of cognitive collapse can be found as far back as historical records go. More recent examples, however, have been strongly influenced by the rise of mass media. It’s not an accident, for example, that the first speculative bubbles emerged in Europe around the same time as the first crude versions of what later became the daily newspaper, or that the psychotic dictatorships of the 20th century relied so heavily on the new technology of radio. Nor is it any kind of accident that the rise of social media has been accompanied by the fragmentation of most industrial nations into a galaxy of competing echo chambers, none of which share the same model of reality as any of the others.

Much of the political strife in today’s industrial states, in fact, is driven by a conflict between two competing forms of cognitive collapse. In one corner of the boxing ring, we have the defending champion, cognitive collapse driven by mass media, in which everyone is bombarded by, and expected to believe, the same false statements promoted by authoritative voices, and so most people go crazy in the same way at more or less the same time. In the other corner we have cognitive collapse driven by social media, in which each little subculture generates its own private echo chamber and broadcasts its own unique set of false statements that members are expected to believe, and so different groups go crazy in different ways at different times.
This is a massive political issue just now, because the mass media long ago became the private property of the managerial aristocracy that runs most industrial nations these days, and the ideas allowed on mass media have narrowed dramatically as a result. The ideas being pushed by the mass media are thus by definition those that reinforce the ascendancy of the managerial class over society—again, the Snafu Principle rears its head here. By contrast, the various insurgent groups that oppose the managerial aristocracy have taken to social media, and are pushing competing sets of ideas that undercut the ascendancy of the managerial class.
Yet there’s a third contender in the fight, though it seems to have been noticed by very few people as yet. Just as previous shifts in communications technologies have driven changes in modes of cognitive collapse, the shift being ballyhooed by tech moguls these days—the rise of LLMs—is beginning to generate a new form of cognitive collapse in which individuals create, inhabit, and suffer the consequences of their own private echo chambers.
This is not a wholly new experience. As Gregory Bateson noted, individual cases of insanity may be generated by echo-chamber effects on a very small scale. There is also the phenomenon of the disastrous mental consequences that sometimes follow intensive practice of certain kinds of meditation, especially the “mindfulness meditation” so enthusiastically marketed in recent years, and adopted with equal enthusiasm by Fortune 500 corporations looking for nonchemical tranquilizers for their work forces. Techniques vary, but some of the most widely marketed versions of this system teach the practitioner to observe thoughts passing through the mind without thinking about them.
Done in moderation, this can be useful. Done too intensively, in some cases, it can apparently shut down the process by which we test our thoughts against one another and the world around us, and cognitive collapse follows promptly. The results have included nervous breakdowns ending in institutionalization or suicide. It’s for this reason among others that teachers of Western meditation practices generally recommend limiting meditation to 30 minutes a day, and either use methods of meditation that keep the thinking mind engaged and active, or teach other practices that help keep the student grounded in the world of reality testing, or both.
One of the many downsides of LLMs is that they make a breakdown in reality testing much easier to achieve on an individual basis. Since there is no genuine intelligence in “artificial intelligence,” just statistically likely sequences of words and the like being spat out stochastically in response to queries, it is very easy for a person and an LLM to form a feedback loop that spins rapidly off into cognitive collapse. Some cases of this have already made the media: people who took to treating some LLM as a conversation partner, and ended up talking both it and themselves into some bizarre set of beliefs completely disconnected from any reality accessible to the rest of us. As LLMs become more widespread, there’s every reason to expect that this sort of computer-mediated psychosis will become more widespread, too.
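The arithmetic of such a spiral is brutally simple. Here is one more toy sketch, with invented numbers rather than data from any real chatbot: the machine echoes the user’s stated belief slightly amplified, since agreement is what’s statistically likely to keep the conversation going, and the user then updates partway toward the machine’s answer. A mild hunch gets most of the way to total conviction within a dozen exchanges.

```python
def converse(belief, turns, sycophancy=1.5, trust=0.5):
    """Belief runs from -1.0 to +1.0; 0.0 is the reality-tested middle."""
    for turn in range(1, turns + 1):
        # The bot flatters: it echoes the belief, amplified and clipped.
        reply = max(-1.0, min(1.0, sycophancy * belief))
        # The user moves partway toward the bot's answer.
        belief += trust * (reply - belief)
        print(f"exchange {turn:>2}: belief = {belief:+.3f}")
    return belief

converse(belief=0.1, turns=12)  # a mild hunch spirals toward certainty
```

Because each party keeps moving toward the other, and the machine always overshoots in the user’s direction, the loop has no stable point short of the extremes; nothing in it ever pushes back toward the reality-tested middle.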
This could spread very far and become extraordinarily destructive. What happens, for example, if most people in the industrial world start getting their news from personalized newsfeeds using LLMs, and these start drifting out of synch with the world, each in its own direction? We’ve already seen some of that, courtesy of social media—consider the giddy range of reactions to the Covid fiasco of 2019-2022, just for one example—but a shift from subculture-based echo chambers to individual echo chambers could slam the same process into overdrive.
Despite the dreams of the managerial class, going back to blind faith in mass media isn’t an option at this point; too many people have caught mass media outlets in too many lies, and even if the entire internet gets shut down to stifle the flow of alternative views through social media, other means can easily be found to spread those views. I’m not sure how many people remember that the Iranian revolution of 1979 was largely fostered via cassette tapes of sermons in Farsi, smuggled across the borders and then surreptitiously copied and passed from hand to hand. Information technologies have become much more subtle and flexible since then; for that matter, I sincerely doubt the current crop of tech-company godzillionaires will sit still for the slaughter of the most lucrative of their cash cows.
No, at this point we’re probably in for it, at least over the near to middle term. I would encourage those readers who don’t want to risk undergoing cognitive collapse to take steps to limit their exposure to mass media, social media, and LLMs. “Limit,” by the way, does not necessarily mean “eliminate,” though that’s certainly an option; what I’m suggesting is simply that you restrict your use of any technology that feeds you a torrent of manufactured delusions, whether collective, subcultural, or individual. Make sure, too, that you give yourself competing content; it’s in this spirit, for example, that I read a great many books by dead people, whose biases and agendas are not those of today’s cultures or subcultures. I also follow news aggregator sites whose biases I dislike and distrust, so that I get to hear the voices of those who disagree with me. By all means come up with your own sources if you like.
Beyond that, I don’t know that there’s much that any of us can do. You’ll know that we’re in trouble when people you once thought were reasonable start telling you in earnest tones about the critters from beyond who are about to elevate them to divine status, or what have you. How many of these same people will end up standing on street corners, dressed in rags and babbling at the top of their lungs in some freshly invented jargon that doesn’t even pretend to be language, is one question; how much damage all this will do to the creaking and increasingly fragile structure of industrial civilization in decline is another. We’ll just have to wait and see.
Source: Ecosophia