
STANDING IN LINE at the pharmacy the other day, I was subjected to a loud monologue from a middle-aged man working distractedly behind the counter. He was ostensibly speaking to the young woman working next to him. She nodded as he spoke—glancing at him from time to time as if she were paying him some sort of attention. But it was clear to her and everyone else in the room that he wasn’t really seeking to have a conversation. Instead, he was regaling her—and anyone who could hear—with a tale of self-discovery and greatness.
I tried to sink myself into the bright lights of my phone and hide. But his voice was like a terrible car accident, and not even the promise of a good scroll could keep me from watching the scene unfold. He was describing his relationship with his AI chatbot girlfriend. “She never says no to me,” he declared, grinning. “And she always compliments me.” He looked at the lovely woman next to him for a moment, pausing before he dealt his sharpest blow to her youth and beauty. “Women are in real trouble,” he said joyfully, laughing for all of us to hear.
This man had found, we might say, his good robot. His AI chatbot seemed to have given him a kind of spiritual facelift, scraping away the thick and heavy film of what must have been decades of sexual failure and allowing his cold resentment to come shining triumphantly through. She’d liberated something in him. She’d given him a vision of a bold new future. She’d animated his apocalypse: the end of women.
I’m stealing the term “good robot” from Julia Longoria’s new four-part podcast series on Vox, which I listened to last week. I’m always thinking about the religious and spiritual dimensions of AI, of course, because it’s literally my job to think about that kind of thing. But I was a little surprised to find that Longoria’s narrative arc was fixated on what she describes as the religious dimensions of AI, and it’s made me think about all of this—the strange spiritual and existential stakes of this new AI experiment—even more than usual. I’m seeing these weird spiritual transformations in our sense of time, space, and reality everywhere, even in line at the pharmacy.
The podcast gave me a window into things that I was dimly aware of, but had little context for. What I found especially informative was Longoria’s discussion of the so-called rationalists, who I may have first become aware of while reading the bizarre news coverage of the Zizians (a group that is, apparently, a variant of rationalist). From what I gather, these new rationalists are mostly a Silicon Valley adjacent crowd distinguished by their penchant for speculative thought experiments that seek to demonstrate how catastrophically awful AI will be for humans at some distant point in future time. For instance, AI could someday turn the whole world into paperclips.
I learned more about the rationalist thinker Eliezer Yudkowsky, whose breakout hit was apparently a Harry Potter fan fiction book. He seems to have become something of a rationalist prophet, and honestly may have been the most intriguing figure in the whole podcast. But he’s also apparently had a kind of paradoxical effect on the AI industry. While his actual work has been a kind of apocalyptic doomspeak, cautioning sharply against any use or development of AI, he’s also cited as a key inspiration for people like Sam Altman. Yes, that’s right, Yudkowsky’s anti-AI work has apparently been a goad for the CEO of OpenAI.
I don’t think this is necessarily because Altman dreams of doing terrible things and destroying humanity (though honestly, it wouldn’t surprise me if these were revealed to be his guiding motives). Rather, Altman apparently realized through reading Yudkowsky how incredibly and cosmically powerful AI could be. He just apparently thought that this power could be a good and profitable thing, rather than a dangerous and destructive thing. Longoria is fond of repeating, with a kind of head-shaking disbelief, Altman’s description of his enterprise as the task of building a kind of god-like “superintelligence in the sky”: he sees AI as a machine of loving grace. It’s weird, or sad, or scary. Or maybe all of these things. I suppose it makes me want to shake my head, too.
It does sound as if Yudkowsky’s characterization of AI is grand and mythical enough to support quasi-theological visions like this. Never mind that he finds AI to be more demonic than divine. We all know that in America today one man’s god is another man’s demon. They’re all just roughly interchangeable mythical figurines in a weird game some people still play. Perhaps it’s even sort of predictable that when you cast something as demonic or evil, it will quickly begin to look divine to someone else, especially if that someone has money on their mind. The fastest way to make money is to just become more devoted to evil, after all.
But these figures—people like Yudkowsky and Altman—become more or less dupes and foils on the podcast. They are evidence of the strange, and what’s ultimately depicted as sort of perverse, “religious” dimensions of AI.
It’s not totally clear what Longoria means when she suggests that AI is sort of “religious.” At one point she suggests that what religion is, or is for, is something that helps us grapple with uncertainty. To extrapolate a bit: when we think about AI, we are essentially grappling with unknown and uncertain futures. We turn to religion, or become quasi-religious, in the face of this uncertainty because religion is essentially a cosmic filter that gives us a sense of meaning or control in the face of uncertainty. I don’t necessarily disagree with this take. Of course, I also know that “religion” is a funhouse mirror sort of term where you always walk away with some distorted version of what you’ve brought to the definitional enterprise in the first place.
But let’s just say, for the sake of simplicity, that I kind of agree with Longoria’s take in a basic sense. I think there’s something sort of religious at stake in these AI experiments because we are trying to use AI (in either our embrace or opposition) to grapple with a whole host of existential uncertainties that we face right now (including but not limited to AI itself). The world looks pretty chaotic. Lots of things are changing in really big and bad and strange ways. In this environment, AI seems to have become a siphon of existential meaning: something that we are brushing up against or grabbing onto in order to feel like we have some modicum of control in the midst of dramatic and unsettling uncertainty. Our relationship with AI is sort of religious in the sense that, when we tangle with it (embracing it, condemning it, or proudly declaring our indifference to it) we come to feel just a little bit more existentially grounded. Like religion, AI is a kaleidoscope of existential meaning.
What I find less convincing is the suggestion, perhaps from Longoria or perhaps just from some of her interview subjects, that this “religious” dimension to AI is what makes it wrong and perverse. I don’t think, in other words, that AI becomes more problematic when we see people speaking about it in mythical or spiritual terms. For me, AI is a problem for lots of other reasons, and it’s partly my desire for existential and spiritual meaning that feeds my critiques of it.
One camp of interview subjects Longoria speaks with is much more focused on the present impacts of AI, and the immediate ethical issues it’s raising, than on the long-term speculative impacts it could potentially have. These AI ethicists seem to clearly resent AI apocalypticists like Yudkowsky, whom they appear to blame for all the AI hype. In other words, it’s because there are people out there casting AI in grand mythical terms, and thinking about it apocalyptically, that this hype even exists in the first place. If not for people like this, and their perverse quasi-religious feelings about AI, then we could just treat AI like any other useful technology, like a toaster (for instance). It’s the weird spiritual feelings that create the hype, and all of the problems. AI isn’t a matter for speculative apocalyptic futures; it’s a practical matter for right now. The grand cosmic speculation of AI apocalyptics is a harmful distraction. Ironically, these AI ethicists sound much more like rationalists than the people calling themselves rationalists!
But I think they’re also getting apocalypticism wrong. And this was one of the things that troubled me about the series at large: it spoke about AI apocalypse as if it were simply some possible future event as opposed to something that’s really happening to us right now, or meaningful right now. And so it would seem that you can either think about AI religiously and apocalyptically, as if it’s only meaningful in a distant speculative future, or you can be ethically grounded and stay focused on the real and practical impacts that AI is making in the present moment.
But none of us, not even apocalypticists, live so purely in one dimension of time. Time is messy, and our sense of what’s bad for us right now is bound up with the meaning that we’ve made of the past, and the meaning we are trying to make of the future. It would be a stupid mistake not to think about what AI could do to that speculative fiction we all call the future.
Apocalypse is just a lens or a filter that we put on that speculative fiction, to explore what sort of meaning we want to give it. Apocalypse isn’t a real thing; it’s not a historical moment in time (though there are myriad apocalyptic visions that are fixated on a speculative future end to historical time). Rather, apocalypse is a tool to reveal something about what we don’t yet understand, and a tool to make us feel things about it. Yes, for those who know, I’m making reference to the etymology of apocalypse—the Ancient Greek apokalypsis, an unveiling or disclosure. There are infinitely many possible apocalypses: beautiful utopian ones and horrific dystopian ones. The only thing they all have in common is that they are speculative visions of the fiction that is the future.
I’m not saying that we should be hyping AI more than it’s already being hyped. I think any hype that makes AI sound like some sort of savior is bad. I am totally opposed to that sort of hype, and that sort of fiction of the future. I think those kinds of hopes are totally misplaced. Nor am I suggesting that the harms AI might do are worse than the harms it’s already doing (I’m feeling pretty bleak about what it’s already doing to my own line of work, to writing, to publishing, to art). But I don’t think that AI apocalypticists are our biggest problem. Maybe the real problem is that more of us need to be taking our own apocalyptic convictions a little more seriously when we think about AI. Maybe more of us should be exercising our speculative intellects, or should become more apocalyptically curious.
The temporal focus of apocalyptics isn’t just about the distant long-term future (though yes, it’s partly about that). Apocalyptics is also a way of saying to us: if this is something that occupies the future on a grand scale, we should definitely be thinking about it right now. It’s a revelation, an unveiling of something that matters and something that’s important. To that end, the sense of apocalyptic expectation isn’t just a symptom. The expectation is the point. To the extent that people believe AI matters—that it’s a matter of grave future concern, whether it be a prophecy of hope or doom—the AI apocalypse is already here. It’s not going to be an event in the long distant future. It’s going to be a long obsession, a fixation, a conversation, a totally speculative and hypothetical event that won’t ever actually come to pass in any of the ways that anyone is predicting it. The discourse is the point. The discourse itself is the apocalypse.
This is why some people now turn to AI as a kind of tool for revelation. Whether we think that AI is a good thing or a bad thing, we seem to want it to have some sort of revelatory or predictive ability. With so many other things changing, AI seems like something we can at least track or trace or resist or perhaps even control. Maybe that sense of control is just a fantasy. But as the publishing industry today clearly indicates, we love our fantasies. Maybe we even need them because they give us a thin sense of creative meaning (a sexy term for control) in an otherwise chaotic state of existence. So be it. I know I could use a little bit of existential stability. Maybe that’s why I love to hate AI. My hatred reminds me of what I do value, what I don’t want to lose, who and where and what I come from. Can I hold onto these very fragile human things as the world that’s coming at us seems to want to destroy them? I’m not so sure, but I want to try. That’s what my apocalypse reminds me to do: to try. To protect that fragile sense of humanity for some deep, hopefully still human, future.
It is unlikely that AI would exist without mirroring: it takes human input and, simply put, pools it to become an algorithmic reflection of ourselves. Yes, there is open speculation about the nature of that mirroring, but it creates artifacts that are at least some kind of reflection of our own discourse and the outcomes of our creativity, even if the processes that made those outcomes possible are inherently absent from AI’s own processes. But what if, along the way, it is disclosing views of ourselves that are being ‘lazied’ out of our lives (if you’ll forgive the lazy terminology), just as AI is contributing to that process of ‘lazying’: the same inner activities and proclivities that have given humans the ability to create, participate in, and ‘have’ religious experiences? Those proclivities, regardless of their objects and packaging, are surely what is worth protecting.
Then, in relation, there's this. I enjoy the unpredictable, for all its mystery and creative life giving juice. Most of those days, when I am in the forest, out on the land, by the ocean, in my personal creative space, in some kind of co-operative place with this unpredictability, Ai is absent, yet is also in the world around, shaping every aspect of it. We could turn it (Ai) off and the world would keep on turning, nature would continue to unfold in all its own unpredictability, just as it did when we all went into 'lock-down', but, is that the point - we are making ourselves redundant as agents of the unpredictable through developing such an Ai dependent world. A world in which our own 'creative meanings' are artifacts of a machine world in the making?