I’ve always loved New Year’s Eve. I was trying to think of old holiday rituals to regale my daughter with, and the one that came to mind was a New Year’s memory of my grandmother melting lead pellets in a spoon over a candle flame. When the lead became liquid, we would throw it into a pot of cold water to solidify it. We held the strange shapes up between the candle flame and the wall to cast shadows. Then we read the shadows for predictions about the coming year.
Not long ago I thought about recreating this ritual and learned that it’s apparently very toxic. It spoiled the fun, but I haven’t given up on reading the shapes of strange shadows. For someone like me, with at least a scholar’s interest in divination, there is something intoxicating about the predictive energies of the new year. There’s the taking stock and the act of review, of course. But often this is done in the service of reinvention or prediction. Now that the new year has just begun, prediction and aim-setting reign supreme.
Prediction isn’t an especially responsible intellectual exercise. Working with data makes it more responsible, of course. But I don’t think you need me to remind you that the data we have—about (say) global, national, and local politics or economics—doesn’t cast the shadow of especially auspicious futures.
I’m not much of a data person, anyhow. I’m a scholar of theology, so my primary research material is speculative. Despite the fact that speculation is a deeply human activity that many of us either enjoy or are compelled to do, it’s not a particularly respectable practice these days. For the most part, speculation is either harnessed for economic risk assessment or driven into digital undergrounds, where it festers and reproduces in the form of conspiracy theories and other misinformative tidbits or time bombs. Don’t misunderstand me: I’m not defending conspiracy theories. I’m just defending speculation itself, which we could also call reckless prediction.
So with this idea in mind—that speculation is also a form of reckless prediction—here are a few reckless predictions about the year to come, from the shadows I can see on my own wall:
Social media will keep turning us into passive consumers of information, so acquiescent that we will just eat whatever the algorithm feeds us. This, I suppose, is actually an evidence-based prediction. It’s an established and ongoing phenomenon, and there’s nothing on the horizon that really threatens to stop it. Indeed, what is on the horizon (generative AI) seems only to make it more inevitable. More and more of us find social media intolerable, abhorrent even. And yet we continue to engage with it, perhaps partly out of a sense of obligation and partly because we’re functionally addicted to it.

But for those of us who’ve been using social media for decades at this point, I think it’s clear that the way we engage with it has changed radically and dramatically. This became acutely clear to me over the course of 2024, as I watched the dumpster fire that was Twitter definitively shapeshift into the dumpster fire that is X (people don’t even really call it Twitter anymore!). I was never super active on Twitter, but I did use it. I made IRL friends and professional connections through it, and it kept me actively plugged into a lot of conversations in my field, because of all the academics there. So I think it’s fair to say that I was more than just a passive user.

TikTok, which has come to reshape other social media in its image, generates a very different model of engagement. I’ve spent a lot of time on TikTok (more than I would care to admit), and something happens to me, physically, every time I open the app. It sucks up my attention in a complete and extreme way; I’m immediately trapped in a compulsive scroll. If someone took a photo of me while I was using TikTok, my eyes would probably be popping out of my head and my tongue lolling. Nothing about it feels social to me; in fact, it feels like the most asocial way of using my attention. TikTok has led me to do nothing, IRL, except buy stuff. It’s almost the perfect medium for cultivating a passive, apathetic, and acquiescent consumer populace. Which is why I predict that Donald Trump’s first great act of bipartisan heroism will be when he “saves” TikTok. People on both sides of the aisle will praise him.
The AI resisters will face a new sort of critique. I’ve gotten a lot of positive feedback on some of the anti-AI content I’ve posted. And I do think there are more of us, especially academics in the humanities, who are trying to find practical ways of resisting AI, or at the very least creating AI-free zones in our classrooms (like temporary forms of refuge or sanctuary). I think our numbers will grow. For some of us, it’s a matter of necessity: professional survival.

But I do also feel that this standpoint of resistance is, generally and in the mainstream, held to be futile. We all know that AI is everywhere, like plastic in our water. What’s the point in trying to resist it? some seem to think. What’s more, I’ve heard critiques from colleagues who think it’s irresponsible to ignore AI. We are the grown-ups in the room, this logic goes, and so we should be teaching students how to use it critically and in an informed way. If we aren’t doing this, we are essentially being irresponsible educators. I’m sure most of you are already familiar with this critique. I don’t disagree that it’s important to teach students how to use AI critically. I only object to the idea that all of us should be doing this all the time, as if it were the sole task of education now. I think it’s still essential to create spaces where students develop intellectual work without AI, and because of how ubiquitous AI has so quickly become, this requires a lot of strategic thinking and planning on our part.

I anticipate that, over the next year, the critiques of AI resistance will deepen and change. In part, I suspect, this is because the people who decide to use AI will feel the need to defend themselves against the people who resist it, and so will need to form a justification that feels righteous. I anticipate that those who resist AI will be accused of nostalgia (for bankrupt times), of fruitless professional self-defense, or even of repressive conservatism. For this reason, we may need to think carefully about how we articulate our resistance.
The world won’t end, which will be disappointing for more people than you might think. I work in a profession (theology) where the end of the world has always been a conversation topic. So for me, the end of the world feels ever-present. But one of the things you learn, when people are constantly talking about the last things, is that the apocalyptic mood is always shifting and changing. The end of the world always means something new, even if it never quite disappears. Since 2020, when a certain way of life really did end, the end of the world as we know it has been on everyone’s mind. For some, the impending end carries with it a feeling of doom. But for others, the impending end of the world is the only thing that feels good to think about, because the future looks that bad. I predict that the world won’t end in 2025, and that many of the really terrible things we’re worried about will come to pass instead. For some, this will be more a cause of resentment than of relief. I am curious, though, whether this will shift the apocalyptic mood itself. Will it still feel like we’re caught up in the same apocalyptic tensions at the end of 2025? What do you think?
We will find ourselves valuing things we didn’t think we cared about. When I was little, I was obsessed with my handwriting. I’ve wanted to be a writer (a good writer) since I was very small. In 1st, 2nd, and 3rd grade, this meant (to my mind) that I needed impeccable penmanship. I worked hard to achieve it, and my teachers praised me. I was so proud of my penmanship by 3rd grade that, when my teacher announced we were going to be learning cursive, I came home and had a meltdown/panic attack so dramatic that my mother still talks about it, to this day, as one of my most sensational performances of perfectionism. I really didn’t want anything to fuck with the good thing I’d managed to achieve, least of all cursive.

My relationship to handwriting has changed dramatically since then. At first, as I began to think faster, I just let my handwriting get messier in order to keep up. What mattered, eventually, was capturing my ideas before they disappeared. I continued to handwrite my college essays before typing them (into the early 2000s!). But by the time I entered my PhD program, I had left my handwriting almost entirely behind. All my writing went digital, even my journals. I stopped caring about my handwriting.

Over the past year, however, I’ve had a change of heart. Part of it is that I’m trying to help my daughter learn how to write, and I’m realizing how important handwriting is as a basic element of that process (something she now does very little of in school). She’s started to look at, and critique, my handwriting, and she has actually been encouraging me to refine it again. But another part of it, inevitably, is the apocalyptic mood that generative AI has created in me. It feels like a world I deeply value is ending: a world of words, and books, and paper, and even penmanship. I suddenly find myself valuing my handwriting as part of a whole ecology of things I’ve been taking for granted. I predict that, this year, you will find yourself suddenly and mysteriously cherishing some undervalued or forgotten thing as well. Are you already?
https://www.youtube.com/watch?v=MsY1epVIP9g