ONE OF THE SIDE EFFECTS of this AI boom cycle we’re living through is that it’s drawing some sharp dividing lines between academic colleagues. We seem to be sorting ourselves into two camps, pro-AI and anti-AI: those who are open to using AI in their classes and those who refuse to. Of course, many of us qualify our positions. “I am entirely opposed to active use of AI in my classroom, but realize that I’m using it on my phone, in email, and in a million other places in my life,” for instance. Or, “I am experimenting with AI, but am also convinced that its overall effect on our teaching and writing is a net negative.” Nevertheless, this sharp division shapes our respective positions.
If you’re an academic on social media, you’ve probably witnessed this division become manifest in hostile exchanges. Recently, one of my colleagues made an argument, on social media, for solidarity between pro- and anti-AI academics. I think some sort of solidarity will ultimately be important. It’s what we will need, on some basic level, as we shape institutional policies, procedures, and curricula. So it’s a worthy goal. But what would real solidarity look like right now? How could we get there?
Maybe it’s too early to think about this. Perhaps this whole thing will turn out to have been a farce, or a new elite panic. Or maybe it will turn out to be a transformation in education and communication more radical than what followed the invention of Gutenberg’s printing press. We’re still trying to understand what we think, and where we stand, as individuals. This may be one of the reasons why the disagreements can feel so sharp and hostile. There is a lot of uncertainty, and people on both sides are grappling with it in very different ways. This uncertainty can make solidarity look like a distant fantasy.
As I’ve been pondering this, however, one thing does seem worth emphasizing. I don’t think there can be any real or genuine solidarity unless people who hold a pro-AI position can acknowledge the real value of resisting it, and can support the perspectives and pedagogies of their anti-AI colleagues. I think the burden of recognition and acquiescence is on the pro-AI camp, because they hold the position of greater power right now.
I WASN’T INITIALLY in the anti-AI camp, myself. But I’ve sharpened my opposition to it for a variety of reasons, one being that I used it in an assignment. The ultimate lesson of that assignment (like many other assignments I’ve seen my colleagues in various fields describe) was basically, “look how much smarter (and more creative) you are than AI!” There’s only so much a student can learn from an assignment like that. And there are only so many times you can teach that lesson before it gets stale. Ultimately, I think AI is a pretty boring pedagogical tool in the kinds of classes I teach.
But I’ve also come to adopt a more openly anti-AI stance because of the incredible pressure I feel from the culture at large, from students, from colleagues, and from administrators to use it in my classroom. I’ve adopted an anti-AI position in part because I’m assuming that lots of other people are using it. I’m not going to impose my pedagogy on them. But I’m going to let others be the ones to use it, not me.
My feeling is that, no matter what, there need to be refuges in our institutions where students learn to write and think without AI. This will only become more important (especially given that we were already failing at this task before the AI boom). It baffles me that people are critiquing colleagues who want to do this! I think it’s a net benefit for everyone. Should we all be using AI all the time? I don’t honestly believe that anyone thinks that. But it can sometimes feel like pro-AI colleagues are pushing us in this direction. Pedagogies that creatively work to defend our classrooms against the onslaught of AI are of great value to all of us in academia, even to those who think it’s futile to resist it.
I UNDERSTAND THAT it can feel really bad when you mention on—let’s say—social media that you’re interested in experimenting with AI in your class and suddenly you’re under attack from people who you thought were your friends. I can understand that it feels really strange to defend your experiments with AI against these critics, when you’re not even sure how you feel about it in the first place. But, honestly, my sympathies are with the people raising the critical questions. They are punching up and if you punch back, you’re punching down. Anyone who is open to using AI in their classrooms has the money, and the administrative imperatives, on their side.
I think (and I’m basing this on my own feelings) that many of the people who have a hostile reaction to the spread of AI in the classroom are scared and grieving people who are watching what feels like a kind of wildfire burn down what they love and value. I can appreciate the desire to stay calm and adjust to the new reality. But I’m convinced that this is a better moment for raising the alarm.
I would love to see AI disappear. And I would love it if we, as academics, could do something to make this happen. But we are powerless against it, at this point. It’s all over our world, and there’s not much we can do. Maybe one of the only things we can do is resist using it in our teaching. How much will this accomplish? Not much. But in some small way, it’s a form of harm reduction. And maybe that small act of resistance will be the thing that preserves a shred of skill or knowledge for someone.
Ultimately, my point is this: I don’t think there can be real solidarity among academics if those who reject the use of AI in their classrooms are dismissed as cranks, or farcical Luddites. My challenge to the pro-AI boosters (or even the AI-curious) is this: can you assemble a position that allows you to feel OK about the way you’re using AI in the classroom while also acknowledging that the rejection of AI plays an important role in our educational ecology right now?