MY FALL SEMESTER started this week, on Monday. It will be my 9th year at the college where I teach; this is the 9th time I’ve gone through the ritual preparations for the first day. But it feels qualitatively different from the previous eight, and not just because I’m older and wiser. It feels like there’s something different in the air. I can sense the faint aroma of artificial intelligence everywhere I go. It stinks a bit, at least to me. But, like all smells, it gets into you, and then one day you simply stop noticing it: olfactory adaptation.
THE 2019-2020 ACADEMIC YEAR was the exact middle of my teaching career at the institution where I’m currently employed. It was also, you will of course recall, the year things really fell apart and everything changed. I had four naive years before it, and this year marks my fourth wizened year after it. But that year remains a turning point for me, in ways I still can’t quite account for. I know, rationally and intellectually, that AI didn’t really become a practical problem for people like me (college professors) until the release of ChatGPT during the 2022-2023 academic year, when I was actually on sabbatical and blissfully ignoring it. But for some reason, the problem of AI seems weirdly bound up with that stranger year, 2019-2020, as well.
The reason, I think, is that 2019-2020 feels like a critical turning point: the year the pandemic started, the year everything changed, the year things fell apart, the year the institutions stopped working, the year our lives moved online, the year we all tried (and often failed) to move education online, the year we hid, the year we desperately scrolled through our social media feeds looking for news and companionship. It feels, to me, like what’s happening with AI right now wouldn’t be the event that it is without that year. What do you think? I’m ready to be wrong. It just feels to me that both the thirst for, and the absolute distaste for, whatever it is that AI has to offer have been indelibly shaped by that fateful year, when our lifeworlds went so fully online. There were ideological boundary lines and adaptations created that year that seem to have contributed to the creation of two dominant camps: the AI proponents and the AI abolitionists, the people who encouraged adaptation and the people who resisted it. The people who lost faith in the internet and the people who reinvested. As with so many things, there are lots of people who pass back and forth between the camps, unsure about their allegiance. Or unsure that their allegiance really matters. Nevertheless, whenever AI comes up, it feels like a series of volleys between these camps.
I’ve been having a lot of conversations with colleagues about AI policies: about how we will (or won’t) be using AI in our classes this year, about whether or not we’ll be prohibiting its use in our classrooms. These are conversations in which sides are very clearly taken, and they are obviously important conversations, for pedagogical reasons. But these conversations (Accept! Reject! Ban! Utilize!) seem to presume some level of agency or control that we don’t actually have. As if we teachers are the ones who make the world, when we are actually being remade by it. Whatever my individual AI classroom policy is, it will nevertheless be the case that AI is generating all kinds of uncomfortable existential currents and shifts for those of us whose livelihood is thinking and writing. Whatever side I choose, AI is still changing me.
ON THE FIRST DAY OF CLASSES THIS WEEK, as I was talking to students about my classroom AI policy, I scrawled “AI = dead inside” onto the blackboard behind me. The formula generated a little ripple of laughter. But it also opened up the conversation for us. I got to tell the students about my deepest gripe with AI-written papers: they sound like they were written by someone who’s dead inside. “I’m tired of battling the robots,” I told the students. “I know you’re not dead inside. I want real, living interactions.” And I feel like they heard me. We spoke, at least a little bit, about something that’s not always easy to grasp or speak about: voice. “If your essay is flagged as having used AI,” I told them, “even if you didn’t use AI, I’m going to do you the favor of having you rewrite it, so that you can start to discover the voice I know that you have.” They might need it, I told them, for a love letter some day. You don’t want to sound dead inside in a love letter.
I’m asking students not to use AI in my classes, so I am essentially setting up camp with the anti-AI people. And I’m already weary of hearing people tell me that I’m burying my head in the sand, living in denial, or failing to prepare my students for the brave new future that we’re all facing. But choosing not to use AI in your classroom is itself a direct response to AI. It’s an AI-informed decision, a decision I never would have had to make in the absence of AI. Quite honestly, it’s a feat to imagine what a classroom that doesn’t use AI (and maybe isn’t even tempted to!) could look like today. Whether or not I actively make use of it in my class, I’m still living in a world where the smell of AI is in the air. It’s still doing things to me.
My fear, or perhaps I should call it my certain knowledge, that students will use AI to create written content for my courses has changed how I think about my classroom in the space of a single year. I’ve become much more averse to the use of technology in the classroom. I’ve stopped playing videos in class; I’ve stopped presenting on slides. I’m printing things on paper. I’m asking students to bring hard-copy notes to class. I’m creating assignments that demand we actually do things (stand up and speak aloud, walk around in the woods). I ask students to put away laptops, tablets, and phones, and to leave them hidden until the class is over. Device use is viral, so I’m doing all I can to eliminate any need for one. I find myself doing everything I can to create one small, simple, low-tech environment where the primary tasks are to read things written on paper and talk about them. All of this is a direct reaction to that aroma of AI in the air.
A couple of colleagues have commented that I seem to be responding creatively to the problem at hand. I’m doing new things (or creatively deploying some new Luddism)! But I don’t really feel proud of myself. Instead, I feel a little desperate. These are things I need to do so that I don’t pull out my hair, things I feel I need to do to try and survive.
I don’t know what it looks like to be a humanist after AI. Maybe it looks more or less the same, as we learn that this new panic was, like so many panics, just overblown. Or maybe the humanities become more vital. Maybe we will actually want to feel more human, or go back to the sources again, en masse. Or maybe it just becomes more, and more, and more of an esoteric hobby for those privileged enough not to let their attention be drained by the digital. Who knows.
In the meantime, it feels like I’m living through a meantime: like I’m inside of a parenthesis, waiting. Things outside of the parenthesis feel contingent, undone, and unmade. And I can feel that meantime changing me in subtle ways, even though I’m hiding from it.
That’s a good point; I had forgotten how suspicious the sudden switch to online learning made so many professors. Lots of students were cheating, of course. But it did seem to totally pave the way for AI use among faculty, in the sense that they want some kind of concrete way to “catch” their students doing what they suspect the students will be doing.
I am an instructional technologist at a US public university. I do think 2020 set us up for AI in a lot of ways; you are right about that. It certainly made us more internet- and device-dependent. It was also the first time that many faculty began turning to technology to catch cheating and plagiarism. I believe that fed directly into the widespread desire for a technological fix for catching AI cheating and plagiarism. (Use of those fixes is discouraged, for good reasons, where I work.) 2020 also damaged a lot of students, perhaps keeping them from developing skills that they should have, and possibly making them readier to adopt AI as a crutch.
Our approach to AI in American higher education needs to include room for a lot of different policies, so that students can develop the skills they need without it and also the skills they will need to use it at work. They need to understand its ethics (and perhaps its deeper moral implications) and learn its limitations. Students also need an overview of its history, economics, psychology, and ideology (of its creators, promoters, and users), and of its political and cultural effects.
Professors and instructors need a great deal of flexibility to experiment with the best responses for their disciplines and specific courses. I think students deserve that.