The Opinions
What bots are really doing in the classroom.
Aug. 12, 2025, 5:01 a.m. ET
Artificial intelligence is already showing up in the classroom, so how are colleges, professors and students adapting to it? The New York Times Opinion editor Meher Ahmad is joined by the writer Jessica Grose and the columnist Tressie McMillan Cottom to talk about how the humanities are charting a new course, and whether ChatGPT is comparable to SparkNotes.
A.I. Is Fueling a ‘Poverty of Imagination.’ Here’s How We Can Fix It.
Below is a transcript of an episode of “The Opinions.” We recommend listening to it in its original form for the full effect. You can do so using the player above or on the NYT Audio app, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts.
The transcript has been lightly edited for length and clarity.
Meher Ahmad: I’m Meher Ahmad and I’m an editor for the New York Times Opinion section.
Today I am joined by my colleagues the writer Jessica Grose and the columnist Tressie McMillan Cottom to talk about artificial intelligence and education. Hi to both of you.
Jessica Grose: Thanks so much for having me.
Tressie McMillan Cottom: Hello. Always a pleasure to be here. And hi, Jessica. Good to see you.
Ahmad: So both of you have given this a lot of thought.
Tressie, you’re in the classroom often as a sociology professor at the University of North Carolina at Chapel Hill and have called generative A.I. “mid tech,” which we’ll get into. Jess, you’ve spent time interviewing parents, students and most recently educators across the humanities to write a series of pieces on A.I. and education for your newsletter.
It’s the last few weeks of summer; schools are gearing up to start back soon. So I’ve gathered us together because there’s been much said about how critical thinking skills are atrophying under the consistent use of A.I., like ChatGPT or Gemini. I want to talk with both of you about how we grapple with A.I.’s role in higher education.
Before we dive into our conversation today, I wanted to get a temperature check from both of you on how you feel about A.I. being used by students in higher education. On a scale of one to 10, with one being burn it down and 10 being extremely beneficial, how do you rank the use of A.I. in the classroom?
McMillan Cottom: I should preface this by saying I do not pretend to be a universal sample on this. Having said that, I’m going to put it at a two.
Grose: I was just thinking two, to be generous.
McMillan Cottom: Yep, me too, Jess.
Grose: Most days I want to send my children to a monastery.
McMillan Cottom: [Laughs.] Yes, that’s about right. That’s about it.
Grose: You might not hear a super pro-A.I. point of view on this discussion, but I will try to present the other side as best I can, and present some of the more positive use cases that I have heard through my reporting, because there are some.
Ahmad: OK, so the temperature is not good. We’re tepid at best. Well, Tressie, I wanted to start with you. You’ve been critical of A.I.’s role in education. As I said, you called it “mid tech” — it’s being used for mundane tasks when we actually need to be promoting expertise instead. So I’m curious if you could explain that idea. How does it factor into higher education for you?
McMillan Cottom: Artificial intelligence is the latest iteration in a wave of educational technology in education systems. So I think it’s important to think of it in that context, because a lot of the cultural power and the political power that A.I. has right now is, I would argue, outsize, given how useful A.I. is or is not to actual educators and students.
There’s a lot of hype, right? And a lot of that hype is based on the premise that there’s something really novel about it. That if we don’t jump on this new, novel, innovative, general-purpose technology that’s going to transform society, education will be left behind.
Because of that, I think it is super important to have some historical context. I’ve been writing and studying and researching educational technology in higher education for a long time, and people will just have to trust me on this: A.I. is not that novel.
It is not that potentially revolutionary. It is in a long continuum of technologies that promise to transform education, starting with the TV, the typewriter, VCRs, tablets — if anybody remembers when we were going to give everybody a Chromebook.
A.I. is in that continuum. Most of the promotion of A.I. in schools boils down to: Well, it’s happening, and so students need to know.
But there’s nothing attaching it to learning outcomes. There’s nothing assessing its risk to privacy, to data, to the mental and emotional and cognitive development of students. That’s actually what education is supposed to do.
So because of that, my premise here about A.I. — not just in education, by the way — is that A.I. is mid. And when I say mid, it’s an allusion, if I do say so myself, trying to be crafty there.
It’s an allusion to the fact that A.I. quite literally is an averaging of the midlevel range of responses to a prompt. That’s how it arrives at the things that it tells you when you prompt it. There’s also the fact that I don’t think it is nearly as transformative, especially to the social processes of things like education and learning, as it is predicted to be.
There are very few strong, universally positive use cases for A.I. in education right now.
Ahmad: And Jess, how does Tressie’s take on A.I. square with what you’ve been hearing in reporting your many pieces about how it’s being used in classrooms by teachers and students?
Grose: It absolutely tracks. I think there’s some nuance only in the way that it’s being used for different age groups. I think it has zero place in K through six or seven. The fact that it is even part of the discussion for children that age is absurd to me, because there’s ample evidence that they do not learn how to read and comprehend properly with screens.
It’s a different process. They really need paper and pens, the old-fashioned implements.
Let’s put that aside for now. Getting to the college level, I think there should be some training in different subject matter areas about how to use the technology, if it is appropriate and if it is going to actually improve the research that is happening.
For the humanities, I really think there are very few cases where its use is going to be helpful to the way that the students are thinking. But in medical research, you’re seeing that A.I. pattern recognition is really helping come up with novel fixes and novel medications for problems that have bedeviled researchers for a long time.
But I fully agree with Tressie about the humanities and that there are very limited use cases where it leads to deeper thinking, better research. I’m just not convinced by what I’ve seen.
I’ve tried to use it. Basically, what I’m told is: It’ll help with your research; it will help summarize things. But I don’t know what’s important in a piece of text for what I’m writing until I’ve read the entire piece of text.
Why would what A.I. thinks is important be what I think is important to the argument that I’m trying to build? Again, writing and thinking is such a bespoke cognitive process that A.I. can’t tell me what I think about something that I haven’t read yet. Just saying it out loud sounds absurd, but it feels like we need to explain that at this point because it’s being pushed as this cure-all, think-all human tool.
I’m just not convinced of that.
Ahmad: Speaking of humanities: Jess, you wrote a piece in your newsletter about how humanities professors are dealing with students who are using A.I. in their classrooms. And it sounded like they’re coming into this understanding that their students are already using it.
What did you learn when you spoke to these professors?
Grose: It was unexpectedly inspiring to hear about how so many humanities professors are remaking their classes to rely more on in-person activities and exams that create community within the classroom, but also involve the community outside the classroom.
One example: A professor at Beloit College, a small, private liberal arts college in Wisconsin, told me about teaching the novel “The Dispossessed” by Ursula K. Le Guin. As part of the class, the students had to run discussions on the novel at libraries, public schools and senior centers.
She told me a crucial part of the class involves students practicing and role-playing before their outreach events; later, they’d reflect on what they learned from their experiences, what they’d do differently and how they would describe their new skill sets to potential employers.
She didn’t outright ban the use of A.I. in her class, which I thought was also really interesting. She had the students discuss among themselves what they thought would be appropriate and come up with a code of conduct that they all agreed to stick to.
Because one thing that I heard from a lot of professors is that they don’t want to be cops. They don’t want to spend their time policing adults’ use of this technology. So there needs to be a transparent discussion around appropriate and inappropriate uses, because just banning it outright doesn’t work anymore.
Ahmad: Tressie, as a professor, are there ways in which you’ve used A.I. in the classroom or in educational spaces?
McMillan Cottom: I’m a critical social scientist, which means I get to critique for a living, and so we use it for that purpose most often — in my classroom, anyway.
We have used A.I. for the purpose of understanding it as what we call a sociotechnical system, exploring: How are you going to use it? What are the risks of using it? What are some of the latent assumptions in the technology that may reinforce existing inequalities or produce new inequalities?
One of my favorite assignments, which I’ve actually done for several years, is a hunt-and-find mission. I tell them at the beginning of the semester that I would like them to figure out what their data rights are as university students.
I want them to document the process of how they found out what their data rights are and then I want them to document what technologies they are expected to use in the everyday routine completion of their educational work, their educational assignments.
We spend all semester trying to figure out how something like A.I. uses the data that students put into it. Every time you upload a paper for A.I. to rewrite, every time you prompt it with details of an assignment — that goes into someone’s machine.
So one of the assignments we do all semester long is a conversation about the ethical and legal trade-offs of convenience, vis-à-vis what we give up in privacy.
One of the things that I love about this assignment is that I’m the professor, so I’m cheating, right? I created it. I know that it’s almost impossible for them to answer it. But it is important for them to learn the lack of transparency in every layer of technology that is introduced into an organization, into a school system.
Every time we layer another technology on, it gets harder and harder for the student to understand what they are responsible for and what rights they are giving up.
I would say that of all the ed tech innovations that we have unveiled over the last 10 or 15 years of teaching in higher education, in my experience, A.I. is the most opaque, and therefore to me one of the most troubling — as far as shifting risk and responsibility onto vulnerable students and making it difficult for them to find a solution when A.I. is wrong.
Something we almost never talk about: Who’s responsible when A.I. is wrong? What responsibility do we have for putting something into the system that has a vast range of complicated outcomes for the environment and for inequality? And why would institutions push us into doing that? So I use it, but mostly in service of critiquing it.
Ahmad: And how have you seen the reaction from your students? Because I imagine that for younger generations, adopting this technology without giving it much thought can be enticing, especially if their peers are using it consistently. How’s that been?
McMillan Cottom: It is not only enticing; there are a lot of incentives for them to use it. I mean, one of the things that happens when technology becomes the epicenter of the culture is that it is cool.
And there’s so much anxiety right now — around whether or not you chose the right major, whether you’re going to graduate on time, whether you’re going to get the good job — that being a part of the vanguard of technology just seems like a no-brainer for most students. By now many of them are coming to college, having been immersed in that anxiety in K through 12.
One of the saddest stories for me as an educator: About four years ago, all of my incoming students started reporting that they’d had LinkedIn accounts since they were in middle school. I was horrified.
Grose: Oh, my God.
McMillan Cottom: I was horrified by that, and they didn’t understand why I was horrified. We had this whole conversation about: What does that mean? You’ve been managing your professional profile since you were 13 — what have we done to you? But that’s how they’re kind of showing up in higher education and how they understand technology.
For many of them, my class might be the first opportunity they’ve ever had where they have been invited to think about the technology — not just use it, not just adopt it, not just become really good at it so that they don’t fall behind, but to actually think about it.
It opens up all these wonderful questions that I hope give them a lens to consider not just A.I. after they leave my classroom but whatever comes after A.I. Because there’ll always be a new wave of technology that promises a shinier future while hiding the risk and the trade-offs.
So they tend to be very enthusiastic about it — and terrified, I might say. Very scared.
Ahmad: It’s interesting that a lot of people are concerned about A.I. deadening critical thinking skills, but it sounds like, by having students apply their critical thinking skills to A.I. directly, you’ve kind of figured out a loophole in the system: You get people to engage with this topic without necessarily feeling like they need to use it to outsource their thinking, because they’re talking about the A.I. itself.
McMillan Cottom: I have to say, what Jess described in the professors who responded to her, and how they’re using A.I., is very consistent with what I’ve seen watching humanities and social science professors especially adapt to the rapidly changing technological environment in education over the last 15 years.
It is always stunning to me when I read an article or a piece of research or listen to a politician describe academics and professors as being Luddites or being resistant to technology, because what Jessica describes is actually very typical for my professional field.
Things like the digital humanities are the humanities responding to the fact that technology is inevitable and that there is always going to be a place for humanistic inquiry and learning.
And I think that’s what Jessica’s seeing in people’s responses.
Grose: Absolutely, especially in this post-Covid moment, where the students who are now in college lost out on really important years of socialization, and they see those as skills to be built back up.
Critical thinking is certainly part of it, and writing skills are certainly part of it. But public speaking, interacting with community — and I keep coming back to interacting with the community because something that I find so depressing is the loss of trust in higher education that has happened over the past several decades.
And there’s obviously been a concerted effort by the right wing to put down higher education and sow doubt about its utility. So I think having students be more directly involved with the communities around the schools can only be positive both for the students themselves, but also the surrounding community and the feelings of trust in higher education writ large.
Ahmad: One question I have is about the degree to which this technology is novel, or different from other tech that existed before it. SparkNotes has always been a thing; students have always tried to find shortcuts to summarize dense books. Is A.I. markedly different when it does that? Tressie, do you want to take that first?
McMillan Cottom: Yeah. Jessica points to something that is just fundamental in the scientific research of the very messy process of how we learn, and that is that learning is fundamentally a social process. It is what we would call relational. It happens within the context of relationships.
So, when you would read the SparkNotes for “The Canterbury Tales,” which was admittedly maybe my introduction to SparkNotes many, many, many, many years ago, one of the things that happened there is that, one, you had to actually go physically buy it.
It was a physical piece of media. There was a culture there that said to you the same thing the physical media itself said: This is not a relationship. This is not a person telling me this.
We know for a fact that we are more likely to trust information, especially novel information, when it feels like we get that information in a relationship. That’s why when you go to the library to answer a question and all else fails you, what do you do? You go to the librarian, right? Because what we want is a human being to help us make sense of what we don’t know.
With SparkNotes, that’s not a risk. You know that the book is not interactive. You know that it is not intended to be something that you necessarily trust like you would a teacher or a librarian.
I think the risk of A.I. is that it shrouds what are fundamentally just summaries of commonly held interpretations of the text, but it delivers them in a way that feels relational.
That’s why you see these horror stories — which, admittedly, right now are still extreme use cases — of people who think that they have fallen in love with their A.I. or who reanimate a dead loved one as an A.I.
The risk is that your brain actually doesn’t do a great job of parsing when the computer voice is a computer and when it is a human voice.
I think that is just a fundamentally different vehicle for delivering information, and it’s one A.I. hasn’t earned. A.I. doesn’t deserve the trust that we give human beings when it comes to the information it gives us.
In fact, it can’t earn that trust because it’s not human. So that to me is the risk.
Ahmad: It’s interesting with the SparkNotes example because, ultimately, humans were involved in writing the SparkNotes as well.
And Jess, you wrote about this in one of your pieces: A teacher set an assignment where students were meant to research prophets, and the A.I. told one of the students that Moses got chocolate stains out of a T-shirt — maybe instead of Moses getting water out of a rock.
And because they’re young, they don’t know to question that, because they may not know that Moses got water out of a rock rather than chocolate out of a T-shirt. What was that story, and what do you make of it?
Grose: So, obviously, hallucinations are still happening, and so is simply incorrect information. Those are two different things. A hallucination is when the A.I. just makes something up. Incorrect information is when its sources are wrong; it is only as good as the sources that it’s pulling from.
But I would say what I find most disturbing is that in some situations you are having students use ChatGPT or any other chatbot to write the paper. They’re then handing it in to a professor who is using A.I. to grade the paper. And it’s like, what are we even doing here anymore? You know, this is bots talking to bots. No one’s getting anything out of it.
And I do think if we have to have a positive outcome of this new technology, it’s that I hope it forces educational systems to sit back and say: What are the values we are trying to inculcate in these students? What are — why are we here? What do we hope that they learn? What do we hope is happening in the classroom?
I think that can lead to really magical and inspiring things. I mean, reporting this story made me want to go back to college because I feel like I would get more out of it now, when I’m not 20 years old and choosing my classes based on whether there was a cute boy who had registered for it.
So, you know what? I think that there’s still a great desire among both students and professors to have a really engaged experience and not just be bots talking to bots to get a degree.
Ahmad: It sounds like you’re both in agreement that A.I. erodes a lot of these skills that are beyond just rote memorization of facts and information. Part of academic knowledge is learning how to read and critically analyze information and discern for yourself what’s right and wrong.
But I’m curious, now that we’re living in a world where the genie’s out of the bottle and a lot of students and younger people are using A.I., what are other ways that these skills can be learned in the classroom?
McMillan Cottom: Perhaps one of the biggest threats that A.I. poses to education isn’t that it’s going to make educators useless, but that it is going to make educators so much more necessary than we are willing to invest in.
A.I. actually makes it more important that we have everything from librarians to counselors to teachers to professors to researchers who can put this rapidly changing information environment into context and can develop the capacity in students to make sense of things.
So the promise of automating mundane tasks, of outsourcing them to technology as A.I. promises to do, is that human beings are then left to do the arguably higher-order work of making sense of things.
The problem is that learning the basic skill was a steppingstone to learning how to make sense of things, right? So the challenge there is that A.I. hollows out the foundation of learning: It strips out the mistakes; it gets rid of the opportunities for serendipity.
There’s nothing that A.I. does that human beings can’t do better and, I think, fundamentally make more sense of. So the task for us, I think, is just to create opportunities for that to happen in schools and universities.
Ahmad: One thing, Tressie, you touched on a little bit: For those of us in an older generation of internet users, at a different stage in our lives, our concerns about A.I. seem to stand in contrast to those of Gen Z and younger people. Jess, you write about this topic often. How do you see their attitudes toward it? Is there panic and concern among younger people, or are they more willing to adopt this technology wholesale?
Grose: It’s a real range. Gen Z is clearly suspicious, but the allure of it is still strong, and they are incredibly worried, as Tressie said, about getting jobs — which they should be. The market is not good for entry-level jobs, especially white-collar jobs.
But also, some of them have a lot of pride, and some of the professors I talked to said their students were offended at the idea that they would ever use A.I. to do their creative work, because they had real pride in what they were doing and real love for their creative pursuits.
McMillan Cottom: Yeah. So much of this is driven by fear and anxiety and not those positive emotions that Jessica mentions, which I love hearing, because one of the things we don’t talk about enough is how many emotions are tied up in being a learner, in risking acquiring new information.
We tend to remember the satisfaction of having learned something, but we forget how difficult and challenging it can be to your identity to learn something new. To risk it, to fail, to take pride and want it to turn out great — all of those emotions are actually part of the whole process.
And I will say, again, when A.I. takes out all of those emotional feedback loops that help learning happen, it is actually not enabling learning. Just knowing a fact is not the same as learning. And that we are denying young people the experience of the pride of having acquired skill, talent and ability is, for me, just so sad.
Our poverty of imagination about the human spirit here really gets me down.
And here’s the thing: Kids aren’t supposed to have that much willpower. Putting this down to, “Well, if you don’t want to use A.I., don’t use it” is shifting the responsibility to the exact wrong place.
Kids aren’t supposed to be able to resist a highly sophisticated, research-informed platform designed to make you use it. It is incumbent upon us, the adults, the society, to figure out what is the right amount of risk to expose kids to.
I think they actually want more guardrails. I think they are craving the positive feelings that come along with learning, but they aren’t supposed to be able to resist it.
We are supposed to do that for them, and I think sometimes we forget that, or we just totally abdicate our responsibility in doing that.
Ahmad: We started our conversation by getting a temperature check on how you feel about A.I. being used by students specifically in higher education. You guys were both at a two, but I’m curious what would need to change for that to come closer to a 10 for you.
McMillan Cottom: Oh, that’s tough. I always like to remind people that sociologists are not fortune tellers, but OK. [Grose laughs.]
I always think that the potential for any technology gets better if it is submitted to democratic rules.
So, in the case of A.I., one of our big what-if questions is: Who’s in charge of that thing? If A.I. causes some great harm, if A.I. gets something wrong — to misquote the Ghostbusters — who do we call?
And if we don’t know who to call, that usually is a sign that there is not enough democratic oversight of the technology, and if we start shoehorning it into everything from our government to our social policy to our schools to our health care, that’s a really big question.
I’d inch up to a three, maybe a four, if what we were talking about here is building some system of regulation and oversight that did weigh the risks against the possible rewards. That would make me feel better.
Grose: I agree. Any regulation at all at this point would be welcome.
McMillan Cottom: I’d take anything.
Grose: Our federal government has signaled that it has zero appetite for regulating A.I.; it just sees this as a power struggle with China over who’s going to be first, and the guardrails are completely off. So, any manner of regulation.
And then I would also say, if it is going to be used by young people, having systems that are built specifically with their needs in mind. I would trust those systems a little bit more.
I just worry about anything right now that even promotes itself as a child-first educational technology using A.I. because a lot of the people doing it will say, “Oh, we have all these credentials” and whatever, and just have no proof behind it, have no research, because the speed of adoption is what is prioritized, rather than finding out if it actually works and is useful.
Ahmad: Big, big grains of salt, big rocks of salt to these addendums.
Grose: Yeah.
McMillan Cottom: Pretty big. I’m going to go with boulders of salt.
Ahmad: Well, that’s a good place to end our conversation. Thank you both. This has been a really fascinating conversation. Until the next time.
Grose: Thanks for having us.
McMillan Cottom: Thank you.
Ahmad: You can also find us in the New York Times app. Download the app and tap “Listen” at the bottom of the screen to find us and all other New York Times shows and narrated articles. Thanks.
Thoughts? Email us at theopinions@nytimes.com.
This episode of “The Opinions” was produced by Vishakha Darbha. It was edited by Alison Bruzek and Kaari Pitkin. Mixing by Sonia Herrero. Original music by Pat McCusker, Sonia Herrero and Carole Sabouraud. Fact-checking by Mary Marge Locker. Audience strategy by Shannon Busta and Kristina Samulewski. The director of Opinion Audio is Annie-Rose Strasser.
Jessica Grose is an Opinion writer for The Times, covering family, religion, education, culture and the way we live now.
Tressie McMillan Cottom (@tressiemcphd) became a New York Times Opinion columnist in 2022. She is a professor at the University of North Carolina at Chapel Hill School of Information and Library Science, the author of “Thick: And Other Essays” and a 2020 MacArthur fellow.