www.cnn.com/2026/04/04/health/ai-impact-college-student-thinking-wellness

AI is changing the way students talk in class and how teachers test them | CNN

Asuka Koda, 4/4/2026

EDITOR’S NOTE: The writer is a junior at Yale University in New Haven, Connecticut, and spoke to her peers about their experiences with AI usage in class for this article.

At this point in her senior year at Yale University, Amanda knows that many of her classmates turn to AI chatbots to write papers and other homework assignments.

But she started noticing something bizarre in her smaller seminar classes: Her classmates sit behind laptops with polished talking points and arguments, but the conversations that follow often fall flat across subjects.

In one class, “the conversation came to a halt, and I looked to my left, and I saw someone typing ferociously on their laptop, asking (a chatbot) the question my professor just asked about the reading,” Amanda told CNN.

Amanda and two other students — Jessica and Sophia — attend Yale University. They requested anonymity for fear of retribution from their classmates and professors, so CNN agreed to change their names for this article.

Amanda said she was taken aback. Until that day, she didn’t realize that her peers were using chatbots in class and sharing what the chatbots spit out in the classroom. Now she notices the impact that tendency is having on class discussions.

“Everyone now kind of sounds the same,” she said. “I feel like during my freshman year in college, I would sit in seminars where everyone had something different to contribute. Although people would piggyback off each other, they approached from different angles and offered different commentary.”

As AI becomes increasingly integrated with education, educators and researchers are finding that it may be eroding students’ capacity for original thought and expression.

A paper published in March in Trends in Cognitive Sciences found that large language models are systematically homogenizing human expression and thought across three dimensions — language, perspective and reasoning — and students and educators say they are seeing the effects of that trend in their classrooms.

And that makes a lot of students sound the same.

Jessica, a senior at Yale, told CNN that she uses AI every day for her classes. In an economics seminar in which the professor cold-calls students, “at the beginning of class, you could see every single person putting every single PDF” into a chatbot.

She also uses AI when she has trouble turning her thoughts into words. “I want to comment, and I have this concept, but I don’t know how to formulate the sentence myself,” she said. So she asks a chatbot “to make it sound more cohesive.”

A Yale University spokesperson said that “students continue to experiment with using AI in class” and that the university is aware of the ways AI is used in the classroom, including those described in this article.

“To support learning and engagement, we are seeing a broader trend of faculty designing courses with limited or no laptop use, emphasizing print-based materials, original thinking, and direct engagement with peers and instructors,” the spokesperson told CNN.

Thomas Chatterton Williams, a visiting professor of the humanities and senior fellow at the Hannah Arendt Center at Bard College, has seen the impact of students’ decisions.

Students’ reliance on AI “has paradoxically raised the floor of class discussion to a generally better level in courses with difficult concepts, but has also tended to preclude stranger, more eccentric and original thoughts,” said Williams, who is also a nonresident fellow at the American Enterprise Institute, a think tank that includes research on education.

Jessica admitted that she’s felt herself become lazier since she started using a chatbot to help with her classes.

“I have thought about how much I stopped working, like my work ethic has completely diminished from high school,” she said.

Large language models, or LLMs, are trained to predict the next most statistically likely word given everything that came before it, said Zhivar Sourati, a doctoral student at the University of Southern California and first author of the paper.

The data those models are trained on overrepresents dominant languages and ideas, so their answers to users’ questions naturally “mirror a narrow and skewed slice of human experience,” the researchers wrote in their study. The result is “a narrowing of the conceptual space in which models write, speak, and reason.”

AI-induced homogenization happens across three dimensions: language, perspective and reasoning strategies, the authors explained. That’s because AI models tend to reproduce what researchers call “WEIRD” viewpoints — Western, educated, industrialized, rich and democratic — even when explicitly prompted to represent other identities.

One possible consequence, Sourati said, is that WEIRD language and perspectives could become perceived as more credible and “more socially correct,” marginalizing other viewpoints. A similar phenomenon is observed in reasoning, in which the popular technique of walking models through step-by-step logical thinking may be crowding out more intuitive, culturally specific and creative ways of working through a problem.

When a group repeatedly interacts with AI systems, Sourati explained, it flattens the group’s creativity compared to the same group without AI assistance.

This flattening raises concerns in educational institutions at all levels.

When students are asked open-ended, subjective questions with no single correct answer, teachers can expect a wide range of responses. But if all students rely on AI, their answers may become more polished but fall into just a handful of similar categories, Sourati said. They will lose the diversity of thinking that classroom discussions are meant to encourage.

Sourati is most concerned that homogenization is happening to people who are developing their ability to creatively generate new ideas. If students continue to use AI instead of developing their own thought processes, “they wouldn’t learn how to even think by themselves and have their own perspectives.”

Morteza Dehghani, a professor of psychology and computer science at the University of Southern California, said that he has heard of people using AI to determine who to vote for in an election, which he finds “quite scary.”

“If people lose diversity” in the way they think, “or get into intellectual laziness, of course, that is going to affect our society greatly,” said Dehghani, who is a coauthor of the paper.

Sophia, a junior at Yale, believes that her fellow anthropology students are using AI to draft scripts for what to say in class because people are insecure about what they don’t know.

“I think creativity is dwindling because we lose the ability to make connections,” she added.

If people continue to offload their reasoning to AI, Dehghani agrees that communities will lose creative innovation and the ability to critique mainstream ideas or even political candidates.

As more people use AI models to write and think, those outputs are reabsorbed into human discourse — and eventually into the data used to train the next generation of models — so the homogenization keeps compounding, the paper’s authors said.

“If we’re offloading our reasoning onto these models, then we can easily be persuaded by what the models tell us,” Dehghani said.

In education, Dehghani is concerned about a generation of students who are learning with AI and being tutored by AI. “They would be more homogenous in the way they think, in the way they write, so this is going to have long-term influences,” he said.

Sophia, who tries to resist using AI in school, said she believes people are deprioritizing their own thinking “in favor of having really big words.”

“I would literally rather just tell the professor, ‘I don’t know what we’re talking about.’ Even if you put every reading into (a chatbot), it doesn’t have your past experiences that make you a critical thinker,” she said.

“I feel like people had a lot more to say because they actually feel tied to the material,” Amanda agreed. “Now classroom discussions are not really digging deep. I think a lot of that has to do with the AI chatbots, but also, there’s no longer as much of a drive to connect with the material personally.”

Disappointed, she added, “I think it’s boring to be in a class where everyone has the same thing to say, and no one wants to dig deeper or push against what is directly said in the text or the norm.”

Daniel Buck, a research fellow at the American Enterprise Institute and a former English teacher at four K-12 schools over seven years, said he is concerned that students are circumventing the cognitive work required to engage in classroom discussions and complete homework.

“A lot of learning happens in the boring minutia, the struggle,” Buck said. Students retain only what they have actually spent time consciously processing, he continued. If a student outsources thinking to AI, they may be able to reproduce a talking point in class, but they haven’t built the underlying skills to apply that knowledge elsewhere.

Buck draws a sharp distinction between AI and the shortcut technology that preceded it: SparkNotes. When students relied on the popular website to find chapter-based summaries of literary works, teachers could easily detect it, he added.

AI is a “supercharged version of SparkNotes” that “can answer any question that you pitch to it,” Buck said. Whereas SparkNotes offered a fixed set of analyses, AI can respond to whatever a teacher asks, making it much harder to identify when students are not doing the thinking themselves.

The difference is in how people reason. Unlike passive references such as books or search engines, AI is an active participant in “problem solving and perspective-taking,” Dehghani clarified.

“What we are seeing now is fundamentally different than other periods of homogenization of expression and thought,” Williams said. “If even professional writers are finding it exceedingly difficult to resist outsourcing the difficult work of wrestling with words and ideas — as we know they are — I don’t see how the younger generations who have not experienced a world before highly sophisticated, on-demand AI writing will be able to do this, not at scale.”

Buck worries that students will graduate without having developed relationships with professors, as well as the habit of sustained cognitive work. That means they will struggle to solve problems in the real world.

“There’s so much delight in reading original student essays,” he said. “Even if it isn’t quite as well-argued or as solid as I wish it would have been, you’re seeing these young students, for the first time, start to think for themselves, to analyze, to think critically. It’s almost like watching my own children walk for the first time, where they stumble and fall, and that’s amazing. Keep doing that.”

Reading and interacting with students’ original thoughts in class helps teachers understand how students think and articulate.

“There’s an interpersonal exchange that I think gets overlooked when you get to know your students, they get to know you, they start to trust you and your feedback,” he said. “I think that gets lost too when it’s just everything is through AI.”

Sun-Joo Shin, a philosophy professor at Yale, said, “It is a big homework for anyone who is involved in teaching” to keep exploring ways to ensure students continue to think critically and creatively in the age of AI.

“We are in an interesting and exciting transition. I want my students to understand the material of the class, which is constant before and after the appearance of AI,” she said. “At the same time, I want them to use this exciting tool to their advantage, not be a victim of it. A dilemma of an instructor is how to help, or force, students to learn the material and to think creatively without running away from the AI tools or without copying them.”

Until the fall semester of 2024, she said she was not worried about how AI would affect students’ understanding of the material in her mathematical logic class. Her teaching team had tested the problem sets against the AI models at the time, and they were unable to solve her problems.

But since then, “AI has been catching up,” and models can answer questions “pretty well” if students upload class handouts and learning materials. She started thinking about additional requirements in the class beyond problem set submissions.

“After all, it would be extremely unfair to give good grades to AI answers,” Shin said.

Yale has guidance on AI usage for both students and faculty. “Generative AI use is subject to individual course policies,” one of the university websites states. “We encourage all instructors to adapt our model policies for their specific course and learning goals. AI Detection tools are unreliable and not currently supported.”

Yale provides model policies for different class types such as “Creative Writing Seminar” and “STEM Mid-Sized Lecture.” The policies range from discouraging AI usage, with explicit guidelines on when AI cannot be used; to allowing students to use AI as a source of ideas while prohibiting them from submitting text generated by chatbots; to encouraging and permitting students to use AI in assignments.

Buck warns that any work sent home cannot be verified as the student’s own. To counter AI, teachers are going back to reading texts aloud in class, assigning “on-demand, handwritten essays” and giving “paper and pencil assessments.”

In-class accountability often comes in the form of pop quizzes. A student who had asked AI for a chapter summary instead of reading the chapter might get the broad strokes, but there is a strong chance that the one specific detail the quiz will ask about did not make it into the summary, Buck said.

“If you did the reading, it was super-duper easy,” he said. “And if you didn’t, then there was no way to bluff your way through.”

“I made a rather significant change for my two logic classes in terms of requirements,” Shin said. Although she still includes problem sets as part of her classes, she has reduced their weight in students’ grades. Now, the problem sets are graded only for completion, and students receive feedback rather than grades.

“Using these problem sets as a question bank, I have two midterms and one final, all of which are in-class exams,” she said. “Some questions are lifted from problem sets, some are slight modifications, some require students to check where a proof goes wrong, and some are filling in gaps in a proof that they solved in problem sets.”

For her computability and logic class, “I have given oral tests, one by one, for years, and a presentation requirement before the AI era, which has been working out very well,” she said. Now, the exams, oral tests and presentations are weighted more heavily for students’ course grades than take-home problem sets.

Williams has arrived at a similar place from a different direction. As a professor, he has moved all writing assignments in-class and made them spontaneous. At the end of the semester, he assesses students through oral exit exams.

“I cannot with any confidence assign students any writing that I don’t watch them commit to paper by hand in my own presence,” he said via email. “I think this is a terrible loss, but it’s necessary. The temptation and availability of AI is too great.”

While educators can work around AI in assessments, it is equally important for students to be intentional about limiting their own reliance on it as they learn, especially since one student’s use affects classmates’ education as well.

“It is frustrating because even though I personally try to stray away from it, I can’t prevent other people from using it,” Amanda said. “The fact that others use it affects my education as well, and the value of the two hours of my seminar.”

Basil Ghezzi, a freshman at Bard College who actively avoids using AI in her studies, worries about the environmental costs associated with using AI models. Instead, she encourages students to turn to the resources already around them.

“Talk to your teachers, talk to your professors, talk to people around you. Have meaningful conversations with people in your life,” she said.

Still, not everyone has an “all or nothing” approach to AI. Dehghani said he writes bullet points capturing ideas he originated and asks the model to find flaws in his work.

He hopes that more companies will invest in AI models that can generate variety and reflect the diversity of thought in our current society. For now, however, Dehghani suggests that people should resist using AI to generate ideas or to reason.

AI models “should be collaborators. They shouldn’t be agents that do everything on our behalf,” he said.