
Debates about generative AI in higher education have largely been informed by studies of completed student papers or by self-reported survey data. Research shows that artificial intelligence tools can support learning, but it has also raised concerns, including students' overreliance on the tools, cheating, and the potential degradation of critical thinking and engagement. While these types of studies provide useful snapshots of reported practices, their methodologies may obscure something important: how writing actually unfolds while students are composing with the assistance of AI. A pilot study I led of undergraduate writers at Kennesaw State University takes a different approach. Using think-aloud protocols – a method in which participants verbalize their thoughts while performing a task – our research captures how students interact with generative AI tools during the writing process itself. This method lets us observe decision-making as it occurs. Our preliminary findings suggest a more complex reality than the common narrative that students simply have AI write their assignments. Instead, many students appear to be negotiating when and how AI belongs in their writing.
Looking inside the writing process
In our study, 20 undergraduate students completed a 20-minute writing session responding to the following prompt:
People spend a lot of time trying to achieve perfection in their personal or professional lives. People often demand perfection from others, creating expectations that may be challenging to live up to. In contrast, some people think perfection is not attainable or desirable.
The assignment was to draft a thesis and evidence-based paragraphs arguing their position on the value of striving for perfection. Students were told they were not expected to finish the essay but to work through their writing process toward completion. They were also told there were no right or wrong ways to use AI and were asked to use generative AI exactly as they normally would while writing. Instead of observing students directly, the study relied on screen recordings and recordings of students describing their process, both analyzed after the session. Collecting this data – students' actions on the computer and transcripts of their spoken thoughts – allowed researchers to analyze the writing process without interrupting it. To reduce the possibility that students might alter their behavior if they felt observed, researchers set a timer and left the room during the writing session. The goal was to minimize the Hawthorne effect, a phenomenon in which people change their behavior because they know they are being watched.
What we found
Across the transcripts, a few qualitative patterns consistently emerged in how students collaborated with AI while writing. First, many participants turned to AI at the beginning of the writing process to help generate ideas or draft a thesis. In this practice, students used AI-generated output to spark and shape their own ideas. One student explained the strategy this way: “After [generating a few ideas,] I usually just use that [output] as a prompt.” In these moments, AI functioned less as a final answer and more as a brainstorming tool that helped students move past the blank page. Yet students frequently continued drafting independently after generating initial ideas. Many transcripts include statements such as “I think my thesis should be …” or “Let me write this part,” suggesting that some students retained control over their argument.
Editing the bot
Another strong pattern across transcripts is that students rarely accepted AI text without editing it. Instead, they actively revised the generated language. As one student described the process, the AI “rewrites” their initial prompts and then the student rewrites the AI’s output. This back-and-forth allows the student to claim “authorship and ownership” of the final draft. Another participant redirected the AI when its response did not align with the assignment: “AI is not following the prompt … try again.” These moments show students evaluating AI output critically and treating it almost as a sparring partner, rather than simply copying it.

We also found that some students rejected AI’s suggestions altogether. In several writing sessions, participants explicitly decided not to use the AI responses. One student reflected on this decision while composing: “I don’t really use AI for my research.” Other transcripts show students switching back to their own writing when AI responses felt too generic or disconnected from their argument. These moments indicate that students are not only collaborating with AI but also drawing boundaries around where it belongs in their writing process.

Finally, several transcripts showed students turning to AI during moments of uncertainty or when they felt stuck. As one participant explained, “I used a lot of AI because I was struggling.” Even in those cases, students often used AI as support while drafting their essays, rather than directly copying and pasting its responses.
What this says about AI and writing
Our analysis suggests that generative AI is entering student writing not as a wholesale replacement for human authorship but as part of a negotiated collaboration. The results suggest that AI most often enters the composing process during idea generation, revision and moments of writer’s block, while students maintain control over argument choice, voice and final phrasing. Understanding how decisions to use AI unfold during the writing process, and not just what appears in the final essay, may help educators design assignments and policies that keep the human writer firmly at the helm.

Because our current findings come from a pilot cohort of 20 undergraduate writers, the results should be interpreted cautiously. To test whether these patterns hold at a larger scale, the research team is expanding the study to 100 undergraduate participants. The expanded study will also examine how neurodivergent writers interact with generative AI during composing, an area that remains largely unexplored in current research.
Undergraduate student researchers at Kennesaw State contributed to the preliminary analysis described in this article: Kylee Johnson, Vara Nath, Ruth Sikhamani and Kaylee Ward.