Same Class, New Approach: Reimagining Information Students’ Use of Gen AI in Formative Writing Assessments
Jason R. Baron, JD, University of Maryland, College Park
Elizabeth A. Pineo, MLIS, University of Maryland, College Park
We began teaching an online class about records scandals in 2023, at the very beginning of what would become the gen AI takeover. In 2024, we found that 67 of our 100 students used gen AI on the first essay assignment despite our express prohibition of its use—in any capacity. So, we had to pivot our teaching approach. By 2025, we actually asked students to use gen AI in their work in order to learn from it. Here, we lay out the changes we made and how we’re using gen AI in our writing instruction to build critical thinking skills and AI literacy.
Gen AI tools have a tendency to hallucinate; as a result, students who rely on their outputs without critically examining them can easily end up violating academic conduct codes, because passing off AI-generated text, including fabricated references, as one's own work constitutes plagiarism and/or fabrication. At the same time, plagiarism-detection software remains highly problematic: Turnitin, for example, continues to produce a high rate of false positives.
The lack of reliable detection metrics, coupled with students' genuine uncertainty or mistaken beliefs about whether they have complied with university or classroom policies, has sent many instructors back to paper-only exams, both in general and, anecdotally, at our own university. We believe this teaching approach is profoundly mistaken.
We are not the first to propose proactive engagement with ChatGPT and similar tools, but what we propose is slightly different. In general terms, our suggested protocol contains the following elements:
1. One or more open-ended questions asking students to cite primary and secondary sources
2. An instruction to paste the original prompt (#1) into a free gen AI tool
3. Questions about the AI response for students to answer on their own
To build the questions in #3 that prompt students' own narrative creation, we suggest four categories:
- Category I: Find clear deficiencies (if any) in the AI response (factual errors, subject matter not relevant to the question asked, hallucinations, mis-citations, misattributions, etc.).
- Category II: Critically analyze the AI text.
- How superficial or incomplete is the AI-generated response?
- When using tools with training-data cutoff dates, are there recent facts or circumstances left unaccounted for, such as current events or recent scholarship?
- What arguments or other readings did it leave out?
- Category III: Supplement the response with your own work.
- Consult additional references available through a physical library or e-library resource (including primary or scholarly sources), and include links to their listings on our University's library website.
- Write, in their own words, a response explaining how the additional source material enhances their understanding of the original question asked.
- Encourage or require students to carry out their own follow-up, perhaps through iterative prompts building on the original question and AI response (with both the prompts and the AI responses provided to the instructor).
- Category IV: Personally reflect on the exercise, focusing on how best to use AI-generated responses and sources in future assignments.
These four categories represent our rough attempt to mitigate, if not exploit, known flaws in AI-generated responses to the benefit of advancing critical thinking in learning assessments. The categories can be simplified or extended, and we recognize that the protocol has limitations, especially because it places an extra burden on instructors while they review essays. But the most important goal in this exercise is to encourage students to develop strong AI literacy and critical thinking skills. Especially in this age of mis- and disinformation, such skills are essential for information students to develop.
Despite our sharing this philosophy with both classes, some students still tried to use gen AI in unpermitted ways. To detect inappropriate gen AI use in both years, the class TA (with assistance from a grading assistant) preemptively generated responses to the essay assignments and compared student submissions to those materials. When students had used gen AI inappropriately, common patterns included introductory and concluding paragraphs that closely resembled the AI-generated ones; hallucinated, inaccurate, or otherwise problematic references coupled with missing short-form in-text citations (we required both accurate references and in-text citations); and repeated misinterpretations or misattributions of quotations or cited ideas. In 2025, we placed particular emphasis on accurate citations and quotations. This was a time-intensive process, but one we believe is worthwhile; without it, students remain unaware of where they make mistakes when using gen AI in their writing.
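For instructors who want to triage which submissions merit a closer look, this kind of comparison can be partially automated. The short Python sketch below is only an illustration of that idea, not the process described above; the file names and the 0.6 threshold are assumptions chosen for the example, and any flagged submission would still require human review.

```python
# Illustrative sketch: flag a student submission whose text closely overlaps a
# pre-generated AI response. File names and the threshold are hypothetical.
from difflib import SequenceMatcher


def overlap_ratio(student_text: str, ai_text: str) -> float:
    """Return a 0-1 similarity score between two texts."""
    return SequenceMatcher(None, student_text, ai_text).ratio()


def flag_submission(student_path: str, ai_path: str, threshold: float = 0.6) -> bool:
    """Flag a submission for human review if its similarity exceeds the threshold."""
    with open(student_path, encoding="utf-8") as f:
        student_text = f.read()
    with open(ai_path, encoding="utf-8") as f:
        ai_text = f.read()
    score = overlap_ratio(student_text, ai_text)
    print(f"similarity: {score:.2f}")
    return score >= threshold


if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    if flag_submission("student_essay.txt", "ai_generated_response.txt"):
        print("Submission flagged for human review.")
```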
Using this protocol, we reduced infractions of our gen AI policy from 99 in 2024 to 52 in 2025. While this reduction was not as dramatic as we had hoped, it was substantial (a 47.5% decrease). To us, the results suggest that, while gen AI is here to stay, teaching methods like ours can meaningfully mitigate its improper use.
Our primary takeaway is that students need better guidance around source and citation literacy so that they are able to check the sources and citations gen AIs provide to ensure they are accurate. Without those skills, students will likely continue to include erroneous citations and quotations, which will result in continued infractions—both in our class and against university-level academic integrity policies. As such, instructors need to emphasize source and citation literacy in their teaching, especially when using gen AI in their instruction.
We have proposed one possible solution to gen AI's challenges to critical thinking. Critiquing gen AI outputs, whatever the tool or version, is an approach that, while time consuming, we expect to remain effective. First and foremost, we believe this approach can richly enhance students' critical thinking skills by asking them to engage more deeply with the material being covered. Second, following the suggested approach broadens students' understanding of gen AI tools, helping them develop AI literacy skills. Third, in a world where university graduates will increasingly be expected to be familiar with and use such software over the course of their careers, adopting this approach introduces prompt engineering skills that are likely to remain in demand.
Cite this article in APA as: Baron, J. R., & Pineo, E. A. (2026, April 8). Same class, new approach: Reimagining information students’ use of gen AI in formative writing assessments. Information Matters. https://informationmatters.org/2026/04/same-class-new-approach-reimagining-information-students-use-of-gen-ai-in-formative-writing-assessments/
Authors
- Jason R. Baron, J.D., holds the position of professor of the practice in the College of Information at the University of Maryland. He is a co-founder of the Center for Archival Futures at the College of Information, with research interests involving practical applications of AI, including providing public access to government records.
- Elizabeth Pineo (she/her) is a Ph.D. candidate in the University of Maryland's College of Information. Her research explores the intersections of disability, music, and archives. She holds an M.L.I.S. from the University of Maryland, College Park and a B.A. in Music from Dickinson College. For more information, visit www.elizabethpineo.com.