10 Summarizing Student Feedback with AI (Biology)
Author: Ann Thijs, Biology Instructional Office and Integrative Biology, The University of Texas at Austin
Ann is an interdisciplinary scientist who earned her dual master's degree in biology and engineering in Belgium, after which she worked in biogeochemistry research.
Cite this Chapter: Thijs, A. (2025). Summarizing Student Feedback with AI. In K. Procko, E. N. Smith, & K. D. Patterson (Eds.), Enhancing STEM Higher Education through Artificial Intelligence. The University of Texas at Austin. https://doi.org/10.15781/7dv5-g279
Description of resource(s):
Muddiest Point polls are a pedagogical practice in which students anonymously share the most confusing or unclear part of a lesson. The practice enhances student metacognition and gives instructors insight into areas of confusion. This resource provides a step-by-step guide on using AI platforms to streamline the analysis of responses and the creation of follow-up study resources for students.
Links to Resources:
Why I implemented this:
Collecting Muddiest Point responses is a useful practice because it helps me pinpoint exactly where students are struggling and allows me to provide targeted feedback and adjust my teaching. It’s also a great way to promote metacognition—students can share what they’re still confused about in a low-stakes, supportive environment while reflecting on their learning process.
That said, summarizing short-answer responses is cumbersome in large-enrollment classes, so it’s not a practice that often gets used in these settings. And when it is used, instructors typically skim through only a portion of responses.
For this project, I used a student response system to ask my students each week about the “Muddiest Point of the Week.” I downloaded their responses, de-identified them, and used AI to summarize the answers. From there, I created a weekly handout to give students additional explanations on the topics they found most challenging.
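For instructors who prefer to script the download-and-de-identify step, a minimal sketch is shown below. It assumes the student response system exports a CSV file; the file name and column names here are hypothetical and will differ between platforms.

```python
import pandas as pd

# Hypothetical export from a student response system; the file and column
# names will differ depending on the platform used.
raw = pd.read_csv("muddiest_point_week05.csv")

# De-identify: keep only the free-text answers and drop names, IDs, and timestamps.
responses = raw["response_text"].dropna().astype(str).str.strip()
responses = responses[responses != ""]

# One response per line, ready to paste into (or send to) an AI tool.
responses.to_csv("muddiest_point_week05_deidentified.txt", index=False, header=False)
print(f"Saved {len(responses)} de-identified responses.")
```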
My main takeaways:
- ChatGPT-4o and Copilot performed similarly in summarizing the muddiest point responses. Both produced useful qualitative summaries, but neither tallied responses accurately or ranked topics reliably.
- Writing a good prompt for the AI allows it to be reused from week to week; a sketch of one such reusable prompt appears after this list. Writing a good prompt for students, and allowing them time to answer, ensures that they have the chance to develop that sought-after metacognition.
- Using a template to create the weekly handout with the help of AI streamlined the process.
- Students showed appreciation for the practice. 43% of students reported that they found the created handouts helpful as an extra study resource.
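To illustrate what a reusable summarization prompt might look like, here is a sketch that sends the de-identified responses to a model through the OpenAI API. The prompt wording, model name, and file name are assumptions for illustration; in the course itself, responses were pasted into the chat interfaces of ChatGPT-4o and Copilot.

```python
from openai import OpenAI  # assumes the OpenAI Python client and an API key are configured

# A reusable prompt: only the week's responses change from week to week.
SUMMARY_PROMPT = (
    "You will receive anonymous 'Muddiest Point of the Week' responses from an "
    "undergraduate biology class, one per line. Group them into the main themes of "
    "confusion, describe each theme in one or two sentences, and indicate roughly "
    "which themes appear most often. Do not attempt an exact tally."
)

def summarize_muddiest_points(responses_text: str) -> str:
    """Return an AI-generated qualitative summary of the week's responses."""
    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SUMMARY_PROMPT},
            {"role": "user", "content": responses_text},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    with open("muddiest_point_week05_deidentified.txt") as f:
        print(summarize_muddiest_points(f.read()))
```

The resulting summary can then be pasted into the weekly handout template and expanded with targeted explanations.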
What else should I consider?
Timing: Students need time to reflect on their learning process and on the topics they have not yet understood, so give them ample time to respond to the prompt. I opened the poll at the start of one class period and closed it at the end of the next class period.
Time Commitment: After closing the student poll, downloading the data and having the AI summarize it takes only a few minutes.
Addressing the muddiest points to clear up student confusion took longer: creating and sharing the weekly handout took me about an hour each week.
Context: The described procedure works well in large-enrollment classes and could be adapted to other course sizes and contexts. In small courses (<20 students), it would likely be more efficient to read the responses directly, but the AI-produced resources may still be helpful.
Adaptations: Student responses can be collected in various formats, such as student response systems or LMS quizzes (e.g., Classic Quizzes in Canvas); the methodology described here applies to those formats as well.
Potential Pitfalls: The attached file, “Quantitative Analysis,” demonstrates that neither AI program tallies responses effectively. Using an AI to summarize broader student feedback, such as mid-semester surveys or course evaluations, might seem appealing, but its inability to tally responses accurately makes it an unreliable tool for capturing students’ perspectives beyond identifying muddiest points.
Want to learn more?:
- Prompt design: Coursera Course: Generative AI: Prompt Engineering Basics
- Literature on metacognition and the muddiest point practice:
Tanner, K. D. (2012). Promoting student metacognition. CBE Life Sciences Education, 11(2), 113–120. https://doi.org/10.1187/cbe.12-03-0033
Angelo, T. A., & Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.). San Francisco, CA: Jossey-Bass.