16 AI-Supported Coding Assignments (Statistics & Data Science)
Author: Layla Guyot, Statistics and Data Sciences, The University of Texas at Austin

Layla Guyot is a data scientist, educator, and researcher who joined UT Austin in Fall 2020. After studying math and physics as an undergraduate, she completed an M.S. in statistics, just by chance. She gained experience as a statistician before pursuing her aspiration to conduct research and teach through a Ph.D. in Mathematics Education at Texas State University.

Cite this Chapter: Guyot, L. (2025). AI-Supported Coding Assignments. In K. Procko, E. N. Smith, and K. D. Patterson (Eds.), Enhancing STEM Higher Education through Artificial Intelligence. The University of Texas at Austin. https://doi.org/10.15781/7dv5-g279
Description of resource(s):
This resource provides recommendations for instructors creating activities that encourage students to critically evaluate AI-generated code, along with a sample activity that I implemented in my course.
The sample activity has four components, illustrated in the code sketch that follows the list:
- Initial AI Evaluation: Students enter the prompt into the AI tool, copy and paste the AI-generated code, and evaluate the accuracy of the AI output.
- Code Refinement: Students refine the AI-generated code to improve it based on what they have learned in the course (or from previous courses).
- Reflection: Students reflect on their use of AI and how it helped them, or not, in solving the task.
- Application: Students apply the refined code to generate insights or create outputs (e.g., a graph, table, or calculation).
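To make these components concrete, here is a minimal sketch in R (the language this chapter's further reading revolves around). The dataset (`penguins` from the palmerpenguins package), the variable names, and the specific flaw are illustrative assumptions, not the actual assignment; the point is the shape of the evaluate-refine-apply cycle.

```r
# --- Initial AI Evaluation: a hypothetical AI-generated answer ---
# Illustrative task: report the average body mass of penguins.
library(palmerpenguins)
mean(penguins$body_mass_g)
# Returns NA: the AI-generated code ignores missing values.

# --- Code Refinement: the student fixes the flaw they identified ---
mean(penguins$body_mass_g, na.rm = TRUE)

# --- Application: the refined code produces an actual output ---
library(dplyr)
penguins |>
  group_by(species) |>
  summarize(avg_body_mass_g = mean(body_mass_g, na.rm = TRUE))
```

In practice, the flaws students encounter vary widely, which is exactly what makes the evaluation and refinement steps productive.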
Links to Resources:
Why I implemented this:
In introductory data science, a key learning objective is for students to develop the coding skills to effectively manipulate and analyze data. With the rise of AI tools, I noticed that students were increasingly relying on these tools for coding tasks, often accepting outputs without question. While AI can be a helpful assistant, its outputs must be critically evaluated and checked for inconsistencies.
This project aimed to foster critical thinking about AI’s role in coding. I designed an activity that required students to use an AI tool and critique its output. First, they completed a task using Copilot to generate code. Then, they revisited and refined the AI-generated code using their own understanding, allowing them to evaluate its accuracy and logic. Finally, students reflected on the process of comparing their own code to the AI output, identifying strengths, weaknesses, and areas for improvement.
This approach sought to strengthen students’ coding skills and encourage thoughtful use of AI as an ally rather than as a replacement.
My main takeaways:
From this project, I noticed through students’ work and comments that while AI tools can provide a helpful starting point, their outputs often vary in quality, lack functionality, or fail to consider context. Just over half of the students (55%) could partially correct the AI’s code, but only a few (17%) achieved a fully correct solution. Students noted the limitations of relying on AI: debugging is necessary, and debugging requires understanding the code. Reading their reflections was incredibly rewarding, as students engaged critically with the materials and questioned the role of AI in their learning process.
What else should I consider?
Timing: I implemented this activity as part of a homework assignment in the middle of the semester, but I would recommend running it earlier (perhaps week 2 or 3) as a 30-minute class discussion. The goal of the activity is to discourage students from simply relying on AI outputs, so it is best to address the issue as early as possible, once students have acquired enough knowledge to critique the output.
Adaptations: This activity would suit any course that involves programming. As mentioned above, it could be adapted as a class discussion. The task students solve with AI can also easily be modified to a different level of complexity or refocused on another discipline.
Potential Pitfalls: There are two main pitfalls, each of which is also a discussion opportunity:
- Students may over-rely on AI without fully understanding the code. We can address this issue by requiring reflections and discussions on AI limitations.
- AI may produce incorrect or outdated code. We need to highlight the importance of debugging and critical evaluation; a sketch of the outdated-code case follows this list.
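As one hedged illustration of the second pitfall: AI tools sometimes suggest R code built on superseded functions. The tiny data frame below is invented for demonstration; `gather()` still runs but has been superseded by `pivot_longer()` in tidyr, so a student who only checks whether the code runs would miss the issue.

```r
library(tidyr)

# Invented example data: two students, two exam scores each
scores <- data.frame(student = c("A", "B"),
                     exam1   = c(85, 90),
                     exam2   = c(78, 88))

# Superseded style an AI tool might still produce:
gather(scores, key = "exam", value = "score", exam1, exam2)

# Current recommended equivalent:
pivot_longer(scores, cols = c(exam1, exam2),
             names_to = "exam", values_to = "score")
```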
Want to learn more? What other resources, articles, or websites should people be directed to if they would like to implement something similar?
After implementing this project, I came across an article discussing the idea of having students write prompts for AI tools (again Copilot) that generate R code for them to execute. I find this approach intriguing, and it sparks an important discussion we should have as data science educators: should we still teach coding skills in data science courses?
Bien, J., & Mukherjee, G. (2025). Generative AI for Data Science 101: Coding Without Learning to Code. Journal of Statistics and Data Science Education, 1–14. https://doi.org/10.1080/26939169.2024.2432397