Students are already using AI. The question is whether they’re still learning and what we should do about it.
If you teach today, AI is already in your classroom. Students are using tools like ChatGPT, Claude, and a growing ecosystem of generative AI systems to write essays, solve homework problems, debug code, summarize readings, and generate lab reports. While some are using these tools to accelerate their learning, others are using them to bypass learning altogether.
The challenge for educators is not whether AI belongs in the classroom (it’s already here, operating somewhere between “learning assistant” and “academic shortcut”). The challenge is figuring out how to work with AI while ensuring that students still learn the required material.
Students Are Already Using AI
A 2025 study from the Pew Research Center found that about 26% of U.S. teens have used ChatGPT for schoolwork, and that number is rising quickly.
The 2025 EDUCAUSE “Digital AI Divide” survey highlights a growing disconnect between what students are doing and how institutions are responding. Around 70% of students report using AI tools in some way for their coursework, yet only a minority of faculty and universities have clear guidance or policies in place. At the same time, fewer than half of students say they’ve received any formal instruction on how to use AI effectively or ethically. This suggests that while AI use is rapidly becoming the norm, we’re not yet teaching students how to use it well.
Many students do not hide the fact that they are using AI. Instead, they see it as just another tool, no different from a calculator, a search engine, or a textbook solution manual.
The Good: AI as a Learning Assistant
AI promises to personalize the learning experience. Students can ask follow-up questions, request alternative explanations when something doesn’t click, and generate additional practice problems tailored to their needs. A student struggling with a concept can have it explained in multiple ways without waiting for office hours or help sessions. This reduces friction in the learning process and keeps students engaged.
Some instructors report that students are using AI to deepen their understanding by asking for reasoning (instead of answers), requesting multiple approaches to a problem, and challenging the AI’s output (rather than taking it at face value). In these cases, the interaction starts to resemble guided inquiry rather than passive consumption.
This is, in many ways, what educators have long hoped for: a system that provides individualized support at scale. Sal Khan has described this vision in the development of Khanmigo, an AI tutor designed to guide students through problems using questions and hints rather than simply providing answers.
With proper calibration and prompting, AI learning assistants can guide students without directly giving answers.
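To make “proper calibration and prompting” concrete, here is a minimal sketch of what a Socratic-style tutoring setup can look like. It assumes the OpenAI Python SDK; the model name, prompt wording, and two-attempt hint rule are illustrative assumptions, not any specific product’s implementation:

```python
# A minimal sketch of a Socratic-style tutor, assuming the OpenAI Python SDK.
# The model name, prompt wording, and hint policy below are illustrative
# choices; any chat-style LLM API works the same way.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TUTOR_PROMPT = """You are a patient tutor. Never state the final answer.
Ask one guiding question at a time, point out flaws in the student's
reasoning, and offer a concrete hint only after two failed attempts."""

def tutor_reply(history: list[dict]) -> str:
    """Return the tutor's next turn given the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": TUTOR_PROMPT}, *history],
    )
    return response.choices[0].message.content

# Example: the student asks for the answer outright; the system prompt
# steers the model toward a guiding question instead.
print(tutor_reply([
    {"role": "user", "content": "Just give me the answer to problem 3."},
]))
```

The design choice matters more than the exact wording: the assistant is constrained to guide with questions and hints rather than hand over answers, the same principle behind tools like Khanmigo. As with any technology, though, AI is not without pitfalls and downsides.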
The Bad: AI as a Shortcut
Many students are motivated to earn the best grade in the least time with the least effort, which turns AI into a shortcut or outright cheating tool. They use it to generate entire essays from a prompt, solve homework problems without attempting them, and produce code that works without understanding it. In these cases, the tool is replacing learning rather than supporting it.
This creates a fundamental tension: if the purpose of an assignment is to practice reasoning, and that reasoning is outsourced to AI, then the learning objective is never met. The student may submit something that looks correct, but the underlying skills remain undeveloped.
Many instructors have started to notice subtle but telling shifts. For example:
- Students submit polished work that they struggle to explain when asked follow-up questions
- Code functions correctly but breaks down with slight modification (see the sketch after this list)
- Written responses are grammatically sound but lack depth, consistency, or a clear line of reasoning
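To illustrate the second pattern, here is a hypothetical (deliberately simple) example of code that passes the original assignment but fails the moment the input changes:

```python
# Hypothetical example of the "works until you modify it" pattern.
# The assignment: average the exam scores for a class of 10 students.
# An AI-generated solution might hard-code details from the prompt:
def average_score(scores):
    # Bug: divides by the sample size from the original prompt (10),
    # not by len(scores), so any other class size gives a wrong answer.
    return sum(scores) / 10

print(average_score([80, 90, 70, 85, 95, 60, 75, 88, 92, 65]))  # 80.0 (correct)
print(average_score([80, 90, 70]))                               # 24.0 (wrong)
```

A student who wrote this function themselves would likely catch the hard-coded divisor when asked to reuse it; a student who pasted it in often cannot.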
These patterns suggest not just the use of AI, but a reliance on it that bypasses meaningful engagement with the material. Rather than blame students entirely, we can view this as a mismatch between how assignments are structured and how easily they can be completed without genuine understanding.
The Detection Problem
In response, institutions have looked for ways to detect AI-generated work. Tools have emerged that claim to identify whether text was written by an AI, and platforms (like Turnitin) have integrated AI detection into their offerings. On the surface, this seems like a straightforward solution: if you can detect misuse, you can enforce academic integrity.
However, it is not that simple in practice. Many researchers and institutions have raised concerns about the reliability of these tools. False positives can occur (particularly for non-native English speakers or for writing that follows predictable patterns), and companies like Turnitin now emphasize that AI detection should be treated as an indicator rather than proof. International guidance from organizations like UNESCO similarly cautions against relying on detection tools for high-stakes decisions, urging educators to focus instead on assessment design and clear policies around AI use.
This puts instructors in a difficult position: they may strongly suspect that a student used AI inappropriately but lack definitive proof. Even when detection tools flag content, the results are not always trustworthy enough to act on. At the same time, students quickly learn how to adapt (e.g. by refining prompts, paraphrasing outputs, or blending AI-generated content with their own work) to avoid detection.
As a result, we end up in a game of cat-and-mouse: as detection tools get better, students quickly figure out ways to defeat or work around them. To avoid this digital arms race, we need to rethink how we work with AI and redesign assessments to actually measure student understanding. At the same time, attempting to ban AI outright risks throwing the baby out with the bathwater. These tools are not going away; they are becoming foundational to how work gets done. Our students need to be able to demonstrate both domain knowledge and the ability to use AI effectively and responsibly.
What Instructors Are Actually Doing
Instructors are increasingly acknowledging that catching AI use with automated detectors is not always reliable. For example, the University of Pittsburgh Teaching Center concluded that “current AI detection software is not yet reliable enough to be deployed without a substantial risk of false positives.” The center recommends against relying on detectors (e.g. Turnitin’s automated detector) and instead focusing on integrity-supporting teaching practices, such as:
- Setting clear expectations around AI use in the syllabus and individual assignments
- Designing assignments that emphasize process over product, including drafts, reflections, or intermediate steps
- Using more personalized or contextualized work tied to class discussions, local data, or student experience
- Incorporating frequent, low-stakes assessments (e.g. quizzes, in-class activities) to track understanding over time
- Building a culture of academic integrity and intrinsic motivation, where students understand the purpose behind assignments and the value of doing their own work
Another approach is to explicitly define what kinds of AI help are allowed on each assignment, such as allowing AI for planning or idea generation but requiring a student-authored final draft. Alternatively, AI could be allowed for feedback, with the student still required to explain which suggestions they used and why. The University of Iowa’s “AI Assessment Scale” (AIAS) is a useful framework for making those permission levels explicit and tying them to the skills you want students to build (analysis, interpretation, decision-making, and critical evaluation of AI output).
University of Pennsylvania professor Ethan Mollick requires AI use in his classes, and part of the instruction covers how to use it correctly and ethically. He also offers several options for tackling student assessment:
- In-class essays (pencil and paper, no computers)
- AI-assisted essay writing (paired with instruction on proper AI use)
- “Flipped classrooms,” where lectures are delivered via pre-recorded videos and homework-style problems are worked in class (where they can be monitored)
Educational consultant Derek Bruff recommends redesigning assignments so that the graded “value” is no longer the final product. Rather, instructors should create workflows so that student pre-work (e.g. discussion, small group work) is done in class, which makes it harder to outsource the thinking to AI.
MIT’s Teaching + Learning Lab created this quickstart guide that offers concrete tactics instructors can implement in STEM courses. For example:
- Require students to report if and how they used AI
- Have students use AI but evaluate its output
- Use frequent, low-stakes quizzes to gauge student progress
Some institutions, such as the University of Sydney and the University of Pennsylvania, are requiring oral exams in certain situations. Much like handwritten exams, such conversations with students offer insight into their grasp of the material. Administering full oral exams, however, can be a massive burden on educators. Utrecht University suggests a compromise: interview individual students after they submit a completed assignment to test their comprehension.
This past week, I attended the ECEDHA annual conference, where several professors echoed many of the strategies listed above. Some described returning to more traditional forms of assessment (e.g. blue book exams, handwritten homework, closed-book quizzes administered in class). The goal was not to reject AI entirely but rather to create an environment where students had to demonstrate their own understanding without relying on external tools.
These approaches show that we are moving away from trying to detect AI use and toward designing environments where comprehension is harder to fake.
Equipping Students for a World with AI
The world of education is experiencing a dramatic shift in learning and assessment. The real question is whether we are assessing output or understanding. If it’s output, students will find a way to complete the assignment without thinking (AI has simply made that faster and easier). If it’s understanding, then we need to design assessments that make student thinking visible (through explanation, adaptation, application, etc.). At the same time, we cannot ignore that AI is now part of the modern workflow. Students need to learn the material and how to use these tools effectively and responsibly.
However, this shift is not free. Many of the most effective approaches (e.g. oral exams, handwritten assignments, in-class problem solving) are labor-intensive and difficult to scale, placing a real burden on instructors and teaching assistants. The challenge going forward is finding the right balance of techniques, one that preserves educational rigor without creating unsustainable overhead. This likely means combining scalable tools and workflows with targeted moments of direct, person-to-person evaluation.
