Just the title of this long blog entry caught my eye, because YES!!! The Homework is the Cheat Code indeed!
The content didn’t disappoint. It’s written by someone teaching CS at the graduate level at U Chicago, and I’m not sure if I’m shocked or relieved that they’re seeing the same patterns in their students that we’re seeing at the undergrad level. Maybe the most honest reaction is that if we want to graduate students who are job- and grad school-ready, addressing these tendencies has to be central to undergraduate education (bold mine):
I get a lot more requests now for extensions on the project they’ve known about all quarter (students reading this: lovingly, no ). I get a lot lower compliance on instructions I wrote down twice and then also said twice in class. I get a lot more homework reflection assignments that package considerable insight—they’re still very smart people, after all—into the syntax I’d expect in a group text among friends rather than prose submitted to a graduate school instructor.
I like the observation that overcommitment, or burnout from years of overcommitment, is part of the problem. I definitely see students who think the best value will come from packing as much as possible into their college experience, and we’ve started having conversations at my institution about how we encourage students to make choices so they can benefit more deeply from the smaller set of things they are doing.
This, inevitably, gets linked to the issue of students using generative AI for assignments in a way that is not productive for their learning. This particular instructor doesn’t forbid the use of AI. Beyond other good reasons, they point out that we just don’t have data on the impact of these tools on learning yet and allowing some usage lets them collect data on how students are choosing to use AI. They’re coming at it from the perspective of someone who studies how LLM tools affect the development process, so that is interesting.
I love how they break down the academic honesty issue of using generative AI, first by defining the issue: “I define ‘cheating’ as something a student does, usually in the interest of time, whose tradeoff is the circumventing of the assignment’s learning objective,” and then by walking through reasons students might not invest the needed time or might circumvent learning objectives. The time element certainly circles back to being overcommitted; I think it also connects to a lack of understanding of how much time it really takes to be a full-time student.
For both the time and learning objectives elements, I think trust in the instructor (and possibly in the institution?) plays a large role. Is the student willing to trust that the structure of the course and the assigned work have been thoughtfully designed and will support their learning? How do you build that trust? And, as the following paragraph got me thinking, how do you push past students’ focus on efficiency of learning over effectiveness of learning:
We’re steeped in a tech industry laser-focused on efficiency as a positive quality, but this term often becomes overloaded when we’re talking about capacity-building. I’ve lifted weights for about a decade now. The exercises that build my capacity the most are not the efficient ones. Walking is efficient—humans can walk a very long way without spending a lot of energy. Kettlebell swings are inefficient; they require an enormous amount of energy and muscle activation relative to walking. That’s why weightlifters do them: they produce capacity-building adaptations faster because of that. Learning works the same. The activities that promote learning are, by design, inefficient: they require an enormous amount of attention and active engagement, usually on a thing that the learner feels bad at. They deliberately present the learner with challenge, and sometimes frustration, because those uncomfortable states build cognitive capacity. Avoid this, and you avoid building cognitive capacity. That’s a weird choice to make in a class with the specific value proposition of teaching something.
This leads to the heart of how they connect all this back to the use of generative AI in learning, which they break down by asking whether the tool is being used as an “outcome accelerant” (shortchanging the learning process by just getting to the product faster) or a “learning aid” (streamlining the time spent so the student can really focus on the activity tied to the learning objective).
The article includes tons of examples of syllabus language, assignment information, etc. about how all of this works in practice, including how they set up assignments where the “cheat code” is that doing the work as assigned is more efficient than trying to get gen AI to do the work for you (they like the tool of “instructive visualizations”). This may or may not relate to your particular classroom. But I think there’s good potential for this outcome accelerant versus learning aid contrast to apply across a lot of classroom settings.