This Is Why We Can’t Have Nice Things
In Spring 2024 I experimented with a generous drop policy in my course. Here’s how it harmed the course and why I won’t be repeating the policy.
This is why we can’t have nice things, darlin’
Because you break them, I had to take them away
This is why we can’t have nice things, honey
Did you think I wouldn’t hear all the things you said about me?
This is why we can’t have nice things

— Taylor Swift
Wouldn’t It Be Nice To Drop Exams and Assignments?
As I’ve written about before, students want faculty to help with stress. Dropping assignments is a standard tool in the teaching toolbox that can mitigate stressful circumstances. Some crisis occurs? Give the student a break, excuse them from the work, and trust them to catch up as needed. That way the crisis doesn’t snowball and cause the student to have to find time to do two or more assignments at once. It helps not just students, but faculty too: the extra logistics of extensions and late grading are eliminated.
Even before Covid, I routinely wrote some drop policies into my syllabi. Usually those policies related to participation: I would drop a week or two’s worth of participation in the form of in-lecture active learning exercises (mainly in-class polling) and laboratory exercises (mainly completing coding problems). I usually did not publish drop policies about assessments. In my courses, those are usually programming assignments (formative assessments) and written exams (summative assessments). When academic considerations were warranted, I would from time to time privately arrange to drop an assessment for a student in crisis.
But in Spring 2024 I chose to codify a generous assessment drop policy in my syllabus for CS 3110 Data Structures and Functional Programming. I publicly committed to dropping one of the three exams and two of the eight programming assignments, in addition to my usual policy of dropping two of the ten labs and six of the thirty-seven lectures that included in-class polls. All of those drops were automatic in the final grade, no questions asked.
The negative impact of assessment drops on student engagement was unexpectedly high. Maybe I was naive about what to expect?
Negative Impact on Engagement
There was a rumor in the air that students were just tuning out at the end of the semester. The data support that rumor. As we reached the point in the semester where many students could afford to have a piece of work dropped without it having (much) impact on their final grade, submission rates fell below 80%. The following chart shows what happened. Only 71% took the last exam, only 51% submitted the last lab, and only 33% of the class submitted the last assignment.
(L = lab, A = programming assignment, E = exam. Percentages in this post are calculated based on final enrollment in the course (n = 372), which is why a couple of early data points are greater than 100%.)
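For readers who want the arithmetic behind those percentages, here is a minimal sketch of the calculation. The submission counts in it are hypothetical, chosen only to be roughly consistent with the rates reported in this post; the key point is that the denominator is always the final enrollment of 372, so a piece of work submitted early on by students who later dropped the course can come out above 100%.

```python
# Sketch of the submission-rate calculation behind the charts.
# The counts below are hypothetical, not the actual gradebook data.

FINAL_ENROLLMENT = 372  # students still enrolled at the end of Spring 2024

submission_counts = {
    "L1": 380,  # includes students who later dropped, so the rate exceeds 100%
    "E3": 264,  # roughly the 71% reported for the last exam
    "A8": 123,  # roughly the 33% reported for the last assignment
}

for item, count in submission_counts.items():
    rate = 100 * count / FINAL_ENROLLMENT
    print(f"{item}: {rate:.0f}% ({count}/{FINAL_ENROLLMENT})")
```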
For comparison, the next chart shows the equivalent data from Fall 2022 (n = 317) when I last taught the course. Lab 11, the only significantly skipped lab in Fall 2022, occurred during the Monday and Tuesday of the week of Thanksgiving Break; classes did not meet Wednesday through Friday. Naturally, many students leave town early. Otherwise, submission rates stayed well above 80% the entire semester.
Careful observers of those charts will note that there were only four assignments in Fall 2022 but eight assignments in Spring 2024.¹ It’s reasonable to question whether that could have had an impact on submission rates. I don’t think so, because the same effect occurs with exams and labs.
The bottom line is that students chose not to engage with exams, assignments, and labs once the drop policy made their scores moot. Instead, there was a lot of missed opportunity for learning — roughly, one quarter of the course as measured by weeks.
It would be nice to think that, for the sake of learning, students were attempting all the assignments and labs and studying for all the exams, even if they didn’t submit or take them. But I have no reason to suspect that they would do the work and then not turn it in; after all, turning it in wouldn’t have hurt their grades. Rather, I suspect that the extrinsic motivation of grades is what kept students engaged with work in the course. When grades no longer mattered, neither did the work.
Negative Impact on Lecture Attendance
The following charts show attendance in lecture (measured by iClicker participation) in Spring 2024 and Fall 2022. Nothing changed about the attendance policies between the two semesters; in both, I dropped six absences. The mean attendance rate for the last six lectures went down to 66% in Spring 2024 from 77% in Fall 2022.
(Again, percentages in this post are calculated based on final enrollment in the course, which is why a couple of early data points are greater than 100%.)
Attendance at the lecture on 5/1/24 plummeted to only 52%, whereas the preceding several lectures had iClicker attendance rates around 80%. What happened that day? I turned on a geolocation feature in iClicker without advance warning to students. That feature required them to be within the vicinity of the lecture hall. It seems that many students were using the iClicker app to click in remotely instead of attending lecture.² Indeed, the reason I enabled geolocation was that the previous week I had noticed that the number of bodies in the room seemed to be considerably smaller than the number of responses iClicker was receiving. By my count on 4/26/24, there were 70 fewer students in the room than were responding by iClicker. So I strongly suspect that the attendance rates shown above for Spring 2024 were inflated by cheating in the last month of the course. In that case, the impact of the assessment drop policy on attendance is understated by the charts above.
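To put a rough number on that inflation, here is a back-of-the-envelope sketch. Only the gap of 70 students and the enrollment of 372 come from my count; the response total is a hypothetical figure backed out of the roughly 80% iClicker rate on the preceding lectures.

```python
# Back-of-the-envelope estimate of how much remote clicking inflated attendance.
# The response count is hypothetical, backed out of the roughly 80% iClicker rate;
# the gap of 70 and the final enrollment of 372 are from the post.

FINAL_ENROLLMENT = 372
iclicker_responses = round(0.80 * FINAL_ENROLLMENT)  # about 298 responses
gap = 70  # students responding by iClicker but not present on 4/26/24

bodies_in_room = iclicker_responses - gap
reported_rate = 100 * iclicker_responses / FINAL_ENROLLMENT
actual_rate = 100 * bodies_in_room / FINAL_ENROLLMENT

print(f"reported by iClicker: {reported_rate:.0f}%")  # about 80%
print(f"actually in the room: {actual_rate:.0f}%")    # about 61%
```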
The bottom line is that once students stopped engaging with course work, they also stopped coming to lecture — though it’s hard to estimate the effect size because of how cheating clouds the measurement.
Once more, it would be nice to think that, for the sake of learning, students would attend lectures even if they weren’t going to take exams or do assignments based on those lectures. Some probably did, but overall attendance rates went down. I suspect that the extrinsic motivation of grades is what kept students coming to class. When grades no longer mattered, neither did attendance.
Negative Impact on the Final Course Evaluation
The quantitative results of the final course evaluation in Spring 2024 were the second-worst in my twelve semesters of teaching CS 3110. The overall rating of me as an instructor (Q91) decreased considerably. Of all the component questions, the score that decreased the most was the rating of how engaging I was in presenting the material (Q23).
| Term | Overall Instructor Rating (Q91) | Overall Course Rating (Q92) | Course Delivery: Engagement (Q23) |
|---|---|---|---|
| 2024 Spring | 4.25 | 4.11 | 4.05 |
| 2022 Fall | 4.57 | 4.14 | 4.38 |
(Question 91 is “Rate the overall teaching effectiveness of your lecturer compared to others at Cornell.” Question 23 is “Did the lecturer present material in an engaging way, which improved your understanding of the course content?”)
The decrease in Q23 is especially vexing. The lecture content was the same in both semesters, with the usual minor improvements in Spring 2024. My lecture performances were just as good in Spring 2024. (I say that as someone who was trained as a concert pianist; I believe I have cultivated the ability to evaluate my own performances.) Yet students rated my presentation as less engaging.
So, what changed? What changed is that students stopped coming. They stopped coming because they didn’t need the lectures for the exams and other work. And they didn’t need that work because the drop policy let them out of it. I failed to engage them because I gave them too much leniency.
It’s counterintuitive, but a generous drop policy caused my course evaluation scores to go down.
Confounding Factors
I don’t claim that the above analysis is conclusive. Rather, it seems to be the most likely explanation at this time. Below are some confounding factors that could be clouding the issue. I will update this list as new ideas occur to me or are suggested.
- Spring 2024 was the first semester I taught CS 3110 in Statler Auditorium. It was an oversized room for the enrollment: even if 100% attended, the hall would be only about 50% full. Empty rooms have less energy — less engagement. But there have been past semesters in Bailey Hall, which is even bigger, and those did not seem to suffer.
- I made substantial changes to the programming assignments in Spring 2024. They had been relatively stable for about eight or nine years before that, so the course staff were able to help students during office hours based on their own solutions as well as official solutions (private to staff). But in Spring 2024 the assignments were open-ended with no official solution, and no staff member had ever done them. Students could have felt less supported, hence their engagement could have dropped.
- The way in which students got credit for lab participation changed twice during the semester based on feedback from TAs and students. It’s a long story that deserves its own blog post, but it involves students’ in-person participation rates briefly plummeting to 10%. That definitely represents reduced engagement. The short version of the story is: letting students submit remotely without coming to discussion section means they don’t come to discussion section.
- The students themselves have been through different experiences. CS 3110 students in Fall 2022 generally had their high school education impacted less by Covid because they were graduating or rising seniors when the pandemic began. Spring 2024 students generally would have experienced more disruption to their high school education.
What Next?
In the immortal words of Taylor Swift quoted above, “this is why we can’t have nice things.” It would be irresponsible to repeat a course policy that results in so much disengagement from learning, as the data above show, even though it seems like such a nice thing for students and instructors alike.
For assessments (exams and assignments) I expect to return to more traditional models of requiring students to engage at all points. For participation (labs and attendance) I expect to experiment with a new-to-me model of making attendance a value to which students are asked to commit. More on that — after this semester.
Endnotes
1. In more detail: The size and scope of assignments changed in Spring 2024. That deserves its own blog post, later, but the short version is that the assignments became significantly smaller and more open-ended. Spring 2024’s A3 was an aberration in that I mistakenly made it too long, which explains why its submission rate is so low. ↩
2. The exact cheating mechanism here is unknown to me. According to my TAs, the app did not give push notifications when a poll opened. So students would have to actively check for polls, which tended to remain open for two to three minutes as I discussed the answer, or engage the help of an accomplice. That accomplice could perhaps text them or a group chat to indicate that a poll was in progress. Another possibility is that a broker aggregated the iClicker login credentials of many students (credentials not associated with any other university resource and not requiring two-factor authentication) and answered on behalf of all of them from the lecture hall in anonymous browser tabs. ↩