What are computer science educators thinking about right now? Here’s my summary of this year’s Best Paper awardees at SIGCSE.
The main CS education conference is the annual Technical Symposium on Computer Science Education, which is sponsored by the ACM Special Interest Group on Computer Science Education (SIGCSE). The symposium itself is colloquially also known as SIGCSE.
A note on style. My summaries are deliberately short: only one sentence each for the headline and the TL;DR. I am responsible for any errors introduced by such condensation. The “moreover” paragraphs sometimes contain results from related work cited by the paper. I arbitrarily list papers in order of decreasing sample size (n), which is generally the number of students.
📄 Event Logging Detects Cheating
TL;DR: If you log timestamps when students access and answer each question in an online quiz, you can better detect cheating.
Moreover: Timeline analysis works better for short assignments (two hours) than long (two weeks). Temporal proximity can be compellingly visualized. Cheating models in online quizzes include: full collaboration, simultaneous completion using divide and conquer, simultaneous completion but comparing answers only just before submission, and sequential completion followed by giving away answers.
n > 2000.
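The temporal-proximity idea can be sketched in a few lines. This is my own minimal illustration, not the authors’ actual method; the event logs, window size, and match threshold are all hypothetical:

```python
from itertools import combinations

# Hypothetical event logs: student -> {question_id: answer timestamp (seconds)}
logs = {
    "alice": {1: 100, 2: 210, 3: 330},
    "bob":   {1: 105, 2: 214, 3: 333},   # suspiciously close to alice
    "carol": {1: 900, 2: 1500, 3: 2400},
}

def suspicious_pairs(logs, window=30, min_matches=3):
    """Flag pairs of students who answered the same questions within
    `window` seconds of each other at least `min_matches` times."""
    flagged = []
    for a, b in combinations(logs, 2):
        shared = logs[a].keys() & logs[b].keys()
        close = sum(1 for q in shared
                    if abs(logs[a][q] - logs[b][q]) <= window)
        if close >= min_matches:
            flagged.append((a, b))
    return flagged

print(suspicious_pairs(logs))  # -> [('alice', 'bob')]
```

A real analysis would have to tune the window to the assignment length — per the paper, the signal is much cleaner for a two-hour quiz than a two-week assignment.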
📄 Prereq Mastery Matters
TL;DR: Transfer students show lower mastery of prerequisite material; underrepresented minority (URM) students might as well; women and first-generation students do not.
Moreover: Lower prereq course grades => lower prereq material mastery => lower grade in later course. It’s hard to catch up. Grades that are inflated by effort and collaboration mask issues of material mastery. The difference for URMs is “sizable” but not statistically significant, possibly due to the small number of such students involved.
n = 300.
📄 Randomization and Logging Detect Cheating
TL;DR: Generating random variants of assignments makes it easier to detect cheating; so does logging of IP addresses and problem-submission timestamps.
Moreover: Analysis of the minimum time a human could possibly take to complete a problem for the first time can detect cheating. Randomly asking students to explain their solution can detect cheating but increases overall student stress; the authors do not address implicit bias in evaluating explanations.
n = 207.
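The minimum-time analysis is easy to picture: any first-time solve faster than a plausible human floor is suspect. A toy sketch (the records and per-problem floors below are invented for illustration, not taken from the paper):

```python
# Hypothetical records: (student, problem, seconds from first viewing
# the problem to the first correct submission).
submissions = [
    ("dana", "p1", 45),
    ("eli",  "p1", 4),    # faster than any plausible first attempt
    ("fay",  "p2", 120),
]

# Assumed floors: the minimum time a human could plausibly read,
# solve, and submit each problem for the first time.
min_plausible = {"p1": 20, "p2": 60}

def too_fast(submissions, floors):
    """Return submissions completed faster than the plausible minimum."""
    return [(s, p, t) for s, p, t in submissions if t < floors[p]]

print(too_fast(submissions, min_plausible))  # -> [('eli', 'p1', 4)]
```

Randomized assignment variants strengthen this check: a student cannot have seen the exact problem before, so a near-instant correct answer is harder to explain innocently.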
📄 Break Times Obscure Student Mastery
TL;DR: When you predict course outcomes by measuring the time it takes students to complete a task, you should omit the students’ break times.
Moreover: The longer it takes students to complete exercises, the more they are struggling. Effects can be visible even in week 1 of a course.
n = 132.
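Excluding break time usually comes down to thresholding gaps between logged events: any gap longer than some cutoff is treated as a break rather than time-on-task. A minimal sketch under that assumption (the event stream and 300-second cutoff are hypothetical):

```python
# Hypothetical event timestamps (seconds) for one student working on a
# task: keystrokes, runs, submissions, etc.
events = [0, 30, 70, 100, 2000, 2040, 2100]  # ~30-minute break after t=100

def active_time(events, max_gap=300):
    """Sum the gaps between consecutive events, treating any gap longer
    than `max_gap` seconds as a break and excluding it."""
    return sum(b - a for a, b in zip(events, events[1:]) if b - a <= max_gap)

raw = events[-1] - events[0]          # 2100 s, inflated by the break
print(raw, active_time(events))       # -> 2100 200
```

Here the naive wall-clock measure is more than ten times the active working time, which is exactly the distortion the paper warns would corrupt an outcome predictor.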
📄 Searchable, Runnable Examples Preferred
TL;DR: A searchable gallery of runnable code examples supports novice programmers, who otherwise struggle to find and use examples.
Moreover: All programmers prefer snippets over complete projects when working with APIs. Three ways of using examples are common: copy-run-modify, run-understand-reference, run-close-reimplement.
n = 46.
📄 Students Enjoy Becoming Teachers
TL;DR: High-school students helped high-school teachers develop CS lessons in the context of a six-week summer program at a university.
Moreover: There were 35 high-school students available; 18 of those chose to work on this project instead of doing their own thing; 16 of the 18 were female. Some lessons were beta tested on middle schoolers at a summer camp.
n = 18 (11 students interviewed + 7 teachers/mentors).
📄 Equity Is (Not Only) a CS Problem
TL;DR: An experimental high school course discussed counternarratives, artificial intelligence, bias, ethics, and data privacy.
Moreover: Deadlines and attendance-taking were considered harmful or “oppressive.” The instructors did not impose deadlines, and they clashed with an administration that wanted a “true” attendance count reflecting religious observances.
n = 14.
📄 Not All Who Program Are Programmers
TL;DR: People who work alongside programmers want to converse about code — not to fully understand it, and especially not to write it.
Moreover: Programmer-adjacent roles include managers, designers, and marketers. People in these roles want to make connections between the real-world impact of code and the low-level details of development, and especially to communicate about those connections. They want to be able to share inside jokes with developers.
n = N/A (literature survey).