Automated assessment in CS

Des Traynor, Susan Bergin, J. Paul Gibson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

A system has been developed to provide automated assessment in CS1. During the 2004-2005 academic year, the system was evaluated empirically: a sample group of students was examined four times during the year using both traditional assessment methods and the automated techniques. A significant correlation was found between performance on the two forms of assessment; however, the correlation was strong only for students who performed well during the year. To extend this study, students were interviewed and asked their opinions of the generated questions. The students offered reasons for the variation in their performance and provided insight into where the discrepancies lie. We discovered that weaker students were relying on rote learning to score marks in the class exams. As the survey was conducted on paper, a large amount of student rough work ("doodles") was collected; an analysis of this rough work is also discussed.

Original language: English
Title of host publication: Computing Education 2006 - Proceedings of the Eighth Australasian Computing Education Conference, ACE 2006
Pages: 223-228
Number of pages: 6
Publication status: Published - 1 Dec 2006
Externally published: Yes
Event: 8th Australasian Computing Education Conference, ACE 2006 - Hobart, TAS, Australia
Duration: 16 Jan 2006 - 19 Jan 2006

Publication series

Name: Conferences in Research and Practice in Information Technology Series
Volume: 52
ISSN (Print): 1445-1336

Conference

Conference: 8th Australasian Computing Education Conference, ACE 2006
Country/Territory: Australia
City: Hobart, TAS
Period: 16/01/06 - 19/01/06

Keywords

  • Assessment
  • First year programming
  • Program comprehension
