Saeed Dehnadi's Homepage (historical)
Disclaimer: This webpage gives access to Saeed Dehnadi's original questionnaire, test and answer sheet, purely for historical reasons. Please don't use the historical version of the test in new experiments. A more modern treatment is available from Richard Bornat's website.
We (Saeed Dehnadi, Richard Bornat) have discovered a test which divides programming sheep from non-programming goats. This test predicts ability to program with very high accuracy before the subjects have ever seen a program or a programming language.
Abstract: All teachers of programming find that their results display a 'double hump'. It is as if there are two populations: those who can, and those who cannot, each with its own independent bell curve. Almost all research into programming teaching and learning has concentrated on teaching: change the language, change the application area, use an IDE and work on motivation. None of it works, and the double hump persists. We have a test which picks out the population that can program, before the course begins. We can pick apart the double hump. You probably don't believe this, but you will after you hear the talk. We don't know exactly how or why it works, but we have some good theories.
Abstract: We have carried out an initial cognitive study of early learning of programming, aimed at gathering experimental test data to establish novices' process of understanding. This empirical study was inspired by the notion that different people bring different patterns of knowledge to any new learning process, and demonstrated how each student tackles the problem in a different way based on their mental model. The initial study suggests that success in the first stage of an introductory programming course is predictable, by noting consistency in the mental models which students apply to a basic programming problem even before they have had any contact with programming notation, but the consistency/inconsistency measurement was somewhat subjective. In this paper I present an objective marking method which I hope will lead us to more precise and more finely-graduated predictions. This method is being trialled in at least one experiment, and we hope that by the time of the conference I will be able to describe the results.
Abstract: Learning to program is notoriously difficult. Substantial failure rates plague introductory programming courses the world over, and have increased rather than decreased over the years. Despite a great deal of research into teaching methods and student responses, there have been to date no strong predictors of success in learning to program. Two years ago we appeared to have discovered an exciting and enigmatic new predictor of success in a first programming course. We now report that after six experiments, involving more than 500 students at six institutions in three countries, the predictive effect of our test has failed to live up to that early promise. We discuss the strength of the effects that have been observed and the reasons for some apparent failures of prediction.
Abstract: A test was designed that apparently examined a student's knowledge of assignment and sequence before a first course in programming, but was in fact designed to capture their reasoning strategies. An experiment found two distinct populations of students: one could build and consistently apply a mental model of program execution; the other appeared either unable to build a model or to apply one consistently. The first group performed very much better in their end-of-course examination than the second in terms of success or failure. The test does not very accurately predict levels of performance, but by combining the results of six replications of the experiment, five in the UK and one in Australia, we show that consistency does have a strong effect on success in early learning to program, while background programming experience, on the other hand, has little or no effect.