That is the conclusion of two researchers at the Center for the Study of Testing, Evaluation and Educational Policy, who recently published a study on computer testing in the electronic journal Education Policy Analysis Archives. The study found that students who use computers to do their school work fare better on tests administered on computer than they do on paper-and-pencil exams.
Prof. Walter Haney (SOE) and graduate student Michael Russell co-authored the study. (Photo by Lee Pellegrini)
The study's authors, Prof. Walter Haney (SOE) and School of Education doctoral candidate Michael Russell, say that testing methods have not kept pace with the rise in computer use among the nation's schoolchildren and are no longer an accurate assessment of how well students are learning.
Their conclusions are based on a study of middle school students at the progressive Advanced Learning Laboratory School in Worcester, which places a heavy emphasis on computer use. During their regular consultations with SOE faculty, the school's teachers questioned why their students' test scores were falling even as their computer usage was rising.
Haney and Russell devised a test, consisting of multiple-choice, short-answer and essay sections, that was administered to a representative sample of students divided into two groups: one took the test on paper and the other took it on computer.
Results on the multiple-choice section were about equal for both groups, but they began to diverge in the short-answer section and separated strikingly in the essay section of the test. The authors added that when both groups took the test on paper, the results were comparable in all sections.
"We expected the multiple-choice part to be about the same for both groups," said Russell, since there was little difference between filling in a circle with a pencil and clicking on one with a computer mouse. "But we were surprised by the short answers. The difference was surprisingly large, when we didn't expect any difference."
Students writing their essays on the computer fared "substantially better" than those who wrote on paper, Russell added. "If you look at the student who's doing it on paper, he or she is doing at least a grade level worse than the one who's doing it on computer."
"Thirty percent of the paper answers were rated as satisfactory, versus 70 percent of the computer answers," Haney added.
Haney and Russell theorize that the gap is due to the technology itself, which allows students to write and revise their answers far more easily and quickly on computer than with a pencil. That backs up existing research indicating that students who use computers for their school work write and revise more than those who don't.
Another interesting facet of the study's results was the difference between boys' and girls' computer-test results. Previous research had shown that boys are more drawn to the technology, suggesting they ought to perform better on the test.
"Both did better on the computer," Haney added, "but females did considerably better."
Haney and Russell concede that the ALL School is unusual in the access students have to computers, but add that 70 percent of the nation's first- through eighth-graders use computers in school, up from 32 percent in 1984. Even for those at the ALL School, Russell said, "this was probably the first test they had taken on a computer."
"The data says kids are exposed to computers more in the classroom," said Russell, "and the biggest thing to come out of this study is that you have to be careful in assessing outcomes. If they're used to working on computers, those who take pencil-and-paper tests are probably being underrated, as are our schools."
"The implications are not just for grade schools, but for places like Boston College as well, where undergraduates are doing their work on computers," Haney added. "When you make them write long-hand, you may not be giving them the best test to assess what they've learned."