

The Gap between Testing and Technology in Schools

Michael Russell and Walter Haney
National Board on Educational Testing and Public Policy
Carolyn A. and Peter S. Lynch School of Education
Boston College

Volume 1, Number 2 — January 2000

In 1983, the release of A Nation at Risk by the National Commission on Excellence in Education focused attention on the perceived crisis in education. Since then, technology and testing have become two popular prescriptions for improving education.

The technology nostrum is the infusion of modern technology into schools, in the belief that it will bolster teaching and learning and prepare students for an increasingly technological workplace. The testing prescription holds that using standardized test scores to rate schools and to decide whether students should be promoted or graduate will provide incentives for improvement. What is little recognized is that these two prescriptions may work against each other. Recent research shows that standardized language arts tests taken on paper severely underestimate the performance of students accustomed to working on computer.(note 1) It is like asking mathematicians to abandon calculators and revert to slide rules.

The Computer Revolution Goes to School

Though the personal-computer revolution began only twenty years ago and the World Wide Web is even newer, computer technology has already had a dramatic impact on society. Schools have been slower to acquire these technologies, but computer use in schools is increasing rapidly.(note 2) The percentage of students in grades 1 to 8 using computers in school has more than doubled, from 31.5 percent in 1984 to 68.9 percent in 1993.(note 3) Similarly, while schools had one computer for every 125 students in 1983, they had one for every 9 students in 1995.(note 4)

Not only are there more computers in classrooms; schools are also increasing students' use of computers and access to the Internet. A recent national survey of teachers showed that in 1998, 50 percent of K-12 teachers had students use word processors, 36 percent had them use CD-ROMs, and 29 percent had them use the World Wide Web.(note 5) In short, the computer revolution has gone to school, and more and more students are writing and doing school assignments and research on computers.

Performance Testing in Schools

Meanwhile, many states are increasingly seeking to hold students, teachers and schools "accountable" for student learning as measured by state-sponsored tests. According to annual surveys by the Council of Chief State School Officers (1998), 48 states use statewide tests to assess student performance in different subject areas.(note 6) Because of the limitations of multiple-choice items, most statewide tests include items for which students must write extended answers or explain their work. Last year alone, an estimated ten million students nationwide participated in a state-sponsored testing program that required them to write responses longhand. Scores on these tests are being used to determine whether to (1) promote students to higher grades, (2) grant high school diplomas, and (3) identify, sanction or reward low- and high-performing schools.

We wish to focus here on a little-recognized limitation of using these tests to drive educational "reform" – the fact that paper-and-pencil forms of these tests may yield misleading information on the capabilities of students who are accustomed to using computers.

Testing Via Computer

Research on testing via computer goes back several decades and suggests that for multiple-choice tests, administration via computer yields about the same results as via paper and pencil.(note 7) However, more recent research shows that for young people who have gone to school with computers, national and state tests administered via paper and pencil can yield severe underestimates of students' skills as compared with the same tests administered via computer.(note 8)

This research began with a puzzle. While evaluating the progress of student learning in the Accelerated Learning Laboratory (ALL), a high-tech school in Worcester, MA, teachers were surprised by the results from the second year of assessments. Although their students were writing more often now that computers were in the school, their scores on writing tests declined. To help solve the puzzle, the researchers decided to compare paper and computer administration of the tests.

In 1995, a randomized experiment was conducted, with one group of students taking math, science and language arts tests, including both multiple-choice and open-ended items, on paper, and another group taking the tests on computer. Before scoring, answers written by hand were transcribed so that raters could not distinguish them from those done on computer. There were two major findings. First, the multiple-choice test results did not differ much by mode of administration. But second, for the ALL students used to writing on the computer, responses written on computer were much better than those written by hand. This finding occurred across all three subjects and with both short-answer and extended-answer items. The effects were so large that when students wrote on paper, only 30 percent performed at a "passing" level; when they wrote on computer, 67 percent "passed."(note 9)

Two years later, a more sophisticated study was conducted, this time using open-ended items from the new Massachusetts state test (the Massachusetts Comprehensive Assessment System or MCAS) and the National Assessment of Educational Progress (NAEP) in the areas of language arts, science and math. Again, eighth grade students from two middle schools in Worcester, MA, were randomly assigned to groups. Within each subject area, each group was given the same test items, with one group answering on paper and the other on computer. In addition, data on students' keyboarding speed and prior computer use were collected. Finally, all answers written by hand were transcribed to computer text.

As in the first study, large differences were evident on the language arts tests. For students who could keyboard moderately well (20 words per minute or more), performance on computer was much better than on paper. Overall, the difference represented more progress than the average student makes in an entire year and could raise a student's score on MCAS from the "needs improvement" to the "passing" level.(note 10)

Figure 1: Effect of Computer Test Administration on Language Arts Test by Level of Typing Ability

Recalling that nearly ten million students took some type of state-sponsored written test last year and that nearly half of the students nationwide use word processors in school, these results suggest that state paper-and-pencil tests may be underestimating the abilities of some five million students annually.
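The five-million figure above is a back-of-envelope estimate, not a survey result. A minimal sketch of the arithmetic, assuming the roughly 50 percent word-processor usage rate cited earlier applies uniformly to the students taking written state tests:

```python
# Rough estimate from the figures cited in the text.
# Assumption: the ~50% of students who use word processors in school
# are evenly represented among state test-takers.
students_tested = 10_000_000   # students taking state-sponsored written tests per year
word_processor_rate = 0.50     # approximate share of students writing on computer in school

possibly_underestimated = students_tested * word_processor_rate
print(f"{possibly_underestimated:,.0f}")  # → 5,000,000
```

The point is only the order of magnitude: even if the true overlap is lower, the affected population numbers in the millions.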

Study findings were not, however, consistent across all levels of keyboarding proficiency (see Figure 1). As keyboarding speed decreased, the benefit of computer administration became smaller. And at very low keyboarding speed, taking the test on computer diminished students' performance. Similarly, taking the math test on computer had a negative effect on students' scores, which became less pronounced as keyboarding speed increased.

Bridging the Gap

These studies highlight a huge gap between computer use in schools and testing strategies used for school improvement – one that will increase as more students become accustomed to writing on computers. There are at least three possible ways to bridge this gap.

First, schools can decrease students' computer time so that they do not become accustomed to writing on computers. Some schools have already adopted this practice. After the first study described above, and the introduction of the new paper-and-pencil MCAS test in Massachusetts, the ALL school required students to write more on paper and less on computer.(note 11) In another Massachusetts school system, the principal feared that students who write regularly on computer lose penmanship skills, which might lead to lower scores on the new state test. This school increased penmanship instruction across all grades while also decreasing students' time on computers.(note 12) Such practices – in effect de-emphasizing computers in schools to better prepare students for low-tech tests – may be pragmatic, given the high stakes attached to many state tests. But they may be shortsighted in light of students' entry into an increasingly high-tech world.

A second way to bridge the test-technology gap would be to eliminate paper-and-pencil testing and have students complete tests on computer. This might seem a sensible solution, but it will not be feasible until our schools obtain an adequate technology infrastructure. Moreover, as shown by problems in recent moves to administer some large-scale tests for adults on computer, computerized testing is not the panacea some had hoped. Among other problems, it adds considerably to the cost of testing and creates new test security concerns. Finally, as our second study showed, it would penalize low-tech students with poor keyboard skills.

A third approach, and perhaps the most reasonable solution in the short term, is to recognize the limitations of current testing programs. Without question, both computer technology and performance testing can help improve the quality of education. However, until students can take tests in the same medium in which they generally work and learn, we must recognize that the scores from high-stakes state tests do not accurately measure some students' capabilities. While this does not make the scores useless, it serves as yet another reminder of the dangers of making decisions based solely on test scores.


1 See Russell, M. (1999), Testing writing on computers: A follow-up study comparing performance on computer and on paper, Educational Policy Analysis Archives, 7(20), and Russell, M., & Haney, W. (1997), Testing writing on computers: An experiment comparing student performance on tests conducted via computer and via paper-and-pencil, Educational Policy Analysis Archives, 5(1).

2 See Zandvliet, D., & Farragher, P. (1997), A comparison of computer-administered and written tests, Journal of Research on Computing in Education, 29(4), 423-438.

3 See Snyder, T. D., & Hoffman, C. (1990), Digest of Education Statistics, Washington, DC: U. S. Department of Education, and Snyder, T. D., & Hoffman, C. (1994), Digest of Education Statistics, Washington, DC: U. S. Department of Education.

4 See Glennan, T. K., & Melmed, A. (1996), Fostering the use of educational technology: Elements of a national strategy, Santa Monica, CA: RAND.

5 See Becker, H. J. (1999), Internet Use by Teachers: Conditions of Professional Use and Teacher-Directed Student Use, Irvine, CA: Center for Research on Information Technology and Organizations.

6 See Council of Chief State School Officers (1998), Key State Education Policies on K-12 Education: Standards, Graduation, Assessment, Teacher Licensure, Time and Attendance, Washington, DC: Author.

7 See Bunderson, C. V., Inouye, D. K., & Olsen, J. B. (1989), The four generations of computerized educational measurement, In Linn, R. L. (Ed.), Educational Measurement (3rd ed.), Washington, DC: American Council on Education, pp. 367-407.

8 See Russell, M. (1999) and Russell, M., & Haney, W. (1997).

9 See Russell, M., & Haney, W. (1997).

10 See Russell, M. (1999).

11 See Russell, M. (1999).

12 See Holmes, R. (1999), A gender bias in the MCAS?, MetroWest Town Online.

About the Author

Michael Russell is a Research Associate with the National Board on Educational Testing and Public Policy. His research interests are in the areas of technology and assessment in education.

Walt Haney is a Professor in the Educational Research, Measurement, and Evaluation program in the Lynch School of Education at Boston College.





©2002 NBETPP