Low-tech tests shortchange high-tech students
Technology and testing are two popular prescriptions for improving education. Teaching and learning with computers prepares students for an increasingly technological workplace. And it's believed that using standardized tests to rate schools and students provides accountability and incentives for improvement.
But what is little recognized is that these two strategies are working against each other in a sort of educational time warp.
Students are generally tested in the traditional way - with pencil and paper. But children have become so acculturated to working on computers that when it comes time to pick up a pencil, they're in unfamiliar terrain. It's like asking mathematicians to abandon calculators and revert to slide rules as a way of assessing their skills.
Two studies we have conducted since 1995 show that handwritten tests severely underestimate the performance of students accustomed to working on computers.
The number of students using computers in school has increased so dramatically that, as a recent national survey by the University of California, Irvine, showed, 50 percent of K-12 teachers have students use word processors, and 29 percent have students use the Internet.
Meanwhile, students, teachers, and schools are increasingly held "accountable" for student learning as gauged by test results. At least 45 states have implemented statewide accountability tests. Most of these tests include portions in which students write their answers or explain their work. Last year alone, an estimated 10 million students were asked to write responses longhand on state-mandated tests.
Together, these developments present a little-recognized hazard in using tests for high-stakes decisions: Paper-and-pencil tests underestimate the capabilities of technology-savvy students.
Our research on this topic began with a puzzle. Teachers at the Accelerated Learning Laboratory, a high-tech school in Worcester, Mass., were surprised to see their students' writing scores drop, even though increased computer availability had inspired them all to write more often.
To help solve the puzzle, we conducted an experiment comparing paper and computer administration of tests. In our first study, published in 1997, students were randomly assigned to take a wide range of multiple-choice, short-answer, and essay tests in reading, math, science, and social studies: one group used paper and pencil; the other took the same tests on computer, using the keyboard to type answers (but without access to spelling or grammar checkers).
Before scoring, answers written by hand were typed so that raters couldn't tell how answers were originally produced.
For students accustomed to writing on computer, responses composed on computer were substantially better than those written by hand. The effect was so large that when students wrote on paper, only 30 percent performed at a passing level, but when students wrote on computer, 67 percent passed.
Our follow-up study, published last month in the online journal Educational Policy Analysis Archives, used a broader sample of students and open-ended items from state and national tests. This study confirmed the large differences on some writing tests. For students who could keyboard 20 words per minute or more, performance on computers was substantially better. But the slower a student typed, the less benefit there was to testing by computer.
For students with poor typing skills, taking the test by computer had a negative impact.
The effects were substantial. For the average student accustomed to working on a computer, testing by computer could easily raise test scores from "needs improvement" to "proficient" on the new Massachusetts state test.
Recall that 10 million students took state-mandated handwritten tests last year, and that half of US teachers have students use computers. Assuming half of the students using computers have moderate typing skills, our results suggest state paper-and-pencil tests may be underestimating the abilities of 2 million to 3 million students annually. Low-tech tests may be shortchanging high-tech students. The gap between technology and testing strategies is likely to widen as more students learn to write on computers.
How can this mismatch be rectified?
Schools could decrease use of computers so students don't become accustomed to writing on them. While this may bolster low-tech test scores, it does little to prepare students for the realities of our world.
On the other hand, schools could replace paper-and-pencil written tests with computer versions. Although this seems sensible, the lack of technology infrastructure in schools makes it infeasible for large-scale testing in the foreseeable future.
Perhaps the most reasonable short-term approach is to recognize the shortcomings of current testing. Without question, both technology and testing have the potential to improve the quality of education.
However, until it is possible for students to take tests in the same medium in which they work and learn, we must recognize that scores from high-stakes state tests are not an accurate gauge of some students' capabilities. This doesn't render scores useless, but it raises a red flag about the danger of making decisions based solely on test scores.
*Walt Haney is professor of education and senior research associate at the Center for the Study of Testing, Evaluation, and Educational Policy at the Boston College School of Education. Mike Russell is a research associate at the center.