
Laptops for Learning

 


Evaluation and Research Challenges

Although billions of federal, state, and local dollars are spent annually supporting educational technology across our nation's schools, relatively little research and evaluation has been devoted to objectively measuring the impacts of these investments on teaching and learning. Despite recent federal guidelines earmarking 25%–30% of the total project budget for research and evaluation on all newly funded technology initiatives, few programs actually meet this benchmark (U.S. Department of Education "Ready to Teach" program). For example, Maine's statewide laptop program devoted less than one percent ($250,000) of the initiative's overall $37.2 million budget to research and evaluation during the first four years of the project. Likewise, in Henrico County, Virginia, a similar initiative invested $24.2 million to provide laptop computers to over 23,000 students and teachers, yet during the first four years of the program's implementation no significant research or evaluation efforts were undertaken or funded. Despite the major financial expenditures on hardware, software, training, professional development, peripherals, and maintenance, few high-quality research studies exist that fully address the intricacies of educational technology initiatives and programs. Looking across local, state, and national educational technology expenditures, the current levels of research and evaluation often fail to provide policy makers and educational leaders with adequate research-based evidence to make informed decisions on the efficacy and return on investment of their technology expenditures. Not surprisingly, in today's zeitgeist of educational accountability, the call for empirical, research-based evidence that these massive investments are affecting the lives of teachers and students has only intensified (McNabb, Hawkes, & Rouk, 1999; Bebell, O'Dwyer, Russell, & Hoffman, 2007).

To date, hundreds of studies have sought to examine instructional uses of technology across a wide variety of educational settings. Despite the large number of studies, many researchers and decision makers maintain that the majority of current educational technology research is methodologically weak. Baker and Herman (2000), Waxman, Lin, and Michko (2003), Goldberg, Russell, and Cook (2003), and O'Dwyer, Russell, Bebell, and Seeley (2004) have all suggested that much of the educational technology literature suffers from both theoretical and methodological shortcomings that must be addressed to effectively document and understand the complicated relationship between students, teachers, and the countless ways that educational technology can be integrated into teaching and learning. For example, even something as seemingly simple as measuring teachers' use of technology requires careful consideration. Below is a detailed description of an often overlooked methodological concern of great importance to the Laptops for Learning evaluation: defining and measuring teachers' use of technology.

 

Defining and Measuring Teachers' Technology Use:

In some cases, teachers' use of technology is specific to their use while delivering instruction in the classroom. In other cases, teachers require students to use technology to develop products or to facilitate learning. In still other cases, teachers' use includes emailing, lesson preparation, and record keeping, as well as personal use. Despite the many ways in which teachers may use technology to support their teaching, research on technology often lacks a clear definition of what is meant by teachers' use of technology.

A short historical review of the educational literature reveals that the way "teachers' use of technology" has been defined varies across studies, and that these differing definitions have produced differing results. The very first large-scale investigation of educational technology occurred in 1986, when Congress asked the federal Office of Technology Assessment (OTA) to compile an assessment of technology use in American schools. Through a series of reports (OTA, 1988; 1989; 1995), national patterns of technology integration and use were documented. Nearly a decade later, Congress requested OTA "to revisit the issue of teachers and technology in K–12 schools in depth" (OTA, 1995). In the 1995 OTA report, the authors noted that previous research on teachers' use of technology had employed different definitions of what constituted technology use. In turn, these different definitions led to confusing and sometimes contradictory findings regarding teachers' use of technology. For example, a 1992 International Association for the Evaluation of Educational Achievement (IEA) survey defined a "computer-using teacher" as someone who "sometimes" used computers with students. A year later, Becker (1994) employed a more explicit definition of a computer-using teacher, requiring that at least 90% of the teacher's students use a computer in his or her class in some way during the year. Thus, the IEA defined use of technology in terms of the teacher's use for instructional delivery, while Becker defined use in terms of students' use of technology during class time. Not surprisingly, these two different definitions of a "computer-using teacher" yielded very different impressions of technology use. In 1992, the IEA study classified 75% of U.S. teachers as "computer-using teachers," while Becker's criteria yielded about one third of that figure (approximately 25%) (OTA, 1995). This confusion and inconsistency led the OTA to remark: "Thus, the percentage of teachers classified as computer-using teachers is quite variable and becomes smaller as definitions of use become more stringent" (p. 103).

It is clear, in both theoretical and empirical research, that defining and measuring teachers' use of technology has only increased in complexity as technology has become more advanced, varied, and pervasive in the educational system. Today, several researchers and organizations have developed their own definitions and measures of technology use to examine the extent of technology use and to assess its impact on teaching and learning. Frequently, these instruments collect information on a variety of different types of teachers' technology use and then collapse the data into a single generic "technology use" variable. Unfortunately, the amalgamated measure may be inadequate both for understanding the extent to which technology is being used by teachers and for assessing the impact of technology on learning outcomes.

There is a strong likelihood that the school leaders who rely upon this information for decision-making will interpret findings in a number of different ways. For example, some may interpret one measure of teachers' technology use solely as teachers' use of technology for instructional delivery, while others may view it as a generic measure of a teacher's collected technology skills and uses. While defining technology use as a unitary dimension may simplify analyses, it complicates efforts by researchers and school leaders to understand how teachers are actually using technology and to assess the impact of those uses on teaching and learning.

Recognizing the importance of how technology use is both defined and measured, researchers at Boston College's Technology and Assessment Study Collaborative have applied an approach to measuring teacher technology use that involves examining the specific ways in which teachers make use of technology. In this case, multiple measures (i.e., scales) for the specific ways that teachers use technology are constructed from related survey items. Both Mathews' (1996) and Becker's (1999) research on teachers' technology use demonstrated a new level of refinement in the measurement of specific technology uses. Similarly, in their 2003 study of Massachusetts teachers' technology use, the Boston College researchers used principal component analyses to develop seven separate scales measuring technology use from the surveys of 2,628 classroom teachers participating in the USEiT study. These seven distinct scales include the following:

Teachers' use of technology for class preparation
Teachers' use of technology for delivering instruction
Teachers' use of technology for accommodating lessons
Teacher-directed student use of technology during class time
Teachers' use of technology for assigning student products
Teachers' use of technology for grading
Teachers' use of email
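As a rough illustration of how such scales might be built, the sketch below (in Python) averages related survey items into separate scale scores and uses a principal component analysis as one check that the items within a scale reflect a single underlying dimension. The item names, groupings, and response values are hypothetical examples assumed for illustration; they are not the actual USEiT survey items or data.

```python
# A minimal sketch of constructing separate technology-use scales from
# related survey items. Item names and groupings are hypothetical;
# the actual USEiT items and scale definitions differ.
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical survey responses: one row per teacher, items coded 0-4
# (e.g., "never" ... "several times a week").
surveys = pd.DataFrame({
    "prep_handouts": [4, 3, 4, 1, 2],
    "prep_lessonplans": [3, 4, 4, 0, 2],
    "email_colleagues": [4, 0, 4, 0, 3],
    "email_parents": [3, 0, 4, 1, 2],
})

# Items grouped into hypothetical scales (the USEiT study used seven).
scales = {
    "preparation": ["prep_handouts", "prep_lessonplans"],
    "email": ["email_colleagues", "email_parents"],
}

# Scale score = mean of the related items for each teacher.
scale_scores = pd.DataFrame({
    name: surveys[items].mean(axis=1) for name, items in scales.items()
})

# A principal component analysis on each item group offers one check that
# the items within a scale reflect a single underlying dimension.
for name, items in scales.items():
    pca = PCA(n_components=1)
    pca.fit(surveys[items])
    print(name, "variance explained by first component:",
          round(pca.explained_variance_ratio_[0], 2))

print(scale_scores)
```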

The results of the USEiT study displayed below demonstrate the distribution and mean response for each of the survey items used to form seven categories of teacher technology use.

Figure: Distribution and Mean Values across Seven Distinct Categories of Teacher Technology Use

As seen in the figure above, the number of survey items used to form each category of use ranges from one to five items. Also note that the distribution of responses and mean response varies considerably across the individual items. For example, the distribution of responses for the survey item asking teachers how often they make handouts for students using computers is negatively skewed, with the vast majority of teachers reporting that they do this several times a week or several times a month. While examining teacher responses at the item level is informative and may reveal interesting patterns across items, patterns generally become easier to identify when items that focus on related uses of technology are combined into a single measure.

Further analyses of the seven teacher technology use scales showed that each of the individual scales exhibited widely divergent frequency distributions (Bebell, Russell, & O'Dwyer, 2004). In other words, when looking at how often a sample of teachers uses technology, the range and patterns of teacher responses differed across each use. In statistical terms, one way to describe the distribution of responses is to explore how "skewed" it is relative to a normal distribution (bell curve), in which responses are spread evenly and predictably across a wide spectrum. For example, teachers' use of technology for preparation was strongly negatively skewed, while the same teachers' use of technology for instruction was strongly positively skewed. Like instructional use, the distributions for assigning student products and for accommodation were positively skewed. Using technology for grading also had a weak positive skew, while teacher-directed student use was relatively normally distributed. Use of email, however, presented a bi-modal distribution, with a large percentage of teachers reporting frequent use and a large portion of the sample reporting no use. When all of the survey items comprising these scales are summed to create a generic composite measure of technology use, the resulting distribution closely approximates a normal distribution, revealing none of the patterns observed in the specific technology use scales. Thus, it is critically important that multiple measures/scales be used to represent the wide variety of teachers' technology uses.
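As a rough numerical illustration of this point, the sketch below simulates a few scale scores with the general shapes described above and compares their skewness with that of a summed composite. All of the distribution parameters and values are assumptions made for illustration only; they are not the USEiT responses.

```python
# A minimal sketch comparing the skewness of individual technology-use
# scales with the skewness of a generic composite. The data are simulated
# and purely illustrative.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n = 1000

# Simulated scale scores on a 0-4 frequency metric:
# preparation clustered near the top (negative skew),
# instruction clustered near the bottom (positive skew),
# email roughly bimodal (many non-users, many frequent users).
preparation = np.clip(rng.normal(3.4, 0.8, n), 0, 4)
instruction = np.clip(rng.normal(0.8, 0.9, n), 0, 4)
email = np.concatenate([np.zeros(n // 2), np.full(n - n // 2, 3.5)])

# A generic "technology use" composite that sums everything together.
composite = preparation + instruction + email

for name, values in [("preparation", preparation),
                     ("instruction", instruction),
                     ("email", email),
                     ("composite", composite)]:
    print(f"{name:12s} skew = {skew(values):+.2f}")
```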

When compared to a single generic measure of technology use, multiple measures of specific technology use also offer a more nuanced understanding of how teachers are using technology and how these uses vary among teachers. By developing separate measures of teachers' technology use, the authors are not implying that each individual measure is unrelated to the other technology use measures. Indeed, it would be reasonable to assume that all of the measures are related to each other to some degree. The strength of the relationships among the seven technology uses from the USEiT study was examined via Pearson correlation coefficients, which are presented in the table below.

Table: Correlation Table of the Seven Specific Teacher Technology Use Measures

The correlation table above shows that the relationships among the seven teacher technology use measures are all positive, but generally weak to moderate. The positive inter-correlations suggest that teachers who use technology for one purpose are, on average, likely to use technology for other purposes. However, the weak to moderate correlations suggest that there is considerable variation between the extent to which teachers use technology for one purpose and the extent to which they use technology for another. These relatively weak to moderate correlations provide evidence that a) each measure represents a separate and distinct category of technology use, and b) the frequency and distribution of technology use varies considerably across the seven measures. Research studies that have utilized this multi-faceted approach to measuring technology use have revealed many illuminative patterns that were obscured when only general measures of use were employed (Mathews, 1996; Ravitz, Wong, & Becker, 1999; Bebell, Russell, & O'Dwyer, 2004). For example, the analysis of the USEiT teacher data indicated that the frequency of teachers' technology use for instruction and for accommodating lessons was unrelated to the frequency with which they asked students to use technology during class time. Similarly, teachers' use of technology for grading operated independently of their use of technology for lesson preparation (ibid.).
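For readers who wish to produce this kind of table from their own survey data, the sketch below computes a Pearson correlation matrix across several simulated scale scores using pandas. The scale names, the shared "overall use" factor, and all numeric values are illustrative assumptions, not the USEiT results.

```python
# A minimal sketch of computing a Pearson correlation table across
# several technology-use scales. The scale scores are simulated and
# illustrative only; they do not reproduce the USEiT findings.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500

# A weak common "overall use" factor induces modest positive correlations,
# echoing the general pattern described above.
overall = rng.normal(0, 0.5, n)

scores = pd.DataFrame({
    "preparation": 3.0 + overall + rng.normal(0, 0.8, n),
    "instruction": 1.0 + overall + rng.normal(0, 0.9, n),
    "grading":     2.0 + overall + rng.normal(0, 1.0, n),
    "email":       2.5 + overall + rng.normal(0, 1.2, n),
})

# Pandas computes Pearson correlations by default.
correlations = scores.corr(method="pearson")
print(correlations.round(2))
```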

To summarize, how technology use is defined and measured (if measured at all) plays a substantial, but often overlooked, role in educational technology research. For example, Wenglinsky (1998) employed two measures of technology use in a national study on the effects of educational technology on student learning. The first measure focused specifically on the use of technology for simulation and higher-order problem solving and found a positive relationship between use and achievement. The second measure employed a broader definition of technology use and found a negative relationship between use and achievement. Thus, depending on how one measures use, the relationship between technology use and achievement can appear quite different. Such differences may account for some of the complexity policy makers and educational leaders confront when interpreting educational technology research.

 

 

© Boston College. All rights reserved. inTASC is affiliated with the Center for the Study of Testing, Evaluation and Educational Policy (CSTEEP) in the Lynch School of Education. Email us at inTASC@bc.edu.