After reading a dozen or so "research summaries" of the effectiveness of various popular technologies marketed to schools, the familiar experimental design pattern emerges (feel free to recite it along with me): two groups are formed, one with the technology and the other without to serve as the control, and so on. Unfortunately, that's about as close as these "research summaries" get to being classified as research. Too often, they lack rigor in their design, fail to control for bias, skimp on analysis, and jump to the conclusion of a causal relationship in their results.
One of the biggest problems, and one that seems to be ignored entirely by these "research summaries", is the self-fulfilling prophecy bias: if we want to use a technology because we believe it will improve student achievement, that expectation will influence our ability to conduct the experiment objectively. Ironically, this bias has been extensively studied in education (Rosenthal & Jacobson, 1992), but it seems to be conveniently omitted when it comes to these "research summaries".
While I haven't conducted a full literature review myself to confirm this, I wouldn't be surprised if this hasn't yet been studied carefully in the field of educational technology. (Dear reader, if you know of any studies of this, please comment and leave the reference below. Thank you!) I would be very interested in seeing whether an experimental design with a placebo and a control has ever been employed to measure the intensity of this bias with respect to technology.
Assuming it hasn't been done yet, I imagine it would look something like this: two identical groups are formed, controlling for teacher experience, technology literacy, instructional styles, and content areas. In both groups, a software update is installed on their classroom technology. The teachers in the experimental (placebo) group are informed that this update will make the technology more effective at improving student achievement. The other group is simply told that a software update has been installed.
These instructions would be delivered double-blind (something else missing from the "research summaries"): neither group would know which condition it was in, nor would the experimenters conducting the study (i.e., the interns would collect the data from each group without knowing the design of the experiment).
My hypothesis is that this experimental design would find effects in the placebo group (due to the Pygmalion effect, as Rosenthal & Jacobson describe it) just as significant as those the "research summaries" attribute to the technology itself.
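To make the hypothesis concrete, here is a minimal simulation sketch of the scenario above. Everything in it is assumed for illustration: the score gains are invented, and the expectation-driven "Pygmalion bump" of one standard deviation is deliberately large so the point is visible. The idea is that even though the software update does nothing in either group, a standard significance test happily reports a "significant" effect driven purely by expectation.

```python
import random
import statistics

def permutation_test(a, b, n_perm=5000, rng=None):
    """Two-sided permutation test on the difference in group means.

    Returns the observed mean difference and an estimated p-value.
    """
    rng = rng or random.Random()
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(perm_diff) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)

rng = random.Random(42)

# Hypothetical normalized score gains. The software does nothing in either
# group; the placebo group gets only an expectation-driven bump whose size
# (1.0 standard deviation) is invented purely for illustration.
control = [rng.gauss(0.0, 1.0) for _ in range(100)]
placebo = [rng.gauss(1.0, 1.0) for _ in range(100)]

diff, p = permutation_test(placebo, control, rng=rng)
print(f"mean difference = {diff:.2f}, p = {p:.4f}")
```

A naive analysis of this data would conclude the "update" worked; only the placebo design reveals that the entire effect came from what the teachers were told.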
Given that literally billions of dollars from the federal stimulus (ARRA) are being directed to fund educational technology that has "research based results", one of two things needs to change: either the analysis of effectiveness needs to be held to a higher standard (perhaps the same standards and requirements as publishing in an academic journal), or the requirement for "research based results" needs to be removed. Superficially pretending it's one thing when it's not is just a disservice to ourselves and our kids.
Rosenthal, R., & Jacobson, L. (1992). Pygmalion in the classroom (expanded ed.). New York: Irvington.