In his critique of my recent Slate article about the problem of overusing technology in math education, Paul J. Karafiol begins by setting up two straw men: that I believe American math students do worse today than in the past, and that American students lag behind those elsewhere.
Let’s take the longitudinal question first. Exams and grading standards change over the years, so it’s difficult to make a meaningful comparison between today’s calculus exam and 1997’s. There are better and worse ways to attempt it, but to my mind the debate isn’t worth much either way. The question is how well we are doing compared with how well we could be doing, not how we compare with the past. Karafiol’s cheery gloss on this comparison is as beside the point as the doom-auguring of those who brandish other numbers.
Along those lines, comparing the United States with Singapore or Finland or South Korea never struck me as consequential either. They are all much smaller countries, with very different cultures, and we are not, in any case, competing with them, for reasons I’ve gone into elsewhere. I just don’t think you learn much by comparing our average test scores with theirs. Besides, parts of the United States, like Massachusetts, tend to do well on international comparisons, while others, like Mississippi, do badly. The reasons for this have little to do with curricular differences and everything to do with demographic fault lines well beyond the scope of the present debate.
As for Karafiol’s parable in which his calculus students’ calculators do the “heavy lifting,” it proves only that his students have good calculators, not that they learned anything. It reminds me of this Tom Toles cartoon.
Karafiol makes the point that technology is neither inherently good nor bad—how we use it determines its value. This is true enough, in the trivial sense that it isn’t guns but rather people who kill people. The question is the effect of technological tools as they are used in practice in American classrooms today. And though different tools—graphing calculators, interactive whiteboards, a slew of software packages—are of course different, they share an underlying commonality I try to get at in the piece. Could any one of them in principle be used in a virtuous way? Sure. (I speak of software generally, not of particular packages, some of which are asinine to the bone.) But the relevant question is how such technologies are in fact used, just as the relevant question with regard to guns is not how they might be used in principle, but the violence that they, in the real world, enable.
Others criticized the piece for neglecting a body of empirical evidence that “proves” the efficacy of some technology or other.
There wasn’t room in the piece for a comprehensive list of the many empirical studies from the education literature that I read in the course of researching the article, nor will I compile one now. What I will do is make an observation.
The standard procedure is to take two sets of students, a control group and a treatment group. Use your technology with the treatment group but not the control group. Find a difference in performance greater than could have arisen from chance. Pronounce the technology effective.
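To make the recipe concrete, here is a minimal sketch in Python of the kind of significance test such studies typically rest on—Welch’s two-sample t statistic. The exam scores are invented for illustration and drawn from no real study.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (uses sample variance, i.e. the n-1 denominator)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical exam scores, invented purely for illustration.
control = [68, 72, 75, 70, 66, 74, 71, 69]
treatment = [74, 78, 73, 80, 76, 79, 75, 77]

t = welch_t(treatment, control)
print(f"t = {t:.2f}")  # t = 4.27

# A large |t| yields a small p-value, and the technology is
# "pronounced effective" -- precisely the leap at issue here.
```

The statistic is mechanical; nothing in it guards against the confounds and researcher bias discussed below.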
But the fact that a result is statistically significant in this mechanistic sense does not mean that it is meaningful. The number of confounding factors is large, and the potential for researcher bias is profound. I pointed out a couple of obvious examples of such bias in the piece; there are many more. This is why clinical trials in medicine are closely regulated by the FDA and, when possible, done in double-blind fashion. It’s nearly impossible to do a double-blind study of different pedagogical techniques; the methodological poverty of education research is by no means confined to research on the use of technology.
Read more: http://www.slate.com/blogs/future_tense/2012/06/29/math_education_technology_does_not_promote_real_understanding_.html