Can we stop assessing every damned thing?!

The joint AAAS/NSF/HHMI 'Vision and Change' report has been around a long time, but I still find myself bringing it up with colleagues over and over. In that policy paper, a gaggle of worthies made the claim that our traditional ways of teaching are not serving our students as well as they could. They argue for kicking professors' content-addiction in favor of a 'less-is-more approach,' and dispensing with factoid recitation so that students have plenty of opportunities for knowledge construction. All this is perfectly commensurate with my own philosophy and approach. 'Vision and Change' is specifically about biology education, but of course the same ideas ring true regardless of the disciplinary cave you use for shadow interpretation.

Another aspect of the current push for change in science pedagogy, however, is the exhortation for a data-driven approach. It's a very science-friendly notion, of course, since in science the models we base our actions upon sit at the apex of the eternal hypothesis-experiment-analysis cycle. And I'm all for that. All too often, though, this evidence-impetus is interpreted in two ways that could actually dampen what might otherwise be a surge of teacher interest in Change. First is the notion that we have to prove each tool we employ performs as advertised. Then, and this is the real candle douter, there is the pressure to assess how every classroom action affects the sacred Learning Outcomes.

The first of these (dare I call them?) Barriers to Change is quite sciencey: base your approach on what the peer-reviewed literature says. And I think that's fantastic, as far as it goes. There is a rich literature showing that student-centered teaching practices are very effective (as measured by a variety of tools)… a literature that stretches back decades. In fact, it is this preponderance of evidence that has led us to call them High Impact Practices, or HIPs. Please don't take my word for it, that wouldn't be scientific… the literature is as dense as Venus' atmosphere, otherwise we'd call them "PTMBHI" – Practices That Might Be High Impact. I'm not writing a literature review, but it's very easy to access one. A recent case in point is a great review that a colleague pointed out to me just this morning. The authors make it clear that active learning methods result in better learning, as measured by reasonably standard performance metrics. And on the subject of "impact", that paper has been cited 232 times in the past 18 months! The earliest study it reaches back to is 1983, if we discount Piaget and Vygotsky, who championed active learning a hundred years ago!

Enough exclamation points. The key idea here is that, kind of like climate change, we don't need to keep proving it works to feel comfortable acting on its most important lessons. There's always room for improvement of course, but as V&C itself points out, changing the way professors teach, and the methods institutions value, is the great hurdle to transforming education. If every class project has to become a double-blind controlled experiment, then professors with actual research projects to attend to, and adjuncts who spread 15 units across 3 colleges, will never buy in. Especially important for an institution like mine, where 80% of the student interactions involve part-time faculty, is the need to make smooth the way for professors thinking of introducing active learning. So, instead of investigating active learning, we just need to get on with it.

I think there's a pretty low bar for publication in the science-education literature, which makes it a very attractive avenue for checking one's scholarship box on institutional evaluations; hell, they'll even take my papers. This means folks will spend some effort adding to this copious literature, but to what end? I do think it's very important to share ideas, methods, and activities as broadly as possible. J. Chem. Ed. and BaMBEd are chock full of really innovative experiments and activities, and the number grows every month. They're not all probing some pedagogical hypothesis; they're just spreading the word, improving others' classrooms, and inspiring colleagues to do more. Growing the number of students learning with these tools is terrific. My problem is with running every class project as if it were a controlled (controllable) experiment, and that problem is articulated very well in a much earlier review (this one with over 2,000 citations!): "…claiming that faculty who adopt a specific method will see similar results in their own classrooms is simply not possible…" due to the sheer number of variables involved in a student's learning (not to mention an entire class's). It's easy to chase down the complexities of classroom learning, to see how the solid conclusions found in one instance unwind in a different one: what are the other contexts of a particular lesson? how (or whether) is the lesson supported by reiteration elsewhere in the course? how is students' grasp of the lesson measured?

Reeling it back in to the level of a particular institution, though, I see the second great Barrier to innovation: constant assessment. But I won't write about that here… this post is already too long to get to the top of the pile. I'm sticking with the title, though it should probably be something more like, "We already proved it works, can we stop arguing about it?"
