Tuesday, January 31, 2017

Pre-Tests as Student Success Tools

I always thought of pre-testing as something you do before working on a unit of content, later followed up with a post-test. A comparison of pre-test and post-test results can then be used as part of the course assessment to find out what, if any, learning has happened. But that's no longer my first thought when I hear the term "pre-test."

Several years ago, I ran across a news item referring to a piece of learning science research that described another use for pre-testing: students who took a pre-test did better than students who did not take one. It seems that just the process of pre-testing primes student learning in a way that has a demonstrable and significant effect on student success.

As with any new way of doing things that I discover, I had to let it percolate in the back of my mind for a while. First, did I really believe it? Further research showed me that this was not a one-off experiment; it's been tested in both the lab and the field with similar outcomes. Next, would it work in my courses? If so, how would I implement it? And the all-important question: would I have time to implement it? Would it then add extra effort and time to my workload every semester, in perpetuity?

Well, I finally jumped in and tried it. I figured it could do no harm. And I found a way to do it without much effort—either in initially wedging it into my course or in maintaining it across all future courses.

I'd already been using frequent online tests, each allowing multiple attempts, as a way for my students to prepare for their written exams. Each online test has a test bank of many more items than appear on any one attempt. Each test item is pulled from a group of items relating to the same learning outcome, so tests end up being different in every attempt. By using just a handful of items in each group, the odds quickly become astronomical that a student will get the same test twice—or get the same test as any other student in the course. Sort of like the classic type of slot machine.
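To make the "astronomical odds" concrete, here's a quick back-of-the-envelope sketch. The numbers below are made up for illustration (they aren't my actual bank sizes); the point is simply how fast the possibilities multiply when each item is drawn at random from its own group.

```python
from math import prod

# Hypothetical blueprint (illustration only): a 20-item test in which each
# item is drawn at random from a group of 5 interchangeable items that all
# target the same learning outcome.
group_sizes = [5] * 20

# Every combination of draws is a different possible "version" of the test.
distinct_versions = prod(group_sizes)              # 5 ** 20
print(f"{distinct_versions:,} possible versions")  # 95,367,431,640,625

# So the chance that any one attempt exactly matches another is about
# 1 in 95 trillion -- astronomical indeed.
```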

My existing online tests were already cumulative. They included test item groups from previous tests, so that students have continuing practice with concepts introduced throughout the course, as explained in my recent article, Cumulative Testing Enhances Learning.

To make the pre-tests, I simply went into my online test editor (Respondus) and made a copy of each online test. Then I removed the cumulative item groups from each copy, leaving only the item groups that pertain to that particular unit of study (an easy and quick job in the test editor). I saved those as pre-tests, uploaded them to my learning management system (LMS), and set them up for ONE attempt only (not the usual three possible attempts).

What?! Using the same test items as their "real" test? Isn't that just like handing them a list of answers? Glad you asked! Remember, the odds of anyone ever getting "the same test" again (or the same test as anyone else) are astronomically low. What they get in the pre-test is a "version" of the real test, but not the actual test that individual will end up taking later.

I then set up my LMS so that each pre-test opens about halfway through the preceding unit. Students can go into the LMS before their next unit to take the pre-test for the upcoming unit. I also set things up so that students do not have access to the course resources they need to get through the unit until after they submit the pre-test. For example, I use online Previews as part of my sorta-flipped format (I called it a half-flip with a quarter turn). If they don't take the pre-test first, then neither the Previews, nor anything else, will open up for them.
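For anyone who wants to see how those conditions fit together, the underlying logic boils down to something like the sketch below. This is a generic illustration, not any particular LMS's adaptive-release settings or API.

```python
from dataclasses import dataclass


@dataclass
class UnitAccess:
    """Generic model of the release rules; not a real LMS API."""
    pretest_open: bool = False       # opens about halfway through the prior unit
    pretest_submitted: bool = False  # flips to True when the student submits it

    def can_open(self, resource: str) -> bool:
        if resource == "pre-test":
            return self.pretest_open
        # Previews and every other unit resource stay locked until the
        # pre-test has been submitted.
        return self.pretest_submitted


unit = UnitAccess(pretest_open=True)
print(unit.can_open("Previews"))   # False -- still locked
unit.pretest_submitted = True
print(unit.can_open("Previews"))   # True -- everything opens up
```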

Because the "locks" that unlock the other course resources give students incentive to do the pretests, I didn't need to assign grade points or deduct grade points if they didn't do it. My pre-tests do not impact the grade directly—only indirectly by enhancing student learning.

I have been amazed at the results of that simple step. I noticed that class performance increased right away—and there has been no downward trend since then. So I guess I shoulda believed the research data when I first saw it, eh? 

I think there are several things going on. For one, pre-tests give students an overview of what they'll be expected to solve at the end of the unit. And they'll get a chance to use what they know already to predict what might be a correct answer—with immediate feedback on where they predicted incorrectly. This prediction exercise can be a powerful learning strategy.

Also, as they then struggle through the unit, they have in mind what they need to master if they're going to have a chance of passing the test. 

Along the same lines, they gain some familiarity with the upcoming material. They'll have "seen this all before" even if they don't fully understand it. As we go through it all after the pre-test, students will have already walked through that neighborhood, so it's not so unfamiliar to them.

I think pre-testing also shakes loose some prior learning. That is, students will recognize some basic principles and some patterns that they've seen before. I suggest that this stimulates their awareness of how things connect and thus gets them better prepared for their new learning.

One last thing I want to mention: I now have another assessment tool that I can use to compare before-and-after data and get a sense of what my students have accomplished. Even better, I suppose, is that I added a column to their LMS gradebook that calculates their "gain": the percentage by which they improved between the pre-test and the real test (post-test). Seeing that they learned a lot in each unit is a real motivator for continued hard work in the course.
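For what it's worth, the gradebook "gain" column is just arithmetic. Here's one way to read that calculation, shown as a small sketch; the function name and the example scores are made up, and other formulas (such as a normalized gain) would work just as well.

```python
def percent_gain(pre_score: float, post_score: float) -> float:
    """Relative improvement from pre-test to real test, as a percentage.

    Assumes scores on a 0-100 scale and a nonzero pre-test score.
    """
    return (post_score - pre_score) / pre_score * 100


# Example (made-up scores): 45% on the pre-test, 88% on the real test.
print(round(percent_gain(45, 88), 1))   # 95.6 -- nearly doubled their score
```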

As with anything, there are potential pitfalls. The one that I didn't anticipate, but should have, is that the super-high-achievers will NOT want a "bad grade" anywhere in their gradebook, even though students are EXPECTED TO FAIL the pre-test and it isn't part of the course grade calculation. It's a mindset; it doesn't have to make sense.

So I would have these high-achievers in my office the day after they took their pre-test, wanting me to go over every item with them. I'll bet their blood pressure wasn't normal that day, either, and probably even worse the night before. The solution I found, which is not 100% effective, is to repeatedly remind students that they are expected to do poorly on the pre-tests, and that they should not struggle with them: just think a moment, give their best guess, and move on.

Like any unfamiliar teaching strategy, pre-testing works best if you tell students how they will benefit.

If any of you have experiences with pre-testing that you'd like to share, please comment at the blog site, in the form below this article.

Want to know more?


Testing in A&P Courses
  • Kevin Patton. The A&P Professor. Collection (various dates).
  • An assortment of brief articles on methods and issues regarding testing in the undergraduate A&P course.
  • http://my-ap.us/2kyiffC

Cumulative Testing Enhances Learning
  • Kevin Patton. The A&P Professor. 5 Sep 2016.
  • Article on how cumulative testing can be used to promote long-term learning in A&P courses.
  • http://my-ap.us/2kydGBW

The pretesting effect: Do unsuccessful retrieval attempts enhance learning?
  • Richland, Lindsey E.; Kornell, Nate; Kao, Liche Sean. Journal of Experimental Psychology: Applied, Vol 15(3), Sep 2009, 243-257. 
  • Research article describing experiments in pre-testing.
  • http://dx.doi.org/10.1037/a0016496

Small Teaching: Everyday Lessons from the Science of Learning
  • James M. Lang. Jossey-Bass, San Francisco. 2016.
  • If you don't read anything else on teaching-learning this year, at least read this. Lang's clear writing, chunked into small chapters, reviews some of the major contemporary insights with practical "small" things you can do in your class to improve learning. Part I, Chapter 2, discusses pretesting.
  • http://my-ap.us/2knofdh


