A simple model for assessing your online orientation program

[Figure: Kirkpatrick's model of evaluation, depicting the four levels in pyramid formation]

If we’re going to dedicate time and energy to creating and maintaining an online orientation program, we’re going to want evidence that the program is having a positive effect on students. Kirkpatrick’s model of evaluation, a popular approach to evaluating training programs, offers an effective way to think through how you want to assess and evaluate your program. The model defines four levels of evaluation:

  • Level 1: Reaction
  • Level 2: Learning
  • Level 3: Behaviour
  • Level 4: Results

Your assessment and evaluation strategy may encompass all four levels, or just one, so you should decide at the beginning of the project which levels you’re interested in evaluating. As you move from levels 1 through 4, evaluation techniques become increasingly complex, data becomes harder to collect, and it’s more difficult to be certain that the findings of the evaluation are attributable to the training course, and not confounded by other variables. Across most organizations, evaluation at the lower levels of the model happens more frequently than evaluation at the higher levels, and this generally holds true for our work in student affairs as well.

Kirkpatrick’s model of evaluation

Level 1: Reaction

The first level of Kirkpatrick’s model measures students’ reactions to the course: how they feel about the content, the delivery method, the instructor (if applicable), and the program overall. Essentially, at this level, you are trying to answer the question, Did they like the program?

Level 1 evaluation is typically done via surveys, although it can also be done via focus groups or individual interviews. In UVic’s Pre-Arrival Program, we included an evaluation at the end of the program that asked students how helpful they found each topic (on a 4-point scale) and how helpful they found the program overall, and gave them space to tell us their favourite part and what they thought could be improved. Students had to submit this evaluation to complete the program.
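
If your survey tool can export responses as a spreadsheet, tabulating this kind of Level 1 feedback takes only a few lines of scripting. Here is a minimal sketch in Python, assuming a CSV export with one 1-4 rating column per topic (the file name and column names are invented for illustration, not our actual export):

```python
# Average the per-topic helpfulness ratings from an end-of-program survey.
# Assumes a CSV with one row per student and one 1-4 rating column per topic;
# the file and column names are hypothetical.
import csv
from statistics import mean

TOPIC_COLUMNS = ["helpful_registration", "helpful_housing", "helpful_study_skills"]

ratings = {topic: [] for topic in TOPIC_COLUMNS}
with open("end_of_program_survey.csv", newline="") as f:
    for row in csv.DictReader(f):
        for topic in TOPIC_COLUMNS:
            if row[topic]:  # skip students who left a question blank
                ratings[topic].append(int(row[topic]))

for topic, values in ratings.items():
    print(f"{topic}: mean {mean(values):.2f} / 4 (n={len(values)})")
```

Sorting topics by their mean rating is a quick way to spot which parts of the program students found least helpful.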

We also send a follow-up survey 3-4 weeks into the term. This survey repeats the questions from the end-of-program survey, asking students to re-evaluate how helpful they think the Pre-Arrival Program was now that they have actually experienced university life. It goes to all new students, which allows us to get feedback from students who completed the entire program, students who completed only parts of it, and students who did not access it at all.

Level 2: Learning

The second level of Kirkpatrick’s model evaluates what the student has learned, and is measured by changes in their abilities, including knowledge, skills, and attitudes. Essentially, this level asks, Did they meet our learning outcomes?

Level 2 evaluation is often done using a pre-post test, or using a comparison group. In UVic’s Pre-Arrival Program, we chose to use a pre-post test. Before a student can access the 8 different topics within the program, they must complete a pre-test, where they are asked to evaluate, on a 4-point scale, how prepared they feel to ________. They are asked one question that corresponds to each of the program’s overarching goals, and one question related to each of the 8 topics included in the program. Once a student has completed all 8 topics, they gain access to the post-test, where they are again asked to evaluate how prepared they feel to _______. While this method does mean we are relying on self-reported learning, it allows us to determine whether there have been any shifts in their perceived preparedness to start at UVic as a result of completing the program.
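
Once both tests are exported, scoring the shift is straightforward if you can pair each student's two responses by ID. Here is a rough sketch of that comparison for a single question, in Python; the file and column names are hypothetical, not our actual setup:

```python
# Compare pre- and post-test self-ratings on one question, paired by student.
# Assumes two CSVs that each have a student_id column plus one 1-4 rating
# column per question; all names here are hypothetical.
import csv
from statistics import mean

QUESTION = "prepared_to_register"  # one "how prepared do you feel to ..." item

def load(path):
    with open(path, newline="") as f:
        return {row["student_id"]: int(row[QUESTION])
                for row in csv.DictReader(f) if row[QUESTION]}

pre, post = load("pre_test.csv"), load("post_test.csv")
paired = [(pre[sid], post[sid]) for sid in pre.keys() & post.keys()]

shifts = [after - before for before, after in paired]
print(f"n = {len(paired)} students completed both tests")
print(f"mean shift: {mean(shifts):+.2f} points on the 4-point scale")
print(f"up: {sum(s > 0 for s in shifts)}, "
      f"same: {sum(s == 0 for s in shifts)}, "
      f"down: {sum(s < 0 for s in shifts)}")
```

Pairing by student, rather than comparing the two group averages, keeps students who only completed one of the two tests from skewing the result.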

Often, online orientation programs try to evaluate learning through end-of-module quizzes. However, while these quizzes can confirm that a student knows the information we want them to know, they do not necessarily show that the student learned it through the program, as we don’t know what their prior knowledge was. Additionally, I have often found that these quizzes are passable even without engaging with the content; the questions are often multiple-choice, and the correct answer can be intuitive or easily guessable based on the options provided. Be strategic with this option!
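
The arithmetic behind that caution is worth seeing. Even with pure random guessing, with no intuition at all, a short multiple-choice quiz is surprisingly passable, and unlimited retries make passing a near-certainty. A quick calculation, with made-up quiz parameters:

```python
# Probability of passing a multiple-choice quiz by random guessing alone,
# assuming independent questions. The quiz parameters are illustrative.
from math import comb

n_questions, n_options, pass_mark = 5, 4, 3  # pass with 3 of 5 correct
p = 1 / n_options                            # chance of guessing one question

p_pass = sum(comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
             for k in range(pass_mark, n_questions + 1))
print(f"P(pass on one guessing attempt) = {p_pass:.1%}")               # ~10.4%
print(f"P(pass within 5 retries)        = {1 - (1 - p_pass)**5:.1%}")  # ~42.1%
```

And that assumes zero intuition; if some wrong options are easy to rule out, these numbers climb quickly.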

Level 3: Behaviour

The third level of Kirkpatrick’s model measures changes in a student’s behaviour after they have completed the training, attempting to determine if they are applying what they learned to their everyday lives. Essentially, this level asks, Did they implement the things they learned? 

Level 3 evaluation can be more difficult to carry out, as it involves observing behaviour. Are students using the study strategies you recommended? Are they making the correct decisions when it comes to situations that involve academic integrity? Are they accessing the resources and services you introduced? We obviously can’t follow our students around to make these determinations, so we need to look to either self-reported information or institutional data. Neither of these strategies is perfect, but they can help us understand what is not working within our program.
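
Where institutional data is available, a basic Level 3 check can be as simple as a set intersection: did the students who completed the program go on to use a service it introduced? Here is a sketch in Python; the advising example and all file and column names are hypothetical:

```python
# Level 3 sketch: did program completers go on to use a service the program
# introduced? Joins a completion export with (hypothetical) appointment records.
import csv

with open("lms_completion_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))
completers = {r["student_id"] for r in rows if r["completed"] == "yes"}
others = {r["student_id"] for r in rows if r["completed"] != "yes"}

with open("advising_appointments.csv", newline="") as f:
    visited = {row["student_id"] for row in csv.DictReader(f)}

for label, group in [("completers", completers), ("non-completers", others)]:
    rate = len(group & visited) / len(group)
    print(f"{label}: {rate:.0%} booked an advising appointment")
```

Comparing completers against non-completers at least tells you whether the behaviour differs between the two groups, even if it can't tell you why.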

With UVic’s Pre-Arrival Program, we don’t evaluate too thoroughly at level 3, but we do make an attempt. Within the program, students are required to complete a Think Forward activity at the end of each topic. These activities ask students to apply what they have learned and set goals and intentions for the term (e.g., choose 3 study strategies from a list that they would like to try, set 2 health and wellness goals, etc.). In our follow-up survey, sent 3-4 weeks into the term, we ask students whether they have followed through on their intentions. It’s not a perfect evaluation metric (especially since answering the question requires them to remember the intentions they set), but it does give interesting data!

Level 4: Results

The fourth level of Kirkpatrick’s model looks at the impact the training program has on business outcomes or, in the case of an online orientation program, institutional metrics such as retention and grade point average (GPA). Essentially, this level asks, Did the program have an impact on the metrics that matter most?

In some ways, level 4 evaluation is not that difficult to carry out. You simply need access to institutional data to see how the students who completed the program performed. The difficulty comes in ensuring that the result you found is actually a result of your program, and not of a myriad of other factors. At first glance, comparing the GPAs or retention rates of students who completed the program with those who did not might indicate that completing the program was helpful. However, it’s also possible that the students who completed the program were more likely to succeed in the first place, or that many of them also participated in a different intervention that helped them be more successful.
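
One partial guard against this selection problem is to compare like with like: break the comparison down by something you knew about students before the program, such as their admission average. Here is a hedged sketch in Python (the column names and the 85% band cut-off are invented; even stratified like this, the result shows association, not causation):

```python
# Level 4 sketch: mean first-term GPA for completers vs. non-completers,
# within admission-average bands to partially account for prior achievement.
# All column names and the banding are hypothetical.
import csv
from collections import defaultdict
from statistics import mean

groups = defaultdict(list)  # (completed, admit_band) -> list of GPAs
with open("institutional_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        completed = row["completed_program"] == "yes"
        band = "high" if float(row["admission_average"]) >= 85 else "standard"
        groups[(completed, band)].append(float(row["first_term_gpa"]))

for (completed, band), gpas in sorted(groups.items()):
    label = "completers" if completed else "non-completers"
    print(f"{band} admit average, {label}: mean GPA {mean(gpas):.2f} (n={len(gpas)})")
```

If completers outperform non-completers within every band, that is somewhat stronger evidence than the raw comparison, though other confounders can still be at work.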

Evaluating the program at level 4 helps you ensure that your program is addressing the right problems. Maybe a goal of your program was to help students get better grades, so you taught students a number of different study strategies. Level 3 evaluation will help you understand whether they implemented those strategies, but level 4 will show you whether implementing those strategies actually helped students do better in their classes. If it didn’t, you may need to revise your program!

Adding one more metric

This isn’t covered in Kirkpatrick’s model, but with an online orientation program, there’s one more metric that’s important to measure: completion rates. I like to call this level 0. The question asked at this level is very simple: Did students complete the program?

Most learning management systems, if set up properly, can track completion at the activity, topic, and program levels. Tracking completion allows you to intuit students’ reactions to the course to some degree before you even get to level 1. If students aren’t completing the course, or are skipping one topic at a higher rate than others, then something likely needs to change. Comparing completion rates between different groups of students (e.g., by gender, faculty, admit status, citizenship, etc.) can also be useful.
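
Most of this tracking can happen inside the LMS itself, but once the data is exported, comparing completion rates across groups is a short script. A sketch assuming a CSV export (the column names and "yes"/"no" coding are hypothetical):

```python
# Completion rates by faculty from an LMS completion export.
# Assumes a CSV with student_id, faculty, and completed ("yes"/"no") columns;
# the export format is hypothetical.
import csv
from collections import Counter

completed, total = Counter(), Counter()
with open("lms_completion_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        total[row["faculty"]] += 1
        if row["completed"] == "yes":
            completed[row["faculty"]] += 1

for faculty in sorted(total):
    rate = completed[faculty] / total[faculty]
    print(f"{faculty}: {rate:.0%} completed ({completed[faculty]}/{total[faculty]})")
```

Swapping "faculty" for any other grouping column (gender, admit status, citizenship) gives the same breakdown for that group.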

Assessment and evaluation are not necessarily quick or easy, but they can help you ensure your programs are useful, impactful, and appreciated by your students!


References

Kirkpatrick, D. (2006). Four levels of evaluation. Association for Talent Development.

Strother, J. B. (2002). An assessment of the effectiveness of e-learning in corporate training programs. International Review of Research in Open and Distance Learning, 3(1).
