
How to Conduct an Instructional Design Evaluation

Continu Team
January 26, 2024

Dive into the steps and best practices for evaluating instructional design, ensuring effective learning experiences and outcomes.


You’ve spent weeks researching, planning, and designing your training. You’ve edited and reworked the instructional materials countless times. Finally, after what seems like years of hard work, you’ve created a training course that you think is as effective and engaging as it can possibly be.

But how do you know it’s as good as you think it is?

Instructional design evaluation will help you find out. Learning and development is a high-stakes activity—developing and implementing trainings can cost tens of thousands of dollars for large organizations.

What is an instructional design evaluation?

An instructional design evaluation is the process of determining whether a training program meets its intended goal. In addition, evaluating the course helps determine whether learners can transfer the skills and knowledge learned into real-world job performance.

And if those trainings aren’t having measurable real-world effects, executives aren’t going to be happy. Trainers, human resources managers, and instructional designers need to be able to show that the trainings are working.

Effective instructional design evaluation uncovers evidence to prove the training’s value. Or, if the value is unexpectedly low, it shows where to make improvements and raise the ROI.

What gets measured, as they say, gets managed. It’s painfully cliché, but it’s absolutely true. Instructional design evaluation is how you measure your trainings. Here’s what you need to know to get started.


Where Evaluation Fits into the Instructional Design Process

Ross and Morrison (2010) sum up three high-level types of evaluations quite nicely:

Formative evaluation is used to provide feedback to designers as the instruction is “forming” or being developed. Summative evaluation is conducted to determine the degree to which a completed instructional product produces the intended outcomes. Confirmative evaluation examines the success of instruction as it is used over time.

If you have the resources to conduct all three assessment types, there’s no question about where evaluation fits into the design sequence. It happens throughout the entire process (and continues after design is done, as well).

Of course, not every company will be able to run the large number of assessments necessary for formative, summative, and confirmative evaluation of every course. In these cases, determining when to evaluate your instructional design may come down to your goals.

If your goal is to make sure the design is effective when it debuts, for example, formative evaluation practices are useful. Summative evaluations determine how much employees learned and whether that knowledge creates behavior change. And confirmative evaluations measure the long-term success of the program.

Frequent evaluation allows for continuous improvement. So it’s best to assess your instructional design as often as you can.

Designers who follow the ADDIE model might be tempted to leave all evaluation for the end of the process. But evaluating more often has many benefits.

Play to your organization’s strengths here. If you have a budget big enough for multiple evaluations throughout the process, go for it. If not, you may need to get creative. We’ll talk more about these three types of assessment a bit later.

Instructional Design Evaluation Strategies: The Kirkpatrick Model

How, then, do we measure the effectiveness of instructional design? There are many theories, each with advantages and disadvantages.

One of the most useful frameworks for evaluating your instructional design is the Kirkpatrick model. Although it has drawbacks, the model’s simplicity and popularity make it a good option for companies looking to evaluate their instructional design. Especially if you’re interested in comparing results to other companies in your industry. The model has four levels:

  1. Reaction
  2. Learning
  3. Behavior
  4. Results

Let’s take a look at each one individually.

1. Reaction

How do learners react to the training? If their reaction is positive, they’re more likely to have positive learning outcomes. A negative reaction doesn’t preclude learning, but it makes learning less likely.

The factors that elicit positive reactions vary. Trainers, training methodologies, and audiences all have unique needs and preferences. But there are some commonalities. A 2009 study found that trainees’ perceptions of the training’s efficiency and usefulness, as well as the trainer’s performance, were correlated with their reactions.

The usefulness of the training was the most important factor in the study. Which won’t surprise anyone who’s ever been to an irrelevant or useless training.

Trainings that are efficient (i.e., don’t waste time), useful, and conducted by likable, interesting, effective trainers are going to get the most positive reactions. And that will help boost learning.

(Though it’s important to note that there are multiple types of reactions to instruction, and not all of them may be correlated with learning.)

2. Learning

How much did participants learn? Did they retain the information over time? These questions don’t seem overly complicated. But getting reliable objective measures can be difficult.

For example, when measuring learning, you need to know your participants’ level of knowledge before they took the training. If trainees were already experts, showing that they’re experts at the end of the training isn’t very interesting. Which is why both pre- and post-testing is necessary for effective measurement of learning.

There are many ways of testing the acquisition of knowledge: assessments, self-assessments, informal testing, and so on. Choosing the right instrument may depend on the type of training you’re running and your measurement objective.

Knowledge acquisition, for example, can be measured with a simple multiple-choice quiz. The ability to apply that knowledge, however, is more difficult to measure, and may require a more in-depth assessment.
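To make the pre- and post-test comparison concrete, here’s a minimal sketch in Python of how you might score the results of a knowledge quiz. The scores are hypothetical, and the normalized-gain formula is just one common way to express improvement relative to how much room each trainee had to improve; it isn’t tied to any particular platform.

```python
# Illustrative only: hypothetical pre- and post-training quiz scores
# (percent correct) for the same trainees, in the same order.
pre_scores = [55, 60, 42, 70, 65]
post_scores = [80, 78, 66, 85, 90]

def normalized_gain(pre, post):
    """Fraction of the possible improvement (from the pre-test score
    up to 100%) that the trainee actually achieved."""
    return (post - pre) / (100 - pre)

raw_changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
gains = [normalized_gain(pre, post) for pre, post in zip(pre_scores, post_scores)]

print(f"Average raw improvement: {sum(raw_changes) / len(raw_changes):.1f} points")
print(f"Average normalized gain: {sum(gains) / len(gains):.2f}")
```

Scoring against the pre-test baseline is what keeps a group that was already expert from registering as a training success.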

3. Behavior

Are your participants applying what they learned to their jobs? This is possibly the ultimate test of the effectiveness of instruction.

Successfully measuring behavior change isn’t easy. You’ll need to determine the most relevant behaviors, measure them before the instruction, measure them after the instruction, and figure out if the training caused the change.

Let’s look at an example. If you run a course on sales enablement, the employee behavior you’re trying to affect might be the sharing of information between the marketing and sales departments. To see whether that’s actually happening, you’ll need a measurable metric. Many knowledge-sharing systems provide analytics on content usage, such as the number of times a particular piece of content has been accessed or used in a sales pitch. How many sales interactions include materials developed by marketing? That’s a metric you can compare before and after the training.
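As a rough sketch of that pre/post comparison (the records, field names, and rollout date below are hypothetical stand-ins, not the output of any specific analytics tool), you could pull a log of sales interactions and compute the share that included marketing-built content before and after the training:

```python
# Hypothetical export of sales interactions; in practice this data would
# come from your CRM or enablement platform's analytics.
interactions = [
    {"date": "2024-01-10", "used_marketing_content": False},
    {"date": "2024-01-18", "used_marketing_content": True},
    {"date": "2024-03-02", "used_marketing_content": True},
    {"date": "2024-03-15", "used_marketing_content": True},
]

TRAINING_ROLLOUT = "2024-02-01"  # assumed launch date of the sales enablement course

def usage_rate(records):
    """Share of interactions that included materials developed by marketing."""
    return sum(r["used_marketing_content"] for r in records) / len(records) if records else 0.0

before = [r for r in interactions if r["date"] < TRAINING_ROLLOUT]
after = [r for r in interactions if r["date"] >= TRAINING_ROLLOUT]

print(f"Before training: {usage_rate(before):.0%} of interactions used marketing content")
print(f"After training:  {usage_rate(after):.0%} of interactions used marketing content")
```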

Other software platforms can provide you with similar metrics. And while behavior change can be measured independently of tools, the right software makes the process much easier.

While they aren’t as rigorous as objective measures, employee surveys are useful as well. In our case, we might ask employees how often they use the sales enablement system, what percentage of documents they think are shared between the two groups, and similar questions.

The most important thing to remember here is to choose a metric that’s closely tied to the behavior you’re trying to influence. Choosing the wrong measure of behavior can skew your results.

4. Results

Did your training affect the business’s bottom line? In the end, this is the most important question. If your training increases revenue, decreases costs, or otherwise improves profitability, it was a success.

To answer this question, you need to extend measures of behavior change to business results. If your sales and marketing teams are communicating more effectively, and those teams are pulling in more money, it’s a good bet that your training had a positive bottom-line effect.

There’s one snag, though: how do you know that the bottom-line change had anything to do with the training? There are countless factors at play when it comes to revenue and profitability, and isolating a single variable is tough.

Your sales team might be selling a lot more after the sales enablement course, but how do you know that a manager didn’t switch up their management style as well? Or that a new employee became a selling superstar? These questions are difficult to answer.

The best answer, unfortunately, is that you’ll need to use your reasoning skills. Look at as much information as possible and see if there might be confounding variables. If you find one (or more), see if you can control for them. If you can’t, you may just have to draw conclusions based on the available data.

Measure What’s Important

Kirkpatrick’s model is a useful framework for evaluation. But the specific model you use isn’t important. What’s important is that you measure the effectiveness of your training and take actions based on what you find.

And when it comes to measurement, the most important factor is bottom-line results. Is your instruction having a positive impact on the bottom line?

Unfortunately, many companies don’t know. According to Patel (2010),

  • 90% of surveyed companies measured participant reactions
  • 80% measured student learning
  • 50% measured behavior
  • 40% measured results

Instructional designers are measuring what’s easy to measure, but not what’s important. Remember what you’re ultimately trying to achieve when you set out to evaluate your instruction.

Formative, Summative, and Confirmative Evaluation Methods

In addition to knowing what you should be measuring, it’s important to understand why you’re measuring it. Earlier, we introduced formative, summative, and confirmative evaluation. Now let’s look at the methods behind each and why they matter.

Formative Instructional Design Evaluation Methods

Formative evaluation can happen at any point during the learning process. It can take place while the instructional materials are still being developed, while learners are using them, or after the training is done.

The point of formative evaluation is to help instructional designers improve their instruction materials and methods.

Rapid prototyping and testing of instructional materials often happens alongside formative assessment before the training has been implemented. Because formative evaluation is best done quickly, the methods for this type of assessment are often somewhat informal.

These methods might be as simple as asking people what they thought of the training. Was the material clear? Was it presented efficiently? Did it seem useful? Were the practical exercises helpful? Questions like these help designers optimize their trainings to maximize engagement and learning.

Most of the questions in formative evaluations are related to Kirkpatrick’s reaction and learning steps. If you’re getting positive reactions and trainees are learning, you’re on the right track.

One-on-one interviews, focus groups, surveys, and informal conversations are useful for formative instructional evaluation. Field trials are also useful in that they give designers an idea of how the training will be conducted in real life, which may get different responses than more controlled trials.

Summative Instructional Design Evaluation Methods

Summative evaluation happens after the training has been completed. The primary question that summative evaluations ask is whether the training had the intended outcome. Did participants learn the information the training was designed to teach?

If they did, the training was a (summative) success.

Determining whether participants learned the information is relatively easy. Post-test quizzes and questionnaires give designers insight into the efficacy of the training (of course, this requires a solid pre-test regimen as well). These instruments are great for measuring the knowledge uptake from the training.

This corresponds to the learning portion of Kirkpatrick’s model. But summative assessment can also take behavior change into account.

Figuring out if learners are modifying their behavior is more difficult. As we discussed above, this may be best accomplished with tools that measure the metrics you’re interested in. Knowledge-sharing software, customer service apps, project management platforms, and other types of tools have analytics and metrics built in and can help managers track changes in performance.

Surveying managers might also be an effective method, especially if the managers weren’t a part of designing or implementing the training. Choosing the right summative metrics is up to the designer. Be sure to give them a lot of thought; you might come up with an obvious metric at first but discover a more apt one later on.

Confirmative Instructional Design Evaluation Methods

As we’ve discussed, measuring the bottom-line effects of a training is the ultimate determination of whether the instruction was valuable. Confirmative evaluation takes a long view and asks whether the training was effective weeks, months, or even years after it was completed.

Confirmative evaluation benefits from many of the same methods as summative evaluation, but designers will also be expected to report on the ultimate business metric: return on investment (ROI). If a training has a positive ROI, it was a success. This evaluation is all about the results step in Kirkpatrick’s model.

Of course, that means that there needs to be a monetary value associated with the outcomes from the training. And that will likely require the cooperation of many people in different areas of the company. If your training improved manufacturing processes, you’ll need information from the manufacturing team to see if the practices and concepts learned in the course are still being applied.

You’ll also need accounting information to find out if the manufacturing team is more profitable than it was before the training. You may need to use statistical analysis to find out if there’s a significant correlation between training-related behavior change and positive financial outcomes for the company.

And it’s possible that there will be intermediate steps, as well. If the instruction was aimed at reducing injuries, you’ll need to find out how much the average injury costs your company—then look at behavior change to determine how many injuries were averted by the training.
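Putting rough numbers on that example, here’s a minimal ROI sketch. Every figure (injuries averted, average cost per injury, cost of the training) is a hypothetical placeholder you would replace with your own incident and accounting data.

```python
# Hypothetical figures for a safety training program; substitute real data.
injuries_averted = 6          # estimated from pre/post incident and behavior data
avg_cost_per_injury = 12_000  # average direct plus indirect cost, from accounting
training_cost = 30_000        # design, delivery, and employee time

benefit = injuries_averted * avg_cost_per_injury
roi = (benefit - training_cost) / training_cost

print(f"Estimated benefit: ${benefit:,}")
print(f"ROI: {roi:.0%}")  # (benefit - cost) / cost; positive means the training paid for itself
```

The arithmetic is simple; the hard part is the attribution work described above, that is, being confident the averted injuries really trace back to the training.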

While both formative and summative assessment can benefit from qualitative standards, confirmative evaluation is all about numbers. Which is why you’ll need hard financial data to compare to information on behavior change.

Begin the Evaluation Process Early

If you’re just starting your learning and development program, this might all sound overwhelming. But remember that evaluation is an ongoing process that runs alongside instructional design.

You don’t need a fully fledged evaluation system for your entire program at the outset. Instead, develop it as you go along. Start with the goals of your training, and develop formative, summative, and confirmative measures that tell you whether you’re heading toward those goals.

Like anything else, effective evaluation takes practice and experience. It can be tempting to give scant attention to instructional evaluation, because doing it well takes a lot of time. But with the right system in place, your evaluation efforts will get faster and you’ll post a better ROI.

And isn’t that worth taking the time to do?

Schedule a Demo Today

See Continu in action and how it can help your organization build a culture of learning.
About the Author
Continu Team is responsible for Continu’s content.

Continu is the #1 modern learning platform built to help companies scale and consolidate learning. From training customers to employees, Continu is the only platform you need for all learning.