A company's strength is a strong team. For employees to build that strength, they need regular training – and not training done for a checkbox, but training that actually works.
How do you evaluate whether training has been effective? Plenty of clever people have tried to invent an evaluation model, and a few of those models have stood the test of time. We're going to break them down now: the essence, the pros, and the cons. Let's go.
Kirkpatrick Model
This dinosaur of a model is over half a century old and a bit “naive” in today's realities. But it has survived to this day and is still in use because it's well thought out. There are four levels, where you stop and evaluate specific things:
- Reaction – how learners respond to the training. If something drags or is unclear, we refine it. A poor showing here puts the final result at risk.
- Learning – how well learners have mastered the material. Run a quiz and the other usual rituals.
- Behavior – how learners perform after the course. Did efficiency, speed, or whatever else was needed actually change?
- Results – the bottom line. How did the training benefit the business, and did it solve your problems?
A Kirkpatrick assessment turns training into a business tool – something not abstractly developmental, but tangibly useful. However, it doesn't take the financial side into account – whether the training paid off relative to the money spent.
Phillips Model
Kirkpatrick believed that the outcome of training shouldn't be counted in money: there's too much room for human error, and payback is hard to estimate.
Forty years after his model appeared, Phillips proposed a new one – with an important addition. Since we're talking about business training, it's critical to evaluate the financial side: training is part of the business, not just development.
The Phillips model is the most elaborate of the bunch, because everything is taken into account: how the training shows up in practice, whether it reached the goal, and whether it earned back the money spent. But the model doesn't suit everyone, and not always: you need a strong finance department to calculate training ROI properly.
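To make the ROI step concrete, here's a minimal sketch of the calculation Phillips adds on top of Kirkpatrick's levels. The figures are hypothetical, made up for illustration – in practice the hard part is isolating the monetary benefit that can honestly be attributed to the training.

```python
# A minimal sketch of the Phillips ROI calculation.
# All figures are hypothetical; isolating the benefit is the hard part.
program_cost = 12_000      # development, delivery, participants' work hours
monetary_benefit = 18_000  # estimated gain attributed to the training

roi_percent = (monetary_benefit - program_cost) / program_cost * 100
print(f"Training ROI: {roi_percent:.0f}%")  # -> Training ROI: 50%
```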
Stufflebeam Model
This model's second name is CIPP, and in a moment it will be clear why:
- Context Evaluation – assessing the context of development. Whom do we teach, what do we teach, and why?
- Input Evaluation – looking at what we have at the input. What kind of educational project do we want? Can we pull it off? What do we want to achieve?
- Process Evaluation – evaluating the learning in progress. How is it going? What are the intermediate outcomes?
- Product Evaluation – looking at the results of training. How did employees learn? How are they doing now?
The emphasis is spread evenly across all the phases: we evaluate the preparation for training, the process itself, and life afterward. We walk through the training comprehensively and gather the big picture. But the model lacks focus on hard outcomes: you mostly get answers useful for reflection and for developing the training. To give the business full answers about effectiveness, extra calculations would be needed.
The CIRO Model
This model is similar to the previous one, adjusted for a more human approach – learners get a lot of attention here. It was developed by the team of Warr, Bird, and Rackham. It's sometimes called the Bird model, but the more common name is CIRO, and here's what the acronym stands for:
- Context Evaluation – assessing the essence of the training. Who should be taught, what, and why?
- Input Evaluation – evaluating the inputs. How will we teach and assess? What resources are available?
- Reaction Evaluation – assessing reactions to the training. What do learners think: how do they feel about the training overall, what feels clear, easy, and comfortable, and what doesn't?
- Outcome Evaluation – assessment of results. Did they achieve what they wanted or not?
Finally, a model that pays real attention to learners. Their perception of the training affects its effectiveness, which makes it an important component of the assessment. The model helps you launch training carefully and adds human-centeredness.
Tyler’s Model
The problem with learning, according to Tyler, is fuzzy goals. Here is what he suggested:
- Set detailed and clear goals. Classify them: each goal is a specific behavioral pattern.
- Think through how to check that the goals have been achieved – that is, that the behavior patterns have been mastered: test situations and a rating scale (a toy sketch of such a check follows after this list).
- Run a comprehensive assessment of effectiveness. Collect data on performance during the training and on the work after it – then compare.
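As a toy illustration of the second step, here's what checking Tyler-style behavioral goals might look like. The goals, the 1–5 scale, and the scores are all invented for this example – they're not part of Tyler's model itself.

```python
# A toy check of Tyler-style behavioral goals (goals and scores invented).
# Each goal is a concrete behavior pattern with a target on a 1-5 scale.
goals = {
    "handles a customer objection without escalating": {"target": 4, "after": 5},
    "closes a support ticket within the agreed time": {"target": 4, "after": 3},
}

for goal, score in goals.items():
    achieved = score["after"] >= score["target"]
    print(f"{goal}: {'achieved' if achieved else 'not yet'} "
          f"({score['after']}/{score['target']})")
```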
Detailed goals are a good thing, especially in corporate training: as in any business process, every step here should serve a specific goal. But the focus shifts to working out the goals, so there's a risk of getting stuck at that stage, delaying the start of training, or running out of steam on the way to evaluating the results and not giving them enough attention.
Scriven's Model
Scriven suggested inviting an outside evaluator who knows nothing about what you trained people in or what you wanted to accomplish. The evaluator looks at the outcome and diagnoses whether the training was effective.
This model takes the job of designing the assessment (the process, the criteria) off your plate. You get an outsider's view, uncluttered by the routine of your business and training.
But you also need well-trained, experienced evaluators. Where do you find them, and how do you verify they're reliable? Scriven's model isn't a full-fledged way to evaluate effectiveness, but rather an interesting approach – a supplement to internal evaluation.