How to tell whether your training programs have an impact
Question:
I read your article on management development in the March issue of Small Business Times and enjoyed it. We’re struggling with some of the issues you discussed. For example, how can we evaluate the effectiveness of the training that we do offer?
Answer:
It has been my experience that most people don’t get very excited about designing training programs, let alone evaluating them. When people think of training evaluation, they often think of a questionnaire about how well the learners liked the instruction.
It’s not easy to work up much enthusiasm for an activity in which someone administers a hastily prepared questionnaire to learners, tabulates the results, discovers that different learners have different opinions, and then concludes that the best thing to do is muddle on pretty much as before.
There’s more to evaluation than asking, “Did they like it?”
Other areas worth probing include: “Do they need it?” “Is it well designed?” “Did it work?” “Did they learn it?” “Did they use it?” “Did using it do any good?” And so on.
There are several ways that training programs can be evaluated. Let’s consider four:
Need – Is the instruction needed? Is there a performance deficiency? What’s it costing? What would it be worth to “fix it”? Is it a system problem, a management problem, a motivation problem, a knowledge problem, or a skill problem? Who, if anyone, needs training? What, exactly, do they need to learn and why?
Design – Is the instruction well-designed? Does the content match the identified learning need? Does the instruction match the prior knowledge and skill levels of the learners? Do instructional methods simulate the key aspects of the environment in which the new material will be used? Does it have the necessary parts (e.g., overview, learning objectives, performance criteria, etc.)? Does the instruction match the conditions for demonstrating mastery of the material?
Implementation – Does it work? What parts went according to plan? What part of the knowledge, skill, and behavioral objectives were met? What percentage of the learners achieved each objective? Were there unexpected problems? What parts of it did the learners like? What parts did they dislike? What parts produced errors and confusion?
Impact – Did they use it? Did using it do any good? Was the post-instruction performance better? Did the work environment support or interfere with the performance desired? What percentage of the learners used their new knowledge, skills, or behaviors? Did the changed performance produce better organizational results? Was the problem identified in the needs assessment ultimately solved by the learning programming?
Having pointed out some areas of training which can be evaluated, I should also mention that actually performing this kind of thorough evaluation is often more difficult than simply identifying what you can measure. Some research that I recently completed in conjunction with the Human Resource Management Association (HRMA) bears upon this last point.
In 1997 more than 800 southeastern Wisconsin companies were surveyed regarding the manner in which they evaluate the training programming which they offer. Of the surveys which were returned (approximately 300), 94% of the organizations indicated that they offer some sort of training. Many organizations indicated a strong mix of technical and “soft skills” training. And, interestingly, 88% of the organizations indicated that they were measuring the effectiveness of their training.
However, with respect to this last figure, while most of the organizations indicated they were evaluating their employees’ reactions (e.g., “Did you like it?”) to the training, less than half indicated they were evaluating the effectiveness (e.g., “Does it work?”) or impact (e.g., “Did you use it?”) of the training.
Why weren’t the organizations doing a better job of evaluating their training programming? My impression is that many organizations are not doing as strong a job as they could in identifying how the learning objectives relate to the strategic objectives of the organization and the specific expectations of the position the individual occupies. In simple terms, too much training is done on a “drive-by” basis, isolated and separate from what really counts – performance back on the job.
How to remedy this? Forge stronger connections between those responsible for the strategic objectives of the organization (i.e., top management), on-the-job performance (i.e., line management), and learning programming (i.e., HR and training staff). This is consistent with what Peter Senge (author of The Fifth Discipline) and others describe as a “learning organization,” an organization in which there is a clear integration between strategic objectives, management of on-the-job performance, and employee learning.
In order for this to take place, HR must be seen as a “player” in terms of organizational adaptation and improvement. As we all know, HR has long suffered from an “identity crisis” in which it is not viewed as being as bottom-line-impacting as sales/marketing or operations/production. In some organizations, HR must continually “justify its existence.” This skepticism is only heightened given the heavy cost of many training initiatives.
Where training is concerned, establishing objective criteria is necessary to gather the evidence that such programming can indeed “make a difference.” And, as we have discussed in this article, attending to questions other than “Did they like it?” will help you develop a fuller picture of the effectiveness of your organization’s learning programming.
HR Connection is provided by Daniel Schroeder, Ph.D., of Organization Development Consultants in Brookfield. Small Business Times readers who would like an HR issue discussed in this column can contact Schroeder at 287-8383 or via e-mail at odc@execpc.com.
April 1998 Small Business Times, Milwaukee
