Measuring the Training Department

Stanley E. Malcolm, Ph.D.

You've heard the old saying, "Everyone complains about the weather, but nobody does anything about it." You could say much the same thing about measurement in the training field. I'd like to suggest how training and development professionals might approach the creation of meaningful metrics. The metrics themselves are beyond the scope of this article, since they will necessarily be idiosyncratic to the organization being measured. But the approach applies anywhere and is based on simple notions of business urgency and customer service.

Measurement can be approached at two levels: the "course" or event, program, initiative - call it what you will; and the department - training, training & development, etc. While my main purpose is to address measurement at the department level, I'll include a few words about course-level measurement as well (Side Bar 1) - since it is hardly possible to succeed at one without paying close attention to the other.

Measuring the Training Department

External and Internal Metrics: I find it useful to distinguish "external" from "internal" metrics. For me, external metrics are those that others (e.g., sponsors, customers) use to assess the training department's value. Internal metrics are those we in training use to assure ourselves that we're doing the right things - measures which will serve as indicators of strong results to come on external metrics. Putting it another way, external measures may be thought of as summative while internal measures are primarily formative. Alternatively, you might think of external metrics as assessing terminal objectives while internal metrics address enabling objectives, or steps along the road to success.

Subjective and Objective Metrics: Note too that metrics can be objective or subjective - and we should look for a balance of both. In my opinion, there is sometimes nothing better than a good, subjective anecdote or two to build senior executive confidence in the training function. For instance, a senior sales representative who tells the Vice President, "I wouldn't have made that sale if I hadn't had such-and-such learning experience" can have enormous impact on management perceptions of the training department. I believe the effect is greater because a) it's coming from a business person versus Training measuring itself, and b) the link to bottom-line performance is often more direct (even though anecdotal). (See Side Bar 2, below, "Subjective versus Objective Measures", for more depth on this issue.)

Approaching Your Sponsors: How should the training department be measured? The best people to answer that question are our sponsors, the people whose money we're spending! I'd like to propose a radical step - let's ask them. Specifically, we need to ask what measures would assure them that they were receiving value in excess of what they were investing.

No, it's not quite that simple. We should first present some basic information to them in order to give context to the question. Otherwise, it is our own fault if they tell us that they want to know how many people we trained, the old "asses in classes" nemesis. Our presentation should also include some suggestions for what we think they should expect of us. Following are some suggested bullet points for such a presentation:

  • Statement of the problem: The training department wishes to adopt metrics that will demonstrate our value to the company. We need to determine which metrics to select, and then establish a measurement and reporting strategy. We would like you, our sponsors, to help in selecting appropriate metrics.
  • Flaws with "traditional" measures of training: Classes offered and students attending them are measures of training volume. However, they say nothing about the appropriateness of training to closing business performance gaps or preparing for the future. In other words, the training department might be doing a lot of the wrong thing. Such measures may in fact further confuse the picture by a) acting against learning strategies that don't fit the measure, e.g., on-demand training without registration, electronic performance support systems, or structured on-the-job training; or b) concentrating trainers' activities on delivery versus development.
  • Given this insight into the world of training measurement, how do you know if training is giving you an adequate return on investment? (Probe for objective and subjective measures.) Can you think of any anecdotes from your past, that is, training that you felt was particularly effective? How did you know?
  • Finally, having engaged your sponsors in the question of metrics, suggest that some or all of the following questions serve as the foundation for your external metrics, supplementing the list with others specific to your environment if necessary.

External Metrics: Once again, by "external metrics" I mean measures valued by the training department's sponsors or customers: generally your company's senior executives. Your goal is to determine what metrics would assure the executives that the training department is providing them greater value than cost. Questions around which you might design external metrics include:

  • Are we addressing the most critical training needs?

    All training needs are not created equal. Resources are finite at best, typically scarce, and never to be frittered away. Every dollar you spend had first to be earned through sales. Given these facts, it's critical that you determine what matters most to your company's success - and focus resources there. If some other, less critical things don't get done, there may be a few complaints, but senior management will not likely be among the complainers.

    Your job is to determine the most critical needs in terms of business urgency. Ask yourselves what keeps your executive sponsors up at night. Ask how they would define "X" in the sentence "If only we could do a better job at "X", our business objectives would be achieved." Concentrate your resources on training needs associated with the "Xs".

    If you can't achieve this kind of linkage with business strategy, you might as well stop here - it won't matter how successful you are on other questions. As for measuring your success on linkage to issues having business urgency, ultimately it is your executive sponsors who must answer the question. Your job is to feed them information on what you understand to be the company's priorities and what you are doing to address training needs associated with them.

  • Are we focusing training dollars on the right organizations?

    Just as all training needs are not equal, all organizations are not of equal importance to a company's success. Some lines of business are the rising stars that will provide tomorrow's profits; others are today's cash cows, providing the capital and profits for business growth and investor satisfaction; and still others may be past their prime, candidates for sale or dissolution.

    Your executive sponsors should be able to tell you how they'd like to apportion resources among lines of business. Measurement is then a simple matter of reporting how closely you were able to match actual spending by organization against their plan (a calculation sketched after this list).

  • Is the training we support making a difference in performance on the job?

    Training is about influencing business performance by improving individual and group performance. Learning is just a step along the way. My advice, which some would call radical, is not to bother measuring learning at all. If you can prove that you've influenced performance, you can infer that learning has occurred - but why bother since performance is the point? (See Side Bar 1, below, "Measuring Courses", for more depth on this point.)

    As with many of the questions you need to answer with regard to metrics, you need to build for success long before any measurement takes place. If you're going to measure performance, you need to design for performance! That means your designs should be based on action learning or problem-centered models. Wherever possible, they should be designed for intact work groups, not aggregations of individuals. As Barry Leskin, a former Head of Aetna Education, said (personal communication), "you can't put a changed individual into an unchanged environment and expect change to happen."

    In my opinion, every training initiative should be designed to conform to this simple rule: The "event" (course, program, etc.) is not over until the skills have been successfully demonstrated on the job. If you're designing new product sales training, the event doesn't end in the classroom. It ends after a certain number of actual sales have been completed. In addition to designing the classroom (or CBT, or Workbook...) component, you must create a structure for the on-the-job component where skills get applied. Too often it seems that trainers abdicate responsibility for designing this element. Since they won't be there to "teach," it seems that skill application is "out of sight, out of mind."

    With the right design in place, measuring improved performance is relatively simple - certainly much simpler than if you weren't expecting performance to be demonstrated "on your watch." Better still, the right people to measure improved performance are business managers. After all, the measures are business measures: improved sales, etc. Engage them in the measurement process.

  • Have we achieved the right balance between strategic and operational training?

    The balance I speak of is that between the skills necessary for today's jobs and those you know your company will need in the future. Again, you'll need your executive sponsors' help in answering this question. With that information, measurement is a matter of dividing courses into strategic and operational categories and monitoring proportional utilization, as sketched after this list.

  • Are we offering employees an opportunity to grow and remain employable?

    There's a new social contract between employer and employees. Companies aren't offering a job for life. Today's employees know they must develop the skills that will maintain their employability. An employer's obligation is to provide the means - courses and experiences - and opportunity by which employees can enhance their skills.

    If you've done a good job assessing the performance goals of your company, chances are you've covered the "means" aspect of your obligation to employees. But do employees have the opportunity, generally expressed as a management commitment to some average amount of training per year? Are employees actually being released to attend that training? These are questions you should address through metrics.

  • Does everyone have a learning plan? Are they progressing towards implementing it?

    If you don't have a goal, how can you reach it? Implementing a learning plan strategy assures that every individual has defined their learning goals. Furthermore, it means that you can roll those plans up in order to a) forecast volume, b) determine if the plans reflect your company's priorities (and if not, adjust the plans before they're implemented), and c) measure the degree to which plans have been achieved.

    In my experience, learning or development plans are often an adjunct to performance appraisals and administered through Human Resources, not the Training Department. You need access to those plans! Better still, you should be in charge of the process of creating and monitoring those plans. Ideally, individual planning tools and plans should be accessible on your company's intranet, and linked to your training administration system and training history database.

  • Is training administered efficiently and is it easily accessible when and where needed?

    When people need training, they generally need it now! Don't make them jump through hoops to get it. Technology can be a great tool in this area. Many companies are using their intranets to share training information, register students, provide on-line courses, and assure access to critical information and performance support resources.

    Look for means to measure the elapsed time from when a training need is identified (for groups as well as individuals) until the training has been accomplished (also illustrated in the sketch after this list). In the case of group needs, note that you should measure the time it takes to close the underlying business performance gap for the entire group. Too often it seems that trainers concern themselves with the course development cycle time, that is, the time until the first course offering, ignoring the fact that it might take years to reach the entire population in need of training.

    If you measure time, you'll probably also affect accessibility in terms of location. The further people have to travel to receive training, the longer it is going to take to arrange it. Look for means to serve people at or near their workstations and your results on your accessibility metric will improve dramatically.

  • How does your company's training compare with that of your competition, and of business in general?

    This measure doesn't seem very important to me, but it can be to your executive sponsors. Often they'll want to know how their efforts compare to those of others. Humor them - and take the opportunity not only to determine how well others are doing, but how and what they're doing. In other words, use this metric to justify part of your travel budget. Get out there and visit peers at other companies. Look for innovation; look for best practices. Don't settle for "average practices" since the average is nothing to strive towards.

  • Are you doing all this at a cost that is acceptable to your sponsors?

    By all means track costs. But if you've done your homework on the questions above, the benefits you provide to your company should far exceed the resources you expend. Your executive sponsors should understand that they have received a bargain!
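
Several of these measures reduce to simple arithmetic once the underlying data is being captured. The sketch below is purely illustrative - a minimal example in Python, with invented lines of business, courses, and dates - of how a department might report planned versus actual spending by organization, the strategic/operational mix of delivery, and the elapsed time to close a group performance gap.

# Illustrative only: all names and figures below are invented examples.
from datetime import date

# 1. Spending by organization: how closely actual training spend matched the
#    sponsors' planned allocation across lines of business.
planned_share = {"Rising Star LOB": 0.50, "Cash Cow LOB": 0.35, "Legacy LOB": 0.15}
actual_spend = {"Rising Star LOB": 420_000, "Cash Cow LOB": 310_000, "Legacy LOB": 170_000}

total_spend = sum(actual_spend.values())
for lob, planned in planned_share.items():
    actual = actual_spend[lob] / total_spend
    print(f"{lob}: planned {planned:.0%}, actual {actual:.0%}, variance {actual - planned:+.0%}")

# 2. Strategic vs. operational balance: proportion of delivery in each category.
courses = [
    {"name": "New Product Sales", "category": "operational", "student_days": 800},
    {"name": "Emerging Markets Strategy", "category": "strategic", "student_days": 300},
    {"name": "Claims Processing", "category": "operational", "student_days": 500},
]
days_by_category = {}
for course in courses:
    days_by_category[course["category"]] = (
        days_by_category.get(course["category"], 0) + course["student_days"]
    )
total_days = sum(days_by_category.values())
for category, days in days_by_category.items():
    print(f"{category}: {days / total_days:.0%} of student days delivered")

# 3. Cycle time: elapsed days from when a group training need was identified
#    until the underlying performance gap was closed for the entire group.
needs = [
    {"need": "New widget sales skills", "identified": date(1998, 1, 15), "closed": date(1998, 9, 30)},
    {"need": "Call center system rollout", "identified": date(1998, 3, 1), "closed": date(1998, 6, 15)},
]
for item in needs:
    days_to_close = (item["closed"] - item["identified"]).days
    print(f"{item['need']}: {days_to_close} days from identification to gap closed")

The code itself is beside the point; what matters is the data discipline it presupposes: spending tracked by organization, a strategic or operational designation for every course, and recorded dates for when group needs are identified and closed.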

Internal Metrics: These are the training department's measures of ourselves: Are we focusing our energy on the right initiatives? Are we creating or buying effective designs? Have we the right balance between design and delivery? Are we communicating effectively? Is our delivery infrastructure (facilities and technologies) matched to our employees' ability to access it? We should look for a mix of design, delivery, and administrative measures.

Internal metrics should be designed with the external metrics in sight. If we are performing well on our internal metrics, the external metrics should fall into place. Again, view internal metrics as "formative" or "enabling".

In addition to indicating probable performance on external metrics, internal metrics should feed performance management discussions in one-on-ones and appraisals. In other words, we should be using these metrics both in aggregate and by individual.

In this case it wouldn't be appropriate to ask your executives for help in determining internal metrics. The best approach is still to ask - but in this case, ask the training department's staff.

Your guiding principle should be this simple statement: We only measure those things that we care enough to act upon.

Following are some questions around which a training department might develop internal metrics:

  • Are we focusing our energy on the right initiatives? Are we doing enough?
  • Are we creating or buying effective designs? (e.g. Action Learning; embedded skill application)
  • Have we the right balance between design and delivery?
  • Are supervisors eager to send employees to training, and are employees eager to attend (i.e., do they value it)?
  • Are people satisfied when they leave? (Do they recommend training to others?)
  • Are people learning?
  • Are people applying what they learn?
  • Are we communicating effectively?
  • Is our delivery infrastructure (facilities and technologies) matched to employees' ability to access it?
  • Are training administrative staff able to give the right advice on a broad range of subjects to supervisors and employees?
  • Have training staff, supervisors, and employees achieved the right balance of responsibilities for creating and implementing learning plans?

SIDE BAR 1

Measuring Courses: At the course level, the standard approach has been based on Kirkpatrick's four levels - with participant feedback "smile sheets" at level one and assessment of organizational impact at level four. Most organizations use smile sheets to determine if students "liked" a course, that is, perceived it as worthwhile. Many test students to see what they learned. Few measure skill application on the job, and of those few, fewer still do so routinely. Only a very rare few feel able to measure organizational impact, and even they do so only rarely.

While, to a certain extent, measurements at Kirkpatrick's four levels tell course developers different things, by and large they represent a hierarchy of perceived difficulty to implement. They also represent an increasingly direct relationship to business impact, which, after all, is the point of training in a corporate setting. So, the things that seem easier to measure (perceptions and learning) say less about business impact than application and, well, impact.

Generally, training organizations have chosen to measure perception and learning while inferring application and impact. I contend that this satisfies nobody. Trainers have lingering doubts about the impact of their efforts. In the worst cases, they lose sight of the real point of the measures, contenting themselves with achieving "satisfaction" and learning. Business managers remain skeptical, often treating training as an expense to be cut - for if they believed in training's effectiveness, would they cut the budget?

The problem in measuring courses is not in Kirkpatrick's levels; it is in the design of the courses themselves. In "Reengineering Corporate Learning" (Training, August, 1992, pp 57-61, or online at: http://www.performance-vision.com/articles/art-reengineering-learning.htm) I estimate that over 80% of critical job learning happens on the job. If you're skeptical of that figure, just ask yourself where you learned the critical skills you apply every day. Did you learn them in a classroom? If less than 20% of critical job learning happens in our traditional courses, how can trainers be confident that they've made a valuable contribution to business success? After all, the trainers were absent when over 80% of the learning took place!

I propose a simple solution to the problem - design the entire 100% of learning, but continue to deliver only 20% or less by traditional means. The remaining 80% or more should properly be learned on the job - but in a way that has been designed or "structured" to make it consistent - from office to office, supervisor to supervisor, and day to day. Training's role is to provide the structure for learning, not to deliver it. The means might include coaching and assessment guides for supervisors, self-instructional materials for employees, and electronic performance support systems that embed business processes, advice, and learning granules accessible in the context of performing work. Our initiatives in the area of knowledge management are a response to the realization that most learning happens on the job, not in the classroom: we're learning all the time, and need to learn all the time to be competitive; and we don't just learn from "teachers."

To drive home the point of how radically different the "100%" view is from our traditional approach to course design, let me propose the following simple rule: The "course" isn't over until the learning has been successfully applied on the job. For example, the sales course isn't over until the learner has sold 500 of the new model widgets. I ask you, how many of the courses you currently offer conform to that rule? If they did, wouldn't much of the frustration of measurement vanish?

SIDE BAR 2

Subjective versus Objective Measures: Beyond the obvious one, several key differences between objective and subjective metrics need to be explored, along with one important similarity.

Unlike objective metrics gathered by the training department, subjective metrics are generally reported serendipitously from employee to management. They're pretty much (but not entirely) outside of your control. I think that they're often more credible simply because Training isn't reporting them.

It doesn't take a lot of anecdotes to make a big impact on management perceptions. This works both ways: bad news travels fast too; faster probably.

Objective measures a) require interpretation and b) are reported routinely. Both a) and b) may diminish their impact.

The similarity is that you can influence both objective and subjective measures, though the means aren't always the same.

What can you do to affect the likelihood of generating good subjective anecdotes versus bad ones?

  • You can plant the seeds for a good impression through your communications strategy. Communications can set the stage for a self-fulfilling prophecy. (I don't mean just formal communications like a Training Department newsletter. Every meeting you have with an executive sponsor is an opportunity; as is every conversation a training administrator or instructor has with students and their supervisors.)

  • You can avoid obvious turn-offs like making certain programs mandatory. You can reach the same audience by making the program "by invitation only," but instead of having people arrive angry that they're being forced to do something, they come pumped up at feeling they are part of a special "elite." People who feel special aren't very likely to bite the hand that gives them that distinction.

    Aetna's chairman, Ron Compton, had a knack for such incentive techniques. When we prepared to release our first electronic performance support system (EPSS), the AMP Facilitator, he suggested that we control distribution in such a way that we created a "high visibility frenzy." We did so by placing the software first with people who were known to be on the fast track and likely to use the software where others would see them using it. It didn't take long for others to be clamoring for their copies.

  • You can also create peer groups of participants ("alumni") who can a) increase your effectiveness in real ways by serving as coaches to subsequent groups while they b) serve as promoters of the program as much for the shared experience as for the real value they got from it. For example, at Aetna the "Group School" for health insurance sales executives retained a fantastic reputation (unfortunately, some would say, long after the school had diminished significantly in relevance).
  • Anything you do to assure consistent, caring, customer service behaviors can go a long way towards generating good press.
  • When someone reports a positive anecdote to you, you can ask them to share it with their senior management.

Acknowledgments: I owe much of the thought behind this publication to fourteen years' experience in the corporate education function of Aetna, Inc.; the opportunity in 1997 to support Karie Willyerd, then with Lockheed Martin's Tactical Aircraft Systems division in Fort Worth, as she implemented a learning plan strategy and an accompanying set of metrics; and dialog with members of the STEP consortium, which I manage.

About the Author: Since 1995, Stan Malcolm has served as a coach to companies wishing to develop or review their strategies for electronic performance support, learning technologies, and/or corporate learning issues generally. Formerly, he headed learning technologies and performance support initiatives at Aetna, Inc. He can be reached at Stan@Performance-Vision.com, http://www.performance-vision.com, or in Connecticut at 860-295-9711.

 
