Problem 3: Your Company Doesn’t Measure It Properly – Or At All
Welcome
After a six-month hiatus and in response to reader feedback, The Impactful Executive is back with a new format and fresh perspective.
Over the next six weeks, we will present a series of articles outlining the pitfalls of corporate training and talent development, and how to implement programs that truly work.
These articles have been written by Jay Kloppenberg, a top-tier management consultant and CXO advisor. Jay is a seasoned professional with over 15 years of experience in education, people development, and organizational performance. His career spans from founding an innovative school in South Africa to consulting for McKinsey & Co., where he helped companies globally enhance their performance. A published author and frequent guest speaker at graduate schools, Jay brings a unique blend of practical experience and academic insight to performance improvement initiatives.
Warm regards,
Dr. Ali Monadjem
Corporate training is a $300 billion global industry, but much of the training offered is ineffective. Over six weeks, I am detailing six reasons I’ve seen for that ineffectiveness, and how they can be addressed.
Today’s focus…
Problem 3: Your company doesn’t measure it properly—or at all
Just about every training I have ever attended (or led, for that matter)—no matter the industry, the geography, the business function, the training purpose, the length, or the modality—has ended basically the same way:
“Please provide your feedback on this training. What would you rate it overall? What was most helpful? What was least helpful? What should we do more and less of? Do you feel you are better equipped to do your job after this training? Would you recommend it to others?”
This is only natural, of course. What could be more obvious than finding out the reactions of the training participants? It would be the height of arrogance not to ask these questions after a training is complete. How else will you learn whether it was effective?
Of course, there might be better ways, which go beyond the subjective responses of attendees. Maybe these responses aren’t a perfect encapsulation of the training’s effectiveness, but they are probably pretty good. Certainly better than nothing! Right?
Well…not necessarily.
This video is one of my favorite learning videos on the internet. It comes from the popular YouTube channel Veritasium, and it critiques another, even more popular YouTube channel, Khan Academy, drawing on creator Derek Muller’s PhD thesis on the science of learning.
Muller asked participants a multiple-choice physics question about the forces acting on a falling object. Just under 1 in 4 answered correctly. He then showed them a video explaining the answer. Asked how effective the video was, participants rated it extremely effective; the words they used most often to describe it were “clear,” “concise,” and “easy to understand.” Their confidence in their own knowledge increased significantly.
Then they were tested again. And the success rate was…almost exactly the same. Still just under 1 in 4 answered correctly. Participants admitted afterwards that they didn’t pay complete attention for the full 10 minutes, mostly because they thought they already knew the answers.
Muller then tested another group. Instead of a “clear, concise, easy to understand” video, he showed one in which actors voiced the most common misconceptions, after which another actor explained why those were wrong, directly confronting participants’ incorrect prior knowledge.
Most participants described this video as “confusing.” But when tested again, twice as many answered correctly.
This is a fascinating finding, one that shapes how Muller designs his YouTube videos to make them as effective as possible. Perhaps a future post will explore its broader implications for learning. Here, we will focus on a single, problematic truth:
In any learning environment, you cannot simultaneously maximize both participant satisfaction and participant learning.
This is not to say that effective learning experiences cannot be enjoyable. Of course they can. Learning is a fundamental human drive, and the process of authentic and effective learning is an inherently rewarding one. No one learns effectively by being bored to death.
But learning new concepts, in general, requires a challenge. It requires what educational psychologist Michael Shayer called “Cognitive Conflict,” an inherently uncomfortable state in which our existing mental models are insufficient to solve the challenge in front of us. So effective learning will nearly always require some short-term discomfort. This discomfort drives hard thinking as we adjust our mental models to address the challenge, leading to conceptual learning.
Participant enjoyment and perceived improvement are simply not good proxies for learning. In the TNTP study on teacher development referenced in a previous post, 80% of teachers whose performance declined significantly over the course of the study responded that the training helped them improve their practice either “some” or “tremendously.”
While it is generally beneficial for training providers to ask for feedback from participants, attempts to maximize satisfaction will inevitably lead to suboptimal learning practices. Executives often believe their training is highly effective only because they have no other, more accurate means of assessing it.
The Kirkpatrick Model of training evaluation, first published by Donald Kirkpatrick in 1959, does provide a better way, though few organizations currently use it effectively. It measures the effectiveness of adult learning efforts along four dimensions of increasing importance to the organization.
Level 1: Participant reaction. As described above, it is always good to know what participants thought of the experience, but this rating tells us nothing about learning effectiveness.
Level 2: Learning. This is the level Muller reached in his video, and where schools usually stop. Do you know the content that we taught you today? While a better measure than participant reaction, it is of limited intrinsic value to a business—most of your customers don’t pay you to know things, but to do things.
Level 3: Behavior. Now we’re getting somewhere. All professional development experiences should impact the behavior of the participants. Do you do anything differently as a result of this experience? If the answer is no (and it usually is!), then the training can be considered basically a waste of time. The problem is, assessing behavior change requires measurement outside of the training facility or workshop. This finding relates to our previous post, on Problem 1. If the learning process stops when the workshop ends, there is no learning process.
Level 4: Results. The holy grail. At the end of the day, our professional development efforts are designed to increase performance. If they don’t do that, they have failed. Yes, performance can be difficult to measure in many fields. Yes, confounding factors often make it difficult or impossible to isolate the impact of the PD activities on their own. But we must try. Executives: if you do not know whether your PD efforts drive improved performance, you do not have a clue whether they are useful in any way. And if you don’t know whether they are useful, they probably aren’t.
Most external training providers do not want to be measured on Levels 2, 3, or 4. Neither do most participants. It is far easier to complete a fun workshop, declare it effective based on Level 1 results, and send everyone home happy.
But, to return to the question from the previous post, is this a check box exercise or a strategic imperative? If the latter, we owe it to ourselves to do it as effectively as we can, and to hold ourselves accountable for driving real value.
See also:
Introduction: Why your company’s required training feels like a waste of time (it’s because it is)
Problem 1: You consider “training” and “work” to be two separate things
Problem 2: Your CEO delegates talent development
Coming next week: