Unhappy with Happy Sheets

Course evaluations are usually dumb, counter-productive and distorting. Conference evaluations are largely the same.

They are actually NOT “evaluations”; that is the problem. They are “happy sheets”.

Moreover, what you want from participants and attendees is not primarily an “evaluation” of the Conference or training session (though secondarily, that can be relevant and interesting). What you want is an evaluation of the applicability of what they experienced, the “return on energy” once they seek to convert ideas into action.

If you charge Trainers and Conference organizers with getting rave ratings from people, you incentivize them to tailor what they do for “popularity”. But what if organizational value comes from making people uncomfortable, from challenging them? Then the “evaluation” should be related to whether this discomfort was constructively provided, led to a helpful change in behavior, or created positive momentum in a direction sought. People may hate having to be challenged, and the organization may love the results.

If a key strategy has to be understood, then lack of social time may indeed be an objective Conference deficit, and yet one consciously taken on, because getting everyone’s engagement around the strategy is mission-critical at that juncture. Though everyone may understand that, they are unlikely to give high scores to the statement “We had enough time to relax, socialize and enjoy our surroundings.”

We can, while prioritizing landing the strategy, consider whether slightly more time can be taken, or a more neutral location selected (if we truly cannot enjoy where we are, why bother?). And that is why I say these observations are secondarily relevant.

But the primary issue is to discern and advance whatever the real aims are. Now if another Conference was created primarily to build relationships and bridges across disparate global teams, then the critique of inadequate time for bonding, engagement, team-building and more, becomes more damning.

The point: there should never be a one-size-fits-all “checklist”. But we should be checking on achievement against our highest-priority aims.

I have also found that if people are being chased for evaluations, they are never “in” the experience, but are constantly second-guessing it, often from the default settings of their own preferences, paradigms or, at times, even prejudices. There is a time to engage, to experience, and to get the most out of an experience. Then there should be time to reflect, to consider and to recommend. These are different faculties and should be utilized distinctly as such…each at appropriate junctures. And the questions we ask should reflect what we are really after, not a generic set of standardized aspects.

Relative to learning experiences, evaluations should consider pre-session engagement by bosses and preparation of attendees, the actual experience, action-planning and tracking with bosses or other mentors in the aftermath, results achieved, and therefore an evaluation of the total process, including the briefing given to the learning provider, and the customization done if relevant.

“Presentation skills” of providers are certainly relevant and valuable, but hardly the most critical aspect we should be evaluating. That’s wonderful icing. But did the right cake get baked?

Presenters can wow and enchant, and yet provide little of take-forward value. Or people can be charged up, ready to go, and bosses can be uninterested in their experience or its applicability…thereby blunting the cutting edge of any learning.

The learning experience should be construed as a multi-faceted partnership between boss, participant, experts or coaches, and the organization-at-large. Otherwise there is scant ROI, and we are just tossing money overboard in the hope that some stimulus will “stick”.

So forget happy sheets. Get people to engage first, evaluate second. When they evaluate, evaluate actual outcomes of value to the organization primarily, and the entire process that is to deliver them. Secondarily, check out what people thought of acoustics, food, visuals used, even presentation skills. A total “hit” in terms of being wowed by the presenter, hotel, visuals, can deliver a total dud in terms of learning value.

No reason not to have both, we can argue…but get the split of attention right based on what is really essential. Let’s sweat the real stuff first…and the “surround sound” next. First value, then sizzle!