Having “onboarded” the emerging leader(s) into their
coaching program, the next step – and, incidentally, the second of my
series of 10 “leadership coaching best practices” blog/pulse posts – consists
of using assessment(s). Don’t frown just
yet… This post won’t be a lengthy and convoluted treatise on the scientific
bases of assessments, nor will it cover the importance of their statistical
validity and reliability (you can use Wikipedia for that ;) Instead, we’re considering
assessments as pragmatic tools that help us shed light on the classic leadership
coaching questions: Where are we starting
from? Where are we headed? What should we work on? What should we leverage? And
how should we go about all of this?
What are “Assessments” anyway?
By “assessments”, we basically mean the various evaluation “tools
and methods” that the coach and his/her coachee can use to better
understand – with some degree of quantitative measure – the various dimensions
that have been deemed important for the leadership coaching program. Whereas some
of these tools are “more quantitative” in nature (they yield normalized “scores”
and may benchmark these against larger population statistics), others take a
more “blended” approach (they combine quantitative metrics with qualitative
data; they may include “free-form respondent feedback”, or relate assessed
dimensions to one another without assigning them absolute scores,
etc.). Another important aspect concerns the person “responding” to the assessments:
some are self-evaluations (typically completed by the participant him/herself),
whereas others combine feedback from multiple respondents (these are often
called “multi-rater” or “360” feedback tools).
Why do Assessments matter?
Quite simply, we found that using these tools within Leadership
Development Programs was beneficial for the following reasons:
- They help us (the coaches) get to know our clients better (or better, faster);
- They help the clients (the Emerging Leaders) to get to know themselves better;
- Assessments can also help uncover “blindspots” – strengths or weaknesses that the participant is unaware of;
- They help establish a common “vocabulary and framework” that the coach and the participant use to discuss development matters. This is particularly useful to clarify, differentiate, analyze and communicate leadership dimensions and development priorities; and
- Assessments help put everything in perspective (for the client, the coach, and the other stakeholders – HR, the manager).
In a nutshell, when the client – either on his/her own or
through the guidance of his/her manager – signs up to a coaching program with
the crucial “I need to get better”
question, the use of assessments helps both the coachee and his/her coach clarify
that question into the “what exactly, why
and how”.
What works well, what doesn't?
When using assessments within a Leadership Development
Program, a number of important aspects need to be considered. Based on our
experience coaching several emerging leaders, we found the following six (6) practices
– Do’s and Don’ts, so to speak – to be practical when using assessments:
1. Select the right assessment(s): Given that there are so many different
assessment tools out there, it’s crucial to spend time upstream of the
program selecting the right assessment for the task at hand (i.e., the
development of emerging leaders). Start by identifying your program goals (develop
leadership skills, enhance conflict resolution, understand thyself better,
boost individual/team creativity, plan your career, improve work-life balance,
etc.). Then, shop around! Spend a fair amount of time reviewing existing
assessments (and there are a lot!) in the context of your program goals. In doing
so, pay attention to the following: assessment – and provider – reputation;
scientific background; statistical validity and reliability; completeness of
the assessment results and report; administration tool; costs (to get certified with
the assessment tool, to buy individual assessments, as well as the effort to
administer these assessments); and ease of use (to log on, to complete the
evaluation, to interpret the report). As a final “litmus test”, take the
assessment yourself and self-debrief. Your own feedback should prove quite
decisive in selecting the final set of assessment(s).
2. Select the right number of assessment(s): When it comes to assessments, “one size
fits all” doesn’t exist (beware if someone is trying to sell you just that!). Once
you’ve identified your program focus, and have a good idea of the kind of goals
your participants will be shooting for, it is time to identify the important
“dimensions” that need to be uncovered, understood and leveraged during the program.
If you’re lucky, a single assessment may be sufficient to help you – and your
coachee – evaluate these important characteristics. But it’s not uncommon to
require more than one tool to broaden the assessment spectrum and identify all
of the key metrics that you need. For example, in our Leadership Coaching
Programs for Emerging Leaders, we typically use three assessment tools to cover
all the bases: two psychometric self-assessments to highlight our clients’
“Character Strengths” (personality characteristics to be leveraged) and their
“Preferred Learning Style” (to help us craft meaningful development
experiences), along with a Multi-Rater (aka 360) Leadership Feedback Tool (to
evaluate their proficiency with the targeted “Leadership Competencies”).
3. Selectively use the assessments’ information: Deciding on the number and kind of
assessments is essential… but your work isn’t complete just yet. Most
commercial tools will provide you with comprehensive assessment reports that can
typically be 20 to 30 pages long. If you’ve chosen to use three tools (as we
have), you’ll end up with close to 100 pages of assessment-related data!
Although this information is usually well presented and quite interesting, it’s
simply overkill. Your task is now to “identify and extract” the important
subset of data that is needed by your program, and to leave the rest aside. For
example, in the “Learning Style Inventory” assessment that we use, there’s
information that relates your preferred learning style with different work/life
situations such as working in teams, resolving conflict, communicating or even
choosing a career. This “extra knowledge” is indeed fascinating… but of little
use for the task at hand. And if we try to digest too much data, we may run the
risk of “drowning the fish” so to speak. We recommend that you keep the
assessment report(s) intact, but that you clearly indicate to your participants
which information “subset” will be used during the program, and clarify that
everything else in the report should be deemed as “interesting information to
consult on their own time, but not essential for the coaching program”. The
curious overachievers will read it, and the pragmatic minimalists will ignore
it.
4. Dedramatize and contextualize: This is perhaps the most crucial step in using
assessments with emerging leaders. Since most of them will be confronted with
this kind of personal data and 360 feedback for the first time, you need to
tread carefully and prepare your participants before you share this
information. What works well is to “dedramatize and contextualize” the process:
let them know that, although “scientifically-based”, these assessments provide
an “approximation of reality” and not an absolute and unchangeable truth.
Often, perception will bias the feedback received from others. Sometimes, the
participant will self-evaluate too negatively… or too positively. Tell them
that the data they’re about to receive needs to be considered within the
context in which it was gathered. For example, if you’ve just finished a challenging
project, your teammates may have a different opinion of you than they would
otherwise when you’re at your best… Emphasize that these reports provide “trends
and guidelines” that will help us – the coach and the coachee – better
understand and plan.
5. Debrief the results in two steps: This part is a little more controversial:
contrary to most practitioners, who show the assessment results to their coachee as they
debrief them, we advocate sharing the results with the coachee “prior” to the
debriefing session per se. The reason behind this two-step approach is to
minimize the time required for the assessment debrief, thus helping maximize
the value of the program by keeping more coaching time for the “development”
phase. Here is how we go about it: we send the assessment results
electronically (by posting a PDF version of the report onto the Private Online
Collaboration platform). However, as we share the report with the participants,
we also provide them with specific instructions and guidelines as to how to
read and interpret the results. For example, we suggest that they first read
through the assessment without taking any notes. This “first quick read” allows
them to get a general feel for the results. Then, we recommend that they read
it again, but this time while highlighting the sections of the report that they
either find surprising, that they feel are important, or that they don’t quite
understand. Finally, we schedule a coaching session quickly thereafter
(typically 1-2 days after having shared the results) and debrief the report
with our coachee by paying special attention to their questions and to the highlighted
sections.
6. When possible, use online tools: Lastly, we think that assessments conducted online
are simpler to administer and allow for greater flexibility from the survey
respondents’ point of view. These online tools also lend themselves well to “pre-scripted
guidelines” that can easily be communicated electronically to the program
participants. Again, the principle of “minimizing costs while adding value”
dictates that assessments should be conducted with minimal involvement from the
coach (and/or the program administrator), thus keeping the effort low while
maintaining the focus on developmental coaching rather than on administrative
tasks.
Finally, we found that using assessments – as imperfect as
they are – is in line with the following development philosophy: “You can’t manage and improve what you don’t
measure and plan…”
What do you think?
If you have ideas to share or feedback to provide, please
comment on this post, contact me through our Blog or website (www.crinq.com), or email me directly at: patrick@crinq.com.
Merci, in advance…
Patrick