Archive for the ‘Assessment’ Category

Everyone Looks Good on Paper!

Tuesday, February 1st, 2011

With all the books on writing resumes and cover letters, and all the professional services out there that
actually write job search paperwork for candidates, I’ve come to the realization that everyone looks
good on paper. The question that leaders like you need to answer accurately is, “How will this person
look in six months?”

Some of our client organizations use psychological assessments to help them hire candidates; others do
not. In the current marketplace, with multiple candidates for every open position, you might decide
that it’s a “buyer’s market” out there and every candidate is a great one. You might conclude that your
current hiring process already does a good job of filtering out the worst candidates and netting the best
ones. If that is your firm mindset as you begin reading this, you can stop right now; this Leadersynth
edition will not be helpful to you!

On the other hand, if you look at the success rate for your new hires over the last several years and think
“there must be a better way,” you might learn something here. This is the story of a recent hire for a
client organization that illustrates an effective series of steps to ensure the best hires.

TQX, a mid-sized manufacturing company, was looking to hire a senior design engineer. At the
beginning of their search, they were contacted by Tom, an engineer from a competitor, about possible
job openings at TQX. The COO and CEO were somewhat familiar with Tom and his career at the
competitor. In fact, the COO lived near him; their kids attended the same high school and had played in
the same soccer league growing up. The COO knew Tom as a friend and neighbor, and she was
favorably inclined to hire him. However, the CEO had a nephew who worked at Tom’s current
employer, and the nephew’s sense of Tom was that he was arrogant and difficult to work with,
sometimes responding with inappropriate emotion when upset.

TQX called me in to conduct a pre-hire psychological assessment. The COO, CEO, and I met up front to
discuss the position and what their desires were for the successful senior design engineer candidate. I
left with a clear sense of what they wanted in the position and what they liked and disliked about Tom
from their first interview, as well as their knowledge of him from other sources.

We sent a couple of personality tests and a critical thinking assessment to Tom as pre-work, and then I
met in person with him to conduct an in-depth behavioral interview. In particular, I focused on his
challenges, successes, and failures in previous roles and projects, as well as the aspects of his work that
had motivated him throughout his career. At one point, we talked about the last couple of years at his
current employer, and it was clear that he had been involved in a project that did not go well and had
caused him a great deal of stress. He obviously had not felt supported by senior management in the
project, and, in fact, there had been legal issues for which he was called to testify in court. It had been
very unpleasant for him.

At another point in the interview, I asked him what was motivating him to leave his current employer.
As he answered, he began to tell me about how his wife had been involved in a serious car accident
about a year earlier, and how stressed he felt traveling out of town for projects and leaving his wife and
kids at home as she recuperated. At this point, tears began to well up in his eyes and his voice became
choked with emotion.

He was able to finish the interview, and, after he left the office, I thought about these new data points
he had shared. First, it was clear that he was on the rebound from a very negative work experience in
which he felt unsupported by his management. Second, it was clear that he had a tendency to become overwhelmed, and that he was uncomfortable reaching out for help when he needed it on projects. Third, I began to understand why others might have experienced him as arrogant and difficult to work with, given all the upset related to his wife’s recovery and his extensive travel schedule.

After writing a report describing the key attributes of this candidate, I met again with the CEO and COO
to discuss the report and dig more deeply into our collective overall impressions. They found it helpful
to ask questions and hear me elaborate on his potential strengths and development areas. In particular,
they were a bit surprised that Tom became emotional in the interview. I described him in the report as
someone who expresses his feelings openly, in part due to my interview observations, but also due to
several scales on the personality tests. This discussion helped us pull together the various bits of data
from his workplace, the interview, and the test results that supported the notion that he was likely to
wear his feelings on his sleeve.

Other than these couple of concerns, however, Tom was a very strong candidate. He was smart,
personable, strategic, a creative problem solver, and a hard worker. Just the kind of person they
wanted in the senior design engineer role. To allay any lingering concerns, they circled back to Tom to
discuss their perceptions of his style under pressure. They liked his responses and decided to hire him.

This story, for which the names and other specifics were changed, illustrates several key steps to build
success into any organizational hiring process:

  • Be clear up front about what you want/need in a candidate for a particular position
  • Conduct initial interviews internally to narrow down your list and screen out poor candidates
  • Use additional resources to screen candidates beyond internal behavioral interviews. In this
    case, it involved using an external psychologist to administer tests and an additional interview; it
    also included data from an insider at the individual’s current job and the perspective of a
    neighbor in the community. Using standardized tests, particularly personality and abilities
    assessments, can provide a helpful norm-based perspective for hiring.
  • Discuss the candidate in the light of all the data and use the discussion to explore the possible
    meanings of the data you have on the candidate. Use this time to identify development areas
    for the candidate that, if you hire the individual, you can begin to address when he/she first
    begins to work.
  • Circle back to the candidate to discuss potential concerns and see how he/she responds
  • Make your final decision

Remember: everyone looks good on paper, and most people can manage to look good for at least a first
screening interview. Use objective and standardized sources of data to help you paint a picture of what
a candidate will really be like six months after being hired! Roselle Leadership Strategies offers value-priced Selection Assessments to help build success into your hiring and promotion strategies.

Get bang for your buck with selection assessments!

Saturday, April 3rd, 2010

Especially in these times of economic uncertainty, hiring top talent is of utmost importance.  To this end, organizations employ a wide range of assessment strategies, from a “one-size-fits-all” approach at one extreme to the “more is better” theory at the other.  In this post, I will analyze both ends of the spectrum and use empirical evidence to suggest a best practice.


One-size-fits-all.  Those who trust in one standardized instrument, like the Predictive Index™, Caliper™, or Prevue™, use the same tool to help them make most hiring decisions.  They trust the instrument, are familiar with its content, and have a favorable view of its success rate in weeding out poor candidates and highlighting strong ones.  The advantages of such an approach include simplicity, minimal time and financial expense, and, oftentimes, a straightforward, common vocabulary to use in discussing candidates.  The primary disadvantage is that one simple approach for all levels of hiring decisions can miss important data and perspective, especially for senior positions that are critical to the organization’s success.  In these cases, one instrument is often not sufficient to pick up essential details.


More is better.  Contrasting with the one-size-fits-all proponents are those who believe in using complex assessment centers, or a wide variety of instruments and approaches to gather large amounts of data on each candidate.  These assessments typically include some combination of the following:

  • Structured interviews with multiple consultants
  • Personality tests
  • Abilities tests, including IQ tests
  • Work simulations (either tailored to the specific organization, or an “off the shelf” version designed for a marketing, engineering, or other functional area position)
  • Interpersonal role plays (individual, team)
  • Administrative in-basket exercises


The advantage of using such an approach is that the assessment views candidates from a variety of perspectives in order to draw a conclusion about them.  The instruments used typically provide a much greater level of depth than the one-size-fits-all, minimalist approach.  The multiple assessment components also offer a good deal of “face validity” in that it seems logical that multiple components would provide a more valid perspective than using a single test.


While these advantages are appealing, there are many disadvantages to the more-is-better approach.  The cost per candidate can be prohibitive, especially if the assessment uses custom-designed work simulations, in-basket exercises, and role-plays.  The results across the various components often conflict and, consequently, are confusing.  And the validity of these complex assessments is not significantly greater than that of the one-size-fits-all approach.  Let us look at each of these disadvantages separately.


Prohibitive cost.  The up-front fees for designing and implementing an assessment center run somewhere in the five-figure range.  Even when formal assessment centers are not used, the cost per candidate for all the pre-work testing, the multiple interviews, and the in-basket, simulation, or role-play observers and raters is typically several thousand dollars.


Conflicting, confusing results.  This happens because candidates usually do not respond equally well to each of the various assessment components.  Some are not great actors, for example, so they struggle in the role-plays or simulations.  Perhaps they score well in the work simulations, but their personality testing raises red flags.  Or they impress people in the interviews, but fail miserably in the in-basket exercise.  A related issue is that large assessment companies use multiple raters and observers across a group of candidates, often using consultants from geographically dispersed offices.  Despite the best efforts to train each of these consultants consistently on how to observe and rate candidates, their individual biases and nuances in training introduce variability into the scoring process.  This adds to the conflict in results and the confusion in interpretation.


When consulting companies determine the bottom line result/recommendation for such candidates, they usually sound confident in their decision.  As a client, however, you may not know what decision rules led to the conclusion and why they determined that the role-play was more important than the personality testing, for example.  The underlying decision rules, therefore, are critically important to the conclusions drawn, but these are often “hidden” from you as a client.


No real gain in validity.  How can this be, you might ask?  Why else would you include so many different assessment components, if not to substantially increase the validity?  From a strict statistical perspective, the validity of a set of assessment components is not appreciably higher than the validity of its most valid single component.


We know from employment research that the most valid single predictor of future job performance is general mental ability.  For this reason, any selection process should use some measure of mental ability as a primary differentiator between candidates.  From there, the question is which assessment component(s) add substantial incremental validity above mental ability, without prohibitive costs? 


A 1998 review of hundreds of assessment studies found that assessment centers have substantial predictive validity on their own, but add only a 2% increase in validity when combined with a measure of mental ability.  In other words, applicants who score well on measures of intelligence typically also perform well in assessment centers, so there is little to no additional value in adding this substantial cost to the selection process.


On the other hand, both work sample tests and structured interviews add some incremental value when used in conjunction with mental ability.  A measure of mental ability plus a well-designed work sample test will predict 42% of the variance in a candidate’s performance, while mental ability combined with a structured interview will predict 40%.  However, the cost of developing a work sample test for a specific occupation in a particular organization is significantly higher than conducting a well-structured interview.  Again, the question is whether adding significant cost is worth the additional two percentage points of validity.
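If it helps to see the arithmetic behind incremental validity, here is a minimal sketch in Python.  The coefficients are illustrative assumptions chosen to mirror the magnitudes discussed above, not figures taken from the cited review.

```python
import math

def multiple_r(r1, r2, r12):
    """Multiple correlation of two predictors with job performance,
    given each predictor's validity (r1, r2) and the correlation
    between the two predictors themselves (r12)."""
    r_squared = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_squared)

gma = 0.51        # assumed validity of a general mental ability test
interview = 0.51  # assumed validity of a structured interview
overlap = 0.30    # assumed correlation between the two predictors

combined = multiple_r(gma, interview, overlap)
print(f"Mental ability alone explains {gma**2:.0%} of performance variance.")
print(f"Adding a structured interview raises that to {combined**2:.0%}.")
```

Notice how quickly the gain shrinks as the second predictor overlaps more with the first; that overlap is exactly why piling on additional components adds so little incremental validity.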


How do you get the most bang for the buck?  For most of the people they hire, our clients want an individual whose background, style, and skills match the demands of the role and give that person the potential to move beyond it.  As a leader in your organization with responsibility for hiring new employees or promoting existing ones, you must answer for yourself the question of how to get the most from your selection assessment process.


At Roselle Leadership Strategies, we take a balanced perspective in the assessments we provide to our clients.  That is, we include tests of mental ability and a structured interview, as the data above suggest.  We typically include personality tests to help illustrate the candidate’s style and fit with the culture.  However, we avoid the excess of work simulations, in-basket exercises, and role-plays that can add major cost with little additional validity.  Moreover, as noted above, the various components often confuse and obfuscate the bottom-line results.  We try to get the clearest picture possible of the candidate, using responses from written and verbal interview questions, personality assessment, and mental abilities testing, and then we map that picture to the current and future needs of the organization.


Assessment results should act as a catalyst for further dialogue about a candidate, not pronounce indisputable judgment on him or her.  It is important to avoid situations like the one we recently encountered with a very successful director of a non-profit organization who went through an assessment.  The conclusion reached by the firm conducting the assessment was that he had “a 9 percent chance for success” in his current role.  Such a pinpoint conclusion by any vendor, in light of the limitations of the assessment components just outlined in this post, was either ignorant or arrogant, or both.


The bottom line is that we suggest you use assessment tools that fit your budget and help you make better hiring decisions.  Understand that an assessment recommendation is a helpful part of an overall selection process, but should not be presented or interpreted as the final arbiter of a candidate’s chance of career success in your organization.

The Three R’s of High Performance Leadership

Sunday, February 7th, 2010

Although leaders do not need to be highly effective at every behavioral aspect of building relationships, achieving results, and demonstrating resourcefulness, they must maximize the strengths they have and minimize the impact of their weaknesses.  Using a multi-rater instrument like the FULLVIEW™ provides leaders with the in-depth perspective they need to identify their strengths and development areas as seen by their manager, peers, direct reports, and others. 

Once individuals receive feedback, how can you best work with them to leverage their strengths and ensure that they have no major gaps that might derail them?  In our work with client organizations, one approach we employ regarding this question is to provide Development Assessments to a group of leaders or high potentials.  This type of assessment typically includes FULLVIEW™ 360-degree feedback, personality testing, and an in-depth interview with each individual. 

The feedback results for leaders or high potentials on our FULLVIEW™ tend to cluster into a six-cell matrix, with different development strategies attached to each, depending on their strengths and deficits across the Three R’s (relationships, results, and resourcefulness):

|  | Some strengths in 1-2 areas | Solid strengths in all 3 areas | Standout strengths in 1-2 |
| --- | --- | --- | --- |
| Minor deficits in one or more areas | Spend minimal development time, focus on strengths, consider band-width limitations | Develop strengths, spend minimal time on deficits | Leverage strengths, spend minimal time on deficits |
| Major deficits in one or more areas | Spend minimal development time, look to replace | Develop strengths, look for underlying issues in deficits, consider replacing | Leverage strengths, look for underlying issues in deficits |

KEY: red = low priority, amber = moderate priority, green = high priority development

Based on this matrix, we recommend minimal investment of development time for individuals with some strength in one or two areas (first column); if they have minor deficits across the three core competencies, we often encourage spending some time building their existing strength areas, if their potential has not been fully tapped.  For example, if you have a few young leaders who exhibit some ability to achieve results and can be resourceful on occasion, you might conduct other personality and ability/skills tests to determine their potential for growth.  If the results are encouraging, you could decide to invest some development time focused on their strengths.  For leaders with only some strength in one or two areas, but major deficits in the other areas, you would probably not invest development resources but, instead, look to replace them.

On the other hand, for leaders who show a balance of solid strengths across all core competencies (second column), your inclination usually should be to invest resources to build on those strengths.  When they have only minor deficits, our suggestion is that you ignore these in favor of focusing on their strengths.  If they have major deficits, however, you will serve the organization best by looking for underlying causes.  These leaders are ones you probably want to keep, but you will need to invest in improving their deficit areas. 

Often, there are underlying causes like organizational obstacles, poor fit with the boss, poor fit with primary responsibilities, or internal obstacles (perfectionism, defensiveness, low self-esteem, etc.) that get in the way in these situations.  Making changes, removing obstacles, and bringing in a coach are ways to remove or minimize underlying issues.  If these steps seem too costly, or they show little impact early on, consider replacing the individual.  In using our Leading Fearlessly™ model with such leaders, we often discover that underlying irrational fears and faulty beliefs cause or contribute to their ineffective reactions to situations.  Bruce Roselle’s book, Fearless Leadership (2006), provides a six-step approach for recognizing ineffective reactions and replacing them with high performance behaviors.

For leaders with standout strengths (third column), get out of their way as much as possible.  Help them leverage strengths, but do not force them to spend much of their time in the deficit areas, especially if they are minor.  However, leaders with standout strengths and a major deficit or two are usually great candidates for coaching.  This is only true, however, if they recognize the deficits, take personal responsibility for them, and express genuine interest in becoming more effective in these areas.

All leaders with minor deficits (first row in the matrix), whether they have only one or two strengths or standout strengths, would also benefit from training in the areas of leadership effectiveness.  Learning and practicing a set of core skills and understanding the perspectives of high performance leaders can raise an individual’s level of overall effectiveness and offset minor deficits.  As an example, our clients often bring us in to provide sessions from our Good Managers to Great Leaders™ workshop series to enhance strengths and support development areas.  Although leaders do not need to be highly effective at every behavioral aspect of building relationships, achieving results, and demonstrating resourcefulness, the primary goal of any leadership development initiative should be to maximize the strengths they have and minimize the impact of their weaknesses. 

Core Capabilities of Effective Leaders

Tuesday, January 12th, 2010

For a number of years now, leadership theorists and designers of multi-rater feedback instruments have discussed and debated the number and nature of core capabilities that exist in an effective leader. Several have settled on five dimensions or five practices; others have identified as few as two and as many as eight dimensions. Most of these have not provided proof of the validity of the core capabilities they hypothesize exist. Consequently, it is difficult to know if their dimensions reflect real, measurable leader capabilities.

In 2008, Roselle Leadership Strategies, Inc. embarked on a rigorous validation study of the FULLVIEW Feedback Inventory™, a 360-degree instrument developed in 1996. When we first constructed the instrument, we determined through our experience with high performing leaders and our analyses of other multi-rater instruments that there were three fundamental capabilities important in leaders at all levels in an organization. We called these Building Relationships, Solving Problems, and Taking Initiative. Then, we expanded these into 12 competencies, which we measured with 48 behavioral items and 14 sections of anecdotal comments.

Our construct validation study is now complete, and it includes FULLVIEW Feedback Inventory™ results from more than 300 leaders in various organizations. For the purposes of the study, the researcher (Colleen McGinnis) conducted separate factor analyses for each rater perspective. That is, she analyzed Self, Manager, Direct Report, and Peer data individually and searched for the best-fitting number of core capability factors. In each case, she ran factor analyses for two-, three-, four-, and five-factor solutions. The results indicated that, in fact, three factors best represented the data clusters across the 48 behavioral items for each perspective (self, manager, direct report, and peer).
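For readers curious about the mechanics, the sketch below shows one common way to compare two- through five-factor solutions separately for each rater perspective, using cross-validated likelihood as the fit criterion.  It is illustrative only: the `ratings` dictionary, perspective names, and random stand-in data are assumptions for the example, and scikit-learn’s generic factor analysis is not the researcher’s actual procedure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in data: one (leaders x 48 items) matrix per rater perspective.
ratings = {p: rng.normal(size=(300, 48))
           for p in ("self", "manager", "direct_report", "peer")}

for perspective, X in ratings.items():
    # Mean held-out log-likelihood; cross-validation keeps extra factors
    # from winning simply by adding parameters.
    fits = {k: cross_val_score(FactorAnalysis(n_components=k), X, cv=5).mean()
            for k in (2, 3, 4, 5)}
    best = max(fits, key=fits.get)
    print(f"{perspective}: best-fitting number of factors = {best}")
```

With real rating data rather than random noise, the factor count that maximizes held-out fit for each perspective is the analogue of the three-factor result reported above.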

Because the correlations of specific behavioral items from the 48 differed somewhat by self, manager, direct report, and peer perspective, the researcher created separate core capability data sets for each. She named them in a way that she felt best captured the content of the items in each set. It makes sense that the items would differ slightly across these four rater groups, because they each represent a unique perspective on the person they rated. For example, an item like “utilizes a wide range of approaches to persuade people, provide feedback and coaching, show appreciation,” fell into four different core capability categories, reflecting variations across the four perspectives:

  • Inspires Others (self perspective)
  • Achieves Results (direct report perspective)
  • Exhibits Resourcefulness (manager perspective)
  • Uses Resources Wisely (peer perspective)

Despite the variability of a few items like this one, however, the results clearly identified three factors across the four distinct rater perspectives. Using the criterion cut-off that at least three of the four rater perspectives must include an item in the same core cluster, the results showed 16 items in one category the researcher named “Achieve Results,” 16 items in a second category she named “Build Relationships,” and 8 in a third category she named “Adaptive Resourcefulness.” The three-factor analysis, then, accounted for 40 of the 48 total items—solid proof that our initial supposition of three core capabilities was valid.
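As a minimal sketch of that cut-off rule (the `assign_item` helper and its inputs are illustrative, not the study’s code or data), an item joins a core capability only when at least three of the four perspectives placed it in the same cluster:

```python
from collections import Counter

def assign_item(labels_by_perspective, min_agree=3):
    """Return the agreed-upon cluster label, or None if the four
    rater perspectives did not agree enough (a 'miscellaneous' item)."""
    label, count = Counter(labels_by_perspective).most_common(1)[0]
    return label if count >= min_agree else None

# Three of four perspectives agree, so the item is assigned:
print(assign_item(["Build Relationships", "Build Relationships",
                   "Build Relationships", "Achieve Results"]))
# All four perspectives disagree, so the item stays miscellaneous:
print(assign_item(["Inspires Others", "Achieves Results",
                   "Exhibits Resourcefulness", "Uses Resources Wisely"]))
```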

The remaining eight items did not cluster significantly with any of these three core capabilities, but included important behavioral observations like “exhibits a model of healthy life balance and wellness,” “analyzes multiple perspectives before making decisions,” and “recognizes impact of actions on whole system.” Since these eight items did not exhibit a strong statistical relationship to each other, it was clear that they did not represent a fourth distinct leadership capability. One important outcome from this study is that we will revise or replace each of these “miscellaneous” behavioral items to create a better fit with the three core capabilities. All of the items on the revised FULLVIEW instrument (available at the end of 2009), then, will more closely align with these three validated factors of leadership effectiveness.

The results of this factor analytical study illustrate why we are so confident at Roselle Leadership Strategies when using the core competencies of the FULLVIEW Feedback Inventory™ in our Development Assessments, our Good Managers to Great Leaders™ workshop series, and our executive coaching engagements. These three core capabilities are the most foundational and critical to leader success across multiple levels in any organization. We have started to describe these three capabilities as the “Three R’s of Leadership” because they focus on

  • Relationships
  • Results
  • Resourcefulness

Is your 360-degree feedback system actually valid?

Tuesday, December 15th, 2009

One of the most popular recent advances in leadership development is the 360-degree feedback system. Many organizations use this type of assessment to collect information from various people who can accurately rate the performance of a specific manager with whom they work. In the eyes of most users, the strength in such instruments is their capacity to capture multiple perspectives that most often include manager, self, peers, and direct reports. If your organization has purchased and used a 360-degree instrument, do you know the validity of the feedback?

The results of these surveys provide the basis for important human resource decisions—such as individual development goals, promotion, and training emphases—that, cumulatively, can make or break the success of an organization over time. Most organizations do not use multi-rater feedback for selection or promotion decisions, in large part because the results rely on subjective perspectives, and the instruments are not designed for these purposes. Despite the popularity of these instruments, however, the majority do not report validity data to confirm that they actually measure the underlying leadership factors they purport to assess.

What is Multi-rater/360-degree Evaluation? Although leadership performance historically has been measured through performance appraisals delivered solely by an individual’s supervisor, the last three decades have seen the emergence of multi-rater, 360-degree feedback systems. Effective leadership is a complex construct, requiring leaders to master a host of sophisticated cognitive, strategic, and interpersonal skills. Starting in 1967, researchers began to note that using only a single rating source to evaluate leadership might not provide all of the information necessary to evaluate a leader’s performance properly. Since then, relevant research has convincingly demonstrated that a single assessment of a leader, either by self-evaluation or by a supervisor, is inadequate to capture that leader’s performance fully. First, individuals are not always the most astute evaluators of their own performance. Self-ratings of any behavior are often widely different in comparison to ratings of that same behavior when completed by another observer (Atwater & Yammarino, 1992). Second, various rating perspectives (i.e., supervisor versus peer, manager versus direct report) actually assess different underlying performance constructs (Turkel, 2008). That is, individuals in differing organizational roles have limited opportunities to observe a specific individual’s behaviors, so we need multiple perspectives to measure performance accurately. However, this leaves the question of how to interpret the variation in ratings between raters.

Measuring Validity and Reliability. Concern about inter-rater agreement focuses on the meaning of low agreement across organizational perspectives. If two perspectives disagree substantially in their ratings, the meaning of that discrepancy remains unclear. Theories range from those claiming that the data are inaccurate or meaningless, to those concluding that differing perspectives supply equally valid data. Tornow (1993) suggested that, “the very differences in perspectives among those who provide feedback can enhance the personal learning that takes place.” Therefore, the differences in rater perspectives are not treated as error variance (variation that needs to be reduced), but rather as critical additional information that makes the findings more reliable and gives them deeper perspective.

Further, Scullen, Mount, and Goff (2000) hypothesized that observed variations in ratings might reflect actual differences in performance, because a manager is likely to perform differently in front of diverse groups of people. Specifically, they found that both supervisor and subordinate perspectives capture something unique to those perspectives, but peers do not. They suggest that these rating differences are more a function of true differences in the observed performance than of variations in the observers themselves (bias). Despite the fact that differing perspectives exist on each individual leader, Scullen, Mount, and Judge (2003) also have shown that raters across various perspectives share a common conceptualization of a specific leader’s overall performance.

Knowing that it is crucial to gather multiple perspectives when attempting to create the most accurate possible picture of performance, and with so many instruments from which to choose, how can you know where to start? VanVelsor et al. (1997) provide a comprehensive process for evaluating 360-degree instruments. According to them, an author of this type of instrument must:

1. Attempt to identify the full range of behaviors or skills believed to represent leadership competencies.

2. Provide reliability information regarding whether the instrument items cluster in behavioral competencies that are internally consistent, distinct from each other, and useful for feedback (a common way to check internal consistency is sketched after this list).

3. Provide validity information about whether the scales actually measure the behavioral dimensions they purport to measure (construct validity).
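As one concrete example of the second guideline, internal consistency is usually summarized with Cronbach’s alpha.  The sketch below is a minimal illustration; the `cronbach_alpha` helper and the demo ratings are made up for this example, not drawn from any instrument discussed here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of one competency scale, given an
    (n_raters x n_items) array of scores on the scale's items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up 1-5 ratings from four raters on a four-item competency scale.
demo = np.array([[4, 5, 4, 4],
                 [3, 3, 4, 3],
                 [5, 5, 5, 4],
                 [2, 3, 2, 3]])
print(f"alpha = {cronbach_alpha(demo):.2f}")  # values above ~0.70 are
                                              # conventionally acceptable
```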

If your company is currently using some form of multi-rater feedback, did someone scrutinize it for these three features? The majority of multi-rater feedback providers designed and implemented their 360-degree feedback tools with the assumption that they accurately measure the leadership skills necessary for success in a particular organization. They picked items that logically seemed important to leader success, or they evaluated data they collected on competencies that support or undermine leader success. Some used a combination of logic and data collection, but most did not assess their instruments for validity.


Works Cited

Atwater, L.E., & Yammarino, F.J. (1992). Does self-other agreement on leadership perceptions moderate the validity of leadership and performance predictions? Personnel Psychology, 45, 141-164.

Scullen, S.E., Mount, M.K., & Goff, M. (2000). Understanding the latent structure of job performance ratings. Journal of Applied Psychology, 85, 956-970.

Scullen, S.E., Mount, M.K., & Judge, T.A. (2003). Evidence of the construct validity of developmental ratings of managerial performance. Journal of Applied Psychology, 88, 50-66.

Tornow, W. (1993). Perception or reality: Is multi-perspective measurement a means or an end? Human Resource Management, 32, 221-229.

Turkel, C.C. (2008). Female leaders’ 360-degree self-perception accuracy for leadership competencies and skills. Dissertation Abstracts.

VanVelsor, E., Leslie, J.B., & Fleenor, J.W. (1997). Choosing 360: A guide to evaluating multi-rater feedback instruments for management development. Greensboro, N.C.: Center for Creative Leadership.