Especially in these times of economic uncertainty, hiring top talent is of utmost importance. To this end, organizations employ a wide range of assessment strategies, from a “one size fits all” approach to the other extreme, the “more is better” theory. In the following post I will analyze both ends of the spectrum and use empirical evidence to suggest a best practice.
One size fits all. Those who trust in a single standardized instrument, like the Predictive Index™, Caliper™, or Prevue™, use the same tool to guide most of their hiring decisions. They trust the instrument, are familiar with its content, and have a favorable view of its success rate in weeding out poor candidates and highlighting strong ones. The advantages of such an approach include simplicity, minimal time and financial expense, and, oftentimes, a straightforward common vocabulary for discussing candidates. The primary disadvantage is that one simple approach for all levels of hiring decisions can miss important data and perspective, especially for senior positions that are critical to the organization’s success. In these cases, a single instrument is often not sufficient to pick up essential details.
More is better. Contrasting with the one-size-fits-all proponents are those who believe in using complex assessment centers, or a wide variety of instruments and approaches that gather large amounts of data on each candidate. These assessments typically include some combination of the following:
- Structured interviews with multiple consultants
- Personality tests
- Abilities tests, including IQ tests
- Work simulations (either tailored to the specific organization, or an “off the shelf” version designed for a marketing, engineering, or other functional area position)
- Interpersonal role plays (individual, team)
- Administrative in-basket exercises
The advantage of such an approach is that it views candidates from a variety of perspectives before drawing a conclusion about them. The instruments used typically provide much greater depth than the minimalist, one-size-fits-all approach. The multiple assessment components also offer a good deal of “face validity,” in that it seems logical that multiple components would provide a more valid perspective than a single test.
While these advantages are appealing, the more-is-better approach has serious disadvantages. The cost per candidate can be prohibitive, especially if the assessment uses custom-designed work simulations, in-basket exercises, and role-plays. The results across the various components often conflict and, consequently, confuse. And the validity of these complex assessments is not significantly greater than that of the one-size-fits-all approach. Let us look at each of these disadvantages separately.
Prohibitive cost. The up-front fees for designing and implementing an assessment center run somewhere in the five-figure range. Even when formal assessment centers are not used, the cost per candidate for all the pre-work testing, the multiple interviews, and the in-basket, simulation, or role-play observers and raters is typically several thousand dollars.
Conflicting, confusing results. These arise because candidates rarely respond equally well to every assessment component. A candidate may not be a great actor, for example, and so struggles in the role-plays or simulations. Perhaps another scores well in the work simulations, but the personality testing raises red flags. Or a candidate impresses people in the interviews, but fails miserably in the in-basket exercise. A related issue is that large assessment companies use multiple raters and observers across a group of candidates, often drawing consultants from geographically dispersed offices. Despite the best efforts to train these consultants consistently in how to observe and rate candidates, their individual biases and nuances in training introduce variability into the scoring process. This adds to the conflict in the results and the confusion in their interpretation.
When consulting companies determine the bottom-line recommendation for such candidates, they usually sound confident in their decision. As a client, however, you may not know what decision rules led to the conclusion, or why they determined that the role-play was more important than the personality testing, for example. The underlying decision rules are critically important to the conclusions drawn, yet they are often “hidden” from you as a client.
No real gain in validity. How can this be, you might ask? Why else would you include so many different assessment components, if not to substantially increase the validity? From a strict statistical perspective, the validity of a set of assessment components is not appreciably higher than the validity of its single most valid component.
We know from employment research that the most valid single predictor of future job performance is general mental ability. For this reason, any selection process should use some measure of mental ability as a primary differentiator between candidates. From there, the question becomes: which assessment components add substantial incremental validity beyond mental ability, without prohibitive cost?
A 1998 review of hundreds of assessment studies found that assessment centers have substantial predictive validity on their own, but add only a 2% increase in validity when combined with a measure of mental ability. In other words, applicants who score well on measures of intelligence typically also perform well in assessment centers, so there is little to no additional value in adding this substantial cost to the selection process.
On the other hand, both work sample tests and structured interviews add some incremental value when used in conjunction with mental ability. A measure of mental ability plus a well-designed work sample test will predict 42% of a candidate’s performance, while mental ability combined with a structured interview will predict 40%. However, the cost of developing a work sample test for a specific occupation in a particular organization is significantly higher than conducting a well-structured interview. Again, the question is whether adding significant cost is worth the additional two percent in validity.
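The diminishing return described above falls directly out of the statistics of correlated predictors: when two tests measure overlapping things, the second one adds far less than its stand-alone validity suggests. As a rough sketch (the correlation values below are illustrative figures of the kind reported in the employment-testing literature, not numbers taken from this post), the variance in performance explained by two predictors can be computed with the standard two-predictor multiple-correlation formula:

```python
# Sketch: why stacking assessment components adds little validity.
# Illustrative, assumed values: mental ability correlates 0.51 with
# performance, a work sample 0.54, and the two predictors correlate
# 0.38 with each other.

def combined_r_squared(r1: float, r2: float, r12: float) -> float:
    """Variance in performance explained by two correlated predictors."""
    return (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)

gma = 0.51          # validity of general mental ability alone
work_sample = 0.54  # validity of a work sample test alone
overlap = 0.38      # correlation between the two predictors

print(f"Mental ability alone explains {gma**2:.0%} of performance variance")
print(f"Adding a work sample raises that to "
      f"{combined_r_squared(gma, work_sample, overlap):.0%}")
```

With these assumed inputs, mental ability alone explains about 26% of performance variance and the combined pair about 40%: a real but modest gain, which is why the cost of the second component matters so much.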
How do you get the most bang for the buck? For most of the people they hire, our clients want an individual whose background, style, and skills match the demands of the role, and who has the potential to move beyond it. As a leader in your organization with responsibility for hiring new employees or promoting existing ones, you must answer for yourself the question of how to get the most from your selection assessment process.
At Roselle Leadership Strategies, we take a balanced perspective in the assessments we provide to our clients. That is, we include tests of mental ability and structured interviews, as the data above suggest. We typically include personality tests to help illustrate the candidate’s style and fit with the culture. However, we avoid the excess of work simulations, in-basket exercises, and role-plays that can add major cost with little additional validity. Moreover, as noted above, these components often confuse and obfuscate the bottom-line results. We try to get the clearest picture possible of the candidate, using responses from written and verbal interview questions, personality assessment, and mental abilities testing, and then we map that picture to the current and future needs of the organization.
Assessment results should act as a catalyst for further dialogue about a candidate, not pronounce indisputable judgment on them. It is important to avoid situations like the one we recently encountered, in which a very successful director of a non-profit organization went through an assessment. The conclusion reached by the firm conducting the assessment was that he had “a 9 percent chance for success” in his current role. Such a pinpoint conclusion by any vendor, in light of the limitations of the assessment components outlined in this post, was either ignorant or arrogant, or both.
The bottom line is that we suggest you use assessment tools that fit your budget and help you make better hiring decisions. An assessment recommendation is a helpful part of an overall selection process, but it should not be presented or interpreted as the final arbiter of a candidate’s chance of career success in your organization.