Is your hiring test leaky? I mean, does it pass too many unqualified candidates?
I recently did a search for “hiring tests.” Google turned up 84 million listings, Yahoo about 70 million, and Ask…well, I stopped counting after 106 pages. By any standard, selling “hiring” tests is big business. But there is a big difference between a good hiring test and a leaky one.
Leaky tests pass through marginal performers and, depending on the type of job (unskilled, semi-skilled, professional, managerial), can cost organizations between 10 percent and 50 percent of annual payroll. In other words, a leaky hiring test can be the single most expensive mistake an organization makes.
Here are some common-sense guidelines to dry up leaky tests.
Self-reported data leaks
A leaky hiring test often begins by asking employees to answer items describing themselves. It might be given to your own employees or to people around the country with the same job title. Scores are collected, averaged, and used to screen job candidates. Sounds good, right? Wrong.
A couple of things happen when we are asked to describe ourselves. In the best case, scores are idealized self-presentations: how we want people to see us. In other cases, respondents may be completely out of touch with reality or simply faking it. Even when tests include an internal truthfulness scale to flag inconsistent answers, self-reported information exists purely in the mind of the candidate.
Averaging scores is a bad thing. Averages can describe groups, but they cannot describe individuals.
For example, you might believe Californians are flaky and southerners are rednecks. But when you get to know an individual Californian or Georgian as a human being, you usually learn he or she does not match the average stereotype.
Accordingly, when people are assigned to performance groups, it’s rare for any individual to match their group average. Group test scores and individual scores are two entirely different things.
Passing scores are supposed to predict good performance. Failing scores are supposed to predict bad performance.
If you set cut-scores based on averages of high-performing groups, you learn nothing about the low group. In fact, both groups may have a lot in common! Hiring accuracy depends on knowing what makes them different.
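The problem can be illustrated with a small simulation. The numbers below are hypothetical (invented group means, spreads, and sample sizes, not data from any real test): two performance groups whose score distributions overlap heavily, and a cut-score set at the high group's average. The cut-score rejects a large share of the high group while still passing a meaningful slice of the low group.

```python
import random
import statistics

random.seed(42)

# Hypothetical, invented distributions: high performers score a bit
# higher on average, but the two groups overlap heavily.
high = [random.gauss(72, 10) for _ in range(200)]  # high-performer scores
low = [random.gauss(65, 10) for _ in range(200)]   # low-performer scores

# Cut-score set at the high group's average, as a leaky test might do.
cut = statistics.mean(high)

# Fraction of each group that would pass this cut-score.
high_passed = sum(s >= cut for s in high) / len(high)
low_passed = sum(s >= cut for s in low) / len(low)

print(f"High performers passed: {high_passed:.0%}")
print(f"Low performers passed:  {low_passed:.0%}")
```

By construction, roughly half the high performers fall below their own group's average, so the cut-score screens many of them out, while the overlap lets a noticeable fraction of low performers through. The cut tells you where the high group's center is, not what separates the two groups.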
Cause or pause leaks
Not all test factors cause performance. For one thing, many tests, like the MBTI, the DISC, and others, were developed to measure aspects of normal personality. They might work in a communication workshop, but not all normal personality factors apply to jobs. Research shows only three factors correlate with job performance and six with job fit; the rest are either irrelevant or overlapping.
Be wary of tests that ask a few questions and use the answers to comprehensively describe behavior. Watch out for tests that try to describe every aspect of human personality.
Finally, avoid like the plague any test that comes without objective supporting data showing factor scores directly lead to (or somehow affect) job performance. If it does not cause, you need to pause.
How many people do you know who claim they are intelligent, but aren’t? Who claim they are good with people, but aren’t? Who claim they are organized, but aren’t?
Research shows there is almost no correlation between scores on a personality test and skills such as building interpersonal relationships, solving problems, or learning. Organizations that trust personality scores to predict actual skills are destined to make mistakes.
Purple dinosaur leaks
I love you / you love me / we’re a happy family / with a great big hug and a kiss from me to you / won’t you say you love me too?
Ratings are like Barney the Dinosaur relationships: grouping is often based on who the manager likes best. Whenever you ask managers to rate employees’ performance, the scores will probably underemphasize actual job skills and overemphasize sociability.
Performance confusion leaks
Even assuming your managers are blunt-force honest, and ratings are made on jobs where numbers can be tracked (e.g., customer service, production, sales, and so forth), what happens when an employee such as a customer service representative is rated both on quality and on number of customers served, when the two are usually inversely related? If management cannot decide what is most important for an employee to accomplish, then what exactly are you supposed to measure?
Employee similarity leaks
All those high and low performers you are studying belong to a special group: people sufficiently skilled to remain employed. In other words, there may not be enough difference between one employee and the next to measure a useful difference in scores. If you have no identifiable and trustworthy standard for comparison, then how can you set scores?
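This is the classic restriction-of-range problem, and a small sketch makes it concrete. The figures below are hypothetical (an invented applicant pool, test, and hiring threshold): a test score that genuinely predicts performance across all applicants loses much of its apparent correlation once you study only the people who scored high enough to be hired in the first place.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# Hypothetical applicant pool: test score loosely predicts performance.
scores = [random.gauss(50, 10) for _ in range(2000)]
perf = [s + random.gauss(0, 10) for s in scores]

full = corr(scores, perf)

# Now restrict the sample to "employees": only applicants above a
# hypothetical hiring threshold were ever hired and can be studied.
kept = [(s, p) for s, p in zip(scores, perf) if s > 55]
restricted = corr([s for s, _ in kept], [p for _, p in kept])

print(f"Correlation in full applicant pool: {full:.2f}")
print(f"Correlation among employees only:   {restricted:.2f}")
```

Nothing about the test changed between the two calculations; only the sample did. Because everyone below the threshold is missing, the score range among employees is compressed, and the correlation you can observe shrinks with it, which is exactly why validating a test only on current employees understates (or hides) what it measures.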
Same but different leaks
Some vendors offer norms for drivers, salespeople, emergency medical technicians, and so forth, implying they can be used for selection. But look at your own workforce.
Are all your employees with the same title high performers? Do their jobs require identical tasks? Do individuals match the group average? Do high and low performers have different scores? Playing the job-norm card is an effective way to market a leaky test, but it does not help the test user make better hiring decisions.
Leaky tests are great examples of junk science. I advise my clients to take them with a grain of salt.
Leaks come from many sources: restricted score range, conflicting metrics, useless test factors, self-report errors, overemphasizing manager bias, underestimating job skills, trusting personality to predict actual skills, comparing individuals to group averages, and assuming job titles all involve the same skills.
Considering the cost of water these days, don’t you think it’s a good idea to tighten up the faucets?