Do the vast majority of people who pass your pre-employment personality tests turn out to be exceptional performers?
If you answered “no” then your tests aren’t testing.
Recruiters and hiring managers are led to believe people who pass their personality tests will be successful. Unfortunately, practical experience shows about 50 percent of employees and 70-80 percent of managers still fail to meet expectations.
It’s a hard concept to grasp, but don’t be fooled by statements like: “The XYZ is not a hiring test … but it can be used to help make hiring decisions.” That’s like saying, “Ignore the rattle … the snake’s harmless.”
Cause? What cause?
Here is an example of traits often found in personality tests: dominance, compliance, extraversion, judgment, sensitivity, curiosity, conscientiousness, humility, and determination. First, we’ll show you a silly-science example:
- Divide producers into groups (e.g., high and low performers);
- Give both groups the same personality test;
- See which scores differ; and finally,
- Use candidate scores to predict group membership.
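Using entirely invented trait scores (every name and number below is hypothetical), the four steps above can be sketched in a few lines of Python. Note how step 4 quietly turns a group-level comparison into an individual prediction:

```python
from statistics import mean

# Steps 1-2: two pre-labelled performance groups, same personality test.
# All scores are made up for illustration.
scores = {
    "A-list": {"dominance": [8, 9, 7, 8], "curiosity": [4, 5, 4, 5]},
    "B-list": {"dominance": [5, 6, 5, 6], "curiosity": [8, 7, 8, 7]},
}

# Step 3: see which average scores differ between the groups.
for trait in scores["A-list"]:
    a = mean(scores["A-list"][trait])
    b = mean(scores["B-list"][trait])
    print(f"{trait}: A-list {a:.1f} vs B-list {b:.1f}")

# Step 4 (the unjustified leap): classify a candidate by whichever
# group mean their score sits closer to -- correlation, not causation.
def predict(trait, score):
    a = mean(scores["A-list"][trait])
    b = mean(scores["B-list"][trait])
    return "A-list" if abs(score - a) < abs(score - b) else "B-list"

print(predict("dominance", 9))
```

The code "works" in the sense that it always produces an answer, which is exactly the problem: it will confidently label any candidate no matter how meaningless the underlying group difference is.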
After impressive number crunching, suppose the A-list group had higher average dominance, compliance, and extraversion scores; the B-list group had higher average curiosity, conscientiousness, and determination scores; and both groups had the same average judgment, humility, and sensitivity scores.
Is this enough evidence to use the results for selection or promotion? Noooo.
Anyone can compare two sets of numbers and tell you whether they correlate, but it takes careful study to know whether A actually leads to B.
For example, skirts and stock markets tend to move up and down together, beach ice cream sales and shark attacks tend to move together, and watermelon sales and temperature move together. But, skirts do not cause the market to change, sharks do not buy ice cream, and selling watermelon does not cause it to be hot.
You can probably think of many others, but the most important statistical concept is, “If it does not cause, you need to pause!”
True professionals know beforehand the factors they want to measure; then they use statistics to compare scores with performance and try to prove themselves wrong! It may seem backwards, but since the future is murky and uncertain, it is better to reduce mistakes than to chase perfection.
Explaining things after the fact is creative storytelling. Professionals make an informed prediction, collect data, and try to disprove it.
Returning now to our example … we already discussed why throwing things against the wall to see what sticks is unprofessional. Now let's consider the Lake Wobegon effect; that is, the men are all strong, the women are all pretty, and the children are all above average.
Let’s suppose in our previous example that shoe size was one of our factors. We know individual shoe sizes in both Group A and Group B ranged from size 6 to 12. However, Group A folks averaged size 8 and Group B averaged size 10. Does that mean an applicant wearing a size 9 will become a member of Group A? A size 12 a member of Group B?
Nope. And, nope. Group-level data tells us about groups … not about individuals! Bad analyst! Bad!
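A toy illustration of why: shoe sizes below are invented to match the example's averages (8 and 10), but the two groups' ranges overlap almost completely, so a single individual's size tells you nothing about which group they belong to.

```python
from statistics import mean

# Hypothetical sizes chosen so Group A averages 8 and Group B averages 10.
group_a = [6, 7, 8, 8, 9, 10]
group_b = [8, 9, 10, 10, 11, 12]

print(mean(group_a), mean(group_b))  # 8 10

# A size-9 wearer sits comfortably inside BOTH groups' ranges,
# so the group averages say nothing about that individual.
print(9 in group_a, 9 in group_b)  # True True
```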
How about this? There are four people in Group A and 10 people in Group B. Aside from the problems we already discussed, can we compare the two groups? Nope again. One person in Group A has a 25 percent impact on the group's overall score, while one person in Group B has only a 10 percent impact. Furthermore, the group sizes are so small it would be silly to think the scores would generalize to all candidates. It takes at least 25 subjects (preferably hundreds) to draw reasonable conclusions. No soup for you, analyst!
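The weighting problem is simple arithmetic: in a group of n people, each member contributes 1/n of the group's average. A quick check with the example's group sizes:

```python
# Each member of a group of n drives 1/n of that group's average score.
group_sizes = {"A": 4, "B": 10}
for name, n in group_sizes.items():
    print(f"Group {name}: one person drives {100 / n:.0f}% of the average")
# Group A: one person drives 25% of the average
# Group B: one person drives 10% of the average
```

In the smaller group, one unusual individual can shift the whole group's profile, which is one reason tiny samples should never anchor a selection test.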
Oh, yes … one more thing. Can we trust someone with a high score in the judgment trait to be smart? Get real! Most studies show less than a 1 percent relationship between personality scores and cognitive skills, and about a 10 percent relationship with interpersonal behaviors. Why? When people take a self-descriptive test, you never know whether they are honest, trying to make a good impression, delusional, clueless, and so forth. If you need someone who is smart, give them a problem to solve; don't just ask them to tell you how smart they are!
Importance of being wrong less often
It may sound counterintuitive, but it is easier to reduce the number of bad hires than it is to find superstars. The future is murky, filled with unpredictable events that elude even Carnac the Magnificent. Folks who expect 100 percent hiring and promotion accuracy are going to be frustrated. No system known to humankind can perfectly predict the future. There are just too many uncontrollable variables.
The present, however, is more tangible. So, instead of trying to ensure perfect success, it is easier to reduce test error by screening out unskilled people. It goes without saying that manager-employee compatibility is very important; but, in addition to personality factors, organizations expect employees to have cognitive abilities, motivations, and so forth. This is the 20 percent that delivers 80 percent of job results.
Summary: silly-science is more than bad practice
Selecting or promoting people based on silly-science is more than just bad practice. It's unethical, irresponsible, and unprofessional. Qualified people are rejected, unqualified ones are hired or promoted, and the potential for legal action increases. And it could get worse: what do you think will happen when all those incompetent employees decide they should be promoted to management?
Reducing the odds of making a wrong decision requires tests, interview questions, application blanks, and so forth that are grounded in a solid theory of job performance; that is, they measure things that cause high or low performance. If you cannot prove with certainty that you are measuring factors that cause performance, you will never graduate from the half-wrong club.
“If it does not cause, you need to pause!”