Business Psychology

Article No. 352
Business Practice Findings, by James Larsen, Ph.D.

What Are We Measuring?

Selection tests are examined with some surprising findings.

Imagine a group of high school boys trying out for the varsity basketball team. They're all lined up in the gym, and one-by-one, they step up to a ruler taped on the wall. Standing flat on their feet, they reach as high as they can, and the coach notes the place on the ruler they touch. Next, they crouch, jump, and touch the ruler again at the peak of their jump. The coach also notes this place, and the next boy steps up to the wall.

The coach has not told them why they are doing this, and so the boys easily fall into two groups: one that figures out the purpose of the test and one that doesn't. The boys who realize that jumping ability is being measured give it all they've got and jump as high as they can. Those who don't can be expected to withhold some effort. This is known as a jump-reach test, and a boy would have to be pretty dense not to realize that it measures jumping ability.

Assessments to measure jumping ability are easy to figure out. Assessments that business owners use to help them select employees are not. Every day, business owners conduct selection tests and job interviews, carefully observing and noting what they see, hear, and feel, and just as with the young basketball players, candidates who figure out what is being measured have a distinct advantage over those who do not. We can see it, and we call it preparation for the interview. People hunger for insight into what we are looking for, and it should come as no surprise that people vary in their ability to ascertain the factors being measured in an evaluative setting. That's one finding from research conducted by Anne Jansen of the University of Zurich. She even gave this ability a name: the ability to assess situational demands.

Jansen worked with 124 young adults who agreed to participate in a job selection simulation that lasted all day. It consisted of interviews and a variety of assessment exercises. They were motivated to do their best because they believed it was good preparation for real employment assessments they would soon face, and there were also cash prizes for those who did best. One thing they didn't know was that their ability to assess the situational demands of the assessment experience was also being measured. Jansen did this simply by asking people after each exercise to list the factors they thought were being measured and then comparing those lists to the lists the assessors were actually using. Their performance in assessing these demands across all the exercises was rated, and ratings varied from poor to excellent. The overall average rating was 1.54 on a scale from 0 (poor) to 3 (excellent). This finding suggests that it was not easy for the young adults to figure out what was being measured and that people varied greatly in their ability to do so.

Next, she noted the outcome of the overall evaluations. Recall that the young adults were competing and trying to do their best in the assessment, and their performance in this all-day experience varied considerably. Some people did very well, and others not so well. Since all the participants were also employed, Jansen contacted their supervisors and asked them to rate their performance back on the job.

Not surprisingly, the people who did best on the job simulation exercises also received the highest ratings from supervisors. The job simulations and interviews seemed to be measuring important job skills. People who had more of these skills performed better on their jobs. But there was a problem. The people who did best on the job simulation exercises had also scored highest on assessing situational demands. Jansen now had an interesting question to ask of her data: Which score was more important in predicting actual performance back on the job, the score on the job simulation exercises or the score for assessing situational demands? It was the score for assessing situational demands.

Next, Jansen controlled for this ability to assess situational demands and looked at her data once again. Simply put, she statistically removed the influence of this ability from the data and then noted whether the top-scoring individuals on the interviews and simulation exercises were still the ones who received the highest ratings from supervisors back on the job. She found they were not. Jansen was left with a single score that consistently predicted higher supervisor ratings back on the job: the ability to assess situational demands.
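For readers curious what "controlling for" a factor means in practice, here is a minimal sketch using simulated data, not Jansen's actual dataset or analysis. It assumes the common "regress out and correlate residuals" approach (a partial correlation): the simulation score looks like a good predictor of job ratings until the shared influence of the ability to assess situational demands (ATSD) is removed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for 124 participants; effect sizes are invented
# purely for illustration.
n = 124
atsd = rng.normal(size=n)  # ability to assess situational demands
simulation = 0.8 * atsd + rng.normal(scale=0.6, size=n)  # driven by ATSD
job_rating = 0.7 * atsd + rng.normal(scale=0.6, size=n)  # also driven by ATSD

def residualize(y, x):
    """Remove the linear influence of x from y via ordinary least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Raw correlation: simulation scores appear to predict job ratings...
raw_r = np.corrcoef(simulation, job_rating)[0, 1]

# ...but after controlling for ATSD, the relationship largely vanishes,
# because ATSD was driving both scores.
partial_r = np.corrcoef(residualize(simulation, atsd),
                        residualize(job_rating, atsd))[0, 1]

print(f"raw r = {raw_r:.2f}, partial r = {partial_r:.2f}")
```

In this toy setup the raw correlation is sizable while the partial correlation sits near zero, mirroring the pattern Jansen reports: once the ability to assess situational demands is accounted for, the simulation score loses its predictive power.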

We try hard in our businesses to measure the knowledge, skills, and abilities (KSAs) of candidates we consider for employment. We intend to hire people with the best KSAs. Now, along comes Jansen's research, and we are left to wonder: Are we really measuring knowledge, skills, and abilities, or are we actually only measuring people's ability to assess the factors we are trying to measure? Are candidates who correctly guess what we're looking for, and then display exactly what we want to see, tricking us into thinking they have superior knowledge, skills, and abilities for the jobs we're trying to fill when they do not? Well, why wouldn't they? A job is at stake. Are we actually turning away people who are better equipped with KSAs for the jobs we want to fill? This is discouraging.

Another implication of Jansen's work, however, is not so disheartening. If it's true that the ability to assess situational demands predicts job performance, and that our selection procedures measure this factor (and perhaps nothing else), then we are hiring people who will continue to use this ability to guide their actions in their jobs. Supervisors, apparently, are finding that this feature of performance leads to superior ratings, and Jansen believes many work settings need employees who have this skill, especially settings where employees are confronted with novel or unpredictable situations. So that's good. After all, the alternative is that employees missing this ability will guess incorrectly in such settings and apply behaviors that are not appropriate for the situation. And that's bad.

What to do?

One choice is to remove the obscurity in our selection procedures and state clearly what factors we are measuring. Of course, we've never done that before, and we've no idea how people will react. Perhaps we could dip our toes into the water of this uncertainty by stating our measurement factors sparingly at first, seeing how people respond, and noticing what kind of employees we get as a result. It's at least something we can try.

Reference: Jansen, Anne, Klaus Melchers, Filip Lievens, Martin Kleinmann, Michael Brändli, Laura Fraefel, and Cornelius König (2013). Situation Assessment as an Ignored Factor in the Behavioral Consistency Paradigm Underlying the Validity of Personnel Selection Procedures. Journal of Applied Psychology (currently in press).

© Management Resources

See Also:
Fooling Us in the Interview
To Hire the Right Person