Why Standardized Assessments Should be the First Step in Your Hiring Process

Jiaying Law

People Scientist

As we enter the era of information and technology, recruitment methods are no longer limited to written documents. We now have various options for collecting information and assessing candidates' abilities, the most common of which are standardised assessments.


However, you may still rely on resumes, cover letters, and motivation letters as the initial documents to request from candidates. Beyond the surface-level discrimination such documents can invite, these unstandardised hiring methods can also lead us to form inaccurate judgements from a predictive validity perspective.

In this article, we will touch upon the two topics below:

  • The consequences of having the wrong screening methods
  • How can placing a standardised assessment as the first step help?

Two consequences of having the wrong candidate screening methods

Let's begin with a small experiment. The pictures below show five CVs that company X received for a junior software engineer position. Your goal is to screen these CVs and decide who qualifies for the next step in the recruitment process, ultimately finding the best candidate for the position in terms of their future work performance.

Before you make a decision, ask yourself the following two questions:

  1. Which skills or pieces of information did you treat as the most important predictors of future job performance when advancing one person to the next step of the hiring process but not others?
  2. How did you reach your decision? Did their education, past experience, school projects, or certifications affect it?

Problem #1: Different kinds and amounts of information collected

Despite the many online resume aids that help create a 'perfect' resume, candidates still differ from one another in past experience, job-related knowledge, and skills. They are free to choose which sections or skills they want to show to prospective employers.


From our examples above, you can see these candidates have listed various professional qualifications and interpersonal skills. Some candidates submitted their certifications while others provided school projects as past experience.

Types of information

The different kinds of information have different predictive power for future job performance. For instance, resumes serve as a platform that lets candidates list their past work experience.

In fact, pre-hire work experience has a low to zero correlation with future task performance (r=0.06) and turnover (r=0.00; Van Iddekinge et al., 2019). Specifically, not all past work experience generates the same amount of job knowledge or skills for the new job's requirements or work environment (Dokko et al., 2009).

Another piece of information we can find on resumes is candidates' education. In a similar vein, having the same education level does not mean they have acquired equal professional knowledge. That is probably why years of education have only a very weak correlation with future job performance (r=0.10; Schmidt et al., 2016).

Amount of information

On the other hand, an overload of unnecessary information or insufficient useful information on these resumes might hinder our judgement of candidates' capability.

For example, the dilution effect refers to the phenomenon whereby, even when we already have strong, positive evidence for our prediction, adding weaker or less diagnostic information weakens the overall strength of the prediction (Hotaling et al., 2015). When our minds are clouded by too much information, our reliance on the genuinely diagnostic pieces can decrease because of the limited cognitive space available (Dana et al., 2013).

Thus, in order to make a good prediction of candidates' future job performance and screen candidates accordingly, we need the right types and the right amount of information.

Problem #2: Information combined inconsistently

Let's get back to the question: how was your decision made? It might be easy to make a final decision, but it can be tricky to explain exactly how we came to it. That is because we use holistic judgement (a.k.a. clinical or intuitive judgement) when we screen those resumes: we process and combine all the information in our minds (Dawes et al., 1989).

Inconsistent way of processing information

When different kinds of information are collected, it becomes harder to integrate it all consistently across candidates and reach a decision. From the example above, you might already sense how difficult it is to make a consistent judgement across candidates in your mind.

Comparison between two CVs with different qualifications and past experience

For candidate 1, we might think that being a team player and being attentive to detail are advantageous for the position. At the same time, we might think candidate 2, who has a highly analytical mind and project management skills, is also suitable for the position.

However, we don't know how they would judge themselves on the skills they did not share.

This inconsistent integration of information is likely to lead to low predictive validity for candidates' future job performance and to a more serious problem: unfairness. Because we cannot explain exactly how each piece of information was weighted in our minds, our intuitive judgement lacks transparency and clarity. It is unfair to screen candidates purely on personal judgement, such as impressions, gut feelings, or intuition.

Unconscious biases

This is also when unconscious biases (such as confirmation bias, affinity bias, and halo effects) creep in. These biases can lead us to assess candidates' capabilities inaccurately. For example, we might find candidate D the most qualified simply because we also graduated from University of PineHills, regardless of the skills required for the position (affinity bias). The dangerous thing is that this can happen without us even noticing.

In short, when processing and integrating candidates' information, we need to consider a more standardised method to prevent low predictive validity, unfairness, and unconscious bias.

How can placing a standardised assessment as the first step help?

Two benefits of using standardised assessments as the first step in the hiring process

Benefit #1: Collecting the same, more predictive data across candidates

Instead of wondering which sections candidates will choose to include in their resumes, having all candidates complete a standardised assessment ensures that you obtain the same kind of data from everyone. In this way, you take back control of choosing more valid information as predictors of future job performance (e.g., cognitive abilities or behavioural tendencies).

One example of a valid predictor of job performance that standardised assessments can measure is cognitive ability (Schmidt et al., 2016; Salgado & Moscoso, 2019; Ones et al., 2005). Candidates' cognitive abilities tell us directly about their cognitive processing speed, learning ability, flexibility, and problem-solving, which align more closely with the skills or capabilities you are looking for in the position. This improves the match between predictors and criteria and leads to more authentic data collection (Lievens & De Soete, 2012).

Also, by carefully selecting the predictors you consider essential, you can rest assured that you have exactly the information you need to predict future job performance, no more and no less. In this way, you save yourself trouble by not collecting unnecessary, low-predictive data that would cloud your judgement.

Benefit #2: Integrating information consistently across candidates

Compared to holistic judgements, mechanical judgements (a.k.a. standardised, actuarial judgement) are a more valid way to integrate information for decision making (e.g., Grove et al., 2000; Kuncel et al., 2013; Ægisdóttir et al., 2006). Mechanical judgement refers to the method in which decision-makers use decision rules consistently across all candidates to come to a decision (Dawes et al., 1989).

With standardised assessments providing the same kind of information across candidates, you are free to use a decision formula to assess everyone equally. Imagine you have decided that problem-solving ability, flexibility, and collaborativeness are the most important predictors of future success in the engineering position. Now you simply sum the scores from the standardised assessments that measure those predictors and use the total to rank candidates, as sketched below. Because the rule is applied identically to everyone, you don't have to worry about unconscious biases interfering with your decision.
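To make the idea concrete, here is a minimal sketch of such a decision rule in Python. The candidate names, predictor labels, and scores are hypothetical, and the unweighted sum is only one possible rule; in practice you might weight each predictor by its validity for the role.

```python
# Minimal sketch of a mechanical (actuarial) decision rule:
# apply the same formula to every candidate and rank by the result.
# All names, predictors, and scores below are hypothetical and assumed
# to be on a common 1-10 scale.

ASSESSMENT_SCORES = {
    "Candidate A": {"problem_solving": 8, "flexibility": 6, "collaborativeness": 7},
    "Candidate B": {"problem_solving": 7, "flexibility": 9, "collaborativeness": 6},
    "Candidate C": {"problem_solving": 6, "flexibility": 8, "collaborativeness": 9},
}

def total_score(scores):
    """The decision rule: an unweighted sum of the predictor scores."""
    return sum(scores.values())

# Rank candidates from highest to lowest total, identically for everyone.
ranking = sorted(ASSESSMENT_SCORES.items(),
                 key=lambda item: total_score(item[1]),
                 reverse=True)

for rank, (candidate, scores) in enumerate(ranking, start=1):
    print(f"{rank}. {candidate}: total = {total_score(scores)}")
```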

Rather than integrating data in your head and forming a subjective judgement, standardised assessments put all predictors into neatly arranged categories, making it easy to integrate information consistently across candidates.

In this way, prediction accuracy can improve by 10-13%, with mechanical judgement outperforming holistic judgement (Grove et al., 2000; Ægisdóttir et al., 2006).

Moreover, predictive validity can improve by more than 50% when mechanical judgement is used instead of holistic judgement (Kuncel et al., 2013).

To conclude

Requiring documents such as resumes as the first step of recruitment is problematic: we collect different kinds of information and integrate it inconsistently across candidates.

To solve these problems, you need the right type and amount of information, and you need to integrate it consistently across candidates to prevent unfairness and low predictive power.

Placing standardised assessments as the first step of recruitment helps you obtain the same kind and amount of information from all candidates, which in turn allows you to apply mechanical judgement and reach a more accurate decision.

References

Ægisdóttir, S., White, M. J., Spengler, P. M., Maugherman, A. S., Anderson, L. A., Cook, R. S., … & Rush, J. D. (2006). The meta-analysis of clinical judgment project: Fifty-six years of accumulated research on clinical versus statistical prediction. The Counseling Psychologist, 34(3), 341-382. https://doi.org/10.1177/0011000005285875

Dana, J., Dawes, R., & Peterson, N. (2013). Belief in the unstructured interview: The persistence of an illusion. Judgment and Decision Making, 8(5), 512.

Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668-1674. https://doi.org/10.1126/science.2648573

Dokko, G., Wilk, S. L., & Rothbard, N. P. (2009). Unpacking prior experience: How career history affects job performance. Organization Science, 20(1), 51-68. https://doi.org/10.1287/orsc.1080.0357

Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12(1), 19. https://doi.org/10.1037/1040-3590.12.1.19

Hotaling, J. M., Cohen, A. L., Shiffrin, R. M., & Busemeyer, J. R. (2015). The dilution effect and information integration in perceptual decision making. PLoS ONE, 10(9), e0138481. https://doi.org/10.1371/journal.pone.0138481

Kuncel, N. R., Klieger, D. M., Connelly, B. S., & Ones, D. S. (2013). Mechanical versus clinical data combination in selection and admissions decisions: A meta-analysis. Journal of Applied Psychology, 98(6), 1060. https://doi.org/10.1037/a0034156

Lievens, F., & De Soete, B. (2012). Simulations. In N. Schmitt (Ed.), The Oxford handbook of personnel assessment and selection (pp. 383-410). New York, NY: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199732579.013.0017

Ones, D. S., Viswesvaran, C., & Dilchert, S. (2005). Cognitive ability in selection decisions. In O. Wilhelm & R. W. Engle (Eds.), Handbook of understanding and measuring intelligence (pp. 431-468). Sage Publications, Inc. https://doi.org/10.4135/9781452233529.n24

Salgado, J. F., & Moscoso, S. (2019). Meta-analysis of the validity of general mental ability for five performance criteria: Hunter and Hunter (1984) revisited. Frontiers in Psychology, 10, 2227. https://doi.org/10.3389/fpsyg.2019.02227

Schmidt, F. L., Oh, I. S., & Shaffer, J. A. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years. Fox School of Business Research Paper, 1-74.

Van Iddekinge, C. H., Arnold, J. D., Frieder, R. E., & Roth, P. L. (2019). A meta-analysis of the criterion-related validity of prehire work experience. Personnel Psychology, 72(4), 571-598. https://doi.org/10.1111/peps.12335
