Harmful Impact on Minorities in Recruitment Processes

Jiaying Law

People Scientist

It is a strange phenomenon: more and more companies claim they are lacking candidates, while candidates are also struggling to land a new job. Given that companies have declared a ‘talent war’, it is essential to look into the root cause of the gap between companies and candidates and how to narrow it.

One possible reason is the adverse impact on certain groups that exists within different types of hiring procedures. In the following sections, we will guide you through:

  • What adverse impact is in personnel selection
  • How to detect adverse impact
  • Cultural influences on the adverse impact of popular selection tools

A glimpse into the adverse impact of recruitment processes

Do you ever feel that certain recruitment processes favour you? For example, I might think that a cognitive ability test gives me a better chance of getting to the next stage of the recruitment process, whereas you might think that you would perform better in a personality test. Sadly, this is not just about how we perceive the fairness of recruitment processes.

A score gap or difference among certain groups of people exists in almost every stage of the hiring process, and is known as “adverse impact”. Specifically, it refers to a substantially different rate, either positive or negative, at which members of certain groups (e.g., by race, gender or ethnicity) in the candidate population are rejected and/or accepted for employment, promotion and retention (Biddle, 2017).

For instance, women are more adversely impacted than men in physical ability tests, particularly upper-body strength tests (Biddle, 2017). Another example is that written tests typically disadvantage Black candidates most, followed by Hispanic and sometimes Asian candidates (Sackett et al., 2001; Neisser et al., 1996).

This effect goes beyond the influence of unconscious bias; it is due to the various, almost unavoidable shortcomings that every tool possesses. More specifically, the adverse impact of selection tools can be caused by several factors, including (Biddle, 2017):

  • Chance
  • Implicit measurement problems of the tests (e.g., poor reliability)
  • How the test scores are used and the selection ratio (e.g., ranking or pass/fail cut-offs)
  • Differences in distribution sizes (e.g., 80 men and 10 women in talent pools)
  • Reliable subgroup differences in test-taking in general
  • True population differences in the distribution of the trait being measured (e.g., women are generally more empathetic than men; Mestre et al., 2009).


This is certainly an important point to consider when we choose tools to assist with recruitment, especially within the pre-employment screening process. It can lead to legal problems and cost us a fortune if not handled appropriately. For example, more than 12,000 charges are filed each year with the Equal Employment Opportunity Commission (EEOC) in the US for age discrimination alone, resulting in more than $83.8 million in financial benefits paid to settle the cases in 2021 alone (EEOC, 2021; Fisher et al., 2017).

Scientific detectors of adverse impact

There are several ways to detect adverse impact in selection procedures. The four-fifths rule was first introduced in the ’70s to evaluate the occurrence of adverse impact (EEOC, 1978). It compares the selection ratio of any gender, race or ethnic subgroup to that of the subgroup with the highest selection ratio. If this comparison of selection ratios is less than 80%, we can conclude that there is evidence of adverse impact (Biddle, 2017; Newman & Lyon, 2009).

Let’s do some maths! If 3 out of 10 male applicants (30%) and 5 out of 20 female applicants (25%) were chosen, the comparison between these two subgroups (25%/30%) would be 83%, which indicates no adverse impact toward either subgroup. In contrast, if 5 out of 10 male applicants (50%) and 7 out of 20 female applicants (35%) were selected for a position, the comparison between males and females (35%/50%) would be 70%, which means there is an adverse impact toward female applicants.
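The four-fifths check above is easy to automate. Here is a minimal sketch (the function name and its inputs are our own illustration, not part of any standard library):

```python
def four_fifths_check(selected_a, total_a, selected_b, total_b):
    """Compare two subgroups' selection ratios using the four-fifths rule.

    Returns the impact ratio (lower selection rate divided by higher
    selection rate) and a flag that is True when the ratio falls below
    the 0.8 threshold, i.e., evidence of adverse impact.
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return impact_ratio, impact_ratio < 0.8

# First example from the text: 3/10 men vs 5/20 women selected
ratio, adverse = four_fifths_check(3, 10, 5, 20)
print(round(ratio, 2), adverse)   # 0.83 False → no adverse impact

# Second example: 5/10 men vs 7/20 women selected
ratio, adverse = four_fifths_check(5, 10, 7, 20)
print(round(ratio, 2), adverse)   # 0.7 True → adverse impact detected
```

Both worked examples from the paragraph above reproduce the same conclusions: 83% passes the threshold, 70% does not.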

In the ’90s, as statistical methods advanced, more sophisticated approaches emerged that allow a more detailed analysis of score differences (e.g., significance testing; Hauenstein et al., 2013). Simply put, they compare the differences in test scores or hiring rates between subgroups using statistical analysis. A detailed explanation of these methods is beyond the scope of this article.
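To give a flavour of the significance-testing idea (this is a generic two-proportion Z-test sketch, not any one published procedure), we can test whether two subgroups’ hiring rates differ more than chance would allow:

```python
import math

def two_proportion_z(selected_a, total_a, selected_b, total_b):
    """Two-proportion Z-test on subgroup selection rates.

    Returns the Z statistic; |Z| > 1.96 is conventionally taken as a
    statistically significant difference at the 5% level.
    """
    p_a = selected_a / total_a
    p_b = selected_b / total_b
    # Pooled selection rate under the null hypothesis of equal rates
    p = (selected_a + selected_b) / (total_a + total_b)
    se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# The four-fifths example that flagged adverse impact: 5/10 vs 7/20
z = two_proportion_z(5, 10, 7, 20)
print(round(z, 2))   # 0.79 — below 1.96, so not significant
```

Note how the two detectors can disagree on small samples: the four-fifths rule flags this gap (70% < 80%), while the Z-test does not reach significance, which is exactly the tension Hauenstein et al. (2013) examine.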

Cultural influence on adverse impacts in different selection methods

Many pre-employment screening tools have been developed to accelerate hiring. Talent pools are becoming more and more diverse, formed by candidates from different backgrounds and cultures. Since globalisation is now a prevalent phenomenon, it is crucial to examine the cultural influences on adverse impact in different selection methods.

Many researchers have tried to explain the score differences in standardised tests between demographic groups (e.g., White and Black test-takers) through the influence of culture on item responses (Hough et al., 2001). However, culture is not only about skin colour; it consists of several aspects that vary by country, ethnicity, religion, gender or various combinations of these.

In the following sections, we will discuss the potential cultural threats contributing to adverse impact in cognitive ability tests and personality tests.

Cognitive ability tests

For cognitive ability tests, cultural differences may be considered a partial cause of subgroup differences if influences outside the targeted construct are associated with culture (Hough et al., 2001). In particular, if two people with similar abilities from two subgroups (e.g., Black and Hispanic test-takers) generally respond differently to the same items, we should be cautious about cultural influences on subgroup differences.

Greenfield (1997) pointed out that before we generalise cognitive ability tests to all cultures, we should establish cultural agreement on the values and meaning of the items and responses; test individual-level knowledge rather than culture-specific knowledge; and ensure familiarity with the item context and content, such that there is only one correct response (Hough et al., 2001).

Hough and colleagues (2001) summarised the existing empirical research on cognitive ability tests and the potential adverse impact caused by cultural factors. They showed that culture has only a weak influence on cognitive ability test performance, and that this influence doesn’t change the psychometric properties of the tests. At the same time, they remind us that most of the studies were post hoc and used race (i.e., Black vs White applicants) as the indicator of cultural factors. Further studies are needed to confirm this conclusion.

Personality tests

As humans, we all have a personality, and the latent structure of personality traits (e.g., extraverted vs introverted) is quite similar across cultures. Yet growing up in different cultures affects the way we express ourselves, shaping the values we form along the way. For instance, Markus and Kitayama (1991) demonstrated that people who grew up in collectivist cultures and those from individualistic cultures hold distinctive beliefs about the construction of self and of others. People from collectivist cultures tend to perceive the “self” in relation to context, fitting themselves into society and preserving harmonious interdependence among people. In contrast, people from individualistic cultures are more likely to interpret the “self” as unique, independent from others and expressing its special characteristics.

The potential threat is that most personality tests are developed within a single culture and do not take cultural differences into account. This leads to interpretation bias, which can result from any combination of construct, method or item bias (Church, 2001):

  • Construct bias: Occurs when the targeted construct does not fully overlap across cultures.
  • Method bias: Includes sample bias (non-equivalence of cultural sampling), instrument bias (distinctive response styles) and administration bias (practical issues while administering the test).
  • Item bias: Results from improper translation or the inclusion of items that are less relevant in certain cultures.

An uncomfortable fact is that personality tests are seldom developed with samples diverse enough to ensure that the measured traits hold across cultures. If a personality test was developed without addressing the interpretation biases listed above, we need to be careful when using it as a screening tool.

To Conclude

Research on adverse impact has a long and controversial history (e.g., Gottfredson, 1988; Roth et al., 2001). Within it, the relationship between cognitive ability tests (mostly g, or intelligence) and race plays a big part, although the results of these studies were mixed. Countless efforts have been invested in reducing adverse impact in recruitment (e.g., Newman & Lyon, 2009; Wee et al., 2014).

It is important that we are aware of this issue and choose the most suitable employment tools, so that we avoid the legal and ethical consequences of unchecked adverse impact.


Age Discrimination in Employment Act (Charges filed with EEOC) (includes concurrent charges with Title VII, ADA, EPA, and GINA) FY 1997 – FY 2021. (2021). U.S. Equal Employment Opportunity Commission (EEOC). Retrieved May 20, 2022, from https://www.eeoc.gov/statistics/age-discrimination-employment-act-charges-filed-eeoc-includes-concurrent-charges-title  

Biddle, D. (2017). Adverse Impact and Test Validation. Taylor & Francis. https://doi.org/10.4324/9781315263298 

Church, A. T. (2001). Personality measurement in cross‐cultural perspective. Journal of Personality, 69, 979-1006. https://doi.org/10.1111/1467-6494.696172 

Fisher, G. G., Truxillo, D. M., Finkelstein, L. M., & Wallace, L. E. (2017). Age discrimination: Potential for adverse impact and differential prediction related to age. Human resource management review, 27, 316-327. https://doi.org/10.1016/j.hrmr.2016.06.001

Gottfredson, L. S. (1988). Reconsidering fairness: A matter of social and ethical priorities. Journal of Vocational Behavior, 33, 293-319. https://doi.org/10.1016/0001-8791(88)90041-3 

Greenfield, P. M. (1997). You can’t take it with you: Why ability assessments don’t cross cultures. American psychologist, 52, 1115. https://doi.org/10.1037/0003-066X.52.10.1115 

Hauenstein, N. M., Holmes, J. T., & Tison, E. B. (2013). Detecting adverse impact: The four-fifths rule versus significance testing. Public Personnel Management, 42, 403-420. https://doi.org/10.1177/0091026013495762 

Hough, L. M., Oswald, F. L., & Ployhart, R. E. (2001). Determinants, detection and amelioration of adverse impact in personnel selection procedures: Issues, evidence and lessons learned. International Journal of Selection and Assessment, 9, 152-194. https://doi.org/10.1111/1468-2389.00171 

Markus, H. R., & Kitayama, S. (1991). Culture and the self: Implications for cognition, emotion, and motivation. Psychological review, 98, 224. https://doi.org/10.1037/0033-295X.98.2.224 

Mestre, M. V., Samper, P., Frías, M. D., & Tur, A. M. (2009). Are women more empathetic than men? A longitudinal study in adolescence. The Spanish journal of psychology, 12, 76-83. https://doi.org/10.1017/S1138741600001499  

Neisser, U., Boodoo, G., Bouchard Jr, T. J., Boykin, A. W., Brody, N., Ceci, S. J., … & Urbina, S. (1996). Intelligence: knowns and unknowns. American psychologist, 51, 77. https://doi.org/10.1037/0003-066X.51.2.77 

Newman, D. A., & Lyon, J. S. (2009). Recruitment efforts to reduce adverse impact: Targeted recruiting for personality, cognitive ability, and diversity. Journal of Applied Psychology, 94, 298. https://doi.org/10.1037/a0013472 

Roth, P. L., Bevier, C. A., Bobko, P., Switzer III, F. S., & Tyler, P. (2001). Ethnic group differences in cognitive ability in employment and educational settings: A meta‐analysis. Personnel Psychology, 54, 297-330. https://doi.org/10.1111/j.1744-6570.2001.tb00094.x

Sackett, P. R., Schmitt, N., Ellingson, J. E., & Kabin, M. B. (2001). High-stakes testing in employment, credentialing, and higher education: Prospects in a post-affirmative-action world. American Psychologist, 56, 302. https://doi.org/10.1037/0003-066X.56.4.302 

U.S. Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, and Department of Justice. (1978). Uniform guidelines on employee selection procedures. Federal Register, 43, 38290-38315.

Wee, S., Newman, D. A., & Joseph, D. L. (2014). More than g: Selection quality and adverse impact implications of considering second-stratum cognitive abilities. Journal of applied psychology, 99, 547. https://doi.org/10.1037/a0035183 
