
An AI Black Box Algorithm Personnel Problem - Bias

Removing bias from the hiring process and creating employment opportunities for a broader group of people.

Humans are the ultimate black box - rationale is not always rational.

Social scientists, such as Daniel Kahneman, have long explained that deficiencies in human decision-making result from both unconscious biases and noise. Humans use mental shortcuts to process information, and these shortcuts unfortunately incorporate irrelevant factors, driven by unconscious biases, that lead to cyclical discrimination.

These unconscious biases are especially problematic in employment decisions. Because humans are unaware of these deficiencies in their thinking, they are generally incapable of recognizing how inconsistent their own decisions are, and are therefore unable to make objective decisions. Even when informed of their unconscious biases, humans fail to correct their thinking; worse, unconscious bias training has been shown to actually increase bias.

The ability to treat everyone equally is the cornerstone of fairness, so the inability of humans to make consistent decisions is especially harmful in hiring. Poor decisions are difficult to detect and reveal because of the validity illusion, which leads humans to overrate their ability to predict how an individual will perform if hired. This is compounded by confirmation bias, which causes humans to focus only on the information that fits their pre-existing beliefs (and biases).

Since human decision-making is so unreliable, managers are turning to AI and machine learning. These technologies can result in faster, more objective and more accurate hiring decisions, provided that the algorithm is created responsibly and fairly, with unbiased data.

Big Data Hiring Algorithms

An algorithm is a set of rules - anything from calculating simple averages to performing complex statistical analysis. When evaluating candidates, the data fed into hiring algorithms often includes publicly available information (social media data, internet-scraped data) or background information supplied by candidates, such as resume, biographical and interaction data.

Relying on these types of large datasets, known as "big data", may seem like good business sense, but can result in unintended discrimination against individuals or groups.

While AI tools can be instructed to ignore gender, race or ethnicity, it's still possible for them to make a biased decision.

Many factors - zip codes, education, hobbies - can inadvertently become proxies for traits such as race or ethnicity, and because the nature of these AI tools is to discern patterns, an algorithm can make a biased decision by pulling together various pieces of seemingly innocuous data.
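
As an illustration only, a simple statistical audit can flag potential proxies before any algorithm is trained. The sketch below (the data and column names are invented for the example) measures how strongly each seemingly innocuous feature is associated with a protected attribute; a strong association marks that feature as a likely proxy that should be excluded or scrutinized.

```python
# Proxy-audit sketch (invented data and column names).
# Flags features that are statistically associated with a protected
# attribute and could therefore act as proxies for it.
import pandas as pd
from scipy.stats import chi2_contingency

applicants = pd.DataFrame({
    "zip_code":  ["10001", "10001", "60629", "60629", "60629", "10001"],
    "hobby":     ["golf", "golf", "soccer", "soccer", "golf", "soccer"],
    "ethnicity": ["A", "A", "B", "B", "B", "A"],  # protected attribute
})

for feature in ["zip_code", "hobby"]:
    table = pd.crosstab(applicants[feature], applicants["ethnicity"])
    chi2, p_value, _, _ = chi2_contingency(table, correction=False)
    # Cramer's V: 0 = no association, 1 = the feature is a perfect proxy
    n = table.to_numpy().sum()
    v = (chi2 / (n * (min(table.shape) - 1))) ** 0.5
    print(f"{feature}: Cramer's V = {v:.2f} (p = {p_value:.3f})")
```

In this toy data, zip_code perfectly separates the two ethnicity groups, so it scores as a perfect proxy even though it looks innocuous on its own.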

The inherent danger in mining data from the internet is that it can replicate societal prejudices (garbage in, garbage out).

The inability to understand or explain why an AI produced certain results - known as the 'black box problem' - can lead to inaccurate outcomes that go unquestioned.

Black Box Hiring Problem

If firms do not understand why an algorithm produces a certain result, they're unable to determine whether that result is biased. As a result, they face a black box conundrum.

Companies need to know the basis for any selection. If they cannot justify why a candidate has been rejected from their application process, they're vulnerable to a legal challenge from that candidate. Every organization has to be able to legally defend its selection decisions.

A further issue with big-data-derived, plug-and-play algorithms is that they're incapable of providing companies with a sustainable competitive advantage that differentiates an employer brand from competitors. If competitors are using the same algorithm, firms will all be chasing the same talent and rejecting the same candidates.

Algorithmic Transparency & Fairness

As the use of AI has grown, it's attracted increasing attention from regulators and lawmakers concerned about fairness and ethical issues related to the technology.

Chief among those concerns is a lack of transparency in the way many AI vendors' tools work. Many of them function as black boxes, without an easily understood and transparent explanation of their inner workings, which heightens concerns that machine-learning algorithms can exacerbate or even perpetuate unconscious bias in hiring decisions.

Before deploying an algorithm to support business decisions, companies must demand evidence and transparency about how and why these algorithms are making specific predictions. Without transparency and explainability in how the algorithm operates, bias will go undetected and uncorrected.

The Need for AI Interpretability & Explainability

Data science is complex, but that doesn't mean you don't need to see what's under the hood - or inside the black box - or that you should simply place blind faith in black-box hiring algorithms.

Interpretability means the algorithm needs to be understandable: someone should be able to understand why the algorithm came up with the prediction it did. It means you can hold the algorithm accountable for existing problems, like bias.

Understanding how the machine arrived at a decision, known as explainability, can help prevent and address bias in AI-based tools. Companies need to know what was built into the algorithm to calculate its results. It's unacceptable for it to be an impenetrable black box.

The calculation of an algorithm needs to be meaningful, explainable and understandable by a human. If the algorithm is so complex or opaque that even its designer cannot precisely explain how it works or how its results are produced, then it's a black box algorithm that carries a high level of risk.
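
As a minimal sketch of what 'meaningful and explainable' can look like in practice (the attribute names and data below are invented), an inherently transparent model such as a logistic regression lets a human read off exactly how much each candidate attribute shifts a prediction - something an impenetrable black box cannot offer:

```python
# White-box scoring sketch (synthetic, illustrative data).
# Each coefficient states how a one-unit change in an attribute shifts
# the log-odds of a predicted "success", so the logic is fully auditable.
import numpy as np
from sklearn.linear_model import LogisticRegression

attributes = ["curiosity", "work_ethic", "accountability"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # trait scores
y = (X @ [0.8, 1.2, 0.5] + rng.normal(size=200)) > 0   # synthetic outcome

model = LogisticRegression().fit(X, y)
for name, coef in zip(attributes, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")   # human-readable contribution
```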

Even if a black box algorithm is successful in predicting outcomes, it carries significant risks. If an algorithm cannot be explained, then the outcomes it produces cannot be defended. This can have significant legal implications if an organization's hiring practices are challenged in court. Not only can you not defend a hiring decision - because you don't know what led to that decision - it can also be impossible to know whether the algorithm has inherited biases from the data it was trained on.

For example, if your organization tends to hire more men than women, then an algorithm developed to predict hiring success may learn to associate any 'maleness' in the data (such as names, interests, looks or writing styles) with success. This may lead the algorithm to systematically favor male applicants, even if the actual gender of the applicant is removed from the data.
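
One way to catch this kind of inherited bias is to audit the algorithm's outputs rather than its inputs. The sketch below (hypothetical recommendations and groups) applies the well-known 'four-fifths' rule of thumb used in US employment-selection guidance: if the selection rate the model produces for one group falls below 80% of the rate for the most-selected group, the model is showing adverse impact even though gender was never one of its features.

```python
# Adverse-impact audit sketch (hypothetical data).
# Gender is NOT an input to the model; it is used only afterwards to
# check whether the model's recommendations differ across groups.
import pandas as pd

results = pd.DataFrame({
    "gender":      ["M"] * 50 + ["F"] * 50,
    "recommended": [True] * 30 + [False] * 20 + [True] * 18 + [False] * 32,
})

rates = results.groupby("gender")["recommended"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"Selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print("Warning: possible adverse impact - audit for proxy features.")
```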

Beware of Mysterious Algorithms

Organizations using AI to streamline their recruiting processes by analyzing and interpreting huge volumes of candidate data may not have a legally defensible basis for their selection decisions if the black box algorithm cannot be explained in a logical and meaningful way.

Selection criteria with little to no job relevance, or with low 'face validity' - such as analyzing facial expressions to determine an applicant's job suitability - are an example of an unexplainable black box. Whilst facial expressions can tell many things about a person, this approach operates on the false assumption of a 'universal facial expression'; not every person and every culture shares the same facial expressions. For example, people from Asian and North American Indian cultures often display very limited facial expressions, making them more difficult to 'read' during a conversation, especially in a formal situation such as an interview.

It is also unclear how facial movement, word choice, tone of voice or mannerisms can be connected to high performance. Using an algorithm built on an unfounded blend of superficial measurements and arbitrary number-crunching that is not scientifically proven and explainable would penalize non-native English speakers, visibly nervous interviewees, or anyone who doesn't fit the model for look and speech. This could bias selection towards candidates who share similar cultures and backgrounds, resulting in discrimination and less diversity.

Using mysterious facial-expression analysis in a selection algorithm is basically asking candidates to pick the right words, use the right tone and put on a sufficiently happy face. So what would the model employee be - a white guy who smiles a lot?

The use of face-scanning algorithms for personnel selection is as disturbing as using graphology, a pseudoscience that analyzes the physical characteristics and patterns of handwriting and claims to identify psychological states and evaluate personality characteristics. To date, the many research studies conducted to assess its effectiveness in predicting personality and job performance have consistently failed to provide any supporting evidence.

*Note: graphology differs from graphanalysis, a branch of forensic examination of questioned documents that deals with handwritten material.

Opening the Black Box

Major employers with lots of high-volume, entry-level job openings are increasingly turning to automated systems to help find candidates, assess resumes and streamline hiring. Some of these organizations have taken the plunge into AI even with the realization that the decisions their algorithm makes can't be explained.

In one high-profile example, Amazon developed an AI recruiting tool that analyzed 10 years of employment applications in order to create a system that automatically identified the characteristics of high-performing employees. The tool made headlines in 2018 when the algorithm was shown to be biased against female candidates: because the model ranked candidates based on 10 years of resumes, and most of those resumes came from men, the algorithm deemed men more qualified when ranked alongside female candidates.

Today, the workforces of Apple, Facebook, Google and Microsoft are still overwhelmingly made up of white or Asian men.

Firms must be able to open the box and understand how an algorithm is coming up with predictions. If it's not possible to justify why a candidate has been rejected from the application process, it can leave the company vulnerable to legal challenges.

How Do I Know If It's a Black Box Algorithm?

IP protection is sometimes quoted as a reason for not 'opening up' and explaining what's in the black box - it's a smokescreen. If the vendor claims that, due to the proprietary nature of the data, they're unable to disclose the algorithm's inner workings, or that disclosure could help candidates game the pre-hire assessment system, it's a clear sign that it's a black box algorithm. If the algorithm is performance-validated, then it can be logically explained, and it won't be possible for candidates to game the system.

If the algorithm provider cannot explain how the algorithm was developed, tested and validated, and exactly how it's being used, then there are inevitably built-in biases and risks.

Know the data source: what data is being collected, and why is it being collected?

So how do you know the algorithm you're using hasn't inherited biases from the data it was trained on, despite the best of intentions?

Bias starts with data. AI uses machine learning to gain knowledge through training data, and machines cannot learn beyond that data, so it's critical to train the machine with unbiased data.

If the algorithm is trained on a narrow scope of information or on flawed data, or does not come with a meaningful explanation, then its predictions can be inherently and unconsciously biased, elevating the risks in complex scenarios like hiring.
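
A minimal first check, sketched below with invented records, is simply to profile the training data before any model is built: how much data is there, how balanced are the outcomes, where did the records come from, and where are the gaps?

```python
# Training-data profile sketch (invented records and field names).
# A narrow or lopsided profile is an early warning that any model
# trained on this data will inherit the same gaps.
import pandas as pd

training = pd.DataFrame({
    "outcome": ["hired", "hired", "hired", "rejected", "rejected"],
    "source":  ["referral", "referral", "job_board", "referral", "scraped"],
    "tenure_months": [24, 36, None, None, 12],
})

print(f"Rows: {len(training)}")                           # scope of the data
print(training["outcome"].value_counts(normalize=True))   # label balance
print(training["source"].value_counts())                  # provenance mix
print(training.isna().mean())                             # missing-data rate
```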

Make sure that the data inputs used in developing the algorithm represent your hiring objectives, such as the KPI performance outcomes necessary for an individual to perform in the role and for the successful functioning of the business.

Indiscriminately collecting any and all available data is a significant warning sign. Companies should be mindful of what types of data they are collecting, the context around that data and why they are collecting it. Is the data relevant, meaningful and explainable?

Companies need to have very clear insight into the internal workings of the algorithm: what is the source of the data, how is the algorithm calculated, and why is it calculated that way?

Job Performance Driven Selection Algorithms

If the goal of employee selection is to select people with a high probability of performing well in a specific job, then the selection algorithm must be built using actual employment outcomes that directly reflect organizational goals relevant to the job role. These may include job performance effectiveness and productivity measures such as sales goal attainment (%), length of service, promotions, commission increases, probationary survival, disciplinary incidents, absence, or any other quantitative measure.

These concrete business outcomes then need to be correlated with independent variables - but instead of utilizing resume data, specific keywords or big data, use meaningful and relevant attribute measures that can differentiate individuals, such as curiosity, work ethic, accountability, gratitude and optimism, which have no gender differentiation or cultural boundaries.
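
A minimal sketch of this kind of criterion validation (every trait score and KPI figure below is invented) correlates each attribute measure directly with a concrete business outcome such as sales goal attainment:

```python
# Criterion-validation sketch: correlate trait measures with a concrete
# business outcome (all numbers are invented for illustration).
import pandas as pd
from scipy.stats import pearsonr

team = pd.DataFrame({
    "curiosity":            [62, 71, 55, 80, 68, 74, 59, 85],
    "work_ethic":           [70, 66, 52, 88, 75, 69, 61, 90],
    "accountability":       [58, 73, 50, 82, 71, 77, 56, 84],
    "sales_attainment_pct": [92, 101, 78, 130, 105, 98, 81, 140],
})

for trait in ["curiosity", "work_ethic", "accountability"]:
    r, p = pearsonr(team[trait], team["sales_attainment_pct"])
    print(f"{trait:>15}: r = {r:+.2f} (p = {p:.3f})")
```

Traits that show no meaningful correlation with the outcome are dropped, so only demonstrably job-related measures enter the selection algorithm.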

Hence, the best practice for preventing bias from being fed into an algorithm is not to use any data that could potentially reflect socioeconomic inequities, and to consider only traits or attributes that contribute to, or detract from, on-the-job performance. Assessing attributes also addresses the deep-rooted biases, prejudices and stereotypes ingrained in corporate cultures, and candidates will be able to verify that the organization has opted for fair and inclusive selection methods that provide everyone with equal opportunity.

We utilize a unique 'white box' (also known as clear box) approach called Performance Fingerprints. The algorithm developed in the construct of a Performance Fingerprint is locally validated, meaning that it draws on a company's actual performance outcomes in a specific job role; that is, statistical analysis is conducted between predictor traits and the concrete business outcomes derived from the existing culture and environment. This is known as criterion (or concrete) validity, and it is the most powerful way a company can demonstrate the job-relatedness and validity of its pre-employment assessment.

Algorithm Shelf-Life

In addition to being driven by job attainment data, job-specific selection algorithms must also be adaptive and continuously learning, because prediction results may not be sustainable and will decay over time. As conditions change - applicant socio-demographics, the economy, the market, products, customers, job tasks, etc. - the selection effectiveness of an algorithm will decrease.

It's therefore important not to 'set and forget' algorithms; they must be continually revalidated. This is done by continuously ingesting post-hire performance data and recalibrating the algorithm.

As the algorithm is refined, it adapts to changing conditions to increase its predictive power and selection effectiveness over time.

Our system provides continuous correlation analysis of employees' KPI performance and continuously learns to replicate the best performers in an organization. Performance Fingerprint algorithms ingest new performance data every 6-12 months, and the prediction model is recalibrated to improve its selection effectiveness. The algorithm also continues to be applied back to the existing team (i.e., correlated with its performance) to check its predictive robustness against actual performance outputs.
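
A hedged sketch of one such revalidation cycle (the model choice, cohort sizes and decay threshold below are illustrative, not a description of our production system): recalibrate on the newest post-hire outcomes, then verify the refreshed model's correlation with actual KPIs on held-out employees before redeploying it.

```python
# Revalidation-cycle sketch (illustrative model, sizes and threshold).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))                # trait scores, recent cohort
y = X @ [0.9, 0.4, 0.7] + rng.normal(scale=0.5, size=60)  # actual KPIs

# Recalibrate on the newest data, then check predictive validity on
# held-out employees before putting the algorithm back into service.
model = LinearRegression().fit(X[:40], y[:40])
r = np.corrcoef(model.predict(X[40:]), y[40:])[0, 1]

print(f"Predictive validity after recalibration: r = {r:.2f}")
if r < 0.3:                                 # illustrative decay threshold
    print("Validity has decayed - investigate drift before redeploying.")
```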

Diversity, Equity & Inclusion (DE&I)

For employers, the implications of black box assessments can drastically undermine their efforts to build fairness and diversity.

Our selection fingerprint does not utilize any resume data or biographical information - no details about who candidates are, where they came from or what they have done are used in the selection calculation - hence no hiring recommendation can or will be made on the basis of age, race, color, religion, gender, ethnicity, disability, education or work experience.

Job knowledge, skills and/or abilities are also excluded from the selection fingerprint calculation to ensure fairness. We believe that an individual with no prior experience can perform equally well if equipped with on- and off-the-job training in the competencies needed to perform a specific job.

The Ultimate in Blind Hiring

Blind recruitment is the best practice to remove unconscious bias and ensure equal employment opportunity and workplace diversity, equity and inclusion.

This solution, the world's most advanced 'blind hiring' system, was built for a major firm in the financial services market. The system deployed a unique 'attributes-first' approach to remove pedigree bias: applicants could not provide a resume or any background data as part of the application process. The firm was then able to evaluate 1,200+ candidates in a 5-week period and assess their suitability for 8 'apprenticeship' roles. The personal backgrounds of the selected apprentices varied widely - from one with no education beyond high school to one with a PhD in chemistry - and all had zero financial services industry experience.

Ditch the AI Black Box

Whilst it's common for big companies to use black box algorithms to come up with an output, it's strongly recommended that they not be applied to hiring decisions because of the enormous implications for people's lives. Models built with biased and inaccurate data can lead to serious social consequences.

A black box algorithm performing calculations and producing results that can't be explained wouldn't matter as much if you were teaching a computer to play chess or process images. But it is a big deal when you're making decisions about people's careers, because it adds a layer of opacity that's hard to see through - and its failures are even harder to undo.

Best Practice Is Crystal Clear

The alternative path is to use a white box or glass box approach that is open, transparent, understandable, fair and legally defensible. It needs to be carefully designed and scrutinized to ensure it's genuinely bias-free and that it's deployed in ways that don't allow human decision-makers to reintroduce bias into the hiring process.

When human-centric science and data-driven analysis are rigorously applied to the hiring process, decisions are faster, better and consistently more accurate - with less effort, frustration and failure than conventional methods.

Developing a Performance Fingerprint algorithm that's custom-validated, using incumbent team data across the spectrum from underperformers to top performers, delivers a unique understanding of performance drivers.

Performance Fingerprints are fully transparent, accountable and interpretable. Prior to deployment, clients can examine the algorithm in detail: what data were used to develop it, how the performance outcomes are predicted, and, in as much forensic detail as desired, exactly how the algorithm predicts actual KPI performance across their incumbent team. This reveals insight and intelligence into why certain employees perform better than others, creating an inimitable and sustainable competitive advantage.

Summary

  • don't utilize resume data or big data; build and validate the selection algorithm with your company's own job attainment data
  • ensure that the data used to build the algorithm is relevant and meaningful
  • demand evidence and transparency as to how and why algorithms are making specific predictions
  • know the data source: what data is being collected and why; understanding the source/origin of the training data helps identify algorithmic discrimination or biases
  • demand evidence that assessment instruments are verifiably scientifically based or developed; that they adhere to a scientific method or standard and are supported by reliable evidence
  • algorithms need to have organizational goal relevance; this means local validation with the actual outcomes necessary for an individual to perform in the role, and for the successful functioning of the business
  • selection algorithms have a shelf-life; they need to be adaptive and continuously learning, not 'set and forget'
  • never use an algorithm inside a box that you can't see into and that the vendor can't transparently explain; they, and you, need to be able to explain exactly how and why the algorithm was made in order to be legally defensible
  • only use white box algorithms that you can place your trust in, and that you can understand and explain
  • AI and data analytics will be the main tools for future recruiting and hiring, but these tools need to be scientifically proven, transparent and replicable

Ready to talk?

We're ready to listen. Unlock the true potential of your business today.
