
Combatting bias in AI recruiting: Part 1

I’m a millennial. When I think about the diversity initiatives in place at literally every company I know of, I sometimes wonder when and why diversity became such an essential need for growing companies. The longer I pondered that question, the more sense the answer made. Tech. Tech made people more aware of their need for diversity.

Diversity can mean anything from race, ethnicity, or gender to the whole range of our human differences. A lack of diversity creates and sustains privileges for some while creating and sustaining disadvantages for others. Technology made us see those gaps in an entirely new light.

Why did tech make us more aware of our need for diversity and equality? Because technology automates and repeats what we say, what we do, and what we learn. If it’s biased, we’re biased. If we’re biased, it’s biased. We quickly identified gaps in our organizations and our ethical guidelines. Without technology in the equation, those unhealthy patterns might never end. Technology is the catalyst to better ourselves, document our processes, and support unbiased decision-making.

It takes time, patience, energy, and a new way of thinking. It requires change.

The universal thought

AI is unpredictable. We feel like it’s out of our control because we don’t know every move being made behind the scenes. Sometimes AI shows up in places we’re not even aware of, and we don’t like that thought. We’re control-minded human beings who want consistency and dependability. The bottom line is, we don’t need to know all of those details. Technology is bigger than all of us, but what we need to know first and foremost is that we’re directly involved in teaching AI. It’s learning from us, for us. If the results are poor, we can go back and adjust the algorithms or programs accordingly. We don’t have to accept every answer it gives us or make a decision based only on its recommendation. Our job is to collaborate with AI, letting it guide us while we keep the final decision-making power.

Amy Webb of the Future Today Institute says it best: “Technology can be simultaneously exciting, bewildering, thrilling, confounding and terrifying in the present. We must continue to think ahead to how our actions (or lack of actions) today will impact the future of our societies, businesses, and global communities.”

Tech made people more aware of their need for diversity. But many still deem the technology itself untrustworthy and not worth the risk.

Black box AI

For most ordinary people, machine learning lives in a black box. They feel they can’t understand its inner workings, so it’s difficult to trust and depend on for critical decision-making. According to Genpact’s recent study, 59% of employees believe they would be more comfortable with AI if they understood it better. That matters especially in a field like recruiting, where someone’s livelihood is affected.

The bias steadily creeping into machine learning, along with the lack of transparency and diversity in its programmed algorithms, has made it difficult for business and talent acquisition leaders to accept and adopt the technology with confidence.

But, here’s the deal. Our bone-deep, intrinsic prejudices combined with our past experiences reveal themselves in our decision-making without us consciously realizing it. We rely on our thought process even if we are not able to rationally explain it. But, when a machine makes the wrong decision, we give up hope and would rather place all decision-making in the hands of a bias-bred human.

I once heard that we want technology to be just like us, but make no errors. We expect human-like behavior, without bias.

Think about that for a second.

Why is it more acceptable that we have teams of employees making biased decisions, but we’re horrified when a machine does it?

Because we’re human. We like humans. We trust humans. Machines are unpredictable and out of our control.
So either we accept that programs make the same unfair mistakes we would (just way less frequently), or we stop using AI systems for such purposes.

How to mitigate bias in these systems: Glass box AI

Because it’s so difficult for us to recognize and understand our own conscious and unconscious biases, it’s even harder to keep from feeding them into our technologies. When that happens, those biases become deeply embedded, relearned, and reinforced in a company’s decision-making.

So the question is: how do we feed quality data into our technologies to minimize systemic bias in the results?
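One concrete tactic is to rebalance the training data itself so that overrepresented groups don’t dominate what a model learns. Here is a minimal sketch in Python; the record structure and the “gender” field are hypothetical, chosen only to illustrate the reweighting idea, not any vendor’s actual pipeline.

    from collections import Counter

    def balance_weights(records, group_key):
        """Weight each record inversely to its group's frequency,
        so every group contributes equally to training."""
        counts = Counter(r[group_key] for r in records)
        total, n_groups = len(records), len(counts)
        return [total / (n_groups * counts[r[group_key]]) for r in records]

    # Example: a candidate pool skewed 3:1 toward one group
    pool = [{"gender": "M"}] * 75 + [{"gender": "F"}] * 25
    weights = balance_weights(pool, "gender")
    print(weights[0], weights[-1])  # ~0.67 for the majority, 2.0 for the minority

Weighted this way, the minority group’s records count as heavily in training as the majority’s, which blunts one common source of skewed results.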

AI recruiting solution providers are beginning to use a body of work covering “fairness, accountability and transparency” in attempts to refine their systems so they produce “fair” results, as determined by “mathematical definitions”.
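One of the most common of those mathematical definitions is the disparate impact ratio, rooted in the “four-fifths rule” long used in US hiring-discrimination analysis: a protected group’s selection rate should be at least 80% of the reference group’s. Here is a minimal sketch with invented screening results:

    def selection_rate(outcomes):
        """Fraction of a group's candidates who were selected (the 1s)."""
        return sum(outcomes) / len(outcomes)

    def disparate_impact(protected, reference):
        """Ratio of selection rates; 1.0 is parity, and values
        below 0.8 fail the four-fifths rule."""
        return selection_rate(protected) / selection_rate(reference)

    # 1 = advanced to interview, 0 = rejected (toy data)
    men   = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% selected
    women = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # 40% selected

    print(f"{disparate_impact(women, men):.2f}")  # 0.50 -> flags potential bias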

Trust in the company and its training process is a starting point, but experts say the real solution to black box AI is shifting to a training approach called glass box or white box AI.

Glass box AI modeling requires reliable training data that analysts can explain, change, and examine in order to build user trust in an ethical decision-making process.
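As a toy illustration of the difference, here is a simple linear model whose learned weights an analyst can read directly; an opaque deep network offers no such view. The features and data are invented, and the sketch assumes scikit-learn is available.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical screening features and invented outcomes
    features = ["years_experience", "skills_match", "referral"]
    X = np.array([[2, 0.4, 0], [7, 0.9, 1], [5, 0.7, 0], [1, 0.3, 0],
                  [8, 0.8, 1], [3, 0.6, 1], [6, 0.5, 0], [4, 0.9, 1]])
    y = np.array([0, 1, 1, 0, 1, 0, 1, 1])  # 1 = advanced to interview

    model = LogisticRegression().fit(X, y)

    # Each coefficient shows how a feature pushes the decision, so an
    # analyst can examine, explain, and change what the model relies on,
    # e.g. dropping a feature that proxies for a protected trait.
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name:>18}: {coef:+.2f}")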

Sankar Narayanan, chief practice officer at Fractal Analytics Inc., says: “AI needs to be traceable and explainable. It should be reliable, unbiased and robust to be able to handle any errors that happen across the AI solution’s lifecycle.”

Again, AI isn’t completely unbiased yet. We’re still combatting bias, but we can do more with AI’s aid than without it, even as we work to minimize bias along the way.

Folks are no longer accepting status-quo explanations from AI vendors. They want real, in-depth answers about what’s actually going on with their data and decision-making criteria.

The amount invested in AI ethics efforts this year alone is astounding.

In May, the European Union released standard guidelines defining ethical AI use that experts are hailing as a step toward tackling the black box AI problem and, in turn, creating better decision-making software.

The guidelines will nudge businesses toward trusting specialists, and away from generalist service providers that have rebadged themselves as AI companies, by requiring:

  • Large companies that either reach 1 million devices or make $50 million per year to conduct automated systems impact assessments and data protection impact assessments
  • A detailed description of the decision-making system and the data used to train it
  • An assessment of the risk that the data impacts accuracy, fairness, bias, discrimination, privacy, and security (a minimal sketch of such an audit follows this list)
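To make that last requirement concrete, here is a minimal per-group audit sketch in Python; the groups, predictions, and outcomes are invented for illustration, not drawn from any real assessment.

    def audit_by_group(records):
        """records: (group, prediction, actual) tuples. Prints accuracy
        and selection rate per group so gaps can be flagged."""
        groups = {}
        for group, pred, actual in records:
            g = groups.setdefault(group, {"n": 0, "correct": 0, "selected": 0})
            g["n"] += 1
            g["correct"] += int(pred == actual)
            g["selected"] += pred
        for group, g in sorted(groups.items()):
            print(f"{group}: accuracy {g['correct']/g['n']:.0%}, "
                  f"selection rate {g['selected']/g['n']:.0%}")

    audit_by_group([("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
                    ("B", 0, 1), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1)])
    # Large gaps between groups flag risks to accuracy, fairness, and bias.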

Important steps are being taken to build trust and credibility for black box AI systems that people can’t fully understand. But here’s what it requires of us first:

  • A willingness to make critical changes to our processes, routines, and gut instincts by investing in AI tools
  • An understanding that technology isn’t simply a reflection of humans that make no errors
  • A desire to dig deep into AI ethics and only work with vendors that reflect transparency, diversity, and security

Find more compatible talent with Artificial Intuition.

Discover how Arya goes beyond conventional AI recruiting