
Combatting bias in AI recruiting: Part 2

Let’s talk about bias, baby.
If you missed Combatting Bias in AI Recruiting: Part 1, here’s the tea:

  • Technology made people more aware of their need for diversity in hiring by identifying gaps, shortcomings, and biased processes and practices
  • AI-driven HR technology solutions are building documented, smart processes that quickly hold companies accountable for creating and delivering a fair, transparent, and bias-free hiring journey
  • Humans don’t trust technology completely, so there’s hesitation to adopt and implement innovative AI tools, not knowing whether they’ll just repeat the worst in all of us
  • No more BS. Solution providers have to discuss, teach, and represent their AI well by guiding users through the strategy and execution of their AI’s decision-making – or they’ll lose them completely
  • Buyers want real answers, vendors need to implement and document AI ethics, and everyone involved needs to understand that AI isn’t the answer. It’s just working to solve the problem

Here were the action steps I charged y’all with:

  • A willingness to make critical changes to your processes, routines, and gut instincts by investing in AI tools
  • An understanding that technology is a reflection of the humans who build it, and humans make errors
  • A desire to dig deep into AI ethics and only work with vendors that reflect transparency, diversity, and security

Take it all in…again. Alright, NOW we’re ready for phase two.

Explainable AI Recruiting

We all know by now that AI in recruiting is meant to do the heavy lifting for us: identify relevant, dispersed data, organize it, translate it, then provide unique insights and actionable intelligence. People don’t really get stuck on that concept. What they want most, and what companies aren’t providing, is to understand why AI is making certain recommendations, and to make sure those recommendations are compliant with their hiring practices and goals as a company. Blind faith doesn’t exist here.

Most companies that advertise AI as the core of their solution don’t do a great job of explaining recommendation and decision-making criteria and strategies. Think about Netflix for a minute. “Because you watched Stranger Things” is literally the extent of their explanation for why they’re recommending a new show to me. Their machine learning algorithms are obviously drawing more insights on the backend based on all of my decision-making thus far. They’re recommending a new show based on a handful of shows I watched in the past. But I would never know why, in detail, unless they outlined it for me.

Now, to tell you the truth, I don’t need to know much more detail about why Netflix is recommending a new show to me. Nothing is at stake if I don’t know why. But if compliance issues are at stake, a person’s career is on the line, and my company’s goals could be in jeopardy, you’re damn right I want a full-blown, documented, explainable, AI decision-making strategy in my hands.

And that’s what AI recruiting companies aren’t doing that they need to do.

AI and Bias for Dummies

As an AI recruiting tool buyer, here’s what you need to do before you talk to a vendor or make an assumption about a biased platform. Read these next two statements very carefully. Then read them again.

“The data that’s available to make decisions is the same data for people as it is for machines,” says Alec Levenson, senior research scientist at the University of Southern California’s Center for Effective Organizations in Los Angeles. “Why would you think a machine is going to be less biased than a person?”

“Data has so many inherent biases,” says Madhu Modugu, founder and CEO at Arya by Leoforce. “People develop biases based on their perceptions over time. They make decisions based on those perceptions, which are, in turn, reflected in data. When machines learn based on that data, there’s a good chance they’ll have those biases as well.”

There’s bias everywhere. Even the word bias is biased from some angle. And when buyers see the words ‘AI’ and ‘bias’ side by side, more often than not they start panicking.

Guys, stop pretending that you know anything about AI. ’Cause you most likely don’t. Therefore, you probably don’t truly know whether an AI system is inherently biased or not. It’s a mistake to immediately assume a technology is biased without understanding what that bias even means. Everyone has a different definition of bias. Are you simply basing it on average societal biases? Is the technology bringing back more white candidates than black? More men than women? More candidates from a certain university? Is there a reason for this based on the data the system was measuring? If you don’t define success from the very beginning, there will be no way to measure it down the road. Good vendors will sit with you, outline your goals, and tailor the settings in the system to work for you and your business needs. You’ll discuss strategy and how to approach your recruiting process to keep biased decision-making out of the equation.

So, how can organizations address bias in AI? Pay attention to the basics:

  1. Do you have enough adequate data (including data diversity) to train and test your algorithms? If not, augment with more generalized data from your industry and gather competitive data.
  2. Gain a deeper understanding of the features the algorithms are working with and eliminate the obvious ones that could contribute to biases (like gender information).
  3. Identify the right algorithms that are in line with your organizational objectives. Some algorithms are better equipped to deal with biases.
  4. Set appropriate precision rules on top of your models to achieve the desired results.
  5. Understand the deeper philosophy behind the models and the AI. It can profoundly influence your selection of algorithms.
  6. Monitor, refine, and update as needed.
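Point 6, monitoring, is the one buyers can sanity-check themselves. As an illustrative sketch only (not Leoforce’s actual method, and with entirely hypothetical data), here’s a minimal audit of recommendation output using the “four-fifths” adverse-impact ratio, a common baseline check in hiring compliance:

```python
# Hypothetical sketch: compare selection rates across candidate groups
# in a recommendation log. All names and numbers here are made up.

def selection_rates(records):
    """Share of candidates recommended, per group."""
    totals, selected = {}, {}
    for group, was_recommended in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_recommended)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest selection rate divided by the highest.
    Below 0.8 is a conventional red flag (the four-fifths rule)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical log of (group, recommended?) from an AI screening tool
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(log)   # A: 0.75, B: 0.25
ratio = adverse_impact_ratio(rates)
print(rates)
print(round(ratio, 2))         # 0.33 — well under 0.8, flag for review
```

A check like this doesn’t tell you *why* the gap exists (that’s where explainable AI and your vendor conversations come in), but it tells you where to start asking.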

Want more detail on these points? Read the full summary here.

Bias in AI won’t get solved tomorrow

According to research by IBM, more than 180 human biases have been defined and classified, any one of which can affect how we make decisions. Think of how long bias has been affecting your decisions, whether consciously or subconsciously. Correct answer: every single day of your life. Combatting bias in AI platforms won’t be a quick fix. It will take lots and lots of time to refine, test, and perfect. That’s why we have to start small with how we approach these systems; they’re not solving world hunger in a day. But they are tackling a long-standing, worldwide issue that has bled into each of our lives, whether we wanted it to or not. And I think that’s pretty great.

AI needs us just as much as we need it (for now). AI tools weren’t made to make decisions completely on their own. Once organizations incorporate their objectives and results-driven goals, they not only create better decision-making criteria for their team, but they develop stronger recruiters who actually trust in their data-driven results.

There has to be a balance between compliance and innovation and that’s where AI systems are truly shining. People are your most important asset as a company, and nowadays, you can find the right people with the right data and AI-backed strategy. As long as solution providers are ethical and transparent about data, where it’s coming from, what’s being done with it, and the process it’s taking to make a recommendation or decision, we’re all going to win. BIG.

AI is scary to us because it’s remained in a black box for far too long for us to feel comfortable with its inner workings. But data scientists, engineers, and technology vendors are finally coming to understand what buyers need from them (i.e., glass-box AI). And that they needed it yesterday.

If you remember nothing, at least take these items with you:

  • AI has never promised to be perfect, so don’t expect it to be
  • AI isn’t going away. It’s going to get more expansive and will eventually touch every piece of our professional lives, so recruiters and executives better hop on the train quickly
  • Define bias in your company and create goals to match it (before you talk to a vendor)
  • Make sure whichever solution provider you choose strikes the right balance between compliance and innovation
  • Remember, AI needs us just as much as we need it so stop being scared

Want to learn more about bias in AI recruiting? Check out our latest whitepaper, Combatting Bias in Machine Learning Algorithms: For Recruiting.

Find more compatible talent with Artificial Intuition.

Discover how Arya goes beyond conventional AI recruiting