Guide to HR Compliance Regarding AI Tools per U.S. Employment Laws
If tools make life easier, then rules make it easier to live together. That instinct to get the law right has accompanied every new social arrangement and technology, even sports. So, as AI tools go mainstream, it is natural that employment laws come under the spotlight. As HR managers, it is crucial not only to stay on top of these HR policies but also to contribute to shaping them.
In this guide on HR compliance and workplace policy, we’ll see how existing employment laws are being put to the test by AI hiring tools, and we’ll walk through the ethical and legal implications of integrating AI from the perspective of both employers and candidates.
But first, let’s get a refresher on some of the pertinent HR regulations and laws before turning to the changes required, the challenges faced, and the status quo.
What are employment laws?
Employment laws refer to a set of rules and regulations that govern the employee-employer relationship at a workplace. These laws are designed to protect the rights and interests of workers and ensure that employers provide a safe and fair workplace.
It’s noteworthy that each state in the United States has its own set of employment laws in addition to federal employment laws. These state laws can vary widely in terms of scope and specifics, but they generally cover many of the same topics as federal employment laws.
All the HR compliance, workplace policies, or office rules you hear (and adhere to) are derived from these employment laws.
Overview of key employment laws of the U.S. Department of Labor
There are numerous employment laws administered federally by the U.S. Department of Labor (DOL). In fact, the DOL administers and regulates more than 180 laws, covering workplace policy and activities for over 10 million workplaces and 150 million workers.
Some of the key employment laws include:
Fair Labor Standards Act (FLSA)
One of the pivotal employment laws, the FLSA has a direct bearing on HR compliance and workplace policy. It establishes the federal minimum wage, overtime pay, child labor standards, and record-keeping requirements for employers. It sets the standard for working conditions and establishes the 40-hour workweek beyond which non-exempt employees must receive overtime pay.
Occupational Safety and Health Act (OSH Act)
The OSH Act establishes workplace safety and health standards and regulations to ensure that employers provide a safe working environment for their employees. The Occupational Safety and Health Administration (OSHA) enforces these standards and imposes penalties for non-compliance.
Family and Medical Leave Act (FMLA)
The FMLA requires employers with 50 or more employees to include in their HR policy a provision for 12 weeks of unpaid leave per year for qualified medical and family reasons. These reasons include the employee’s own serious health condition, the birth or adoption of a child, and caring for a family member with a serious health condition.
Americans with Disabilities Act (ADA)
The ADA prohibits employment discrimination against people with disabilities and requires employers to make reasonable accommodations for qualified individuals with disabilities.
Title VII of the Civil Rights Act of 1964
Title VII prohibits employment discrimination on the basis of race, color, religion, sex, or national origin. It also prohibits retaliation against employees who file complaints about discrimination.
Title VII, the law that directs HR policies around workplace discrimination, was recently put to the test in a class action. It’s the first of its kind and worth noting.
Case of the alleged AI hiring discrimination
A class action lawsuit was filed against Workday, Inc. (“Workday”) in the U.S. District Court for the Northern District of California on February 21, 2023. The lawsuit claims that the company violated laws against race, age, and disability discrimination by providing its customers with applicant-screening tools whose AI algorithms allegedly incorporate hiring bias.
This case is the first of its kind to allege discrimination linked to the use of AI hiring tools by employers. The EEOC and other government bodies are closely monitoring the possibility of such discrimination, so with the growing adoption of AI in the employment industry, it is crucial for employers to understand its legal and ethical consequences.
AI hiring tools at the workplace in the context of employment laws
While the AI hiring bias case mentioned above may be the first mainstream conflict with employment laws, the government has long been working to keep up with the AI conundrum. The current EEOC Chair, Charlotte Burrows, has expressed interest in tackling the use of biased AI tools. At a Genius Machines event, she said:
“Artificial intelligence and algorithmic decision-making tools have great potential to improve our lives, including in the area of employment,” Burrows said. “At the same time, the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”
The EEOC even published guidelines for employers in May 2022 to help them stay in sync with the ADA (Americans with Disabilities Act) while leveraging AI at work. The Department of Justice also released its own guidelines on AI-based disability discrimination. Both documents lay out the ways in which automated hiring may violate the ADA and other HR laws.
In addition to federal laws, in 2022 alone, around 20 U.S. states came up with resolutions or bills on AI in the workplace. They are currently at various stages of the legislative process. Some noteworthy ones include:
New York City’s Local Law Int. No. 144
- Prohibits employers or employment agencies from using an “automated employment decision tool” to screen a candidate or employee for an employment decision unless the tool has undergone a bias audit and the required notices have been given.
- Employers must conduct a bias audit of the tool before using it and make a summary of the results publicly available on their website.
- Local Law Int. No. 144 implements notice requirements and candidate opt-out requirements with which employers must comply.
- Non-compliant employers face civil penalties of $500 to $1,500 for each violation.
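As a rough illustration (not legal advice), the central metric in a Local Law 144-style bias audit is the impact ratio: the selection rate for each demographic category divided by the selection rate of the most-selected category. A minimal Python sketch, using invented sample data and simplified category labels:

```python
from collections import Counter

def impact_ratios(outcomes):
    """Compute selection rate and impact ratio per category.

    outcomes: list of (category, selected) tuples, selected is a bool.
    Returns {category: (selection_rate, impact_ratio)}.
    """
    applied = Counter(cat for cat, _ in outcomes)
    chosen = Counter(cat for cat, sel in outcomes if sel)
    rates = {cat: chosen[cat] / applied[cat] for cat in applied}
    best = max(rates.values())
    return {cat: (rate, rate / best) for cat, rate in rates.items()}

# Hypothetical screening results: category "A" passes 40 of 100,
# category "B" passes 25 of 100.
sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 25 + [("B", False)] * 75)
print(impact_ratios(sample))
# Category B's impact ratio (0.25 / 0.40) falls well below 1.0,
# the kind of disparity a bias audit is meant to surface.
```

A real audit must follow the categories and aggregation rules set out in the law and its implementing regulations; this sketch only shows the arithmetic behind the headline metric.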
Illinois Artificial Intelligence Video Interview Act (AIVIA)
- In effect since January 1, 2020.
- Restricts companies’ use of AI to analyze video interviews.
- Employers must inform candidates about using AI for video interviews and get consent.
- Employers must explain how the AI works and which characteristics it tracks, and may share the interview only with those whose expertise is needed to evaluate the candidate.
- Employers must destroy these videos within 30 days of a candidate’s request.
For all the benefits of AI/ML in recruitment, these tools are under scrutiny for enabling systemic discrimination in several ways and for replicating biases inherent to humans. Although their core premise is a fairer employment procedure that does away with subjectivity, they may run afoul of U.S. employment laws if the benchmark resumes used to train the algorithms came predominantly from candidates of particular ages, genders, nationalities, races, or other groups.
Do AI hiring practices by design differ from current employment laws?
AI hiring practices, by design, may not necessarily differ from current employment laws, but they do present unique legal challenges that employers need to consider. For instance, if an AI hiring tool has a biased algorithm, it may perpetuate discriminatory hiring practices, which would violate existing employment discrimination laws.
Employers who develop, purchase, or use AI-driven tools for HR decisions must take into account the legislation and regulations that apply to their use. To ensure compliance with the relevant employment laws, employers should involve their legal departments and seek guidance from external counsel when evaluating these tools.
By collaborating with legal experts, employers can assess the legal risks associated with AI-driven tools and develop strategies to mitigate these risks. This can help them make informed decisions about the use of these technologies and ensure that their HR practices align with existing laws and regulations.
The ethical conundrum of using AI in hiring
While employment laws and HR policies will eventually make the path ahead of using AI in hiring clearer, the ethical implications of it will also need to be addressed.
A major ethical consideration is the potential loss of human interaction and subjective judgment that comes with AI hiring. AI tools may not capture the nuances of human communication or the contextual information that can influence a hiring decision. This can lead to a lack of diversity in the workplace and limit opportunities for candidates who don’t fit preconceived notions of what a successful employee looks like.
There is also a concern that using AI tools for HR decisions could perpetuate a lack of transparency and accountability. Candidates and employees may not fully understand how these tools work or how their data is being used, which can lead to mistrust and a lack of confidence in the hiring process.
Only 22% of organizations have an AI ethics board to help enforce a responsible AI framework, so the active participation of employers is required to mitigate the ethical conundrums of AI hiring. To start with, they should make it part of their HR policy to hold AI recruiting processes to a standardized flow while keeping channels of transparency, discussion, and compliance open to all stakeholders.
What employers should know about using AI for hiring
As employment law evolves around AI hiring tools, and as the related public debate takes further turns, employers tasked with creating HR compliance rules should be aware of several key factors when using AI for hiring.
The algorithms factor
HR regulations should focus on identifying the factors considered by recruiting algorithms and AI tools. Employers should first identify non-discriminatory and non-biased paradigms for training these hiring programs. They should continually track and modify inputs into recruiting algorithms for candidate screening and analysis.
Regular automation audits
To stay on the right side of employment laws, employers should conduct regular audits of the automation tools and AI systems they use for recruitment. One of machine learning’s biggest selling points is that it evolves and adapts based on feedback from the people making employment-related decisions, which should improve future results as well.
Employers cannot rely on an initial analysis of the program alone; they should keep watching for results that disadvantage any specific community or group, and audit those results regularly to make sure the tools are learning the right lessons from their inputs.
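One hypothetical shape for such a recurring check draws on the EEOC’s “four-fifths” rule of thumb from the Uniform Guidelines on Employee Selection Procedures: a group whose selection rate is below 80% of the highest group’s rate may indicate adverse impact. The group names and rates below are illustrative only:

```python
def adverse_impact_flags(selection_rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the EEOC 'four-fifths' rule of thumb)."""
    top = max(selection_rates.values())
    return [g for g, r in selection_rates.items() if r < threshold * top]

# Illustrative quarterly audit of an AI screening tool's pass rates.
rates = {"group_x": 0.42, "group_y": 0.30, "group_z": 0.40}
print(adverse_impact_flags(rates))  # ['group_y']
```

The four-fifths rule is a screening heuristic, not a legal bright line; flagged results call for closer statistical and legal review rather than automatic conclusions.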
Scrutinize the outsourcing
To ensure that AI-based recruitment is in line with existing employment laws and HR regulations, employers often outsource the process to professional software vendors. Even so, keeping an eye on the process and the vendors is always recommended. Companies should take an active interest in the initial rounds of applicant vetting, along with the advertising material used to draw potential candidates.
Both should be watertight in terms of being non-discriminatory, and checking the AI tools used by vendors is highly recommended.
Data privacy and security
When using AI tools for hiring, employers must ensure that candidate data is secure and protected from breaches. It is vital to have protocols in place to safeguard candidate information and prevent unauthorized access.
Bias detection and mitigation
AI algorithms can inadvertently perpetuate bias if they are trained on biased datasets or if they are not designed to account for fairness and equity. Employers should implement measures to detect and mitigate any biases that may arise in the AI tools they use for hiring.
And, last but not least, human intervention. While AI can be useful in streamlining the hiring process, it should not be treated as a full replacement for human judgment. Employers should ensure there is human oversight of decisions so that the AI tools used for hiring produce fair outcomes.
Over to you
The increasing prevalence of AI technology in the workplace has prompted the introduction of laws and regulations at the federal, state, and local levels to address potential biases associated with these tools. For employers, navigating this legal landscape can be challenging, as AI solutions are constantly evolving and related employment laws vary among jurisdictions. Additionally, complying with AI-focused regulations and HR policies is made more difficult by the rise of remote work.
It wouldn’t be wrong to say that, given the current legal landscape for AI hiring tools, employers seem to be on the back foot. However, for the broader benefits and adoption of AI, it only seems right that all stakeholders in AI hiring get on the same page for an efficient, ethical, and empowering arrangement. How? In time, we’ll know.
Right now, if you are looking to explore the advantages of an AI hiring tool, check out Arya by Leoforce. It goes beyond conventional AI with Artificial Intuition for a deeper mapping of vacancies and candidates, accounting for fair hiring practices. Book a demo today!
Frequently asked questions
How does AI work in the hiring process?
AI in hiring processes uses algorithms to screen candidates based on data and criteria.
What is the limitation of AI in HR?
AI’s limitation in HR is potential bias and lack of nuance in evaluating soft skills.
What are the implications of AI on HR?
AI’s implications on HR include increased efficiency, potential bias, and legal compliance challenges.
What is an example of AI discrimination in hiring?
An example of AI discrimination in hiring is biased algorithms that disproportionately screen out certain groups of candidates.
Are there any laws regulating AI?
Yes, there are laws and regulations at the federal, state, and local levels governing AI use.
What is the EEOC guidance on AI in hiring?
The EEOC has issued guidance on the potential for AI to create discriminatory hiring practices and how to avoid them.
What is New York City’s AI hiring statute?
New York City’s AI hiring statute requires employers to conduct anti-bias audits on automated decision-making tools.
What is Illinois law on AI?
The Illinois Artificial Intelligence Video Interview Act requires employers to inform candidates about using AI for video interviews.
Should artificial intelligence replace recruitment?
AI will not replace recruitment but will assist recruiters in making decisions by analyzing data and reducing bias.
Is AI in hiring ethical?
The ethical implications of AI in hiring involve potential bias and the need for transparency and fairness.