What You Should Know About Artificial Intelligence in Human Resources


Artificial intelligence (“AI”) is in the workplace. Employers are implementing artificial intelligence in human resources (“HR”) departments worldwide to help find applicants, implement training, and communicate with staff. 

Using artificial intelligence in HR without adequately vetting the software may bring unintended legal consequences. While the law is struggling to keep pace with AI’s emergence in the workplace, any use of AI must still comply with existing laws. The attorneys at the Smithey Law Group, LLC describe some of the more prevalent legal implications of AI in HR. 

What Is Artificial Intelligence?

Artificial intelligence is the simulation of human intelligence by machines, especially computers. These computers use algorithms to learn and predict behaviors. An algorithm is a set of instructions that tells the computer system how to process information so it can learn and improve on its own. 

Types of Artificial Intelligence Used in Human Resources

Artificial intelligence in HR can take many forms. The following are just a few examples of artificial intelligence in the workplace.

Recruitment and Screening

AI software can scan social media platforms to help identify and recruit candidates with specific qualifications. It can also sift through resumes to screen out unqualified applicants, analyze video interviews, and administer pre-employment tests. 

Onboarding

HR departments can use AI software to help simplify benefits administration and set up work-related accounts. It can also facilitate new hire education.

Day-to-Day Support

Companies may use AI to track employee time through keystrokes or facial recognition. AI may also power workplace communications, and chatbots can answer common questions about benefits. In addition, AI can support ongoing learning and staff training programs.

Legal Issues of AI in HR: Bias

Employers must pay special attention when using these tools to make hiring or promotion decisions because AI in HR can conflict with federal and state anti-discrimination laws. When these programs screen for certain qualities in applicants in a way that excludes members of a protected group, or when they fail to accommodate people with health conditions or impairments, they may run afoul of federal and state civil rights laws. 

Title VII of the Civil Rights Act and the Age Discrimination in Employment Act 

Title VII of the Civil Rights Act (Title VII) prohibits employers from discriminating against applicants and employees based on race, color, religion, sex, or national origin. The Age Discrimination in Employment Act (ADEA) prohibits age discrimination. This means that using an artificial intelligence program that screens out applicants based on one of these characteristics is illegal.

Not only do Title VII and the ADEA prohibit overt discrimination, but they also prohibit employers from taking facially neutral actions that have a discriminatory impact on people in one of those protected groups. An example of disparate impact in the artificial intelligence context is a human resources department instructing a program to use a model resume to screen for qualified applicants. Depending on the model resume used, the program may screen out women who took time off to raise children or members of racial or ethnic minorities. 

Another legal problem that arises with AI in human resources is embedded bias. Artificial intelligence tends to reflect the unconscious bias of its programmers or of those supplying its data. For example, if an HR department feeds the candidate-screening program the resumes of all recently hired applicants, but the selection of those applicants reflects the unconscious bias of the human resources department, the AI will perpetuate that bias. 

The Americans with Disabilities Act 

Artificial intelligence in human resources has not always been accessible to those with health conditions or impairments, raising issues under the Americans with Disabilities Act (ADA). The ADA prohibits discrimination based on a person’s disability so long as the person can fulfill the essential functions of a job, either on their own or with a reasonable accommodation. Employers must provide reasonable accommodations unless doing so would impose an undue hardship on the employer. An example of a reasonable accommodation for a job applicant would be providing alternate means to apply for the job, such as large-print, braille, or audiotape application processes. 

Some AI in the hiring process has been criticized for discerning an applicant’s disability and screening them out of the hiring process. AI may also screen out applicants who could do the job with a reasonable accommodation because of its particular selection criteria. The Equal Employment Opportunity Commission has issued guidance on how artificial intelligence can implicate the ADA and how employers can avoid violations.  

Maryland State Law

Maryland law may also create problems with certain uses of artificial intelligence in the workplace. Maryland’s anti-discrimination law, much like Title VII, the ADA, and the ADEA, prohibits employment discrimination against these protected groups, and it adds several categories to the list. Under Maryland’s anti-discrimination law, an employer also cannot discriminate based on marital status, sexual orientation, or genetic information. A Maryland employer should avoid AI that reflects any conscious or unconscious bias regarding these groups. 

Additionally, Maryland is one of the few states that have enacted legislation specifically addressing artificial intelligence: employers cannot use facial recognition during pre-employment interviews without the applicant’s consent. An employer may also face legal problems with facial recognition software used on current workers. Studies have shown that facial recognition software misidentifies people of Asian or African descent at a higher rate than Caucasians, “with the poorest accuracy consistently found in subjects who are female, Black, and 18-30 years old.” This is problematic if, for example, an employer uses facial recognition to record employees’ time; it could result in people of color having inaccurate timesheets and paychecks. 

Risk Reduction

Companies can reduce the risk of artificial intelligence running afoul of the law. Before purchasing software, they can ask the vendor whether it has conducted an adverse impact assessment and what exactly the algorithms examine. Employers can also ensure that the selected programs provide alternative formats that make them accessible to people with disabilities. 

Contact the Smithey Law Group with Questions About AI and the Law

Technology advancements don’t necessarily benefit all workers, especially if they exacerbate existing biases. Fortunately, the employment attorneys at the Smithey Law Group are here to help. We’ve handled employment discrimination claims arising from technology old and new, and we are ready to help you face the challenges of the modern work environment. Contact us today.

Joyce Smithey, a seasoned employment and labor law attorney, has over 22 years of experience representing both employers and employees in Maryland and D.C. Her practice, rooted in a deep understanding of employment law, spans administrative hearings to federal litigation. Joyce's approach is comprehensive, focusing on protecting client interests while ensuring legal compliance. A Harvard graduate, her career began in Fortune 500 companies, transitioning to law after a degree from Boston University School of Law. Joyce's expertise is recognized by numerous awards, including Maryland’s Top 100 Women. At Smithey Law Group LLC, which she founded in 2018, Joyce continues to champion employment rights, drawing on her rich background in law and business.
