April 4, 2023 By Jennifer Kirkwood 3 min read

What C-level executives should know about Algorithmic HR

Many organizations now use AI and automation across HR practices, including internal sourcing, screening, hiring, promotions, and pay decisions. As these technologies have been widely adopted, organizations should monitor whether they could be perpetuating bias and discrimination. New laws and regulatory guidance, from the EEOC to New York City Local Law 144 (NYC 144), have emerged to address this concern and promote ethical use of AI in HR.

NYC 144 was passed in December 2021 and takes effect in 2023. The law requires that a bias audit be conducted on any automated employment decision tool before the tool is used. Failure to comply can result in civil penalties of up to USD 500 for a first violation and for each additional violation occurring on the same day as the first, and between USD 500 and USD 1,500 for each subsequent violation. Each day an unaudited tool is used counts as a separate violation.
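To make that penalty schedule concrete, the following sketch estimates a worst-case exposure for a single tool used daily. The per-day amounts are the statutory maximums paraphrased above; actual assessments are set by enforcement authorities, and this is an illustration, not legal advice.

```python
# Sketch: worst-case cumulative civil penalties under the NYC 144 schedule
# described above, assuming one unaudited tool used once per day.
# Amounts are the law's stated maximums; this is illustrative only.

def max_penalty(days_in_violation: int) -> int:
    """Up to $500 for the first day's violation, up to $1,500 per subsequent day."""
    if days_in_violation <= 0:
        return 0
    return 500 + (days_in_violation - 1) * 1500

# A month of daily non-compliance for one tool:
print(max_penalty(30))  # → 44000
```

The point of the arithmetic: because each day of use is a separate violation, exposure grows linearly and quickly dwarfs the cost of conducting the audit itself.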

Beyond the legal consequences, unethical or biased hiring practices can damage a company’s reputation, limiting its ability to attract customers and talent and to maintain shareholder trust. Companies should therefore monitor whether their AI and automation processes could be perpetuating bias or discrimination. HR teams are obligated to understand bias and to protect the rights of protected classes in the United States. Although automation and AI have been used for tasks like resume data parsing for over 15 years, the underlying machine learning models and automated processes should be audited regularly to verify that they support ethical HR practices.

Learn how to develop an AI governance framework

Mishandled data can lead to discrimination

The use of embedded automation, natural language processing and AI in the hiring process can negatively impact certain candidates by eliminating or highlighting specific candidate attributes in discriminatory ways. It can be difficult for organizations to identify when automation or AI is in use, because these technologies are often deeply embedded in the hiring process. Organizations should therefore examine the technologies embedded in their applications and processes to determine whether they collect data about gender, race, disability, ethnicity or other personal attributes that could be used to discriminate against qualified applicants.

To drive fair and ethical hiring practices, organizations should know what information their applications collect, how that data is used, and how it is secured. Guidance from the US Equal Employment Opportunity Commission (EEOC) and the upcoming EU Artificial Intelligence Act (EU AI Act) can serve as valuable references for organizations as they plan their compliance efforts.

There are risks associated with mishandling data in the hiring process. With regulations like NYC 144, organizations should use technology with caution and transparency because errors or lack of oversight could cause costly problems. For example, organizations like Walmart, CarMax, Capital One Financial Group, and KPMG, among others, have paid significant penalties for discriminatory hiring practices. These mistakes can result in fines, considerable brand damage, and loss of customers, employees, and promising candidates due to the perception of unethical hiring practices.

To help avoid these risks, organizations should begin reviewing their hiring processes for bias now, both to prioritize diversity and equity and to maintain compliance with regulations like NYC 144. By being transparent about their hiring processes and dedicating resources to bias mitigation, companies can build trust with employees, customers and the public while also improving the quality of their hires.

A need for a united C-suite compliance strategy

CHROs should partner with their CXO counterparts to navigate the compliance challenges posed by these complex new regulations. Responsibility for compliance overlaps among the CDO, CIO, CPO, and CHRO, and each executive has an opportunity to leverage their expertise to support it.

The focus areas of CIOs, CPOs, and CDOs, namely data security, data privacy, and governance tools and frameworks, provide a strong foundation for CHROs to begin auditing their processes. As domain experts, CHROs are well suited to implement governance over those processes to address regulatory compliance, educate teams and users about responsible use of the technology, and promote fairness, transparency and equal opportunity.

They can also attend to candidate and employee experiences, and to current and future projects that require inspection against ethical HR standards. CIOs, CDOs, CPOs and CHROs alike should examine how automation and AI are used in HR workflows and monitor whether the technology is used responsibly, to avoid the risks of costly fines, brand damage, and loss of trust and talent.

Some example best practices to keep in mind:

  • Execute regular audits that can explain how all hiring, promotion and pay decisions are made, threading through the entire candidate-to-employee lifecycle
  • Educate HR stakeholders and embed technical and ethical AI resources aligned with the CIO and CDO
  • Be transparent and publish standards so candidates and employees know how their data is used and stored
  • Vet employment technology with people who understand both technical and employment privacy requirements
  • Embed ethical AI practices in ESG strategy and in diversity, equity and inclusion initiatives
  • Work with key stakeholders on a holistic AI governance framework to establish or refine processes for directing, managing and monitoring your organization’s AI activities
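As an illustration of the first practice, NYC 144-style bias audits center on selection-rate impact ratios across demographic categories. The sketch below (hypothetical data and group names; the 0.8 threshold is the EEOC's "four-fifths" guideline, one common screening heuristic rather than the law's only test) shows how such a metric can be computed:

```python
# Sketch: selection-rate impact ratios, the core metric in a NYC 144-style
# bias audit. Group names, counts, and the 0.8 threshold are illustrative.

def impact_ratios(selections):
    """selections maps category -> (selected, applied).
    Returns each category's selection rate divided by the highest rate."""
    rates = {cat: selected / applied for cat, (selected, applied) in selections.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

audit = impact_ratios({
    "group_a": (60, 200),   # 30% selection rate
    "group_b": (20, 100),   # 20% selection rate
})

for cat, ratio in audit.items():
    # The EEOC "four-fifths" guideline flags ratios below 0.8 for review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{cat}: impact ratio {ratio:.2f} ({flag})")
```

Running such a calculation on real hiring data, per tool and per decision stage, is the kind of evidence a bias audit produces; a low ratio does not prove discrimination, but it signals where closer review is needed.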
Learn more about an HR/Talent strategy with trustworthy AI
