April 6, 2023 By Jennifer Kirkwood 3 min read

Under New York City Local Law 144, enacted in December 2021, organizations sourcing, screening, interviewing, hiring, or promoting individuals in New York City must conduct annual bias audits of the automated employment decision tools they use.

This new regulation applies to any “automated employment decision tool”: any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence, including both homegrown and third-party programs. Organizations must also publish information on their websites about how these tools inform their selection and interview processes. Specifically, organizations must demonstrate how their AI tools support fairness and transparency and mitigate bias. This requirement aims to increase transparency in organizations’ use of AI and automation in hiring and to help candidates understand how they are evaluated.

As a result of these new regulations, global organizations with operations in New York City may be pausing the rollout of new HR tools, as their CIOs or CDOs must soon audit any tool that affects hiring decisions in New York.

To address compliance concerns, organizations worldwide should implement bias audit processes so they can continue leveraging the benefits of these technologies. Such an audit offers the chance to evaluate the entire candidate-to-employee lifecycle, covering all relevant personas, tools, data, and decision points. Even simple tools that recruiters use to review new candidates can be improved by incorporating bias mitigation into the AI lifecycle.
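At the core of a bias audit of this kind is a comparison of selection rates across demographic groups. As a rough illustration only (not a compliant Local Law 144 audit, and with purely hypothetical group names and numbers), the impact-ratio calculation might be sketched as:

```python
# Illustrative sketch: compute each group's selection rate and its
# impact ratio (rate divided by the highest group's rate). The 0.8
# threshold reflects the EEOC's "four-fifths" rule of thumb.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate relative to the highest-rate group."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening results: (candidates advanced, candidates screened)
results = {"group_a": (48, 100), "group_b": (30, 100)}

ratios = impact_ratios(results)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

A real audit would go further, for example examining intersectional categories and statistical significance, but even this simple check can surface where a screening tool's outcomes diverge across groups.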

AI regulations are here to stay

Other states are taking steps to address potential discrimination from AI and automation in employment technology. For example, California is working to remove facial analysis technology from the hiring process, and Illinois has recently strengthened its facial recognition laws. Washington, D.C., and several other jurisdictions are also proposing algorithmic HR regulations. In addition, countries like Canada, China, Brazil, and Greece have implemented data privacy laws.

These regulations have arisen in part from guidelines issued by the US Equal Employment Opportunity Commission (EEOC) on AI and automation, and from data retention laws in California. Organizations should begin auditing their HR and talent systems, processes, vendors, and third-party and homegrown applications to mitigate bias and promote fairness and transparency in hiring. This proactive approach can help reduce the risk of brand damage and demonstrate a commitment to ethical and unbiased hiring practices.

Bias can cost your organization

In a climate where human and workers’ rights are under close scrutiny, mitigating bias and discrimination is paramount.

Executives understand that a brand-disrupting hit from discrimination claims can have severe consequences, including the loss of their own positions. HR departments and thought leaders emphasize that people want to feel a sense of diversity and belonging in their daily work. According to the 2022 Gallup poll on engagement, the top attraction and retention factor for employees and candidates is psychological safety and wellness.

Organizations must strive for a working environment that promotes diversity of thought, leading to success and competitive differentiation. Therefore, compliance with regulations is not only about avoiding fines but is also about demonstrating a commitment to fair and equitable hiring practices and creating a workplace that fosters belonging.

The time to audit is now – and AI governance can help

All organizations must ensure they use HR systems responsibly and take proactive steps to mitigate potential discrimination. This includes auditing HR systems and processes to identify and address areas where bias may exist.

While fines can be managed, the damage to a company’s brand reputation can be a challenge to repair and may impact its ability to attract and retain customers and employees.

CIOs, CDOs, Chief Risk Officers, and Chief Compliance Officers should take the lead in these efforts and ensure their organizations comply with all relevant regulations and ethical standards. By doing so, they can build a culture of trust, diversity, and inclusion that benefits both their employees and the business as a whole.

A holistic approach to AI governance can help. Organizations that stay proactive and infuse governance into their AI initiatives from the onset can help minimize risk while strengthening their ability to address ethical principles and regulations.

