April 6, 2023 By Jennifer Kirkwood 3 min read

Under New York City Local Law 144, enacted in December 2021, organizations that source, screen, interview, hire, or promote individuals in New York City must conduct yearly bias audits of their automated employment decision-making tools.

This new regulation applies to any “automated employment decision tool”: any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence, including both homegrown and third-party programs. Organizations must also publish information on their websites about how these tools govern their selection and interview processes. Specifically, organizations must demonstrate how their AI tools support fairness and transparency and mitigate bias. This requirement aims to increase transparency in organizations’ use of AI and automation in their hiring processes and help candidates understand how they are evaluated.

As a result of these new regulations, global organizations with operations in New York City may be pausing the rollout of new HR tools, as their CIO or CDO must soon audit any tools that affect hiring in New York.

To address compliance concerns, organizations worldwide should implement bias audit processes so they can continue leveraging the benefits of these technologies. An audit offers the chance to evaluate the entire candidate-to-employee lifecycle, covering all relevant personas, tools, data, and decision points. Even simple tools that recruiters use to review new candidates can be improved by incorporating bias mitigation into the AI lifecycle.
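To make the audit concrete: the rules implementing Local Law 144 center on impact ratios, where each demographic category's selection rate is compared against the highest selection rate among all categories. The sketch below, in plain Python, shows that calculation on hypothetical screening data; the function name, data shape, and sample numbers are illustrative assumptions, not part of the law's text or any vendor tool.

```python
from collections import Counter

def impact_ratios(records):
    """Compute selection rates and impact ratios per demographic category.

    records: iterable of (category, selected) pairs, where selected is a bool.
    The impact ratio for a category is its selection rate divided by the
    highest selection rate observed among all categories.
    """
    totals = Counter()
    selected = Counter()
    for category, was_selected in records:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    rates = {c: selected[c] / totals[c] for c in totals}
    top_rate = max(rates.values())
    # Return (selection_rate, impact_ratio) for each category.
    return {c: (rates[c], rates[c] / top_rate) for c in rates}

# Hypothetical screening outcomes: (category, advanced_to_interview).
# Category A: 40 of 100 advanced (rate 0.40); category B: 24 of 80 (rate 0.30).
sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 24 + [("B", False)] * 56)
results = impact_ratios(sample)
# Category B's impact ratio is 0.30 / 0.40 = 0.75.
```

A low impact ratio does not by itself prove unlawful discrimination, but it flags where a tool's outcomes diverge across groups and warrant closer review.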


AI regulations are here to stay

Other states are taking steps to address potential discrimination arising from AI and automation in employment technology. For example, California is working to remove facial analysis technology from the hiring process, and Illinois has recently strengthened its facial recognition laws. Washington, D.C., and several states are also proposing algorithmic HR regulations. In addition, countries including Canada, China, Brazil, and Greece have implemented data privacy laws.

These regulations have arisen in part from guidelines issued by the US Equal Employment Opportunity Commission (EEOC) on AI and automation, and from data retention laws in California. Organizations should begin auditing their HR and talent systems, processes, vendors, and third-party and homegrown applications to mitigate bias and promote fairness and transparency in hiring. This proactive approach helps reduce the risk of brand damage and demonstrates a commitment to ethical and unbiased hiring practices.

Bias can cost your organization

In today’s world, where human and workers’ rights are critical, mitigating bias and discrimination is paramount.

Executives understand that a brand-disrupting hit resulting from discrimination claims can have severe consequences, including the loss of their positions. HR departments and thought leaders emphasize that people want to feel a sense of diversity and belonging in their daily work. And according to the 2022 Gallup poll on engagement, the top attraction and retention factor for employees and candidates is psychological safety and wellness.

Organizations must strive for a working environment that promotes diversity of thought, leading to success and competitive differentiation. Therefore, compliance with regulations is not only about avoiding fines but is also about demonstrating a commitment to fair and equitable hiring practices and creating a workplace that fosters belonging.

The time to audit is now – and AI governance can help

All organizations must monitor whether they use HR systems responsibly and take proactive steps to mitigate potential discrimination. This includes conducting audits of HR systems and processes to identify and address areas where bias may exist.

While fines can be managed, the damage to a company’s brand reputation can be a challenge to repair and may impact its ability to attract and retain customers and employees.

CIOs, CDOs, Chief Risk Officers, and Chief Compliance Officers should take the lead in these efforts and monitor whether their organizations comply with all relevant regulations and ethical standards. By doing so, they can build a culture of trust, diversity, and inclusion that benefits both their employees and the business as a whole.

A holistic approach to AI governance can help. Organizations that stay proactive and infuse governance into their AI initiatives from the onset can help minimize risk while strengthening their ability to address ethical principles and regulations.
