Sadly, the world of AI has moved from ethics washing to ethics squashing.

Microsoft recently laid off its entire ethics and society team. The team was responsible for transforming the principles created by Microsoft’s Office of Responsible AI into consistent rules and practices. It was also tasked with anticipating the risks that AI could pose to people and society, and most recently, it had been evaluating Microsoft’s adoption of OpenAI’s technology. It’s no secret that Microsoft has made massive bets on OpenAI, investing $11 billion in the startup and infusing ChatGPT’s capabilities into Bing and across its suite of products. A team whose very function was to pump the brakes was bound to get run over.

This is just the most recent example of AI ethics falling by the wayside. In December 2020, Timnit Gebru, a researcher focused on AI ethics at Google, was infamously terminated after coauthoring a paper highlighting the dangers of bias in large language models. Google executives took exception to the paper's criticism of a technology with massive commercial potential.

Unfortunately, tech companies haven't learned that prioritizing profitability over principles is short-sighted, with long-term consequences for both their brands and society at large.

Shape Your Own Ethical Destiny — Don’t Let Tech Providers Do It For You

It’s not just technology companies that are grappling with these ethical dilemmas. Banks using AI to make credit decisions and detect fraud, healthcare companies using AI for medical diagnoses, and governments using AI to provide essential services all need to navigate these ethical waters. Unfortunately, the “leaders” in AI are setting a poor example. Relying on them to build AI ethically will expose you to reputational and regulatory risk. Here are some ways to mitigate these risks:

  • Third-party due diligence. Companies adopting solutions from Microsoft, Google, and other AI technology providers will ultimately be held accountable, both legally and in the court of public opinion, for any ethical missteps. Effective third-party due diligence is therefore essential and requires tight coordination among your procurement, risk and compliance, and data science teams. Demand explainability from vendors, and don't trust those that refuse to provide transparency into how their systems work.
  • Fairness assessments. AI inherits the biases within its training data. Because large language models are still in their infancy, the biases within them are still being discovered. Without dedicated teams at vendors to perform this research, these biases are likely to surface in the commercialized products you use to interact with customers and run your business. Companies should start measuring a variety of fairness metrics, such as demographic parity and equal opportunity, to catch discriminatory outcomes before they reach customers (see the sketch after this list).
  • Enterprisewide AI governance. Enterprises should build their AI governance programs across four pillars — purpose, action, culture, and assessment — to ensure compliance with internal policies and with forthcoming regulations such as the EU's AI Act and New York City's Local Law 144. Be on the lookout for Forrester's AI governance report in the next month or so.
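
To make the fairness point concrete, here is a minimal sketch of what a first fairness check might look like. Demographic parity difference and equal opportunity difference are standard starting points, not a complete assessment; the data, group labels, and decision encoding below are hypothetical.

```python
# A minimal sketch of two common group-fairness checks. The toy data and
# group labels are hypothetical; a real assessment would cover more metrics
# and be validated by your data science and compliance teams.

def demographic_parity_difference(y_pred, groups):
    """Gap in positive-prediction rates between groups.
    A value near 0 suggests the model approves all groups at similar rates."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

def equal_opportunity_difference(y_true, y_pred, groups):
    """Gap in true-positive rates (recall) between groups.
    Large gaps mean qualified cases in one group are missed more often.
    Assumes every group has at least one positive label."""
    tprs = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
        positives = [p for t, p in pairs if t == 1]
        tprs[g] = sum(positives) / len(positives)
    return max(tprs.values()) - min(tprs.values())

# Toy example: 1 = approved; "A" and "B" are hypothetical customer segments.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, groups):.2f}")
print(f"Equal opportunity difference: {equal_opportunity_difference(y_true, y_pred, groups):.2f}")
```

A difference near zero on either metric is a good sign, but the metrics can conflict with each other, so the right thresholds and trade-offs depend on the use case and the regulations that apply to it.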

We’re in the midst of an AI renaissance. Emerging AI technologies such as large language models will shape the future of business and quite possibly the future of our species. We need ethical gatekeepers to ensure that this future is the one we want to live in. Surely, it’s worth the cost.