The last few years—even the last few months—have seen artificial intelligence (AI) breakthroughs come at a dizzying pace. AI that can generate paragraphs of text as well as a human, create realistic imagery and video from text, or perform hundreds of different tasks has captured the public’s attention. People see AI’s high level of performance, its creative potential and, in some cases, the ability for anyone to use it with little to no technical expertise. This wave of AI is attributable to what are known as foundation models.

What are foundation models?

As the name suggests, foundation models can serve as the foundation for many kinds of AI systems. Using machine learning techniques, these models apply information learned in one situation to another. While they require far more data than the average person needs to transfer understanding from one task to another, the result is much the same. For example, once you have spent enough time learning how to cook, you can figure out how to prepare almost any dish without too much effort, and even invent new ones.
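For technically minded readers, the short sketch below makes this idea concrete. It is not drawn from the white paper and uses a publicly available pretrained language model ("bert-base-uncased", via the Hugging Face Transformers library) as a stand-in for a foundation model: a single general-purpose model is loaded and pointed at a new classification task, and only the small task-specific head would then need fine-tuning on a modest labeled dataset.

```python
# A minimal sketch of adapting one pretrained model to a new task.
# Assumes the `transformers` and `torch` packages are installed; the model
# name and the two-class task are illustrative, not IBM's implementation.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # publicly available pretrained model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach a fresh 2-label classification head on top of the pretrained model;
# only this head (plus optional fine-tuning) must learn the new task.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("This new battery material looks promising.", return_tensors="pt")
outputs = model(**inputs)  # before fine-tuning, these scores are essentially random
print(outputs.logits)
```

The point of the sketch is that the expensive, data-hungry pretraining happens once; adapting the result to a specific downstream task takes comparatively little data and code.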

This wave of AI stands to replace the task-specific models that have dominated the landscape. And the potential benefits of foundation models to the economy and society are vast. For example, identifying candidate molecules for novel drugs or suitable materials for new battery technologies requires sophisticated knowledge of chemistry and time-intensive screening and evaluation of different molecules. IBM’s MoLFormer-XL, a foundation model trained on data about 1.1 billion molecules, helps scientists rapidly predict the 3D structure of molecules and infer their physical properties, such as their ability to cross the blood-brain barrier. IBM recently announced a partnership with Moderna to use MoLFormer models to help design better mRNA medicines. IBM also partners with NASA to use foundation models to analyze geospatial satellite data and better inform efforts to fight climate change.

However, there are also concerns about their potential to cause harm in new or unforeseen ways. Some risks of using foundation models are similar to those of other kinds of AI, such as risks related to bias. But foundation models can also pose new risks and amplify existing ones, such as hallucination, the generation of false yet plausible-seeming content. These concerns are prompting the public and policymakers to question whether existing regulatory frameworks can protect against these potential harms.

What should policymakers do?

Policymakers should take productive steps to address these concerns, recognizing that a risk- and context-based approach to AI regulation remains the most effective strategy for minimizing the risks of all AI, including those posed by foundation models.

The best way policymakers can meaningfully address concerns related to foundation models is to ensure any AI policy framework is risk-based and appropriately focused on the deployers of AI systems. Read the IBM Policy Lab’s A Policymaker’s Guide to Foundation Models—a new white paper from us, IBM’s Chief Privacy & Trust Officer Christina Montgomery, AI Ethics Global Leader Francesca Rossi, and IBM Policy Lab Senior Fellow Joshua New—to understand why IBM is asking policymakers to:

  1. Promote transparency
  2. Leverage flexible approaches
  3. Differentiate between different kinds of business models
  4. Carefully study emerging risks

Given the incredible benefits of foundation models, effectively protecting the economy and society from their potential risks will help ensure that the technology is a force for good. Policymakers should act swiftly to better understand and mitigate the risks of foundation models while still ensuring that the approach to governing AI remains risk-based and technology-neutral.

Read “A Policymaker’s Guide to Foundation Models”