Technology | Author: Yunfeng Zhang | Jul 25, 2023 05:47 PM (GMT+8)

"We must be vigilant and alert to the risks of emerging technologies that could threaten our democracy and our values." So said President Joe Biden in a speech on July 21, US time. On the same day, the White House announced that seven artificial intelligence giants had "voluntarily" made a series of commitments to address the risks posed by AI after months of consultation with the White House.


The seven companies are Google, Microsoft, Meta, OpenAI, Amazon, Anthropic and Inflection.

The Biden administration hopes these "voluntary commitments" will achieve safe, reliable and transparent development of AI technology before substantive laws and regulations are in place. White House officials have acknowledged that there is currently no enforcement mechanism to ensure the companies adhere to their commitments, but said they are working on an executive order to that end.

With the widespread use of generative AI, fabricated content of all kinds, including audio and video, has begun to proliferate, and the spread of disinformation has become a significant concern. The issue has drawn particular attention in the run-up to the 2024 U.S. elections, and the seven companies jointly announced that they will develop technologies such as watermarking systems to ensure that users can identify AI-generated content.
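The article does not describe how such watermarking would work, and the companies have not published their schemes. As a purely illustrative toy, the sketch below hides a short tag in text using zero-width Unicode characters; real proposals for AI-text watermarking (e.g. statistical token-level watermarks) are far more sophisticated, and the function names here are hypothetical.

```python
# Toy illustration of invisible text watermarking, NOT any company's actual scheme.
# The tag is encoded as bits and appended using zero-width characters that
# render as nothing but survive copy-paste.

ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / non-joiner
REVERSE = {v: k for k, v in ZERO_WIDTH.items()}

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag as an invisible bit pattern after the text."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZERO_WIDTH[b] for b in bits)

def extract_watermark(text: str) -> str:
    """Recover the hidden tag, if any, from the invisible characters."""
    bits = "".join(REVERSE[c] for c in text if c in REVERSE)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

marked = embed_watermark("This paragraph was machine-written.", "AI")
print(extract_watermark(marked))  # prints "AI"
```

A scheme like this is trivially stripped by re-typing the text, which is one reason robust watermarking of generated content remains an open research problem rather than a solved one.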

Other commitments made jointly or in part by the seven firms include:

  • Invest in cybersecurity and insider threat measures to protect proprietary and unpublished model weights;

  • Facilitate third-party discovery and reporting of vulnerabilities in their AI models;

  • Publicly report on the capabilities, limitations, and areas of appropriate and inappropriate use of their AI models;

  • Prioritize research on the social risks that AI models may pose, including avoiding harmful bias and discrimination and protecting privacy;

  • Develop and deploy advanced AI models to help solve society's biggest challenges.

The issue of watermarking AI-generated content was also discussed with OpenAI CEO Sam Altman during EU Commissioner Thierry Breton's visit to San Francisco in June this year. Altman said at the time that he would be "happy to demonstrate OpenAI's handling of watermarking as soon as possible."

While Biden called the commitments "real and concrete," many of them are somewhat vague, lacking specific details for implementation, and more symbolic than substantive.

For example, the seven companies collectively pledged to share information about managing AI risks within the industry and with governments, civil society, and academia. However, it remains unclear what scope of risk information will be disclosed or how the sharing will be carried out.

The AI companies also take very different approaches to disclosure. Meta is clearly more open about sharing its AI models, most of which are open-source and publicly available to all. OpenAI, on the other hand, declined in its GPT-4 release to disclose the size of its training data or its models, citing competitive and safety concerns.

The commitments also state that AI models will undergo internal and external security testing before release. However, they do not specify what tests the models will undergo, which organizations will conduct them, or what the criteria will be.

In fact, these AI companies have long claimed to test their models internally for security before release. What is new is that, for the first time, they have collectively committed to allowing external experts to test their models.