Leading artificial intelligence companies, including OpenAI, Alphabet, and Meta Platforms, have agreed to voluntary commitments intended to make their AI technology safer, according to the Biden administration.
The commitment also involves companies such as Anthropic, Inflection, Amazon.com, and Microsoft, a partner of OpenAI. These organizations have pledged to conduct thorough testing of AI systems prior to their public release and to share information regarding risk mitigation and cybersecurity investments.
This initiative is viewed as a significant step forward in the Biden administration’s broader aim to regulate rapidly evolving AI technologies, which have seen substantial increases in both investment and consumer engagement.
The rise of generative AI, which can produce new content such as ChatGPT's human-like text, has sparked discussions among lawmakers worldwide about managing the risks this advanced technology poses to national security and economic stability.
In June, Senate Majority Leader Chuck Schumer called for comprehensive legislation that would establish protective measures for artificial intelligence.
Current legislative proposals in Congress include requirements for political advertisements to indicate if AI was employed to generate images or other forms of content.
President Joe Biden will convene leaders from the seven participating companies at the White House on Friday, where discussions will also center on drafting an executive order and bipartisan legislation focused on AI.
As part of their commitments, the companies are set to create a system for “watermarking” various types of AI-generated content, including text, images, audio, and videos. This watermarking process aims to enable users to easily identify when AI technology has influenced or produced the content.
The specifics of how this watermark will be displayed in shared content remain uncertain.
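The commitments do not say how such a watermark would work in practice. As a purely hypothetical sketch, one approach would be for a provider to attach a cryptographically signed provenance tag to generated content that downstream software could check; the key, model name, and function names below are invented for illustration and do not reflect any company's announced scheme.

```python
import hashlib
import hmac
import json

# Hypothetical illustration only: the commitments leave the watermark design open.
# This sketch assumes a metadata-tag approach in which the provider signs a
# provenance record and ships it alongside the generated content.

SECRET_KEY = b"provider-signing-key"  # assumption: a secret held by the AI provider


def tag_content(content: str, model: str) -> dict:
    """Attach a signed provenance tag marking the content as AI-generated."""
    record = {"ai_generated": True, "content": content, "model": model}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_tag(record: dict) -> bool:
    """Return True if the tag matches the content and the provider's key."""
    claimed = record.get("signature", "")
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)


tagged = tag_content("An AI-written paragraph ...", model="example-model")
print(verify_tag(tagged))   # True: tag is intact
tagged["content"] = "edited text"
print(verify_tag(tagged))   # False: content no longer matches the tag
```

A metadata tag of this kind is easy to strip from shared content, which is one reason researchers also study watermarks embedded in the generated text or pixels themselves; the sketch above only illustrates the verification idea, not a robust scheme.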
Furthermore, the participating companies committed to protecting users' privacy as AI develops and to ensuring the technology is free of bias and not misused to discriminate against vulnerable communities. They also said they plan to develop AI solutions for scientific problems such as medical research and mitigating climate change.
© Thomson Reuters 2023