Top AI Firms Commit to Manage Risks in Generative AI, Including Watermarking of Digital Content

Seven leading artificial intelligence (AI) firms will engage in discussions with the US government to address the risks associated with generative AI. Under a non-binding agreement, the companies (OpenAI, Google, Meta, Microsoft, Amazon, Anthropic, and Inflection) will make voluntary commitments to self-police their technology.


One significant focus of their commitments is the development of technical mechanisms, such as watermarking AI-generated content, so that people can distinguish between real and AI-generated material. Notably, OpenAI's image generator, DALL-E, already applies a watermark, while competitors such as Google are still working on similar systems. Shutterstock and Adobe have adopted their own measures to limit their liability for AI-generated images derived from original content.
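For illustration, the sketch below shows one simple form such a mechanism could take: stamping a visible provenance mark onto a generated image. The file names, colors, square size, and helper function are hypothetical and assume the Pillow library; they are not any vendor's actual watermarking scheme, and production approaches (visible marks, embedded metadata, or statistical signals in the pixels themselves) are considerably more sophisticated.

```python
# Illustrative sketch only: stamps a small row of colored squares onto the
# bottom-right corner of an image as a visible provenance mark. Colors, size,
# placement, and file names are assumptions for demonstration purposes.
from PIL import Image, ImageDraw  # pip install Pillow

def add_corner_watermark(in_path: str, out_path: str, square: int = 16) -> None:
    """Draw a row of five colored squares in the bottom-right corner."""
    colors = [(255, 255, 0), (64, 224, 208), (0, 255, 0), (255, 0, 0), (0, 0, 255)]
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    for i, color in enumerate(colors):
        x0 = w - (len(colors) - i) * square  # position each square from the right edge
        draw.rectangle([x0, h - square, x0 + square, h], fill=color)
    img.save(out_path)

# Example usage with hypothetical file names:
add_corner_watermark("generated.png", "generated_marked.png")
```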

The surge in viral fake AI images, such as those depicting Donald Trump's arrest or the Pope in a puffer jacket, has raised concerns about the potential impact on the 2024 presidential election. To address these issues, the companies are committing to invest in more robust cybersecurity measures and to allow third-party inspections to identify vulnerabilities in their AI systems.

The firms also pledge to share crucial information about AI risks with government agencies, academia, and civil society. However, some commitments, such as reporting AI systems' capabilities, limitations, and areas of appropriate and inappropriate use, leave considerable room for interpretation.

In response to potential societal risks, the White House is urging AI companies to prioritize research on systemic bias and privacy concerns. Furthermore, the administration encourages leveraging AI technology for positive contributions, such as fighting cancer and climate change.

Although the current commitments are voluntary, the AI industry faces the possibility of more extensive regulation through a future Executive Order. The White House says it is actively developing executive action on AI that it intends to sign, with the aim of governing the technology's use effectively. The administration emphasizes that AI regulation is a top priority for President Biden.