White House’s Call for Action: AI Companies to Commit to Safeguards

A draft document seen by Bloomberg outlines eight proposed measures the AI companies are set to agree to:

  1. Allowing independent experts to test AI models for potential harmful behavior.
  2. Investing in cybersecurity to protect AI systems from potential attacks.
  3. Encouraging third parties to identify and report security vulnerabilities.
  4. Addressing and highlighting societal risks, such as biases and inappropriate uses of AI.
  5. Prioritizing research into the societal risks associated with AI.
  6. Sharing trust and safety information with other companies and government agencies.
  7. Employing audio and visual content watermarking to indicate when content is AI-generated.
  8. Utilizing cutting-edge AI systems, known as frontier models, to tackle significant societal challenges.

The voluntary nature of the agreement reflects how difficult it has been for lawmakers to keep pace with advances in AI. Several bills have been introduced in Congress to regulate the technology: one would bar companies from invoking Section 230 protections to avoid liability for harmful AI-generated content, and another would mandate disclosures in political ads that use generative AI. Congressional offices have also placed restrictions on their own use of generative AI.