AI

White House’s Call for Action: AI Companies to Commit to Safeguards

In a move towards responsible AI development, leading technology companies, including Microsoft, Google, and OpenAI, are set to commit on Friday to a set of safeguards for their AI technologies, following a request from the White House. According to Bloomberg, the companies will voluntarily adhere to a set of principles aimed at ensuring the responsible use of AI. The agreement is intended to be temporary, however, and will expire once Congress passes legislation to regulate AI.

The Biden administration has been proactive in emphasizing responsible AI development, urging tech companies to innovate in generative AI while upholding safety, human rights, and democratic values for the broader public’s benefit.

Vice President Kamala Harris met in May with the CEOs of OpenAI, Microsoft, Alphabet (Google’s parent company), and Anthropic, urging them to take responsibility for the safety and security of their AI products. Continuing this effort, President Joe Biden met with AI industry leaders last month to further discuss AI-related concerns.

A draft document seen by Bloomberg outlines the eight proposed measures that these tech firms are set to agree upon, which include:

  1. Allowing independent experts to test AI models for potential harmful behavior.
  2. Investing in cybersecurity to protect AI systems from potential attacks.
  3. Encouraging third parties to identify and report security vulnerabilities.
  4. Addressing and highlighting societal risks, such as biases and inappropriate uses of AI.
  5. Prioritizing research into the societal risks associated with AI.
  6. Sharing trust and safety information with other companies and government agencies.
  7. Employing audio and visual content watermarking to indicate when content is AI-generated.
  8. Utilizing cutting-edge AI systems, known as frontier models, to tackle significant societal challenges.

The voluntary nature of this agreement reflects the difficulty lawmakers face in keeping pace with rapid advancements in AI technology. Several bills have been introduced in Congress in an attempt to regulate AI: one would prevent companies from invoking Section 230 protections to escape liability for harmful AI-generated content, while another would mandate disclosures in political ads that use generative AI. Congressional offices have also placed restrictions on the use of generative AI.

By embracing these safeguards voluntarily, the technology industry’s leaders are taking a proactive step to demonstrate their commitment to responsible AI development. As the regulatory landscape continues to evolve, collaboration between government and tech companies will be vital to ensuring AI’s potential is harnessed for the greater good of society while mitigating its risks.