- President Biden issued an executive order establishing new standards and requirements for developing safe and responsible AI systems, including requiring developers of high-risk AI to share safety test results with the government.
- The order tasks NIST with developing clear standards and tests for AI safety and security, and charges the new AI Safety and Security Board with applying these across critical infrastructure.
- It aims to combat AI disinformation, prioritize privacy in AI development, prevent algorithmic discrimination, analyze AI’s impact on jobs, and promote AI innovation and competition.
President Biden has taken a major step to address the impacts of artificial intelligence by issuing a wide-ranging executive order. This directive establishes new standards and requirements aimed at ensuring AI systems are developed safely and responsibly.
Introducing New Safety Requirements for Powerful AI
Under the executive order, developers of high-risk AI systems must now share safety test results and other vital information with the federal government. The requirement applies to companies building any foundation model that could pose a serious risk to national security, economic stability, or public health. The goal is to ensure powerful AI systems are safe and trustworthy before they are deployed.
Empowering NIST to Develop AI Safety Standards
A key part of the order tasks the National Institute of Standards and Technology (NIST) with developing clear standards, tools, and tests for AI safety and security. The newly formed AI Safety and Security Board will then apply these NIST standards across critical infrastructure. This promotes consistent assessment of AI risks.
Combating AI Disinformation and Deepfakes
The order takes aim at AI-generated disinformation and deepfakes. It directs the Department of Commerce to develop best practices for detecting synthetic content and authenticating genuine content, along with watermarking guidelines to clearly label AI-generated media. The aim is to limit the spread of deceptive AI content.
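To make the watermarking idea concrete, here is a minimal, purely illustrative sketch of one simple approach: hiding a bit pattern in the least-significant bits of image pixels so it can later be recovered to check provenance. The executive order and the forthcoming Commerce guidance do not prescribe this (or any specific) scheme; the function names and values below are hypothetical, and production provenance systems rely on far more robust methods such as cryptographically signed metadata.

```python
# Toy sketch of invisible watermarking: embed a bit pattern into the
# least-significant bits (LSBs) of image pixels, then read it back out.
# Illustrative only -- not a standard mandated by the executive order.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write `bits` (0/1 values) into the LSBs of the first len(bits) pixels."""
    flat = pixels.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # clear LSB, then set it
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits watermark bits from the pixel LSBs."""
    return pixels.flatten()[:n_bits] & 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # fake grayscale image
    mark = rng.integers(0, 2, size=32, dtype=np.uint8)           # 32-bit watermark
    stamped = embed_watermark(image, mark)
    assert np.array_equal(extract_watermark(stamped, 32), mark)  # watermark recovered
    print("recovered watermark:", extract_watermark(stamped, 32))
```

A scheme this simple is trivially destroyed by re-encoding or cropping, which is exactly why the order asks Commerce to define more durable detection and authentication practices.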
Prioritizing Privacy in AI Development
On the issue of privacy, the order calls on Congress to pass bipartisan federal data privacy legislation. It also makes privacy-preserving techniques a priority for federal AI research funding, so that models can be trained without compromising the privacy of the underlying data.
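One example of a privacy-preserving training technique the research community uses is differentially private gradient averaging (in the spirit of DP-SGD): each example's gradient is clipped and calibrated Gaussian noise is added before the model is updated. The order does not mandate this particular method; the sketch below is a rough, assumption-laden illustration with hypothetical parameter values.

```python
# Rough sketch of differentially private gradient averaging (DP-SGD style).
# Illustrative only; clip_norm and noise_multiplier values are hypothetical.
import numpy as np

def dp_average_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each example's gradient to `clip_norm`, average them, and add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))  # per-example clipping factor
    clipped = per_example_grads * scale
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    # Noise is scaled down by the batch size, as in standard DP-SGD accounting.
    return clipped.mean(axis=0) + noise / per_example_grads.shape[0]

if __name__ == "__main__":
    grads = np.random.default_rng(0).normal(size=(32, 10))  # 32 examples, 10 parameters
    print(dp_average_gradients(grads))
```

The design trade-off is explicit: more noise means stronger privacy guarantees but slower or less accurate learning, which is part of why the order steers research funding toward improving these techniques.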
Preventing Algorithmic Discrimination
To address concerns over algorithmic bias, the directive provides guidance for preventing AI-driven discrimination in housing, federal benefits programs, and federal contracting. It also promotes training and coordination to tackle algorithmic discrimination. The goal is to advance equity in how AI is used.
Analyzing AI’s Impact on Jobs
Recognizing AI’s potential labor market impacts, the order mandates an in-depth report on how AI will affect jobs, incomes, and workers. It also outlines principles to mitigate harm and maximize benefits for workers affected by AI-driven automation.
Promoting AI Innovation and Competition
To maintain U.S. leadership in AI, the order expands research grants in areas like healthcare and climate change. Small AI developers and startups will gain access to technical assistance and resources, and the Federal Trade Commission (FTC) is encouraged to use its authority to keep the AI marketplace fair, open, and competitive. Together, these measures are meant to foster an innovative and competitive AI ecosystem.
Coordinating AI Security and Strategy
Finally, the order directs the development of a National Security Memorandum to coordinate AI strategy and security across government. It also calls for expanded international engagement on AI safety, risk management, and global AI standards.
With this sweeping executive order, the Biden administration has signaled its commitment to addressing the full range of policy challenges posed by artificial intelligence. The directive lays the groundwork for managing AI risks while promoting trust, innovation, and leadership. It marks a major step toward developing AI responsibly for the future.