
President Biden Signs Sweeping Executive Order on the Use and Development of Artificial Intelligence

Intellectual Property

On October 30, 2023, less than a year after OpenAI publicly released its ChatGPT model, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order 14110, referenced below as “the Executive Order”), a broad and comprehensive order that will affect businesses in all industries.

The Executive Order directs numerous federal agencies, and calls on Congress, to address critical issues arising from the development and use of artificial intelligence (AI), a technology that promises great potential benefits but may also pose significant risks to national security, public safety, consumers, workers, and civil rights. In addition, the Executive Order sets a pathway to increase development of, and investment in, safe and responsible AI technology. The stated principles and priorities guiding the federal government’s approach to AI include the importance of safety and security; promotion of responsible innovation, competition, and collaboration; support of American workers; advancement of equity and civil rights; protection of privacy and civil liberties; management of risks from the government’s own use of AI; and strengthening of American leadership in AI.

Considerations for Businesses in All Industries

Due to its sweeping nature, the Executive Order will affect businesses in all industries – not just information technology or life sciences. Any business that currently uses, or plans to use, AI-based technology will likely be affected by agency guidance or rulemaking directed by the Executive Order. A few notable elements of the Executive Order include:

  • Use of AI-Generated Content. The Department of Commerce is directed to develop guidance for content authentication and watermarking so that federal agencies can clearly label AI-generated content. This guidance will likely serve as a blueprint for identifying AI-generated content in the private sector as well. Businesses using AI-generated content – such as marketing materials, customer service chatbots, news releases and other communications, or social media posts – may be required to adhere to the standards developed by the Department of Commerce in the near future.
  • Worker Protections. The Department of Labor is directed to provide guidance to employers on the use of AI technology and compliance with employment and labor laws, which may affect the use of AI in hiring/firing decisions, evaluating job performance, monitoring employees, and determining compensation and benefits. The Department of Labor is also tasked with providing a report on the risks AI poses in displacing workers and potential federal programs to address these risks.
  • Data Privacy. Independent agencies are encouraged to use their full range of authorities to protect the privacy of consumers and individuals in connection with AI technology. This will likely result in agency guidance or rulemaking on the use of personally identifiable information or personal health information as data for training AI models or other AI use cases. Notably, the Executive Order specifically requires the Department of Health and Human Services to establish an AI task force to develop a strategic plan for the responsible use of AI technology in the health and human services sector (including research and discovery, drug and device safety, healthcare delivery and financing, and public health).
  • Development and Procurement of AI. Various agencies and executive officials are directed to establish clear standards and procedures for the federal government’s development, use, and procurement of AI technology. Again, the federal guidelines and best practices adopted by the agencies may be a blueprint for the private sector in the future. 

Considerations for the AI Industry

The Executive Order aims to promote the use and development of safe AI-based technologies, while also seeking to mitigate inherent risks and prevent catastrophic harms. In addition to the considerations described above, a few notable elements of the Executive Order directed to the AI industry, in particular, include:

  • Notification and Disclosure Requirements. Developers of an AI model that poses a significant risk to national security, national economic security, or public health and safety will be required to notify the federal government when training the model and share the results of red-team safety tests.[1]
  • AI Safety Standards. The National Institute of Standards and Technology (NIST) is directed to set rigorous standards for extensive red-team testing to ensure safety before an AI model is publicly released. In addition, the Departments of Homeland Security and Energy will address threats to critical infrastructure as well as chemical, biological, nuclear, radiological, and cybersecurity risks.
  • Civil Rights Protections and Criminal Justice. The Department of Justice and civil rights offices are directed to coordinate to develop best practices for the use of AI in investigating and prosecuting civil rights violations. In addition, the Attorney General will provide a report on the use of AI in the criminal justice system, including its use in sentencing, parole and release, police surveillance, crime forecasting and predictive policing, and other related issues.
  • AI Investment and Competition. Federal programs will promote the development of, and investment in, AI technology, including by establishing a National AI Research Resource and grants for AI research in key areas such as healthcare and climate change; increasing funding and resources for small businesses developing AI; and establishing new criteria for work visas for immigrants with skills in AI technology. Notably, the Federal Trade Commission is also encouraged to exercise its authorities, including rulemaking, to ensure fair competition in the AI marketplace (which may signal the Biden Administration’s focus on enforcing antitrust laws in this area).
  • Adoption of AI by the Federal Agencies. Various federal agencies will be required to prioritize the development and use of AI systems within those respective agencies, increase the hiring of AI-skilled workers, and develop policies and procedures related to the use of AI technologies. In addition, a multi-agency AI and Technology Task Force will be created to oversee the hiring of AI developers throughout the federal government. A White House AI Council will coordinate the activities of agencies across the federal government.

Although the Executive Order does not require immediate action by businesses outside of the AI industry, it does provide insight into the immense number and scope of regulations, guidance, and legislation that are likely to develop in the near future.

Businesses in all industries would be well-served to monitor the progress and development of AI regulations as the technology becomes increasingly prevalent in their everyday operations, including critical areas such as producing AI-generated materials and communications, using AI for employment and compensation decisions, using AI to offer or deny credit or leases to consumers, maintaining and safeguarding private or confidential information in connection with AI systems, and procuring and monitoring outside vendors using AI-based technology. 

This Executive Order confirms that AI is part of all of our futures – today. Please contact one of the authors or your Calfee attorney with any questions.


[1] “Red-team safety tests” use authorized hackers to emulate the tactics, procedures, and practices of real-world attackers in an effort to test the safety and security of a system.

For additional information on this topic, please contact your regular Calfee attorney or the author(s) listed below:


For more updates and alerts, visit the News section of Calfee.com.

