AI Security & US Policy:
A New Direction
"Americans have not yet grappled with just how profoundly the artificial intelligence (AI) revolution will impact our economy, national security, and welfare. Much remains to be learned about the power and limits of AI technologies. Nevertheless, big decisions need to be made now to accelerate AI innovation to benefit the United States and to defend against the malign uses of AI." - Eric Schmidt + Robert Work (Chair and Vice Chair of the National Security Commission on Artificial Intelligence).
Overview
The trajectory of commercial artificial intelligence adoption promises widespread civic value, but it also brings a rising risk of "AI attacks." AI attacks exploit inherent design characteristics of artificial intelligence systems to cause serious failures or to let adversaries directly control AI systems. These attacks have important characteristics that differentiate them from traditional cyber attacks, and they therefore must be treated not as a subset of cybersecurity but as a distinct concern. As AI is deployed ever more widely, the potential impact of these attacks will grow more severe unless they are proactively addressed.
Even as the risks of these attacks grow, the United States lacks a coherent approach to them. This absence of clear strategic vision reflects a need for greater education, awareness, and recognition of the growing scope and importance of AI attacks.
While other nations are beginning to articulate national strategies for addressing these threats, the U.S. approach of deferring key decisions to a later date is no longer acceptable given the growing potential for harm.
This project offers a high-level view of a solution to this problem: a sequenced national policy strategy that establishes enforceable standards to manage the risks of AI attacks while maintaining the innovation and agility needed to sustain competition in this transformational industry.
This project offers two documents to help articulate this new strategy:
Policy Brief - An overview for decision makers and stakeholders of the key issues and elements involved in implementing a new national strategy.
National Artificial Intelligence Regulatory Commission - Sample legislative language to establish an AI Regulatory Commission to formalize and implement a coherent national AI security strategy.
Policy Brief
The policy brief listed below is intended to give decision makers an overview of the AI security issue and to provide a strategic road map for addressing it through the introduction of a new regulatory framework.
National Artificial Intelligence Regulatory Agency
We've provided sample language establishing a new National Artificial Intelligence Regulatory Agency as the central authority for carrying out a U.S. AI security policy strategy. This sample language isn't meant to be comprehensive; rather, it is a starting point for considering the composition, scope, authority, and enforcement powers of an independent regulatory commission to oversee AI safety and security concerns.
Examples Of Adversarial Attacks On AI
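To make the idea of an AI attack concrete, the sketch below shows one well-known class of attack: an adversarial "evasion" example generated with the Fast Gradient Sign Method (FGSM). It is illustrative only; the `model`, `image`, `label`, and `epsilon` names are hypothetical placeholders for a pretrained PyTorch image classifier and a correctly labeled input, not artifacts of this project.

```python
# Minimal illustrative sketch of an adversarial "evasion" attack (FGSM).
# Assumes a pretrained PyTorch classifier `model`, a batched input tensor
# `image` scaled to [0, 1], and its true class index tensor `label`
# (all hypothetical names used only for this example).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed so the classifier is pushed toward
    misclassifying it, while the change stays visually negligible."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss on the true label
    loss.backward()                               # gradient of loss w.r.t. pixels
    # Nudge every pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A perturbation of this size is typically imperceptible to a human reviewer, yet it can flip a classifier's output, for example causing a vision system to mislabel a road sign. Detecting and mitigating this kind of manipulation is exactly the sort of testing and validation a national AI security strategy would need to require.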
WHY IS AI SECURITY SO IMPORTANT?
■ The commercial trajectory of AI will further interweave these technologies into the lives of all American citizens. This broadened adoption of, and extensive reliance on, AI systems requires consistent and trustworthy performance. At some point in this development, AI systems will take on a level of importance comparable to critical infrastructure and public utilities.
■ Artificial Intelligence (AI) technologies are also susceptible to a unique class of attacks that can dramatically alter their performance. These malicious alterations to the behavior of AI systems can cause a wide range of harms, including substantial damage and loss of life.
■ While the Department of Defense has acknowledged the importance of addressing these risks in the context of U.S. defensive strategy, there has been little effort to establish a broad, coherent national strategy to address these risks in the commercial sector.
■ China, in particular, has shown interest, understanding, and considerable strategic thought pertaining to managing AI security attacks as part of their national AI strategy.
■ The United States needs to develop a coherent national policy to address AI-specific attacks and threats. This policy should take a risk-based orientation that sets requirements for testing and validating AI implementations in the areas that matter most, while preserving performance, innovation, and cost control.
Additional Reading and Resources
Topical Overviews
2019 - Harvard Belfer Center Paper: Attacking AI by Marcus Comiter
2019 - Artificial Intelligence Security Standardization White Paper (2019 Edition) [人工智能安全标准化白皮书 (2019版)]
2021 - National Security Commission on Artificial Intelligence - Final Report
US Legal and Policy Materials
2019 - Executive Order 13859 - Maintaining American Leadership in Artificial Intelligence
2020 - Office of Management and Budget - Memorandum M-21-06, Guidance for Regulation of Artificial Intelligence Applications
2021 - National Artificial Intelligence Initiative Act of 2020 (Division E of H.R. 6395 (116th): National Defense Authorization Act for Fiscal Year 2021)
EU Legal and Policy Materials
Standards & Guidelines Related Materials
2019 - NIST IR 8269 (Draft): A Taxonomy and Terminology of Adversarial Machine Learning
2020 - FTC - Business Guidance On Using Artificial Intelligence Systems
2021 - FDA - Good Machine Learning Practice for Medical Device Development: Guiding Principles
2021 - HHS - OMB M-21-06 - Guidance for Regulation of Artificial Intelligence Applications
2021 - Consumer Product Safety Commission Report - Artificial Intelligence and Machine Learning In Consumer Products
Technical / AI Engineering
US Federal Agencies
U.S. Department of Defense
Research Papers