Regulation of artificial intelligence


The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence; it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary both to encourage AI and to manage associated risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem.

Background

In 2017, Elon Musk called for regulation of AI development. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight were too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization."
In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that AI is in its infancy and that it is too early to regulate the technology. Rather than trying to regulate the technology itself, some scholars suggest developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.

Nature and scope of regulation

Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems, although regulation of artificial superintelligences is also considered. AI law and regulation can be divided into three main topics: governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human-to-machine interaction. The development of public sector strategies for the management and regulation of AI is deemed necessary at the local, national, and international levels and in a variety of fields, from public service management and accountability to law enforcement, the financial sector, robotics, the military and national security, and international law.

Global regulation

The development of a global governance board to regulate AI development was suggested at least as early as 2017. In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development. In 2019 the Panel was renamed the Global Partnership on AI, but it has yet to be endorsed by the United States. The OECD Recommendations on AI were adopted in May 2019, and the G20 AI Principles in June 2019. In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'. In February 2020, the European Union published its draft strategy paper for promoting and regulating AI. At the United Nations, several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics.

Regional and national regulation

Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI. These documents cover a wide range of topics, including regulation and governance as well as industrial strategy, research, talent and infrastructure.

Regulation of AI in China

The regulation of AI in China is mainly governed by the "Next Generation Artificial Intelligence Development Plan" of July 8, 2017, in which the Central Committee of the Communist Party of China and the State Council of the People's Republic of China urged the governing bodies of China to promote the development of AI. Regulation of the ethical and legal issues raised by AI is nascent, but existing policy ensures state control over Chinese companies and over valuable data, including the storage of data on Chinese users within the country and the mandatory use of the People's Republic of China's national standards for AI, including for big data, cloud computing, and industrial software.

Regulation of AI in the European Union

The European Union is guided by a European Strategy on Artificial Intelligence, supported by a High-Level Expert Group on Artificial Intelligence. In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence, following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.

On February 19, 2020, the European Commission published its White Paper on Artificial Intelligence - A European approach to excellence and trust. The White Paper consists of two main building blocks, an ‘ecosystem of excellence’ and an ‘ecosystem of trust’. The latter outlines the EU's approach to a regulatory framework for AI. In its proposed approach, the Commission differentiates between 'high-risk' and 'non-high-risk' AI applications; only the former would fall within the scope of a future EU regulatory framework. Whether an application is high-risk could in principle be determined by two cumulative criteria, concerning critical sectors and critical use. The following key requirements are considered for high-risk AI applications: requirements for training data; data and record-keeping; informational duties; requirements for robustness and accuracy; human oversight; and specific requirements for particular AI applications, such as those used for purposes of remote biometric identification. AI applications that do not qualify as ‘high-risk’ could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments, which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI, in the form of a framework for cooperation of national competent authorities, could facilitate the implementation of the regulatory framework.
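To make the cumulative-criteria test concrete, the following Python sketch illustrates the two-condition logic described above. It is an informal illustration only: the sector examples and function names are hypothetical placeholders, not the Commission's legal definitions.

```python
# Illustrative sketch of the White Paper's two cumulative criteria for
# deeming an AI application 'high-risk'. The sector examples below are
# hypothetical placeholders, not the Commission's legal definitions.

CRITICAL_SECTORS = {"healthcare", "transport", "energy"}  # assumed examples

def is_high_risk(sector: str, use_is_critical: bool) -> bool:
    """Both criteria must hold (they are cumulative): the application is
    deployed in a critical sector AND its intended use is critical."""
    return sector in CRITICAL_SECTORS and use_is_critical

# Critical sector with a critical use: within scope of the proposed
# mandatory framework.
assert is_high_risk("healthcare", True)

# Critical sector but non-critical use: out of scope; such applications
# could instead opt into a voluntary labeling scheme.
assert not is_high_risk("healthcare", False)
```

The point of the sketch is that the conjunction makes the framework deliberately narrow: an application escapes the mandatory requirements if either criterion fails.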

Regulation of AI in the United Kingdom

The UK supported the application and development of AI in business via the Digital Economy Strategy 2015-2018, introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy. In the public sector, guidance has been provided by the Department for Digital, Culture, Media and Sport on data ethics, and by the Alan Turing Institute on the responsible design and implementation of AI systems. In terms of cyber security, the National Cyber Security Centre has issued guidance on ‘Intelligent Security Tools’.

Regulation of AI in the United States

Discussions on the regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including which agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts. On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." On January 7, 2020, following an Executive Order on Maintaining American Leadership in Artificial Intelligence, the White House’s Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI. In response, the National Institute of Standards and Technology has released a position paper, the National Security Commission on Artificial Intelligence has published an interim report, and the Defense Innovation Board has issued recommendations on the ethical use of AI. Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence. The Artificial Intelligence Initiative Act is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States.

Regulation of fully autonomous weapons

Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons. Notably, informal meetings of experts took place in 2014, 2015 and 2016, and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS was affirmed by the GGE on LAWS in 2018.
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue and leading to proposals for global regulation. The possibility of a moratorium or preemptive ban on the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons, and is strongly advocated by the Campaign to Stop Killer Robots, a coalition of non-governmental organizations.

As a response to the AI control problem

Regulation of AI can be seen as a positive social means to manage the AI control problem, i.e., the need to ensure long-term beneficial AI, with other social responses such as doing nothing or banning seen as impractical, and approaches such as enhancing human capabilities through transhumanist technologies like brain-computer interfaces seen as potentially complementary. Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from the university or corporate level to the international level, and on encouraging research into safe AI, together with the possibility of differential intellectual progress (prioritizing protective over risky strategies in AI development) or conducting international mass surveillance to perform AGI arms control. For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence, as well as for addressing other major threats to human well-being, such as subversion of the global financial system, until a superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, artificial general intelligence system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Regulation of conscious, ethically aware AGIs focuses on how to integrate them with existing human society and can be divided into considerations of their legal standing and of their moral rights.