Artificial Intelligence: The Way Forward for Policy and Regulation

By Manasa Gummi

The development of artificial intelligence (AI) has so far occurred in a regulatory vacuum. With the exception of a few legislative measures regarding autonomous vehicles, very few laws or regulations specifically address the unique benefits and challenges raised by AI. Given the potential for rapid advances in AI technology, the uncertainty of its path, and the interplay among innovation, economic and societal impact, and regulatory responses, designing an effective and dynamic policy strategy should be a government imperative. Google, Facebook, and a range of other companies are working to advance the field through machine learning, computer vision, and natural language processing, with potential applications in a variety of fields. The policy strategy that supports their work must focus on a few key policy and regulatory elements.

First, the impact on labor markets should be thoroughly analyzed. Today, it may be challenging to predict exactly which jobs will be most immediately affected by AI-driven automation. Because AI is not a single technology, but rather a collection of technologies applied to specific tasks, its effects will be difficult to isolate and will be felt unevenly through the economy, both negatively and positively. Many jobs will be lost to automation, particularly in transportation and retail. While new jobs are likely to be created directly in areas such as the development and supervision of AI, a significant portion of both unskilled and skilled workers will eventually be replaced. The economy in the internet age has proven itself capable of handling change of this scale, though the outcome will depend on the timeframe over which the shifts occur, the ability of workers to transfer their skills, and how quickly and effectively public and private players can adapt. A policy strategy that takes these factors into consideration, to the extent possible, will help set the right tone for the regulatory dialogue. This will be a critical stage for economies to absorb the impacts of AI on labor markets and for governments to adopt approaches that respond to these changes rather than resist them.

Second, labor force development is critical. Public policy can address unemployment risks by ensuring that workers are trained and able to succeed in occupations that complement, rather than compete with, automation. It can also ensure that the economic benefits created by AI are shared broadly and that the industry evolves responsibly within the global economy. One central set of policy measures is education. Educating and preparing new workers entering the workforce, from primary education all the way through higher education, will ensure that their skills meet the needs of new markets. Another is unemployment protection. Social safety net programs such as unemployment insurance and healthcare access are necessary to support workers whose jobs are lost to economic transition. These programs, coupled with relevant training and education, will help reallocate the workforce effectively with minimal damage to the economy and society.

The third and currently most contentious aspect of AI policy development is regulation. Regulation of the industry alone cannot effectively address the negative impacts of technological developments. Combined with other policy strategies, the regulatory approach must be driven by three main elements: compliance, a multi-stakeholder approach, and international cooperation. Compliance is at the heart of policy implementation, but regulatory enforcement threatens to increase the cost of compliance and can slow the development of beneficial innovations. Policymakers should therefore consider how regulations and the implementation machinery could be adjusted to lower costs and barriers to innovation without adversely impacting safety or the public good. As for the multi-stakeholder approach, AI can be a major driver of economic growth and social progress if industry, civil society, government, and the public work together to support development of the technology and implement checks and balances to ensure accountability. Lastly, in the current global landscape, international cooperation is inevitable. Particularly in the context of interrelated technological applications and the cross-border reach of AI technology, international engagement, cooperation, and regulatory harmonization are crucial.

Finally, the policy strategy for AI regulation should be driven by one overarching principle: combating inequality. If labor productivity increases do not translate into wage increases, the large economic gains brought about by AI could accrue to a select few. Instead of broadly shared prosperity for workers and consumers, this could lead to reduced competition, increased wealth inequality, and greater social instability. To address these potential negative outcomes, governments should consider interventions such as a universal basic income, additional worker protections, and corporate tax reforms, while simultaneously maintaining competitive markets. The evolving AI industry holds tremendous potential as well as critical risks for economies and societies across the world. Policies and regulations that tackle these risks effectively in the present, while leaving flexibility for future growth, are essential today.


Manasa Gummi is a Master of Public Policy candidate at the Goldman School of Public Policy.


  • Artificial Intelligence, Automation, and the Economy, Executive Office of the President of the United States, December 2016
  • Preparing for the Future of Artificial Intelligence, Executive Office of the President of the United States, October 2016
  • Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, Harvard Journal of Law and Technology, Vol. 29, No. 2, Spring 2016