In the past several years, the field of Artificial Intelligence (AI) has advanced rapidly. Most people reading and talking about AI ask the same question: "If machines can do tasks normally requiring human intelligence, will there be any jobs left for humans?"
In my view, this is the wrong question. There are plenty of horrible jobs still left for humans. As AI takes its place, the real requirement is to study and consider the policies that humans will design and implement to govern how it functions.
Governments of countries such as France, Canada, Japan, and the US, among many others around the world, have responded by developing national AI strategies.
Both optimistic and pessimistic views on the impact of AI on society are widespread. Many say it will be a revolutionary boost for the world in the coming years, whereas some warn that rapid advances in AI could transform society for the worse.
More optimistically, AI could enhance productivity so dramatically that people have plenty of income and little unpleasant work to do. Regardless of whether one adopts a pessimistic or an optimistic view, policy will shape how AI affects society.
How do we define AI?
Before talking about policies, let us first define what AI is. Artificial intelligence is commonly defined as "the theory and development of computer systems able to perform tasks normally requiring human intelligence". The recent excitement in computer science is driven by advances in machine learning, which uses large amounts of data to make predictions.
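To make the "prediction from data" framing concrete, here is a minimal sketch of the idea in Python. The data and variable names are invented for illustration; real machine-learning systems use far larger datasets and richer models, but the core pattern is the same: fit parameters to observed data, then predict an unseen case.

```python
# Illustrative sketch: "machine learning" as prediction from data.
# We fit a line y = a*x + b to toy observations (ordinary least squares),
# then predict the outcome for an input the model has not seen.

def fit_line(xs, ys):
    """Ordinary least squares for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Toy "training data" (hypothetical input/outcome pairs).
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

a, b = fit_line(xs, ys)
prediction = a * 6 + b  # predict the outcome for the unseen input x = 6
```

The same learn-then-predict loop, scaled up to millions of examples and far more flexible models, is what powers the AI systems discussed below.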
Here I will discuss how AI, as a general-purpose technology, is likely to have widespread consequences.
There are two aspects of AI policy:
First, regulatory policy has an impact on the speed of diffusion of the technology and the form that the technology takes.
Second, a number of policies focus on mitigating potential negative consequences of AI with respect to labor markets and antitrust concerns.
Policies that will influence the diffusion of AI
Liability rules will also affect the diffusion of AI. Firms will be less likely to invest in developing AI products in the absence of clear liability rules. If an accident or unpredictable event occurs, who among the parties involved in developing an autonomous product is liable: the AI software firm, the manufacturer, the developer, the data provider, the telecommunications carrier, and so on?
Without clear rules on who is liable, all of these parties may hesitate to invest. If, for example, autonomous vehicles would save lives, should manufacturers of non-autonomous vehicles be held to higher standards than current law requires? That would accelerate diffusion of the safer technology. In contrast, if increases in liability focus primarily on the newer technology, diffusion will slow.
The most significant long-run policy issues relate to potential changes in the distribution of the wealth generated by the widespread use of AI. In other words, AI may increase inequality. If AI is like other types of information technology, it is likely to be skill-biased: the people who benefit most will be educated people who are already doing relatively well, and who are also more likely to own the machines. Policies to address the consequences of AI for inequality therefore relate to the social safety net.
Another policy question around the diffusion of AI is whether it will lead to monopolization of industry. The leading companies in AI are large in terms of revenue, profits, and especially market capitalization (trading at high multiples on earnings). This has led to increased antitrust scrutiny of the leading technology firms from governments. Much of this scrutiny focuses on the firms' role as platforms, however, not on their use of AI.
One important feature that makes AI different is the role of data. Firms with more data can build better AI. Whether this leads to economies of scale and the potential for monopolization depends on whether a small lead early in the development cycle translates into a long-run advantage.
Much of economic policy for AI is simply economic policy. For the diffusion of AI, it resembles innovation policy. For the consequences of AI, it resembles public policy (the social safety net) and competition policy (antitrust). For AI to be implemented successfully across the economy, these policies need to be monitored and revised; only then will such a powerful technology be able to transform the world across all sectors.