In the late 1970s, during my early years at the University of Massachusetts Boston (UMB), the Department of Economics had two secretaries. When I retired in 2008, the number of faculty members and students in the department had increased, but there was only one secretary. All the faculty members had their own computers, with which they did much of the work that secretaries had previously done.
I would guess that over those thirty years, the number of departmental secretaries and other secretaries in the university declined by as many as 100, replaced by information technology—what has now become the foundation of artificial intelligence. As I started writing this column, however, I looked on the university’s website and counted about 100 people with jobs in various parts of the Information Technology Department. Neither this department nor those jobs existed in my early years at UMB. The advance in technology that eliminated so many secretarial positions also created at least as many jobs as it destroyed—perhaps more.
My little example parallels the larger and more widely cited changes on U.S. farms in the 20th century—a century when the diesel engine, artificial fertilizers, and other products of industry reduced the percentage of the labor force working on farms from 40% to 2%. No massive unemployment resulted (though a lot of horses, mules, and oxen did lose their jobs). The great expansion of urban industrial production along with the growth of the service sector created employment that balanced the displacement of workers on the farms.
Other cases are cited in debates over the impact of artificial intelligence, ranging from handloom weavers’ resistance to new machinery in the early stages of the Industrial Revolution to widespread concern about “automation” in the 1960s. Generally, however, while the new technologies displaced workers in some realms of production, they also raised productivity and economic growth. As a result, demand increased for old products and emerged for new ones, creating more and different jobs.
Historically, it seems, each time prophecies foretold massive unemployment resulting from major technological innovations, they turned out to be wrong. Indeed, often the same forces that threatened existing jobs created new jobs. The transitions were traumatic and harmful for the people losing their jobs, but massive unemployment was not the consequence.
Is This Time Different?
Today, as we move further into the 21st century, many people are arguing that artificial intelligence—sophisticated robotics—is different from past technological shifts, will replace human labor of virtually all types, and could generate massive unemployment. Are things really different this time? Just because someone, once again, walks around with a sign saying, “The world is about to end,” doesn’t mean the world really isn’t about to end!
In much of modern history, the substitution of machines for people has involved physical labor. That was the case with handloom weavers in the early 19th century and is a phenomenon we all take for granted when we observe heavy machinery, instead of hand labor, on construction sites. Even as robotics entered industry, as on automobile assembly lines, the robots were doing tasks that had previously been done with human physical labor.
“Robotics” today, however, involves much more than the operation of traditional robots, the machines that simulate human physical labor. Robots are now rapidly approaching the ability, if they do not already have it, to learn from experience, respond to changes in situations, compare, compute, read, hear, smell, and make extremely rapid adjustments (“decisions”) in their actions—which can include everything from moving boxes to parsing data. In part, these capabilities are the result of enormous progress in the speed and memory capacity of computers.
They are also the result of the emergence of “Cloud Robotics” and “Deep Learning.” With Cloud Robotics, each robot gathers information and experience from other robots via “the cloud,” and thus learns more and learns more quickly. Deep Learning involves software designed to simulate the human neocortex, the part of the brain where thinking takes place. The software (also often cloud-based) recognizes patterns—sounds, images, and other data—and, in effect, learns.
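To make the idea of “learning” concrete, here is a minimal sketch in Python (using the NumPy library). It is purely illustrative, not a description of any actual system: a tiny two-layer neural network adjusts its internal connection weights, by repeated trial and error, until it reproduces a simple on/off pattern. The toy data, the network size, and the number of training steps are all assumptions chosen for brevity.

```python
# A toy illustration of the "learning" behind deep learning: a tiny
# two-layer neural network adjusts its weights, by trial and error,
# until it reproduces a simple on/off pattern (XOR). Network size,
# data, and step count are all assumptions chosen for brevity.
import numpy as np

rng = np.random.default_rng(0)

# Four input patterns (the third column is a constant 1, serving as a
# bias) and the target output for each.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of randomly initialized connection weights.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Repeatedly: guess, measure the error, and nudge the weights to
# shrink it (gradient descent via backpropagation).
for step in range(20000):
    hidden = sigmoid(X @ W1)        # first layer's activations
    output = sigmoid(hidden @ W2)   # the network's current guess
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ grad_out
    W1 -= X.T @ grad_hid

print(np.round(output.ravel(), 2))  # approaches [0, 1, 1, 0]
```

No rule for the pattern is ever written into the program; the network finds it by adjusting its sixteen connection strengths. Commercial systems work the same way with millions or billions of connections and far richer data, which is what lets them recognize faces, speech, and text.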
While individual robots—like traditional machines—are often designed for special tasks, the basic robot capabilities are applicable to a broad range of activities. Thus, as they reach the point of practical application, they can be brought into a wide variety of activities during the same period. Moreover, according to those who believe “this time is different,” that period of transition is close at hand and could be very short. The disruption of human labor across the economy would happen virtually all at once, so adjustments would be difficult—thus, the specter of massive unemployment.
Skepticism
People under thirty may take much of what is happening with information technology (including artificial intelligence) for granted, but those of us who are older find the changes awe-inspiring. Nonetheless, I am persuaded by historical experience and remain skeptical about the likelihood of massive unemployment. Moreover, although big changes are coming rapidly in the laboratories, their practical applications across multiple industries will take time.
While the adoption of artificial intelligence may not take place as rapidly and widely as the doomsday forecasters tell us, I expect that over the next few decades many, many jobs will be replaced. But consistent with historical experience, the expansion of productivity and the increase of average income will tend to generate rising demand, which will be met with both new products and more of the old ones; new jobs will open up and absorb the labor force. (But hang on to that phrase “average income.”)
Real Problems
Even if my skepticism is warranted, the advent of the era of artificial intelligence will create real problems, perhaps worse than in earlier eras. Most obvious, even when society in general (on average) gains, there are always losers from economic change. Workers who get replaced by robots may not be the ones who find jobs in new or expanding activity elsewhere. And, as has been the case for workers who lost their jobs in the Great Recession, those who succeed in finding new jobs often do so only with lower wages.
Beyond the wage issue, the introduction of new machinery—traditional machines or robots—often affects the nature and, importantly, the speed of work. The mechanized assembly line is the classic example, but computers—and, we can assume, robotics more generally—allow for more thorough monitoring and control of the activity of human workers. The handloom weavers who opposed the introduction of machines in the early 19th century were resisting the speed-up brought by the machines as well as the elimination of jobs. (The Luddite movement of early 19th-century England, while derided for incidents of machine-smashing, was a reaction to real threats to workers’ livelihoods.)
More broadly, there is the question of how artificial intelligence will affect the distribution of income. However intelligent robots may be, they are still machines which, like slaves, have owners (whether owners of physical hardware, patents on the machines, or copyrights on the software). Will the owners be able to reap the lion’s share of the gains that come with the rising productivity of this major innovation? In the context of the extremely high degree of inequality that now exists as artificial intelligence is coming online, there is good reason for concern.
As has been the case with the information technology innovations that have already taken place—Microsoft, Apple, Google, and Facebook leap to mind—highly educated or specially skilled (or just lucky) workers are likely to share some of the gains from artificial intelligence. But with the great inequalities that exist in the U.S. educational system, the gains of a small group of elite workers would be unlikely to dampen the trend toward greater income inequality.
Income inequality in the United States has been increasing for the past 40 years, and labor’s share of total income has fallen since the middle of the last century—from 72% in 1947 to 63% in 2014. The rise of artificial intelligence, as it is now taking place, is likely to contribute to the continuation of these trends. This has broad implications for people’s well-being, but also for the continuation of economic growth. Even if average income rises, when the gains are increasingly concentrated among a small group at the top, aggregate demand may be insufficient to absorb the rising output. The result would be slow growth at best, and possibly severe crisis. (See “Are We Stuck in an Extended Period of Economic Stagnation?” D&S, July/August 2016.)
Over the long run, technological improvements that generate greater productivity have yielded some widely shared benefits. In the United States and other high-income countries, workers’ real incomes have risen substantially since the dawn of the Industrial Revolution. Moreover, a significant part of the gains for workers has come in the form of increased leisure time. Rising productivity from artificial intelligence holds out the possibility, in spite of the trends of recent decades, of a shift away from consumerism and a resumption of the long-term trend toward more leisure—and, I would venture, more pleasant lives.
Yet, even as economic growth over the past 200 years has meant absolute gains for working people, some groups have fared much better than others. Moreover, even with absolute gains, relative gains have been limited. With some exceptional periods, great inequalities have persisted, and those inequalities weigh heavily against the absolute rises in real wages and leisure. (And in some parts of the last two centuries—the last few decades in particular—gains for working people have not followed from rising productivity and economic growth.)
So even though I’m skeptical that artificial intelligence will generate massive unemployment, I fear that it may reinforce, and perhaps increase, economic inequality.
This article originally appeared at dollarsandsense.org on September 29, 2016. Reprinted with permission.
Arthur MacEwan is professor emeritus of economics at UMass Boston and a Dollars & Sense Associate.