Historically, business leaders and technologists have portrayed new technology as a way to boost labor productivity, exports, and society’s wealth. This has long been considered part of the inevitable and necessary “creative destruction” of capitalist production that, while disruptive in the short term, is beneficial over time. Anyone who opposes this mantra is portrayed as a hopeless Luddite.
But over the last couple of decades, the economy has been restructured so that the wealth gains from higher productivity and new technology have flowed into the pockets of an ever smaller minority of 1-percenters. Now, the “rise of the robots” threatens to exacerbate these trends to an alarming degree never before imagined. However, the “robots” in question are not simply the assembly-line robots of Japanese automakers, but include so-called “smart” machines, artificial intelligence, software automation, networked communication, big data, and faster computer processing that are slowly being injected into just about everything at home or the office.
It has generally been assumed that automation is primarily a threat to workers who have little education and lower skill levels. But as software automation and sophisticated algorithms advance rapidly in capability, experts now predict that many college-educated, white-collar workers are going to discover that their jobs are also going to be “robotized” as computers take on jobs and tasks with significant intellectual content.
By all accounts, this “robotizing” of the economy is proceeding at a galloping pace. An Oxford University study of over 700 occupations estimates that 47 percent of existing U.S. jobs are at risk from computerization; that’s over 60 million jobs threatened by “technological unemployment.” As machines increasingly take on routine, predictable tasks in virtually every industry and employment sector, human jobs will inevitably decline and workers will face an unprecedented challenge as they try to adapt. What will be the impact of these technologies on individual workers, on the quality of jobs, on the labor force, and on the economy as a whole? Will the robots replace the humans?
Recently, a flurry of best-selling “technology books” has plumbed these issues and their consequences. These books have been snapped up by a public charmed by the crystal-ball appeal of predicting the future. The most successful and highly cited of these, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton, 2014) by Erik Brynjolfsson and Andrew McAfee, has become the standard of the genre since its publication. It is a riveting account of how “smart” machines, robotics, automation, faster computer processing, and artificial intelligence will shape our collective future. The book’s strength is the intriguing macro- and microeconomic vision it weaves, especially the authors’ assertion that the global economy is on the cusp of a dramatic growth spurt driven by the combination and recombination of these powerful technologies.
While the authors look critically at some of the broader economic implications of this Machine Age world, mostly their book maintains a posture of wide-eyed boosterism. It shies away from addressing many long-term consequences and challenges, particularly for the labor market and people’s jobs. For example, Kodak, which at one time ruled the world of photography and film, and at its peak employed 145,000 in mostly middle-class jobs, recently declared bankruptcy. Kodak has been replaced by Facebook, or rather by the recent Facebook acquisition Instagram, the Kodak of the digital age. Facebook employs a mere 8,000 people — where are the other 137,000 former Kodak employees supposed to find jobs?
Brynjolfsson and McAfee maintain that as the robots and smart machines take over certain occupations, these technologies will allow a shift in demand to other kinds of work, so in the longer term most displaced workers will find new work and the impact will be positive. That has been the impact of technology in the past, they reason, and it will be again. But their arguments are unconvincing, relying more on wishful conjecture than hardheaded analysis. A lot is at stake for American workers, and Brynjolfsson and McAfee’s book, like most others of this genre, doesn’t really address the central dilemma of this brave new world, which can be illustrated by a simple thought experiment: What if the smart machines and robots could perform every single job there is to do, so that no human had to work anymore? Who would reap the benefit of this huge productivity increase? Would it be a handful of “Masters of the Universe,” i.e. the chief entrepreneurs and investors? Or would the gains be distributed to the general public?
Their book would have benefited greatly had the authors taken the time to more thoroughly analyze the ongoing transition from the New Deal society to what I call a “freelance society.” Millions of American workers are losing the “good” jobs that provided a measure of job security, decent wages, and a safety net, and are becoming freelancers and independent contractors. Best estimates say that by 2020, a mere five years away, a majority of the 130 million employed Americans (approximately 60 to 70 million workers) will be “independent workers,” effectively freelancers and day laborers, during at least part of their work week. Even many full-time and professional jobs will experience this precarious shift. Yet The Second Machine Age never really delves into the onset of this freelance society and its chilling consequences.
Another bestseller of this genre is Who Owns the Future? (Simon & Schuster) by Jaron Lanier. Lanier is a longtime Silicon Valley insider, and his provocative book reads like a somewhat meandering private conversation between him and his insider friends. Lanier takes some of his colleagues to task, especially those techie utopians who obsess over a sci-fi movie-type future steeped in a post-Singularity matrix (the hypothetical imminent merger of human biology and technology), those seeking “methusalization” (i.e., immortality), and other fantastic futurist scenarios. Nevertheless, the book does attempt to look more deeply into the impacts that digital technologies and the Internet will have on jobs, the middle class, and inequality. Lanier suggests that they could very well result in widespread unemployment and threaten the middle class and the broader economy (Lanier also touched on some of these issues in his previous book, You Are Not a Gadget, Alfred A. Knopf, 2010). But Lanier’s proposed solution is to make corporations like Google, Facebook, and others pay consumers for any personal data they collect, providing a revenue source to average people. While that is an interesting idea with merit, it does not go to the core of the challenge facing our democratic society, in which overwhelming economic forces are subsuming our politics and directly threatening the middle-class society and social contract.
Indeed, Lanier’s frame as a Silicon Valley insider is ultimately disappointing, since he seems to accept that there is little public policy can do: jobs are going to disappear, and it’s a matter of figuring out how to squeeze some lemonade from the lemons. Still, Lanier’s cautionary note in Who Owns the Future? provides a valuable counterpoint to all the starry-eyed tech boosters, even while it lacks important detail and an understanding of what solutions are necessary to alter the course of these developments.
The New Digital Age: Transforming Nations, Businesses, and Our Lives (Knopf) by former Google CEO Eric Schmidt and Jared Cohen, published after Lanier’s book, took issue with some of Lanier’s cautionary tone, specifically concerning the impact on the middle class and employment. Schmidt acknowledges that inequality is growing and calls it the “number one issue” for democracies. He also admits that technological change is the most important driver of this explosion in inequality. Yet where Lanier sees the glass as partially empty, Schmidt sees it as more full.
For example, Schmidt says that self-driving vehicles will ease the strain on Teamster drivers, while Lanier writes of “Napstering the Teamsters” out of work, and of how such technology could go horribly wrong. Their two books also disagree on whether various occupations will be enhanced or diminished by robotics. A radiologist, for example, is likely to become obsolete since computers are rapidly getting better at analyzing images. Forbes magazine currently uses the services of a company called Narrative Science that “employs” computers to generate news stories containing financial data and other content without the need for human journalists. Other publications have begun doing this for sports reporting, using computers that are capable of taking a pile of statistics and cranking out a news story. Jobs for other skilled professionals, including lawyers, scientists and pharmacists, already are being filled by advancing information technology.
Schmidt’s solutions to the challenges he identifies are limited in their ambition and scope. Like President Barack Obama, he has pushed for more education in science, technology, engineering, and math (STEM) to prepare the workforce for the future. But he predicts that many jobs ultimately will be eliminated by robots that can automate practically any repetitive task, while acknowledging that there are upper limits to the number of people who can hold advanced STEM jobs.
What about those who lose out in this winner-take-all society? Schmidt looks to the government to ameliorate their situation, arguing that society needs a “safety net” for those who lose their jobs so they can “at least live somewhere and have health care.” While his compassion is noteworthy, even Schmidt’s fundamental optimism paints a rather dismal picture for society’s castaways, who for one reason or another will face a meager life on government assistance. But to Schmidt they are a mere boulder in the path of an Indy 500-speed bulldozer; he prognosticates that change is coming and that “the longer-term solution is to recognize that you can’t hold back technology progress.”
Jeremy Rifkin’s The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (Palgrave Macmillan) contains the author’s usual blend of “gee whiz” futurism with a provocative discussion of the impending age of robots, smart machines, increased connectivity, and digital information in service to a hyper-efficient economy. Some of the ground covered in this book is similar to what Rifkin covered in his 1994 book, The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era. Rifkin recognizes a paradox at the heart of capitalism: the dynamism of competitive markets drives productivity up and costs down, yet the benefits of that productivity increase have not flowed into the pockets of average workers, since wages have remained flat. Instead, they have gone into the pockets of a decreasing number of extremely wealthy people.
Despite that troubling reality, Rifkin remains fairly upbeat about the future, and he portrays a “Collaborative Commons” – which shares a great deal with what has been called the “sharing economy” – as the reason why. He says “hundreds of millions of people are already transferring parts of their economic lives to the global Collaborative Commons…making and sharing their own information, entertainment, green energy…They are also sharing cars, homes, clothes and other items via social media sites, rentals, redistribution clubs, and cooperatives at low or near zero marginal cost.” We are, says Rifkin, entering an increasingly interdependent world beyond markets.
Unfortunately, Rifkin never really grapples with the political ramifications of who exactly will control this “collaborative commons.” He seems to assume that these trends have a power and momentum all their own that will iron out any inequalities. That kind of optimism is classic Rifkin-ism, but I think it’s hopelessly naïve. Without a clear blueprint of what kind of public policy needs to be legislated in the near future, instead of in the distant future in which Rifkin often specializes, there’s little reason to hope that these trends will translate automatically into a bright future for the middle class. While Rifkin’s book is one of the few to really think out loud about the new economy and its consequences, ultimately he is blinded by a dead-end optimism that is misplaced as long as certain policy changes are not enacted that are capable of solidifying the wobbly ground beneath American workers.
The clear lesson of our recent economic history is that the general public is not certain to benefit from technological innovation or increases in labor productivity. As robotics and automation become more comprehensive and integrated into the economy, it’s very possible that the built-in inequalities of the U.S. political economy will render these technologies into ones that threaten the future of workers by reducing the number of human-occupied jobs, and further increasing inequality.
And that poses additional dilemmas because robots and machines don’t spend money, and they don’t add to consumer demand. So the decline in the number of human jobs would inevitably lead to a decline in aggregate consumer demand and spending, which in turn would reduce the capacity to buy up the goods and services produced by the economy, and would therefore unleash chronic recessionary pressures, precipitating a further loss of jobs for humans (what technologist Martin Ford has called “technological unemployment”).
It seems certain that the battles over who gets to control the benefits of these technologies are going to increase in intensity. If the “rise of the robots” is not accompanied by an adequate policy response, pursued during this interregnum before our society edges closer to that machine-age moment, then most human workers will end up being squeezed from multiple sides. The onset of that brave new world is coming sooner than the general public or policymakers realize. Are we ready? Judging by the shortcomings and omissions in these four leading technology books, the answer is “definitely not.”
Photo: Spencer Cooper via Flickr