Monday, August 14, 2017

Rise of the Robot Barons

The robots are coming, ready to remake our world according to the specifications of a cabal of millennial asswipes and megalomaniacal billionaires. These are the people who are destined to become our feudal overlords in the Age of Robotics. The question is: is there any way to stop them?

There are a few ways to approach the problem of a robot economy. One is to reject increased automation of labor as generally undesirable, because people need jobs and robots are tricky devils. The 19th century had an unsuccessful but memorable tradition of anti-machine fervor, from the Luddite machine-breaking riots of the 1810s, to this magnificently sinister pronouncement by Samuel Butler in 1863:

“Our opinion is that war to the death should be instantly proclaimed against [machines]. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race. If it be urged that this is impossible under the present condition of human affairs, this at once proves that the mischief is already done, that our servitude has commenced in good earnest, that we have raised a race of beings whom it is beyond our power to destroy, and that we are not only enslaved but are absolutely acquiescent in our bondage.”

Some people, of course, might regard this whole “war to the death against anything more complicated than an abacus” proposal as extreme. But full-on Unabomber-style technophobia aside, there’s an argument to be made that limits on automation ought to be established by regulation, in order to allow human beings to continue to work. This is not only because U.S. society, as currently constituted, makes livelihood contingent on employment, but also because work—so the argument goes—is generally a good thing for human beings. In his 2015 encyclical Laudato Si’, for example, Pope Francis wrote that we shouldn’t endeavor to have “technological progress increasingly replace human work,” as this outcome would, in his view, “be detrimental to humanity. Work is a necessity, part of the meaning of life on this earth, a path to growth, human development and personal fulfillment.” This line of thought is also fairly common within labor movements, which hold that all occupations are equal in dignity: that a construction worker is no less deserving of respect, and no less entitled to a decent living, than a software engineer. Even if you could automate all labor, you wouldn’t want to, because human identity and psychological well-being are inextricably bound up in the work we do, and mechanization deprives people of the ability to contribute to their communities. The important thing is simply to make sure that workers are properly paid and protected. (...)

As we think about what increased automation ought to look like, in an ideal world where the economy serves human needs, we should first disabuse ourselves of the idea that efficiency is an appropriate metric for assessing the value of an automated system. Sure, efficiency can be a good thing—no sane person wants an inefficient ambulance, for example—but it isn’t a good in itself. (After all, an efficient self-disemboweling machine doesn’t thereby become a “good” self-disemboweling machine.) In innumerable contexts, inefficient systems and unpredictable forces give our lives character, variety, and suspense. The ten-minute delay on your morning train, properly considered, is a surprise gift from the universe: an irreproachable excuse to read an extra chapter of a new novel. Standing in line at the pharmacy is an opportunity to have a short conversation with a stranger, who may be going through a hard time, or who may have something interesting to tell you. The unqualified worship of efficiency is a pernicious kind of idolatry. Often, the real problem isn’t that our world isn’t efficient enough, but rather that we lack patience, humility, curiosity, and compassion: these are not failings in our external environments, but in ourselves.

Additionally, we can be sure that, left to themselves, Market Forces will prioritize efficiencies that generate profit over efficiencies that do the most good for human beings. We currently see, for instance, a constant proliferation of labor-saving devices and services that are mostly purchased by fairly well-off people. This isn’t to say that these products are always completely worthless—to the extent that a Roomba reduces an overworked single parent’s unpaid labor around the home, for example, that might be quite a good thing—but in other respects, these minute improvements in efficiency (or perceived efficiency) yield diminishing returns for the well-being of the human population, and usually only a very small percentage of it, to boot. The amount of energy that’s put into building apps and appliances to replace existing things that already work reasonably well is surely a huge waste of ingenuity, in a world filled with pressing social problems that need many more hands on deck.

Thinking about which kinds of jobs and systems really ought to be automated, however, can require complicated and nuanced assessments. Two possible baseline standards, for example, would be that robots should only do jobs they can do as well as or better than humans, and/or that robots should only do jobs that are difficult, dangerous, or unfulfilling for humans. In some cases, these standards would be fairly straightforward to apply. A robot will likely never be able to write a novel to the same standard as a human writer, for example, so it doesn’t make sense to try to replace novelists with louchely-attired cyborgs. On the other hand, robots could quite conceivably be designed to bake cakes, compose generic pop songs, and create inscrutable canvases for major contemporary art museums—but to the extent that humans enjoy being bakers, pop stars, and con artists, we shouldn’t automate those jobs, either.

For certain professions, however, estimating the relative advantages of human labor versus robot labor is rather difficult by either of these metrics. For example, some commentators have predicted we’ll see a marked increase in robot “caregivers” for the elderly. In many ways, this would be a wonderful development. For elders who have health and mobility impairments, but are otherwise mentally acute, having robots that can help them out of bed, steady them in the shower, and chauffeur them to their destinations might mean several more decades of independent living. Robots could make it much easier for people to care for their aging loved ones in their own homes, rather than putting them in some kind of facility. And within institutional settings like nursing homes, hospitals, and hospices, it would be an excellent thing to have robots that can do hygienic tasks, like cleaning bedpans, or physically dangerous ones, like lifting heavy patients (nurses have a very high rate of back injury).

At the same time, the idea of fully automated elder care has troubling implications. There’s no denying that caring for declining elders can be difficult and often unpleasant work; anybody who has spent time in medical facilities knows that nursing staffs are overworked, and that individual nurses can be incompetent and profoundly unsympathetic. One might well argue that a caregiver robot, while not perfect, is still better than an exhausted or outright hostile human caregiver; and thus, that the pros of substituting robots for humans across a wide array of caregiving tasks outweigh the cons. However, as a society that already marginalizes and warehouses the elderly—especially the elderly poor—we ought to feel queasy about consigning them to an existence where the little human interaction that remains to them is increasingly replaced by purpose-built machines. Aging can be a time of terrible loneliness and isolation: imagine the misery of a life where no fellow-human ever again touches you with affection, or even basic friendliness. The problem is not just that caring for elders is often difficult, but that the humans who currently do it are undervalued and underpaid, despite the fact that they bear the immense burden of buttressing our shaky social conscience. It would be a lot easier to manufacture caregiver robots than to improve working conditions for human caregivers, but a robot can’t possibly substitute for a nurse who is actually kind, empathetic, and good at their job.

These sorts of concerns are common to most of the “caring” and educational professions, which are usually labor-intensive, time-consuming, poorly compensated, and insufficiently respected. Automating these jobs is an easy substitute for meaningfully improving them. For example, people like Netflix CEO Reed Hastings think that “education software” is a reasonable alternative to a human-run classroom, despite the fact that the only real purpose of primary education, when it comes down to it, is to teach children about social interaction. (Do you remember anything substantive that you learned in school before, say, age 15? I sure don’t.) And what job is more difficult than parenting? If a robot caregiver is cheaper than a nanny and more reliable than a babysitter, we’d be foolish to suppose that indifferent, career-focused, or otherwise overtasked parents won’t readily choose to have robots mind their children for long periods of time. Automation may well be more cost-effective and easier to implement than better conditions for working parents, or government payouts that would allow people to be full-time parents to their small children, or healthier social attitudes generally about work-family balance. But with teaching and parenting, as with nursing, it’s intuitively obvious that software and robots are in no sense truly equivalent to humans. Rather than automating these jobs, we should be thinking about how to materially improve conditions for people who work in them, with the aim of making their work easier where possible, and rewarding them appropriately for the aspects of their work that are irremediably difficult. Supplementing some aspects of human labor with automated labor can be part of this endeavor, but it can’t be the whole solution.

Additionally, without better labor standards for human workers, determining which kinds of jobs humans actually don’t want to do becomes rather tricky. Some jobs are perhaps miserable by their very nature, but others are miserable because the people who currently perform them lack benefits and protections. We may all have assumptions and biases here that are not necessarily instructive. We might often, like Oscar Wilde, think primarily of manual labor when we’re imagining which jobs should be automated. Very likely there are some forms of manual labor so monotonous, painful, or unpleasant that nobody on earth would voluntarily choose to do them—we can certainly think of a few jobs, like mining, that are categorically and unconscionably dangerous for human workers. But it is also an undeniable fact that many people genuinely like physical labor. It’s even possible that many more people like physical labor than are fully aware of it, due to the class-related prejudices associated with manual work—why on earth else do so many people who work white-collar jobs derive their entire sense of self-worth from running marathons or lifting weights or riding stationary bicycles in sweaty, rubber-scented rooms? Why do so many retirees take up gardening? Why do some lunatics regularly clean their houses as a form of relaxation? Clearly, physical labor can be very satisfying under the right circumstances. The point is, there are some jobs we might intuitively think it is morally imperative to automate, because in their current forms, they are undeniably awful. But if we had more humane labor laws, and altered our societal expectations about the appropriate relationship between work and leisure, they might be jobs people actually liked.

by Brianna Rennix, Current Affairs |  Read more:
Image: Pranas Naujokaitis