How We Can Embrace The Replacement Of Jobs By Artificial Intelligence


"What kind of existential problems does AI bring about?" originally appeared on Quora, the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by Bruce Gibney, Venture Capitalist, Author of A Generation of Sociopaths, on Quora:

The medium-term challenge of AI is not killer robots; it's job replacement. This dynamic is already underway, and the literature suggests it's a more powerful driver of job loss than trade, though trade receives much more attention. (Of the 5.6 million manufacturing jobs lost between 2000 and 2010, Ball State University estimated that ~85% were lost to automation, not trade.)

True AI has not arrived, and automation is not AI, but robots and human-written code are a reasonable preview of the employment challenges genuine AI will bring. (Obviously, things like deep learning are quite exciting, but their most significant impacts have yet to be felt.) Computers already manage warehouses, can drive reasonably well, and are making meaningful progress into areas like basic lawyering and radiology that we long considered immune to change. Estimates vary widely about the number of tasks that can be automated, with some extreme forecasts arguing that half of all existing jobs are susceptible. (Frey and Osborne, Oxford researchers, estimated that up to 47% of jobs were susceptible to automation.) I doubt it's that high, but even 5-10% is an enormous number of jobs. Even a conservative 5% estimate implies 7.6 million jobs lost to automation (using BLS's calculations of the size of the workforce). Assuming those jobs are never replaced, that would push the core unemployment rate (now 4.7%) close to the peaks of the Great Recession (9.7% vs. 10% in 2009). A 5% displacement won't happen overnight, though that doesn't mean it won't happen at all, or that we can ignore the problem. And even if some new jobs are created, net displacements will still be significant, especially as AI improves.
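As a rough sketch of how those figures fit together (the labor-force size of roughly 152 million is an assumption approximating BLS data from the time, not a number given in the original text):

```python
# Back-of-envelope check of the displacement arithmetic above.
# The labor-force figure is an assumed approximation of 2017 BLS data.

labor_force = 152_000_000          # assumed U.S. civilian labor force
current_unemployment_rate = 0.047  # 4.7%, as cited in the article
displacement_share = 0.05          # the "conservative" 5% automation estimate

displaced = displacement_share * labor_force
currently_unemployed = current_unemployment_rate * labor_force

# If displaced workers stay in the labor force and none find new jobs:
implied_rate = (currently_unemployed + displaced) / labor_force

print(f"Displaced workers: {displaced / 1e6:.1f} million")  # ~7.6 million
print(f"Implied unemployment rate: {implied_rate:.1%}")     # ~9.7%
```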

Jobs may not seem like "existential" problems, but they are: when people cannot support themselves with work at all, let alone with work they find meaningful, they clamor for sharp changes. Not every revolution is a good revolution, as Europe has discovered several times. Jobs provide both material comfort and psychological gratification, and when those goods disappear, people understandably become very upset. This was a significant dynamic in the last election, though not the only one, of course.

The issue is not whether to stop AI, because that's a moot question, for better and for worse. AI/automation will keep developing because it provides its backers with attractive returns and because non-humans can do things humans cannot, and often do them more efficiently. (For example, Boston Consulting Group estimated that a human welder costs about three times as much per hour as a robotic welder.) Even if the United States banned all AI research, other countries would push ahead. And unlike nuclear weapons, which require large physical inputs and therefore can be regulated (mostly) by treaty, there are few limits on AI development other than talent, which is mobile and harder to regulate. So long as people have an incentive to develop AI, they will. And that's the best outcome, assuming societies manage the results well: medical diagnoses will be more accurate, people will be freed from tedious and dangerous kinds of labor, productivity should increase, and so on.

But this leads to the key social question regarding AI: how to ensure social stability as AI/automation displaces workers. Answering that question well requires making public education more effective (and paying for it) and providing social transfers to cushion the transition between jobs. It also requires enlightened regulation: just because progress in AI is hard to control does not mean it is impossible to influence. After all, the people funding AI respond to incentives, and incentives can steer development in different directions.

I'm very much for AI and for technological development generally. If we can haul freight across the country more safely and more cheaply with automated trucking, that's a social good. But it doesn't liberate us from the obligation to think about what we can do for unemployed truckers. That's a question best put to government, though government hasn't provided much of an answer and doesn't seem inclined to do so in the near future.

People ask whether AI will make some catastrophic error (though it's not as if humans don't make catastrophic errors) or whether AI will ultimately decide humans are a plague to be eliminated. We should think about these issues, because they will become more urgent, more quickly, than most people think. For example, the AI community thought it reasonably probable that a computer would beat a human at chess by the early 1990s, and IBM's Deep Blue did so in 1997; IBM's Watson beat Ken Jennings at Jeopardy! in 2011; and DeepMind beat a human at Go in 2015, arguably the hardest challenge by far. We cannot joke anymore, as people once did, that AI is the technology of the future and always will be. It's here, in weak form, now. AI is starting to live up to expectations, and it may improve faster than anyone expects. Again, this is not AGI but weak AI/automation/robotics; still, given how significant even these less powerful technologies are, they matter.

While we cannot totally discount the possibility of an extreme AI failure at some point, that's probably not the right problem for non-specialists to focus on immediately. What deserves focus is the jobs issue, and it demands attention even from people who think AI is an unalloyed good; arguably they should be the most interested in dealing with AI's dislocations. The last thing AI proponents want is some ham-fisted backlash in the democratic, developed countries that puts us at a disadvantage to less scrupulous regimes.
