Month: June 2017

Artificial Intelligence, Employment

Keep on Truckin’

An article on Bloomberg suggests that in the short term at least, autonomous trucks have the potential to make the lives of truckers better by allowing them to teleoperate trucks and therefore see their families at night. Of course, many of them see this as the prelude to not being needed at all:

“I can tell the difference between a dead porcupine and a dead raccoon, and I know I can hit a raccoon, but if I hit a porcupine, I’m going to lose all the tires on the truck on that side,” says Tom George, a veteran driver who now trains other Teamsters for the union’s Washington-Idaho AGC Training Trust. “It will take a long time and a lot of software to program that competence into a computer.”

Perhaps. Or maybe it just takes driving long enough, on real roads or in training on captured footage, to encounter both kinds of roadkill and learn by experience.

Artificial Intelligence, Science

How many piano tuners are there in Chicago?

One of the chapters in Crisis of Control is on the Fermi Paradox, a problem that is fiendishly simple to state yet has existential ramifications. That kind of simplification of the complex was the stock-in-trade of physicist Enrico Fermi, a man who could toss scraps of paper into the air when the atomic bomb test exploded and calculate in seconds an estimate of its yield that rivaled the official figures released days later. He taught his students to think the same way with this question: “How many piano tuners are there in Chicago?” No Googling. No reference books. Do your best with what you know. Go.

This is one of those questions where “Show your work” is the only possible way to evaluate the answer. The lazy ones will throw a dart at a mental board and say, “X,” and when asked how come, shrug. The way to solve this is to break it down into an equation containing factors that can be more readily estimated.  If we knew:

  • P – The population of Chicago
  • f – The number of pianos per person
  • t – The number of times a piano is tuned per year
  • H – The number of hours it takes to tune a piano
  • W – The number of hours per year a piano tuner works

then the number of piano tuners in Chicago is P * f * t * H / W. Let’s walk through it:

  • P * f gives the number of pianos in Chicago, call that N. P and f are each easier to estimate than how many pianos there are in a city.
  • N * t gives the number of piano tunings per year in Chicago, call that T.
  • T * H gives the number of hours spent tuning pianos per year in Chicago, call that Y.
  • Y / W gives the number of piano tuners it takes to provide that service. QED.

Of course, you could look at those factors and say, wait, I don’t even know the population of Chicago, much less how many hours a piano tuner works. But each of those is easier to guess at than the final answer. To get f, you can go off your personal experience of how many friends’ houses you’ve seen with pianos, make a correction for the number of pianos in institutions of some kind (theaters, schools, etc.), and at each stage add in confidence limits for how far off you think you could be.
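The walkthrough above is easy to sketch in a few lines of code. Every input below is my own ballpark guess (not a figure from the post), so treat the output as an order-of-magnitude estimate only:

```python
# Fermi estimate of the number of piano tuners in Chicago.
# All inputs are rough guesses, per the method described above.
P = 9_000_000   # population of greater Chicago (guess)
f = 1 / 100     # pianos per person (guess: roughly 1 per 100 people)
t = 1           # tunings per piano per year (guess)
H = 2           # hours to tune one piano (guess)
W = 2_000       # working hours per tuner per year (~40 h/week, 50 weeks)

N = P * f        # number of pianos in Chicago
T = N * t        # piano tunings per year
Y = T * H        # hours spent tuning per year
tuners = Y / W   # tuners needed to supply those hours

print(round(tuners))  # → 90 on these guesses
```

The point is not the exact number but that each intermediate quantity is something you can guess with bounded error, and the errors tend to partially cancel in the product.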

This process is what leads to the most important math in the Fermi Paradox chapter in Crisis, the Drake Equation:

N = R* · fₚ · nₑ · fₗ · fᵢ · fᶜ · L

where:

  • N – The number of civilizations in the Milky Way galaxy (ours) whose electromagnetic emissions are detectable (i.e., planets inhabited by aliens sending radio signals)
  • R* – The rate of formation of stars suitable for the development of intelligent life
  • fₚ – The fraction of those stars with planetary systems
  • nₑ – The number of planets, per solar system, with an environment suitable for life
  • fₗ – The fraction of suitable planets on which life actually appears
  • fᵢ – The fraction of life-bearing planets on which intelligent life emerges
  • fᶜ – The fraction of civilizations that develop a technology that releases detectable signs of their existence into space
  • L – The length of time such civilizations release detectable signals into space

And that gives us a way of estimating how many intelligent civilizations there are in the galaxy right now, from quantities that we can estimate or measure independently.  Of course, the big question is, why haven’t we found any such civilizations yet when the calculations suggest N should be much larger than 1?  But NASA thinks it won’t be too long before that happens. And when we find them we can ask them how many piano tuners they have.

Artificial Intelligence, Employment

This Time It’s Different

This superb video drives a stake through the heart of the meme that progress always equals more and better jobs:

All this and a cast of cartoon chickens. This is where it becomes very clear that we need to analyze second-order effects; the video only starts wondering about those at the end. If we get very good at producing cheaper products at the expense of more and more jobs, who will buy those products? Who will be able to afford them if there is a rising underclass of unemployed that has trouble getting food, let alone iPhones? Sure, the market may turn to higher luxury items such as increasingly tricked-out autonomous cars, which can be afforded by the 1% (or less) who own the companies, but this is an unstable dynamic, a vicious circle. What will terminate that runaway feedback loop?

Artificial Intelligence, Existential Risk, Technology, Warfare

Is Skynet Inevitable?

Australia’s leading AI expert is afraid the army is creating ‘Terminator’, and no, that’s not taking him out of context:

I mean Hollywood is actually pretty good at predicting the future. And if we don’t do anything, then in 50 to 100 years time it will look much like Terminator. It wouldn’t be that dissimilar to what Hollywood is painting… there will be a lot of risk well before then, in fact.

Like so many scenarios I discuss in Crisis of Control, the question is not if, but when. 50 to 100 years away may be a timeframe that lulls us into a false sense of security when the real question is, how far in advance do we need to act to prevent this scenario?

That we will one day create “Skynet” is all but inevitable once you accept that we will develop conscious artificial intelligence (CAI). I go into the reasons why that will happen relatively soon in Crisis. The military is certain to develop CAI for the tremendous tactical leverage it grants. However, the hard take-off in intelligence levels and random mutations (which are all but assured in anything possessing creativity) will mean that at some point a CAI will evolve a motivation to control the real world machinery it is connected to… or can reach.

This is the point where many people argue that “we can just unplug it” and make sure it’s not connected to, say, weaponry.  The “just unplug it” argument has been disposed of before (basically, try unplugging Siri and see how far you get). The odds of successfully air gapping an AI from a network it wants to talk to are tiny; we had genetic algorithms that learned how to communicate across an air gap in 2004.

The answer that Crisis embraces is that we should develop ethical AI in the public domain first, so that it is in a position to dominate the market or defeat any unethical AI that may later arise. This is why we support the OpenAI initiative. It may sound naïve and reckless, but I believe it is the best chance we’ve got.

Artificial Intelligence, Employment

When will a machine do your job better than you?

Katja Grace at the Future of Humanity Institute at the University of Oxford and fellow authors surveyed the world’s leading researchers in artificial intelligence by asking them when they think intelligent machines will better humans in a wide range of tasks. They averaged the answers, and published them at https://arxiv.org/pdf/1705.08807.pdf. The results are… surprising.

First up, AIs will reach human proficiency in the game of Go in 2027… wait, what? Ah, but this survey was conducted in 2015. As I noted in Crisis of Control, before AlphaGo beat Lee Sedol in 2016, that milestone was expected to be a decade away; here’s the numeric proof. This really shows what a groundbreaking achievement that was, to blindside so many experts.

Forty-eight percent of respondents think that research on minimizing the risks of AI should be prioritized by society more than it is now. And when they analyzed the results by demographics, only one factor was significant: geography. Asian researchers think human-level machine intelligence will be achieved much sooner.

Amusingly, their predictions for when different types of job will be automated cluster mostly under 50 years from now, with one far outlier over 80: apparently the job of “AI Researcher” will take longer to automate than anything else, including surgeon. Might be a bit of optimism at work there…