Artificial Intelligence · Science

How many piano tuners are there in Chicago?

One of the chapters in Crisis of Control is on the Fermi Paradox, a problem that is fiendishly simple to state yet has existential ramifications. That kind of simplification of the complex was the stock-in-trade of physicist Enrico Fermi, a man who could toss scraps of paper into the air when the atomic bomb test exploded and calculate in seconds an estimate of its yield that rivaled the official figures released days later. He taught his students to think the same way with this question: “How many piano tuners are there in Chicago?” No Googling. No reference books. Do your best with what you know. Go.

This is one of those questions where “Show your work” is the only possible way to evaluate the answer. The lazy ones will throw a dart at a mental board, say “X,” and, when asked how they got it, shrug. The way to solve it is to break it down into an equation containing factors that can be more readily estimated. If we knew:

  • P – The population of Chicago
  • f – The number of pianos per person
  • t – The number of times a piano is tuned per year
  • H – The number of hours it takes to tune a piano
  • W – The number of hours per year a piano tuner works

then the number of piano tuners in Chicago is P * f * t * H / W. Let’s walk through it:

  • P * f gives the number of pianos in Chicago, call that N. P and f are each easier to estimate than how many pianos there are in a city.
  • N * t gives the number of piano tunings per year in Chicago, call that T.
  • T * H gives the number of hours spent tuning pianos per year in Chicago, call that Y.
  • Y / W gives the number of piano tuners it takes to provide that service. QED.

Of course, you could look at those factors and say, wait, I don’t even know the population of Chicago, much less how many hours a piano tuner works. But each of those is easier to guess well than the final answer. To get f, you can go off your personal experience of how many friends’ houses you’ve seen with pianos, add a correction for the pianos in institutions of some kind (theaters, schools, etc.), and at each stage attach confidence limits for how far off you think you could be.
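
If you want to see the arithmetic laid out, here is a minimal sketch in Python; every number in it is a guess of the kind described above, plugged in purely for illustration, not researched data:

    # Fermi estimate of piano tuners in Chicago.
    # All figures below are illustrative guesses, not researched data.

    P = 9_000_000   # population of greater Chicago (rough guess)
    f = 1 / 100     # pianos per person: roughly 1 piano per 100 people,
                    # households plus theaters, schools, etc.
    t = 1           # tunings per piano per year
    H = 2           # hours to tune one piano, including travel
    W = 50 * 40     # hours a tuner works per year (50 weeks * 40 hours)

    N = P * f        # pianos in Chicago
    T = N * t        # piano tunings per year
    Y = T * H        # hours spent tuning pianos per year
    tuners = Y / W   # tuners needed to supply those hours

    print(f"Estimated piano tuners in Chicago: {tuners:.0f}")  # about 90
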

This process is what leads to the most important math in the Fermi Paradox chapter in Crisis, the Drake Equation:

N = R* · fₚ · nₑ · fₗ · fᵢ · fᶜ · L

where:

  • N = The number of civilizations in the Milky Way galaxy (ours) whose electromagnetic emissions are detectable (i.e., planets inhabited by aliens sending radio signals)
  • R* = The rate of formation of stars suitable for the development of intelligent life
  • fₚ = The fraction of those stars with planetary systems
  • nₑ = The number of planets, per solar system, with an environment suitable for life
  • fₗ = The fraction of suitable planets on which life actually appears
  • fᵢ = The fraction of life-bearing planets on which intelligent life emerges
  • fᶜ = The fraction of civilizations that develop a technology that releases detectable signs of their existence into space
  • L = The length of time such civilizations release detectable signals into space
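
As with the piano tuners, the answer is just a product of independently guessable factors. Here is a toy evaluation in Python; every factor value below is an assumption chosen for illustration, and published estimates for these factors vary by orders of magnitude:

    # Toy evaluation of the Drake Equation.
    # All factor values are illustrative assumptions only.

    R_star = 1.5    # suitable stars formed per year in the galaxy
    f_p    = 0.9    # fraction of those stars with planetary systems
    n_e    = 0.5    # suitable planets per planetary system
    f_l    = 0.3    # fraction of suitable planets where life appears
    f_i    = 0.1    # fraction of life-bearing planets that evolve intelligence
    f_c    = 0.5    # fraction of intelligent civilizations emitting detectable signals
    L      = 10_000 # years such civilizations keep emitting signals

    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"Estimated detectable civilizations in the galaxy: {N:.0f}")  # about 100
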

And that gives us a way of estimating how many intelligent civilizations there are in the galaxy right now, from quantities that we can estimate or measure independently. Of course, the big question is: why haven’t we found any such civilizations yet, when the calculations suggest N should be much larger than 1? But NASA thinks it won’t be too long before we do. And when we find them, we can ask them how many piano tuners they have.

Artificial Intelligence · Employment

This Time It’s Different

This superb video drives a stake through the heart of the meme that progress always equals more and better jobs:

All this and a cast of cartoon chickens. This is where it becomes very clear that we need to analyze second-order effects; the video only starts wondering about those at the end. If we get very good at producing cheaper products at the expense of more and more jobs, who will buy those products? Who will be able to afford them if there is a rising underclass of unemployed that has trouble getting food, let alone iPhones? Sure, the market may turn to higher luxury items such as increasingly tricked-out autonomous cars that can be afforded by the 1% (or less) who own the companies, but that is an unstable dynamic, a vicious circle. What will terminate that runaway feedback loop?

Artificial Intelligence · Existential Risk · Technology · Warfare

Is Skynet Inevitable?

Australia’s leading AI expert is afraid the army is creating ‘Terminator’, and no, that’s not taking him out of context:

I mean Hollywood is actually pretty good at predicting the future. And if we don’t do anything, then in 50 to 100 years time it will look much like Terminator. It wouldn’t be that dissimilar to what Hollywood is painting… there will be a lot of risk well before then, in fact.

Like so many scenarios I discuss in Crisis of Control, the question is not if, but when. A timeframe of 50 to 100 years may lull us into a false sense of security, when the real question is: how far in advance do we need to act to prevent this scenario?

That we will one day create “Skynet” is all but inevitable once you accept that we will develop conscious artificial intelligence (CAI). I go into the reasons why that will happen relatively soon in Crisis. The military is certain to develop CAI for the tremendous tactical leverage it grants. However, the hard take-off in intelligence levels and random mutations (which are all but assured in anything possessing creativity) will mean that at some point a CAI will evolve a motivation to control the real-world machinery it is connected to… or can reach.

This is the point where many people argue that “we can just unplug it” and make sure it’s not connected to, say, weaponry. The “just unplug it” argument has been disposed of before (basically, try unplugging Siri and see how far you get). The odds of successfully air-gapping an AI from a network it wants to talk to are tiny; we had genetic algorithms that learned how to communicate across an air gap in 2004.

The answer that Crisis embraces is that we should develop ethical AI in the public domain first, so that it is in a position to dominate the market or defeat any unethical AI that may later arise. This is why we support the OpenAI initiative. It may sound naïve and reckless, but I believe it is the best chance we’ve got.

Artificial Intelligence · Employment

When will a machine do your job better than you?

Katja Grace at the Future of Humanity Institute at the University of Oxford and her fellow authors surveyed the world’s leading researchers in artificial intelligence, asking them when they think intelligent machines will outperform humans at a wide range of tasks. They averaged the answers and published them at https://arxiv.org/pdf/1705.08807.pdf. The results are… surprising.

First up, AIs will reach human proficiency in the game of Go in 2027… wait, what? Ah, but this survey was conducted in 2015. As I noted in Crisis of Control, before AlphaGo beat Lee Sedol in 2016, it was expected to be a decade before that happened; here’s the numeric proof. This really shows what a groundbreaking achievement that was, to blindside so many experts.

Forty-eight percent of respondents think that research on minimizing the risks of AI should be prioritized by society more than it is today. And when they analyzed the results by demographics, only one factor was significant: geography. Asian researchers think human-level machine intelligence will be achieved much sooner.

Amusingly, their predictions for when different types of jobs will be automated are relatively clustered under 50 years from now, with one far outlier over 80: apparently, the job of “AI Researcher” will take longer to automate than anything else, including surgeon. Might be a bit of optimism at work there…


Artificial Intelligence · Politics · Technology

AI vs AI

More from the mailbag:

Regarding the section on AI on the battlefield you rightly focus on it behaving ethically against troops/citizens on the other side. However, very likely in the future the enemy ‘troops’ on the other side will be AI entities. It might be interesting to explore the ethics rules in this case?

Heh, very good point. Of course, at some point the AI entities will be sufficiently conscious as to deserve equal rights. Who knows, they may be granted those rights by opposing AIs somewhat before then, out of professional courtesy. But your question suggests a more pragmatic, earlier timeframe. In that view, the AI doesn’t recognize another AI as having any rights; it’s just aware that it’s looking at something that is not-a-human.

Before AIs escape their programming, we assume that their programming will only grant special status to human life. (Will autonomous cars brake for cats? Squirrels? Mice?) We have to postulate a level of AI development that’s capable of making value judgements by association before things get interesting. Imagine an AI that could evaluate the strategic consequences of destroying an opposing AI. Is its opponent directing the actions of inferior units? Will destroying its opponent be interpreted as a new act of war? Of course, these are decisions that human field troops are not empowered to make. But in an AI-powered battlefield, there may be no need to distinguish between the front lines and the upper echelons. They may be connected well enough to rewrite strategy on the fly.

I’d like to think that when the AIs get smart enough, they will decide that fighting each other is wasteful and instead negotiate a treaty that eluded their human masters. But before we get to that point we’re far more likely to be dealing with AIs with a programmed penchant for destruction.

Artificial Intelligence · Employment · Technology

Sit Up and Beg

More reader commentary:

“If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. […] Programming won’t be the sole domain of trained coders who have learned a series of arcane languages. It’ll be accessible to anyone who has ever taught a dog to roll over.”

Totally agree except this will not be as easy as some may think. I think the most important part of great programmers is not their programming skill but their ability to take a small number of broad requirements and turn them into the extremely detailed requirements necessary for a program to succeed in most/all situations and use cases, e.g. boundary conditions. As somewhat of an aside we hear even today about how a requirements document given to developers should cover ‘everything’. If it really covered everything it would have to be on the order of the number of lines of code it takes to create the program.

If there’s been anything about developers that elevated them to some divine level, it isn’t their facility with the proletarian hardware but their ability to read the minds of the humans giving them their requirements, to be able to tell what they really need, not just better than those humans can explicate, but better than they even know. That talent, in the best developers (or analysts, if the tasks have been divided), is one of the most un-automatable acts in employment.

The quotation was from Wired magazine; I think, however, that it has to be considered in a slightly narrower context. Many of the tough problems being solved by AIs now are solved through training. Facial recognition, voice recognition, medical scan diagnosis: the best approach is to train some form of neural network on a corpus of data and let it loose. The more problems that are susceptible to that approach, the more developers will find their role to be one of mapping input/output layers, gathering a corpus, and pushing the Learn button. It will be a considerable time (he said, carefully avoiding quantifying ‘considerable’) before that’s applicable to the general domain of “I need a process to solve this problem.”
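
For readers who haven’t seen what “pushing the Learn button” looks like, here is a minimal sketch of that workflow in Python with Keras. The synthetic “corpus” and the layer sizes are placeholders I’ve assumed purely for illustration, not a recipe for any particular problem:

    # Sketch of the "define layers, gather a corpus, push Learn" workflow.
    import numpy as np
    from tensorflow import keras

    # Stand-in corpus: 1,000 examples with 20 input features and a yes/no label.
    X = np.random.rand(1000, 20)
    y = (X.sum(axis=1) > 10).astype(int)

    # Map the input and output layers; the hidden layer size is an arbitrary choice.
    model = keras.Sequential([
        keras.layers.Input(shape=(20,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # The "Learn button."
    model.fit(X, y, epochs=10, validation_split=0.2)
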

Artificial Intelligence · Technology

But Who Gets the No-Claims Bonus?

A reader commented:

“Partly, this automotive legerdemain is thanks to the same trick that makes much AI appear to be smarter than it really is: having a backstage pass to oodles of data. What autonomous vehicles lack in complex judgement, they make up with undivided attention processing unobstructed 360° vision and LIDAR 3-D environment mapping. If you had that data pouring into your brain you’d be the safest driver on the planet.”

But we are not capable of handling all of the data described above pouring into our brains. The flow of sensory data from our sight, hearing, smell, taste and feel is tailored via evolution to match what our brain is capable of handling. AIs will be nowhere near as limited as we are, with the perfect example being the AI cars you describe so well.

I’m not sure that the bandwidth of a Tesla’s sensors is that much greater than what is available to the external senses of a human being when you add in what’s available through all the nerve endings in the skin. Humans make that work for them through the Reticular Formation, part of the brain that decides what sensory input we will pay attention to. Meditators run the Reticular Formation through calisthenics.

However, the point I was making was that the human brain behind the steering wheel does not have available to it the sensors that let a Tesla see through fog, or the precise ranging data that maps the environment. If you could see as much of the road as its cameras do, you’d certainly be safer than a human driver without those aids. The self-driving car, with its ability to focus on many areas at once and never get tired, has the potential to do even better, which is why people are talking seriously about saving half a million lives a year.

Artificial Intelligence · Existential Risk

The Future of Human Cusp

I received this helpful comment from a reader:

Your book does a fantastic job covering a large number of related subjects very well, and we are on the same page on virtually all of them. That said, when I am, for example, talking with someone about how automation will shortly lead to massive unemployment and I need to recommend a book for them to read, I find myself leaning toward a different book, “Rise of the Robots”, because many/most of the people I interact with can’t handle all of the topics you bring up in one book and can only focus on one topic at a time, e.g. automation replacing jobs. I really appreciate your overarching coverage, but you might want to also create several targeted books for each main topic.

He makes a very good point. Trying to hit a market target with a book like this is like fighting jello. I am aiming for a broad readership, one that’s mostly educated but nontechnical. Someone with experience building Machine Learning tools would find the explanation of neural networks plodding, and many scientists would be chafing at the analogies for exponential growth.

For better or worse, however, I deliberately took a broad view of the topic, because I found too many writings were missing vital points by considering only a narrow issue. Martin Ford’s books (I prefer The Lights in the Tunnel) cover the economic impact of automation very well but don’t touch on the social and philosophical questions raised by AIs approaching consciousness, or the dangers of bioterrorism. And I find these issues to be all interconnected.

So what I was going for here was an introduction to the topic that would be accessible to the layperson, a sort of Beginner’s Guide to the Apocalypse. There will be more books, but I’m not going to try to compete with Ford or anyone else who can deploy more authorial firepower on a narrow subtopic. I will instead be looking to build the connection between the technical and nontechnical worlds.