My 2017 interview on the Concerning AI podcast was recently published; you can hear it here. Ted and Brandon wanted to talk about my timeline for AI risks, which has sparked a little interest for its blatant speculation.
Brandon made the point that the curves are falsely independent: if any one of the risks produced an existential catastrophe that eliminated a substantial portion of the population, the chart beyond that point would be invalidated. So each line really represents an estimate of the number of people who could be affected at a given time, conditioned on every earlier risk having failed to have a noticeable effect.
Why is such rampant guesswork useful? I think it helps to have a framework for discussing comparative risk and timetables for action. Consider the Drake Equation as an analogy. It has the appearance of formal math, but all it really did was replace one unknowable (the number of technological civilizations in the galaxy) with seven unknowables multiplied together. At least, those terms were mostly unknowable at the time. But it suggested lines of research: by nailing down the rate of star formation and launching spacecraft to look for exoplanets (another of which just launched), we can reduce the error bars on some of those terms and make the result more accurate.
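To make the point concrete, here is a minimal sketch of the Drake Equation's structure: one unknowable expressed as the product of seven terms. The numeric values below are purely illustrative placeholders, not estimates; the point is that pinning down any one factor (say, the star-formation rate) shrinks the error bars on the whole product.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* · fp · ne · fl · fi · fc · L

    R_star: rate of star formation in the galaxy
    f_p:    fraction of stars with planets
    n_e:    habitable planets per star with planets
    f_l:    fraction of those that develop life
    f_i:    fraction of those that develop intelligence
    f_c:    fraction of those that become detectable civilizations
    L:      lifetime (years) a civilization remains detectable
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Placeholder inputs, chosen only to show the mechanics:
N = drake(R_star=1.5, f_p=1.0, n_e=0.2, f_l=0.1, f_i=0.1, f_c=0.1, L=10_000)
```

Replacing one guess with seven doesn't make the answer known, but it turns a single mystery into a research agenda: each factor is something you can, in principle, go measure.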
So I’d like to think that putting up a strawman timetable to throw darts at could help us identify the work that needs to be done to get more clarity. At one time, the weather couldn’t be predicted any better than saying that tomorrow would be the same as today. Because it was important, we can now do better than that through complex models and supercomputers operating on enormous quantities of observations. Now, it’s important to predict the future of existential risk. Could we create models of the economy, society, and technology adoption that would give us much more accuracy in those predictions? (Think psychohistory.) We have plenty of computing power now; we need the software. But could AI help?
Check out the Concerning AI podcast! They’re exploring this issue starting from an outsider’s position of concern and getting as educated as they can in the process.