Vanity Fair describes a meeting between Elon Musk and Demis Hassabis, a leading creator of advanced artificial intelligence, which likely propelled Musk’s alarm about AI:
Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.
Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.
This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).
Mostly about Musk, the article is replete with Crisis of Control tropes that are now playing out in the real world far sooner than even I had thought likely. Musk favors opening up AI development and getting to super-AI before government or “tech elites” do – even when those elites are Google or Facebook.
IBM will build quantum computers “millions of times faster than anything before.” Classical cryptography will become inadequate to protect information against these devices.
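Why would a fast quantum computer threaten cryptography? The most widely cited case is RSA, whose security rests entirely on the difficulty of factoring large numbers – a problem Shor’s algorithm solves efficiently on a sufficiently large quantum computer. Here is a toy sketch (deliberately tiny numbers, and no actual quantum code) showing that anyone who can factor the public modulus recovers the private key:

```python
# Toy RSA demo: security rests entirely on the hardness of factoring n.
# A quantum computer running Shor's algorithm would make factoring easy,
# which is why "classical cryptography becomes inadequate."

def toy_rsa_keys(p, q, e=17):
    """Build a textbook RSA key pair from two (here, tiny) primes."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # private exponent (Python 3.8+)
    return (n, e), d

def factor(n):
    """Trial division: hopeless for real key sizes, trivial for Shor."""
    f = 2
    while n % f:
        f += 1
    return f, n // f

(n, e), d = toy_rsa_keys(61, 53)         # n = 3233
ciphertext = pow(42, e, n)               # encrypt the message 42

# An attacker who factors n reconstructs the private key completely:
p, q = factor(n)
d_recovered = pow(e, -1, (p - 1) * (q - 1))
assert pow(ciphertext, d_recovered, n) == 42   # plaintext recovered
```

The `while` loop stands in for the factoring step; swap in Shor’s algorithm on quantum hardware and the same attack works at real key sizes, which is the scenario the IBM announcement raises.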
The repercussions of the January Asilomar Principles meeting continue to reverberate:
Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
As AI professor Roman Yampolskiy told me, “Design of human-level AI will be the most impactful event in the history of humankind. It is impossible to over-prepare for it.”
This is about effects versus probability. It evokes what I said in Crisis of Control: the expected risk of dying in an asteroid impact is roughly the same as that of dying in a plane crash, once you multiply the probability of the event by the number of people it would kill. Advanced AI could affect us more profoundly than climate change, could require even longer to prepare for, and could happen sooner. All that adds up to taking it seriously, starting now.
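The asteroid-versus-plane-crash comparison is simple expected-value arithmetic. A quick sketch with illustrative figures (my assumptions for demonstration, not numbers from the book or article): a vanishingly rare event that kills billions can carry the same expected annual death toll as a common one that kills hundreds.

```python
# Expected annual deaths = P(event in a given year) x deaths per event.
# The figures below are illustrative assumptions only.

def expected_deaths(prob_per_year, deaths_per_event):
    return prob_per_year * deaths_per_event

asteroid = expected_deaths(1e-7, 1.5e9)   # ~1-in-10-million yearly odds, kills billions
plane    = expected_deaths(0.9, 500)      # crashes happen most years, kill hundreds

print(asteroid, plane)   # both land in the hundreds: comparable expected risk
```

The probabilities differ by seven orders of magnitude, yet the expected losses are the same order of magnitude – which is the argument for taking low-probability, high-impact AI risk seriously.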
The UK’s Guardian produced this little set piece that neatly summarizes many of the issues surrounding AI-as-existential-threat. The smug ethicist brought in to teach a blossoming AI is more interested in defending human exceptionalism (and the “Chinese Room” argument), but is eventually backed into a corner, stating that “You can’t rely on humanity to provide a model for humanity. That goes without saying.” Meanwhile the AI is bent on proving the “hard take-off” hypothesis…
Venture capitalists point out that robots are taking over the jobs we’d rather they left alone. Artificial intelligences are making useful progress at creating original art, music, and prose – the sorts of tasks we’d hoped they would free us up to do ourselves. Meanwhile, the jobs we want them to do are proving “shockingly hard to automate”:
The Roomba, released in 2002, was one of the first robots commercially available to everyday consumers. Almost 15 years later, no real innovation in cleaning robots has seen commercial success.
Textile manufacturing, one of the first industries to be automated, remains incredibly hard to automate completely. Robots work best when manipulating solid objects, but textiles shear, stretch, and compress, making them difficult for robots to handle.
Automating the harvesting of crops that are today picked by hand has so far been hard because many of these crops can be damaged easily and computers have had trouble with visual recognition of the fruit or produce they are trying to pick.
Google’s DeepMind can now win at Breakout… and that makes the company worth half a billion dollars.
Of course, that’s not all it can win at. Go and Poker are the most important recent victories. And now, it has set its sights on StarCraft II.
The exciting (or scary) thing is that many experts did not think AI would defeat a Go champion for another 10 years. I repeat: people who have devoted their lives to advancing AI did not believe this could be accomplished for a decade. That should give us pause when pundits question how quickly AI will change the world.
The European Parliament has proposed standards – and rights – for autonomous robots:
…whereas, ultimately, robots’ autonomy raises the question of their nature in the light of the existing legal categories – of whether they should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created, with its own specific features and implications as regards the attribution of rights and duties, including liability for damage;
The proposals include a legal definition of, and system of registration for, advanced autonomous robots; a code of conduct for engineers covering the ethical design, production, and use of robots; requirements for companies to report the contributions of robots and AI to their financial results for purposes of taxation and social security contributions; and insurance plans for companies to cover damage caused by their robots.
Another speaker talking of a cusp is Maurice Conti, in this TEDx talk about the incredible advances of AI in making intuitive leaps. Bonus points for Star Trek analogy 🙂 Is AI about to progress from Spock to Kirk? Watch this video and see designs that could never have been achieved by humans alone.
Ray Kurzweil says we will have cloud-connected hybrid brains by 2030. Is he overoptimistic? Kurzweil’s predictions for dates that have already passed are 86% accurate, though they tend to come true later than he forecast. Are we on track for human-brain-equivalent computers by 2029?
Google’s DeepMind team is back with an updated deep neural net dubbed the “differentiable neural computer (DNC).” Taking inspiration from plasticity mechanisms in the hippocampus, our brain’s memory storage system, the team has added a memory module to a deep learning neural network that allows it to quickly store and access learned bits of knowledge when needed.
With training, the algorithm can flexibly solve difficult reasoning problems that stump conventional neural networks — for example, navigating the London Underground subway system or reasoning about interpersonal relationships based on a family tree.
That might not sound impressive, but DNCs could be a gateway to more powerful computational engines that marry deep learning with rational thinking.
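The DNC’s central trick – the part borrowed from hippocampal memory – is content-based addressing: instead of looking up a memory slot by index, the network compares a query vector against every stored row and reads back a similarity-weighted blend. A minimal sketch of just that read path (the real DNC also learns the query, writes to memory, and tracks temporal links and usage; none of that is shown here):

```python
# Minimal sketch of content-based addressing, the DNC's core memory read.
# Real DNCs learn the key vector and add write heads, temporal links, and
# usage-based allocation; this shows only the soft read over memory rows.
import numpy as np

def cosine_similarity(memory, key):
    """Similarity of the key to each row of the memory matrix."""
    mem_norm = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    key_norm = key / np.linalg.norm(key)
    return mem_norm @ key_norm

def read(memory, key, sharpness=10.0):
    """Soft read: weight every memory row by its similarity to the key."""
    scores = sharpness * cosine_similarity(memory, key)
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over rows
    return weights @ memory                            # blended read vector

memory = np.array([[1.0, 0.0, 0.0],    # stored "facts" as row vectors
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
result = read(memory, key=np.array([0.9, 0.1, 0.0]))
# result is dominated by the first row, the closest stored vector
```

Because the read is a differentiable weighted sum rather than a hard lookup, gradients flow through it, letting the surrounding network learn what to store and retrieve – which is what lets a trained DNC answer Underground-routing or family-tree queries from stored facts.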