The Guardian exposes some contradictory thinking on the progress of job automation:
Many of us recognize robotic automation as an inevitably disruptive force. However, in a classic example of optimism bias, while approximately two-thirds of Americans believe that robots will perform most of the work currently done by human beings within the next 50 years, about 80% also believe their own jobs will either “definitely” or “probably” still exist in their current form over that same period.
Somehow, we believe our livelihoods will be safe. They’re not: every commercial sector will be affected by robotic automation in the next several years.
What happens when bots start talking to each other? Check out this live video of two Google Home bots trapped in a never-ending conversation:
More great examples via Twitter, including a conversation where two bots decided to get married and then divorced within seconds.
Preceded by Barack Obama’s commentary on the employment impact, here is a video of a test run by an autonomous 18-wheeler. In the view from the cab, the reporter needles the human truck driver who got it started:
So you’ve been replaced by three LIDARs, a GPS, a camera, and a radar?
The driver managed a tight grin, a nuance the truck is currently incapable of.
Today I was giving a talk on space exploration to the eighth grade class at my daughter’s school. Their theme for this period is ‘Identity,’ so we did some discovery questions about the identities of planets and stars. Then, because so much space exploration is about looking for life, I asked them about the identity of life. We got it down to the usual answers like eating and pooping and reproducing. Then I said, “I see no one suggested ‘intelligence.’ Can we have life without intelligence?” It was decided that we could.
Then I asked, “Can we have intelligence without life?” There was immediate agreement and vigorous nodding. I did a double take, and one of them helpfully explained: “AI.” I recovered and remarked that that was not an answer I would have gotten twenty years ago.
Tomorrow’s adults have a good idea what’s coming.
This video from the dashcam of a Tesla in the Netherlands shows the car taking protective action that a human could not, because it can detect what is happening in front of the car it is following.
“What is most impressive is the fact that we can clearly hear the Forward Collision Warning alert before the lead vehicle even applied the brakes, which shows that the Autopilot wasn’t only using the lead vehicle to plan the path, but also the vehicle in front of it – the black SUV.
“The driver of the Tesla also reported that Autopilot started braking before he could apply the brakes himself.”
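The behavior described in that quote can be sketched as a simple monitor that watches two vehicles ahead rather than one. To be clear, this is a toy illustration, not Tesla’s actual algorithm; the function names, threshold, and sensor readings are all invented for the example.

```python
# Toy sketch of a forward-collision monitor that tracks both the lead
# vehicle and the vehicle ahead of it. All numbers are hypothetical.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact if the closing speed stays constant."""
    if closing_speed_mps <= 0:       # not closing: no collision course
        return float("inf")
    return gap_m / closing_speed_mps

def should_warn(lead_gap: float, lead_closing: float,
                second_gap: float, second_closing: float,
                threshold_s: float = 2.0) -> bool:
    """Warn if EITHER vehicle ahead is on a collision course
    within the threshold, not just the one directly in front."""
    return (time_to_collision(lead_gap, lead_closing) < threshold_s or
            time_to_collision(second_gap, second_closing) < threshold_s)

# The lead car hasn't braked yet (closing speed 0), but the SUV two
# cars ahead is closing fast, so the monitor warns early -- the same
# pattern as in the dashcam video.
print(should_warn(lead_gap=30, lead_closing=0.0,
                  second_gap=45, second_closing=25.0))  # True
```

The point of the sketch is the second term in `should_warn`: a system that only tracked the lead vehicle would have stayed silent until that car braked.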
MIT researchers have designed a new machine-learning system that can learn by itself to extract text information for statistical analysis when available data is scarce.
As KurzweilAI.net puts it, “And so it begins…” Here’s a system that can teach itself how to understand a topic by searching the Internet for more information. I know – what could possibly go wrong? Will this be a building block for all kinds of machine learning systems?
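The loop being described, where a system notices its own confidence is low and goes looking for more evidence, can be sketched roughly as follows. This is a toy illustration of the idea, not MIT’s method; the `extract`, `search_web`, confidence scores, and example documents are all made up for the example.

```python
# Toy sketch of scarce-data extraction: when confidence in an extracted
# value is low, fetch more documents and take a majority vote.
# The extractor, search stub, and documents are hypothetical.
from collections import Counter

def extract(doc: str) -> tuple[str, float]:
    """Pretend extractor returning (value, confidence).
    A real system would run a trained model here."""
    if "confirmed" in doc:
        return doc.split()[-1], 0.9
    return doc.split()[-1], 0.3

def search_web(query: str) -> list[str]:
    """Stub standing in for an external search API call."""
    return ["officials confirmed the toll was 12",
            "later reports confirmed the toll was 12"]

def extract_with_backoff(doc: str, query: str,
                         threshold: float = 0.5) -> str:
    value, conf = extract(doc)
    if conf >= threshold:
        return value
    # Confidence too low: gather external evidence and vote.
    votes = Counter([value] + [extract(d)[0] for d in search_web(query)])
    return votes.most_common(1)[0][0]

# The initial article is unreliable, so the system pulls in more
# sources and the corroborated value wins the vote.
print(extract_with_backoff("early reports say the toll was 15",
                           "incident toll"))  # prints "12"
```

The design choice worth noting is that the system only reaches out for more data when it has to, which is what makes the approach useful precisely when available data is scarce.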
A superhero who was able to see two seconds into the future wouldn’t be invincible, but she’d have a leg up on mere mortals. On Monday, the Massachusetts Institute of Technology announced its new artificial intelligence, and it’s a prototype of such a being. Based on a photograph alone, it can predict what’ll happen next, then spit out a one-and-a-half second video clip depicting that possible future. The breakthrough could yield smarter autonomous cars or security systems.
The November issue of Wired – guest edited by President Obama, no less – contains the responses of thought leaders to six challenges issued by Obama. One of those was “Ensure that artificial intelligence helps us rather than hurts us,” and the response came from Facebook’s Mark Zuckerberg:
Whoever cares about saving lives should be optimistic about the difference that AI can make. If we slow down progress in deference to unfounded concerns, we stand in the way of real gains.
As I say in Crisis of Control, I’m not for limiting development of artificial intelligence. That would be a first-order thinking response to its existential threat. It would be futile. But it would also be counterproductive. AI is essential to the survival of the human race. It also happens to be the possible end of the human race. To an aficionado of story, it seems like we’re in someone’s idea of a suspense thriller. You couldn’t write something more gripping if you tried. Unfortunately, the stakes are humanity.
Wired Magazine’s November issue is guest edited by President Obama, and in an interview, he touches on so many issues raised in Crisis of Control that I could egotistically convince myself that someone sent him an advance copy.
He talks about the danger of AI and the potential for widespread unemployment, but also its promise. He points out that we have more to fear – in terms of immediate danger to national security – from AIs focused on single tasks like penetrating nuclear security than we do from a general takeover. He talks about bioterrorism. He even mentions the Singularity and gets into Star Trek.
But what it really means is that we’re heading into an era where more and more people are waking up to these issues. I have my part to play, Obama has another, and so do you.
Programmers will get this (from Reddit):