Category: Technology

Artificial Intelligence, Existential Risk, Technology

A.I. Joe

When I wrote in Crisis of Control about the danger of AI in the military being developed with an inadequate ethical foundation, I was hopeful that there would at least be more time to act before the military ramped up its development. That may not be the case, according to this article in TechRepublic:

…advances in machine learning and Artificial Intelligence (AI) represent a turning point in the use of automation in warfare … many of the most transformative applications of AI have not yet been addressed.

Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, who graciously provided one of the endorsements on my book, has been warning against this trend for some time:

Unfortunately, for the humanity that means development of killer robots, unsupervised drones and other mechanisms of killing people in an automated process.

“Killer robots” is of course a sensational catchphrase, but it captures enough attention to make it serviceable to both Yampolskiy and Elon Musk, and while scenarios centered on AIs roaming the cloud and striking us through existing infrastructure are far more likely, roving killer robots aren’t entirely out of the question either.

I see the open development of ethical AI as the only way to beat the entrenched money and power behind the creation of unethical AI.

 

Photo Credit: Wikimedia Commons
Bioterrorism, Technology

Bringing Back the Dead

Viruses, that is.  Canadian researchers revived an extinct horsepox virus last year on a shoestring budget, by using mail-order DNA.

The researchers bought overlapping DNA fragments from a commercial synthetic DNA company. Each fragment was about 30,000 base pairs long, and because they overlapped, the team was able to “stitch” them together to complete the genome of the 212,000-base-pair horsepox virus. When they introduced the genome into cells that were already infected with a different kind of pox virus, the cells began to produce virus particles of the infectious horsepox variety.

Why this matters: It’s getting easier all the time to synthesize existing pathogens (like horsepox) and to create exciting new deadly ones (with CRISPR/Cas9). Crisis of Control explores the consequences of where that trend is leading. Inevitably it will be possible one day for average people to do it in their garages.

This team didn’t even use their own facility to make the DNA fragments, but ordered them through the mail. There’s some inconsistency in reports of how easy it is to get nasty stuff that way. Here’s a quote from a 2002 article:

Right now, the companies making DNA molecules such as the ones used to recreate the polio virus do not check what their clients are ordering. “We don’t care about that,” says a technician at a company that ships DNA to more than 40 countries around the world, including some in the Middle East.

Things may be better now; apparently the reputable Western companies at least do care these days. What about do-it-yourself? What I can’t tell from the article is whether their mail order was for double-stranded DNA (dsDNA) or oligonucleotides (“oligos”), and here I am exposing my ignorance of molecular biology, because I am sure it is obvious to someone in that field which it must have been. What I do know is that there are no controls on what you can make with oligos, because you can get the synthesizers for as little as a few hundred bucks off eBay. But you then have to turn them into dsDNA to get the genome you’re looking for, and that requires some nontrivial laboratory work.
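Purely to illustrate the “stitching” idea from the article above, and nothing more, here is a toy Python sketch of how fragments with overlapping ends determine one longer sequence. The fragments and the minimum-overlap length are invented for the example; the researchers’ actual assembly step was biochemical, inside infected cells, not software.

```python
# Toy sketch only: merge fragments on their overlapping ends.
# The fragments below are made up; real pox fragments run ~30,000 base pairs.
from typing import List, Optional

def merge_pair(a: str, b: str, min_overlap: int = 4) -> Optional[str]:
    """Join a and b on the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

def assemble(fragments: List[str]) -> str:
    """Greedily merge fragments in the given order."""
    sequence = fragments[0]
    for fragment in fragments[1:]:
        merged = merge_pair(sequence, fragment)
        if merged is None:
            raise ValueError("adjacent fragments do not overlap")
        sequence = merged
    return sequence

print(assemble(["ATGCGTAC", "GTACCTTA", "CTTAGGGA"]))  # -> ATGCGTACCTTAGGGA
```

None of that toy code touches the actual bottleneck, which is the laboratory work of turning short oligos into full-length dsDNA.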

We can assume that procedure will get easier, and it has in fact already been replicated in a commercially available synthesizer that will produce dsDNA. The last time I looked, one such machine had recently become available, but implemented cryptographic security measures that meant that it would not make anything “questionable” until the manufacturer had provided an unlocking code.

So far, so good, although it’s hard for me to imagine that people at the CDC find it easy to sleep. But inevitably this will become easier. How do we have to evolve to defuse this threat?

 

Photo: Wikimedia Commons.
Artificial Intelligence, Existential Risk, Technology, Warfare

Is Skynet Inevitable?

Australia’s leading AI expert is afraid the army is creating ‘Terminator’, and no, that’s not taking him out of context:

I mean Hollywood is actually pretty good at predicting the future. And if we don’t do anything, then in 50 to 100 years time it will look much like Terminator. It wouldn’t be that dissimilar to what Hollywood is painting… there will be a lot of risk well before then, in fact.

Like so many scenarios I discuss in Crisis of Control, the question is not if, but when. A timeframe of 50 to 100 years may lull us into a false sense of security, when the real question is: how far in advance do we need to act to prevent this scenario?

That we will one day create “Skynet” is all but inevitable once you accept that we will develop conscious artificial intelligence (CAI). I go into the reasons why that will happen relatively soon in Crisis. The military is certain to develop CAI for the tremendous tactical leverage it grants. However, the hard take-off in intelligence levels and random mutations (which are all but assured in anything possessing creativity) will mean that at some point a CAI will evolve a motivation to control the real world machinery it is connected to… or can reach.

This is the point where many people argue that “we can just unplug it” and make sure it’s not connected to, say, weaponry.  The “just unplug it” argument has been disposed of before (basically, try unplugging Siri and see how far you get). The odds of successfully air gapping an AI from a network it wants to talk to are tiny; we had genetic algorithms that learned how to communicate across an air gap in 2004.

The answer that Crisis embraces is that we should develop ethical AI in the public domain first, so that it is in a position to dominate the market or defeat any unethical AI that may later arise. This is why we support the OpenAI initiative. It may sound naïve and reckless, but I believe it is the best chance we’ve got.

Artificial Intelligence, Politics, Technology

AI vs AI

More from the mailbag:

Regarding the section on AI on the battlefield you rightly focus on it behaving ethically against troops/citizens on the other side. However, very likely in the future the enemy ‘troops’ on the other side will be AI entities. It might be interesting to explore the ethics rules in this case?

Heh, very good point. Of course, at some point, the AI entities will be sufficiently conscious as to deserve equal rights. Who knows, they may be granted those rights by opposing AIs somewhat before then, out of professional courtesy. But your question suggests a more pragmatic, earlier timeframe. In that view, the AI doesn’t recognize another AI as having any rights; it’s just aware that it’s looking at something that is not-a-human.

Before AIs escape their programming, we assume that their programming will only grant special status to human life. (Will autonomous cars brake for cats? Squirrels? Mice?) We have to postulate a level of AI development that’s capable of making value judgements by association before things get interesting. Imagine an AI that could evaluate the strategic consequences of destroying an opposing AI. Is its opponent directing the actions of inferior units? Will destroying its opponent be interpreted as a new act of war? Of course, these are decisions that human field troops are not empowered to make. But in an AI-powered battlefield, there may be no need to distinguish between the front lines and the upper echelons. They may be connected well enough to rewrite strategy on the fly.

I’d like to think that when the AIs get smart enough, they will decide that fighting each other is wasteful and instead negotiate a treaty that eluded their human masters. But before we get to that point we’re far more likely to be dealing with AIs with a programmed penchant for destruction.

Artificial Intelligence, Employment, Technology

Sit Up and Beg

More reader commentary:

“If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. […] Programming won’t be the sole domain of trained coders who have learned a series of arcane languages. It’ll be accessible to anyone who has ever taught a dog to roll over.”

Totally agree except this will not be as easy as some may think. I think the most important part of great programmers is not their programming skill but their ability to take a small number of broad requirements and turn them into the extremely detailed requirements necessary for a program to succeed in most/all situations and use cases, e.g. boundary conditions. As somewhat of an aside we hear even today about how a requirements document given to developers should cover ‘everything’. If it really covered everything it would have to be on the order of the number of lines of code it takes to create the program.

If there’s been anything about developers that elevated them to some divine level, it isn’t their facility with the proletarian hardware but their ability to read the minds of the humans giving them their requirements, to be able to tell what they really need, not just better than those humans can explicate, but better than they even know. That talent, in the best developers (or analysts, if the tasks have been divided), is one of the most un-automatable acts in employment.

The quotation was from Wired magazine, and I think it has to be considered in a slightly narrower context. Many of the tough problems being solved by AIs now are done through training. Facial recognition, voice recognition, medical scan diagnosis: the best approach is to train some form of neural network on a corpus of data and let it loose. The more problems that are susceptible to that approach, the more developers will find their role to be one of mapping input/output layers, gathering a corpus, and pushing the Learn button. It will be a considerable time (he said, carefully avoiding quantifying ‘considerable’) before that’s applicable to the general domain of “I need a process to solve this problem.”
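To make that picture concrete, here is a minimal sketch of the “gather a corpus, push the Learn button” workflow, using scikit-learn and its bundled digits dataset as a stand-in corpus. The dataset, network size, and settings are purely illustrative choices of mine, not anything from the Wired piece.

```python
# Minimal sketch of the train-and-go workflow described above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# The "corpus": 1,797 labeled 8x8 images of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Mapping the input/output layers": 64 pixel inputs, 10 digit classes,
# with one small hidden layer in between.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

# The "Learn button."
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Everything interesting lives in the corpus and the training step; the developer’s own code has shrunk to plumbing.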

Artificial Intelligence, Technology

But Who Gets the No-Claims Bonus?

A reader commented:

“Partly, this automotive legerdemain is thanks to the same trick that makes much AI appear to be smarter than it really is: having a backstage pass to oodles of data. What autonomous vehicles lack in complex judgement, they make up with undivided attention processing unobstructed 360° vision and LIDAR 3-D environment mapping. If you had that data pouring into your brain you’d be the safest driver on the planet.”

But we are not capable of handling all of the data described above pouring into our brain. The flow of sensory data from our sight, hearing, smell, taste and feel are tailored via evolution to match what our brain is capable of handling. AIs will be nowhere as limited as we are, with the perfect example being the AI cars you describe so well.

I’m not sure that the bandwidth of a Tesla’s sensors is that much greater than what is available to the external senses of a human being when you add in what’s available through all the nerve endings in the skin. Humans make that work for them through the Reticular Formation, part of the brain that decides what sensory input we will pay attention to. Meditators run the Reticular Formation through calisthenics.

However, the point I was making was that the human brain behind a driving wheel does not have available to it the sensors that let a Tesla see through fog or the precise ranging data that maps the environment. If you could see as much of the road as its cameras, you’d certainly be safer than a human driver without those aids. The self-driving car with its ability to focus on many areas at once and never get tired has the potential to do even better, which is why people are talking seriously about saving half a million lives a year.

Artificial Intelligence, Design, Technology

What Is AI?

Chatbots Magazine offers a tidy summary of different aspects of AI, such as machine learning, expert systems (does anyone still call their product an ‘expert system’? That’s so ’90s and Prolog), and Natural Language Processing. Yet they run into the usual chimera that haunts such definition attempts: defining AI by current technologies is like defining an elephant as a trunk, tail, and tusks. More familiar is their parting shot:

Tesler’s Law: “AI is whatever hasn’t been done yet.”

Yes, until someone does it, routing a car to you via a mobile view looks like either black magic or AI, but when they do it, it becomes just another app. If it can’t commiserate with you about a failed romance or discuss the finer points of Van Gogh appreciation, it’s just a stupid computer trick.

The definition of AI is at once a perpetually elusive target and a bar we’ve already cleared. Plenty of people are willing to call their products “artificial intelligence” because we’re in one of the “AI summers” and AI is once again a hot term free from its past stigma. So anyone with a big if-then-else chain hidden in their code slaps an AI label on it and doubles the price. Needless to say, it devalues the term if it applies to every stripe of business analytics program.
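For anyone wondering what that looks like in practice, here is a deliberately silly sketch of the sort of hand-written rule chain that sometimes ships under an “AI” label; the thermostat example is entirely made up.

```python
def artificially_intelligent_thermostat(temp_c: float) -> str:
    """Marketing calls this AI. Nothing here is learned; every rule was typed by hand."""
    if temp_c < 18:
        return "heat on"
    elif temp_c > 24:
        return "cooling on"
    else:
        return "do nothing"

print(artificially_intelligent_thermostat(16))  # -> heat on
```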

Perhaps the point where hype and hope converge is on Artificial General Intelligence, i.e., where AI can have that midnight philosophy discussion or ask you how your date went. Until then… good luck.

 

Artificial Intelligence, Existential Risk, Science, Technology, The Singularity

Rebuttal to “The AI Misinformation Epidemic”

Anyone in a field of expertise can agree that the press doesn’t cover them as accurately as they would like. Sometimes that falls within the limits of what a layperson can reasonably be expected to learn about a field in a thirty-minute interview; sometimes it’s rank sensationalism. Zachary Lipton, a PhD candidate at UCSD, takes aim at the press coverage of AI in The AI Misinformation Epidemic, and its followup, but uses a blunderbuss and takes out a lot of innocent bystanders.

He rants against media sensationalism and misinformation but provides no examples to debate other than a Vanity Fair article about Elon Musk. He goes after Kurzweil in particular but doesn’t say what specifically he disagrees with Kurzweil on. He says that Kurzweil’s date for the Singularity is made up, but Kurzweil has published his reasoning, and Lipton doesn’t say what data or assertions in that reasoning he disagrees with. He says the Singularity is a nebulous concept, which is bound to be true in the minds of many laypeople who have heard the term, but he references Kurzweil immediately adjacent to that assertion and yet doesn’t say what about Kurzweil’s vision is wrong or unfounded, dismissing it instead as “religion,” which apparently means he doesn’t have to be specific.

He says that there are many people making pronouncements in the field who are unqualified to do so but doesn’t name anyone aside from Kurzweil and Vanity Fair, nor does he say what credentials should be required to qualify someone. Kurzweil, having invented the optical character reader and music synthesizer, is not qualified?

He takes no discernible stand on the issue of AI safety, yet his scornful tone is readily interpreted as pooh-poohing not just utopianism but alarmism as well. That puts him at odds with Stephen Hawking, whose academic credentials are beyond question. Nick Bostrom also comes in for attack in the comments, for no specified reason but with the implication that since he makes AI dangers digestible for the masses, he is therefore sensationalist. Perhaps that is why I reacted so strongly to this article.

There is of course hype, but it is hard to tell exactly where. In 2015 an article claiming that a computer would beat the world champion Go player within a year would have been roundly dismissed as hype. In 2009 any article asserting that self-driving vehicles would be ready for public roads within five years would have been overreaching. Kurzweil has a good track record of predictions; they just tend to run behind schedule. The point is, if an assertion about an existential threat turns out to be well founded but we ignore it because existential threats have always appeared over-dramatized, then it will be too late to say, “Oops, missed one.” We have to take this stuff seriously.

Artificial Intelligence, Technology

Self-driving cars still 25 years off in the USA? No.

Bill Gurley argues on CNBC that we are 25 years away from autonomous vehicle market penetration in the USA because we’re too litigation-hungry. He concludes that AVs will instead take hold in a country like China, which has relatively uncrowded roads and an authoritarian government that can make central planning decisions.

I don’t agree. Precisely because of rampant litigation in the USA, insurers are going to do the cold, hard math (like they always do), and realize that AVs will save a passel of lives and hence be good for their book. They will therefore indemnify manufacturers or otherwise shield them from opportunistic lawsuits launched in the inevitable few cases where the cars are apparently at fault. Money will smooth the path to AV adoption.
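Just to sketch what that cold, hard math might look like: every number below is a hypothetical placeholder, and a real actuary would work from actual loss data, but the shape of the comparison is the point.

```python
# Back-of-envelope sketch; all figures are made-up placeholders.
human_crash_rate = 4.0        # crashes per million miles driven (hypothetical)
av_crash_rate = 1.0           # hypothetical assumption: AVs crash a quarter as often
avg_claim = 18_000            # average claim cost per crash, in dollars (hypothetical)
miles_on_book = 100_000_000   # miles insured across the book

def expected_claims(crash_rate_per_million_miles: float) -> float:
    """Expected payout over the insured miles at a given crash rate."""
    return crash_rate_per_million_miles * (miles_on_book / 1_000_000) * avg_claim

print(f"human-driven expected claims: ${expected_claims(human_crash_rate):,.0f}")
print(f"autonomous expected claims:   ${expected_claims(av_crash_rate):,.0f}")
```

If the gap is anything like that, the insurer can afford to indemnify manufacturers and still come out ahead.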

He also says:

The part we haven’t figured out yet, the last 3 percent, which is snow, rain, all the really, really hard stuff — it really is hard.  They have done all the easy stuff.

While I would agree that there are still some really, really hard things to work out in AVs, rain and snow aren’t among them. Sensors like radar can penetrate that stuff far more effectively than human eyesight.  Even pattern recognition in the optical spectrum could outperform humans.

The hard part is getting the cars to know when they can break the rules. A recent viral posting about how to trap AVs hints at that. When a trash truck is taking up your lane making stops and you need to cross a double yellow to get around it, will an AV be smart enough to do that? Sure, it can just sit there and let the human take manual control, but that doesn’t get us to the Uber-utopia of cars making their way unmanned around the city to their next pickup.