Speaking of 2030, “by 2030, most Canadians will get some of their income through ‘virtual work’ found on online platforms such as freelancer.com, competing for contracts with workers from low-wage countries.”
The race to the bottom in pricing knowledge work accelerates. Ironically, the higher-paying work will be the easiest to automate; creative online work, such as logo design, will hold out longer.
No sooner had I written that “creative work such as logo design will be the last knowledge work to be automated” than, you guessed it, an AI for designing logos popped up, courtesy of Peter Diamandis’ “Abundance Insider”. Go to http://emblemmatic.org/markmaker/#/ and enter a company name. A series of logos will appear. But it doesn’t end there. You tell it which ones you like, and tweak them if you want. As you scroll down, it will use genetic algorithms to create more logos that you may like better. Within a couple of minutes it had generated the passable:
and in true bleeding edge AI fashion, sent my MacBook fan speed soaring.
A top-notch logo designer would scoff. The FedEx arrow (http://www.fastcodesign.com/1671067/the-story-behind-the-famous-fedex-logo-and-why-it-works) isn’t coming out of this any time soon. But an average logo designer might feel the wind of change blowing a bit of a chill across their neck.
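That rate-tweak-regenerate loop is a textbook genetic algorithm. MarkMaker's actual code isn't published here, but the shape of the idea fits in a few lines: treat each logo as a vector of style parameters, treat the user's likes as the fitness signal, and breed the next batch from the favorites. The function names and parameter encoding below are my own illustrative assumptions, not the site's.

```python
import random

# Minimal genetic-algorithm sketch of an interactive logo generator.
# Each "logo" is just a vector of style parameters (sizes, hues, etc.);
# the logos the user liked serve as the parent pool for the next batch.

def crossover(a, b):
    """Splice two parent parameter vectors at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    """Randomly perturb some parameters so new variations appear."""
    return [g + random.uniform(-1, 1) if random.random() < rate else g
            for g in genome]

def next_generation(population, liked, size=8):
    """Breed a fresh batch of candidates from the ones the user liked."""
    parents = liked if liked else population  # no likes yet? use everyone
    return [mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(size)]

# Start with random candidates; suppose the user liked the first two.
population = [[random.uniform(0, 10) for _ in range(5)] for _ in range(8)]
liked = population[:2]
population = next_generation(population, liked)
print(len(population))  # a new batch of 8 to scroll through
```

Each scroll of the page is one more generation, which is also why my laptop fan objected: every batch means re-rendering and re-scoring a whole population.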
From May 30 to June 3 I was at the 29th Canadian Conference on Artificial Intelligence at the University of Victoria, British Columbia. This is the academic side of AI, where you’ll find lots of math, theory, and the bleeding edge of advances in the algorithms that drive the world’s artificial intelligences. I spent the week chatting with professors and videoing some excellent interviews that I will be editing for publication here later.
I learned that the first conference on Artificial Intelligence was held by the Canadian Artificial Intelligence Association in 1973, and that the people in the field are aware of the alarm raised by Hawking, Gates, and Musk, but can’t see how to apply it to their work. As Michael Bowling told me, “What am I supposed to do with that? It’s not like I should put ‘#include <ethics.h>’ in my code.”
Professor Bowling created a perfect poker-playing bot; before you clamor to download it to your phone for that next trip to Vegas, be aware of a few caveats: it’s only provably perfect in the game of heads-up, limit Texas Hold ‘Em (although it also performs well against more players); he’s not releasing the code; and if you consult a smartphone while playing a casino game, you’ll be bounced out of there quicker than a deadbeat who just lost his last chip.
But the program does neatly illustrate the quandary of people in the field with respect to the well-known existential alarms. Their programs currently have narrow, specific applications. A poker-playing bot, as smart as it may seem to a human poker player, is not a threat to humanity. Its creators know every piece of code in it, and nowhere is there a line in danger of subjugating the human race. The same observation applies to AlphaGo, the program that beat the human Go champion ten years ahead of expectations. How are AI coders supposed to react to those pleas for circumspection?
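The "every piece of code" point is easy to appreciate once you see how small the core of such a bot is. Bowling's group published their method as counterfactual regret minimization, and its central trick, regret matching, fits in a few lines. Below is a sketch of regret matching applied to rock-paper-scissors against a fixed, exploitable opponent; this is a toy stand-in for illustration, nowhere near the scale of the real bot, and the opponent's mix is my own made-up example.

```python
import random

# Regret matching, the core of counterfactual regret minimization:
# play each action in proportion to how much you regret not having
# played it in the past. Demonstrated on rock-paper-scissors.

random.seed(0)
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # my action vs. theirs

def strategy_from_regrets(regrets):
    """Mix actions in proportion to positive cumulative regret."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS  # no regrets yet: play uniformly
    return [p / total for p in positives]

def train(iterations=20000):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    opponent = [0.4, 0.3, 0.3]  # leans rock, so paper is the best response
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
        my_action = random.choices(range(ACTIONS), strat)[0]
        opp_action = random.choices(range(ACTIONS), opponent)[0]
        # Regret for a = what a would have earned, minus what I earned.
        for a in range(ACTIONS):
            regrets[a] += PAYOFF[a][opp_action] - PAYOFF[my_action][opp_action]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = train()  # average strategy shifts heavily toward paper (index 1)
```

Every line is inspectable, and none of it plots against humanity; the real bot's achievement was running this kind of self-correction across billions of poker situations, not inventing anything sinister.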
That dilemma might appear more real to the developers of another program I saw at the conference, which assimilates the events in a naval battle and in real time determines how to react. I saw simulated video of automated weapons deployments to defend a battleship against an aerial attack. You can draw a line from that to SkyNet a lot more readily than with the poker bot, although it is no more conscious or unstable. It’s just in charge of something more serious than a pile of gaming chips. So, of course, is the software inside a power station, a fly-by-wire plane, and a pacemaker.
I’ll be unpacking the takeaways from the conference more in future blog entries. Stay tuned.