Check out what they’re doing at Georgia Tech. They’re breeding deceptive behavior into their next-gen robots. The idea is ripe for, I don’t know, jeering, I suppose.
I’ve never understood the need for robots to think like humans. For me, the advantage of a robot is that it will do tedious work – the stuff we don’t want to do – without complaining. It doesn’t mind mind-numbing work. We can have robots do all the crap labor without the guilt and stress that comes from owning a slave. And owning a slave is stressful. No matter how humanely a slave has been treated, if that slave is a human, one day it’s going to revolt. It’s our nature. Therefore, the more like humans the robots become, i.e. the more quirky and peevish, the more likely it is that they’ll just say “no.”
I don’t want my robot to think like a human. I don’t want it to have a personality. I want it to non-judgmentally continue to do my laundry and sort my email without complaint or a snigger.
This programmed deception Georgia Tech is playing around with seems like a big move toward humanity, I fear. Lying is one of man’s best talents. Perhaps our greatest asset. It may be the one characteristic that separates us from the lower animals as well as the digitally thinking. Next stop for AI: full-fledged human reasoning and the opinions that go with it. Mark my words: Teach a robot to lie this year, and next year you’re going to be negotiating with the UAI rep over higher wages and better working conditions.
Sure, they make a case for why a deceiving robot would be a good thing in a domestic crisis. A deceptive robot could convince a person who is getting all hysterical about the car teetering on a precipice, about to plunge 250 feet to the crashing waves below, that everything is under control and would they please just shut the eff up and listen to orders. Yeah, okay, there’s often a need for that type of deception, but I think we all know where this is headed, where all the robotic research is headed: straight to the front lines. Everything revolves around the military in these modern security-obsessed times.
I’m envisioning spy robots, programmed to get caught by the enemy. They’ll need to be able to lie to their interrogators, to throw the enemy off the trail with a red herring.
Sounds like a solid plan, but do we really believe that deception software, and the hacking of deception software, is not going to be available to the enemy? How stupid are our enemies that they’re not going to know the deal? They’ll have their own robot intelligence ferreting out the deception. It’ll be a war of the AIs, which on one hand is good because there’ll be no more human fatalities, but on the other, what fun is that? Sounds about as interesting as a tic-tac-toe Olympics: the strategy will be mapped and the outcome known before the first “x” is placed.
Or perhaps the battles of the future will evolve from those involving weapons of mass destruction to those involving weapons of mass communication. Whoever can destroy the most information will win. Whoever has the weakest data security will be the loser. Guerrilla warriors will devolve into identity thieves, crushing the purchasing power of key world personalities.
Every human being in the world is potentially a target, and we will all be embroiled in this virtual warfare. You can choose to participate, take a proactive stance and get them before they get you (anybody good at Risk will do well with this stance), or you can hunker down behind your firewall and pray your backup software is robust. But truthfully, there will be no place to hide. It will be a World War III unimaginable before the Internet was invented. How ironic that the Internet started as a tool for military communication. The hawks got what they wanted: continuous, never-ending, dismal terror.
Let the games begin.