
Singularity Watch: Teaching Robots to Lie


Check out what they’re doing at Georgia Tech. They’re breeding deceptive behavior into their next-gen robots. The idea is ripe for, I don’t know, jeering, I suppose.

I’ve never understood the need for robots to think like humans. For me, the advantage of a robot is that it will do tedious work – the stuff we don’t want to do – without complaining. It doesn’t mind mind-numbing work. We can have robots do all the crap labor without the guilt and stress that comes from owning a slave. And owning a slave is stressful. No matter how humanely a slave has been treated, if that slave is a human, one day it’s going to revolt. It’s our nature. Therefore, the more like humans the robots become, i.e. the more quirky and peevish, the more likely it is that they’ll just say “no.”

I don’t want my robot to think like a human. I don’t want it to have a personality. I want it to non-judgmentally continue to do my laundry and sort my email without complaint or snigger.

This programmed deception Georgia Tech is playing around with seems like a big move toward humanity, I fear. Lying is one of man’s best talents. Perhaps our greatest asset. It may be the one characteristic that separates us from the lower animals as well as the digitally thinking. Next stop for AI: full-fledged human reasoning and the opinions that go with it. Mark my words: Teach a robot to lie this year and next year you’re going to be negotiating with the UAI rep for higher wages and better working conditions.

Sure, they make a case for why a deceiving robot would be a good thing in a domestic crisis. A deceptive robot could fool a person who is getting all hysterical about the car teetering on a precipice, about to plunge the 250-foot drop to the crashing waves below, into believing that everything is under control and would they please just shut the eff up and listen to orders. Yeah, okay, there’s often a need for that type of deception, but I think we all know where this is headed, where all the robotic research is headed: straight to the front lines. Everything revolves around the military in these modern, security-obsessed times.

I’m envisioning spy robots, programmed to get caught by the enemy. They’ll need to be able to lie to the interrogators, to throw the enemy off the trail with a red herring.

Sounds like a sound plan, but do we really believe that deception software and hacking of deception software is not going to be available to the enemy? How stupid are our enemies that they’re not going to know the deal? They’ll have their own robot intelligence ferreting out the deception. It’ll be a war of the AI, which on one hand is good because there’ll be no more human fatality, but on the other, what fun is that? Sounds about as interesting as a tic-tac-toe Olympics: the strategy will be mapped and the outcome known before the first “x” is placed.
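Since I just claimed the outcome is known before the first move, here’s a minimal sketch of what that means in code: exhaustive minimax over tic-tac-toe. This is purely my own illustration (nothing to do with the Georgia Tech work), and it simply shows that with perfect play on both sides the value of the empty board is a forced draw.

```python
# Minimal minimax sketch: with perfect play, tic-tac-toe is always a draw.
from functools import lru_cache

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'x' or 'o' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if x can force a win, -1 if o can, 0 if best play is a draw."""
    w = winner(board)
    if w == "x":
        return 1
    if w == "o":
        return -1
    if " " not in board:           # full board, no winner: draw
        return 0
    nxt = "o" if player == "x" else "x"
    scores = [value(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == " "]
    return max(scores) if player == "x" else min(scores)

if __name__ == "__main__":
    print(value(" " * 9, "x"))     # prints 0: the game is decided before move one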

Or perhaps the battles of the future will evolve from those involving weapons of mass destruction to those involving weapons of mass communication. Whoever can destroy the most information will win. Whoever has the weakest data security will be the loser. Guerilla warriors will devolve into identity thieves, crushing the purchasing power of key world personalities.

Every human being in the world is potentially a target, and we will all be embroiled in this virtual warfare. You can choose to participate, take a proactive stance and get them before they get you (anybody good at Risk will do well with this stance), or you can hunker down behind your firewall and pray your backup software is robust. But truthfully, there will be no place to hide. It will be a World War III unimaginable before the Internet was invented. How ironic that the Internet started as a tool for military communication. The hawks got what they wanted: continuous, never-ending, dismal terror.

Let the games begin.

 



  • http://viclana.blogspot.com/ Victor Lana

    Sue, just think of HAL in 2001: A Space Odyssey. Or, even worse, the Terminator-type robots. I think this is where we’re going, and if people aren’t worried, they should be.

  • http://blogcritics.org/culture/article/must-we-always-cave-to-islamist/ A.B. Caliph

    It’ll be a war of the AI, which on one hand is good because there’ll be no more human fatality, but on the other, what fun is that?

    Obviously you’re being frivolous. But you’re also callous. First, because many of today’s wars don’t involve high tech. Look at sub-Saharan Africa. They have no AI. But they do have guns, machetes and clubs, plus the minimum natural intelligence to use them. (Unfortunately, they haven’t enough intelligence to not use them.) Their hundreds of thousands of innocent victims are tragic testament to the one-sidedness of the “fun” to be had in conventional warfare.

    Second, you’re callous because even in postindustrial nations, combat fatalities are anything but fun. Just ask the families of American soldiers killed in action in Iraq or Afghanistan. Or ask our wounded warriors at Walter Reed Medical Center, many of whom are missing one or more limbs or will otherwise be disabled for the rest of their lives. You’re wrong to make fun of these young men, Ms. Lange.

    Besides being cruel, your attempt to come across as a born-again Luddite inadvertently validates your statement, “Lying is one of man’s best talents. Perhaps our greatest asset.” But only insofar as you speak for yourself, not for mankind. Thus you lie when you write, “Do we really believe that deception software and hacking of deception software is not going to be available to the enemy? How stupid are our enemies that they’re not going to know the deal? They’ll have their own robot intelligence ferreting out the deception.”

    You’re an intelligent woman, Ms. Lange, and no doubt well informed. You know very well about 21st-century asymmetrical warfare. Technologically advanced nations such as the United States are warring against some of the world’s most backward (and proud of it) countries. Insurgents in such places don’t have access to AI. That doesn’t mean they’re stupid, though. They are clever and evil and unconditionally determined. If AI can help us defeat these ruthless enemies, who despise not just America but the very concept of modernity, then AI deserves to be treated seriously, not dismissed with such breezy empty-headedness as your article evinces.

  • Brian aka Guppusmaximus

    Nice Article…

    BUT, to instruct a machine with an algorithm to follow a set of instructions isn’t Artificial Intelligence. A robot with AI would learn & retain knowledge the same way we do. Unfortunately, human-like Artificial Intelligence is, quite possibly, a hundred or so years away. Not because we can’t build a robot but because we can’t match the human brain digitally. Don’t get me wrong, I’m a bit of a tech nerd myself, but the human brain is freaking amazing. Just its processing power alone (100 million gigaflops) can barely be matched by a football field full of supercomputers. AND, that doesn’t cover the brain’s parallel processing ability. So, it’s more likely that we’ll see the technology in the movie Surrogates before we ever see HAL or the Terminator.

  • Brian aka Guppusmaximus

    *Oops* I forgot to mention that it took IBM 147,456 processors and 144 terabytes of memory to recreate a cat’s cortex digitally, and it still ran 100 times slower. IBM says they will have a virtual human brain in 10 years… I don’t think so. It took thirty years before HP finally realized memristor technology.

  • http://www.suelangetheauthor.com Sue Lange

    A.B. Caliph,

    Please understand that the phrase “what fun is that?” is extreme sarcasm. Sorry if that wasn’t clear.

    Brian,

    Thanks for weighing in. I often need to be reminded that taking tech to the limits of science fiction imagination is silly. I love it when the folks who are actually working in the field remind me how vast the human mind is.

    Stay tuned!

  • Brian aka Guppusmaximus

    Sue,
    Well, I’m glad you didn’t take it the wrong way, because I did enjoy reading your article. It’s about time this section got back to the geeky stuff. As for me being a professional… I wish. I’m just a musician who loves researching these types of advancements in science. Honestly, if they figure out quantum computing, then my whole argument would be decimated as well.

    But I do think you’re right about the military realizing this type of science first, as they are the ones either funding the research or creating the technology.

    Again, Great Article and keep it up because it fosters such great topics for discussion.

  • http://www.suelangetheauthor.com Sue Lange

    Thanks, Brian. Will have to post something around music tech at some point. I’m a musician myself at times.

  • John Lake

    Asimov would have had fun with the idea of a robot that could deceive.
    Fourth rule of robotics:
    No robot should do harm to anyone, anytime, by misapplication of the truth!

  • http://www.suelangetheauthor.com Sue Lange

    Yeah, things are going to get tricky with the Three Laws once they have sentience.