Should AI bots lie?

  • #1

    Should AI bots lie? Hard truths about artificial intelligence

    "People working in teams do things -- such as telling white lies -- that can help the team be successful. We accept that, usually, when a person does it. But what if an AI bot is telling the lie, or being told a lie?

    "More importantly, if we allow bots to tell people lies, even white lies, how will that affect trust? And if we do give AI bots permission to lie to people, how do we know that their lies are helpful to people instead of the bot?"

    And my favorite quote: "Compared to civil engineering ... software construction has all the discipline of a pack of rabid ferrets. So, yeah, let's celebrate AI's coming good times, before the bad times roll."


  • #2
    "You can't handle the truth."
    I Earned my Spurs in Vietnam
    48th AHC 1971-72



    • #3
      "A robot must obey orders given it by a human being except where such orders would conflict with the First Law."



      • #4
        Originally posted by Russell Holton View Post
        "More importantly, if we allow bots to tell people lies, even white lies, how will that affect trust? And if we do give AI bots permission to lie to people, how do we know that their lies are helpful to people instead of the bot?"
        Hi Russell,

Perhaps it’s time to implement Isaac Asimov's "Three Laws of Robotics," with which I suspect you’re familiar:

        1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

        2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

        3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

        Regards,
        Tom Charlton



        "The aeroplane has unveiled for us the true face of the earth." - Antoine de Saint-Exupery



        • #5
I am increasingly thankful that I am of an age to not really give a rat's keister about how all this will play out.

          Humans are playing around with AI with all the consideration and forethought of a chimpanzee that has been given a hand grenade. Except that, unlike the chimp, we know that there is a non-zero possibility that things will work out very badly, as in a near-extinction level event for humanity, but we continue to muck around with it.



          • #6
            Originally posted by Tom Charlton View Post
            Isaac Asimov's "Three Laws of Robotics"
            ...which assume the robot gets to decide what "harm" is.



            • #7
"Liar!" by Isaac Asimov.

              https://en.wikipedia.org/wiki/Liar!_(short_story)



              • #8
                Fixed it for you!



                • #9
                  Originally posted by Tom Charlton View Post
                  2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
                  That needs some adjustment. We already limit (for good reasons) who can give what orders.



                  • #10
                    Re favorite quote: If we built buildings the way we build software, the first woodpecker to come along would destroy civilization.
                    Bacon is the answer. I forgot the question.



                    • #11
                      I'm with Stephanie.
