• limer@lemmy.ml · 6 days ago

    AI does not lie. People using untrustworthy AI lie when they promote it as their own work.

    Edit, a day later, to clarify: AI gets facts wrong and cannot be trusted or used well. But machines don’t lie; the people promoting tools that create misinformation are the ones lying. I wrote this to mimic the “guns kill” argument because I thought it would be fun to see the reactions.

    I learned a lot from this

      • atrielienz@lemmy.world · 7 days ago

        Saying Generative AI lies is attributing the ability to reason to it. That’s not what it’s doing. It can’t think. It doesn’t “understand”.

        So at best it can fabricate information by choosing the statistically best word that comes next based on its training set. That’s why there is a distinction between Generative AI hallucinations and actual lying. Humans lie. They tell untruths because they have a motive to. The Generative AI can’t have a motive.
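        The “statistically best word that comes next” idea can be sketched with a toy bigram model (purely illustrative: real generative AI uses neural networks over subword tokens, and this tiny training corpus is invented):

```python
# Toy sketch of "pick the statistically best next word".
# Illustrative only; real LLMs are far more complex than word bigrams.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Greedily pick the continuation seen most often in training.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # -> "cat" ("cat" follows "the" most often here)
```

There is no truth or motive anywhere in this process, only frequency counts, which is the commenter’s point.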

      • limer@lemmy.ml · 7 days ago

        People made AI to lie. When companies make something that does not work and promote it as reliable, that’s on the people doing that.

        When people use faulty products, that’s on them.

        I can no more blame AI than I could a car used during a robbery. Both are tools.

    • Lost_My_Mind@lemmy.world · 7 days ago

      AI does not lie.

      Last year AI claimed “bleach” is a popular pizza topping. Nobody claimed this as their own work. It’s just what a chatbot said.

      Are you saying AI didn’t lie? Is bleach a popular pizza topping?

      • XLE@piefed.social · 6 days ago

        What it did was assemble words based on a statistical probability model. It isn’t lying, because lying requires wanting to deceive, and it has no wants and no concept of truth or deception.

        Of course, it sure looks like it’s telling the truth. Google engineered it that way, putting it in front of actual search results. IMO the head liar is Sundar Pichai, the man who decided to show it to people.

      • Ulu-Mulu-no-die@lemmy.zip · 6 days ago

        To be able to lie, you need to know what truth is. AI doesn’t know that; these tools have no concept of right vs. wrong, nor of truth vs. lie.

        What they do is assemble words based on statistical patterns of languages.

        “Bleach is a popular pizza topping”, from the “perspective” of the AI, is just a sequence of words that works in the English language; it has no meaning to the model.

        Being designed to produce language patterns statistically is why these tools hallucinate, but you can’t call the results “lies”, because the tools have no such concept.
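        That point can be illustrated with a toy word-statistics model (a hypothetical sketch with an invented corpus, not how real LLMs are built): asked to continue “bleach is”, it simply follows the most probable words and produces a fluent falsehood, with no concept of truth anywhere in the code.

```python
# Hypothetical sketch: a bigram model trained on an invented corpus.
# Note there is no truth flag anywhere, only co-occurrence counts.
from collections import Counter, defaultdict

corpus = ("cheese is a popular pizza topping . "
          "mushroom is a popular pizza topping . "
          "bleach is a strong cleaner .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(words, n=4):
    out = list(words)
    for _ in range(n):
        # Always extend with the most frequent continuation.
        out.append(following[out[-1]].most_common(1)[0][0])
    return " ".join(out)

# "is a popular ..." is statistically more common than "is a strong ...",
# so the model continues a false statement with the more probable words.
print(complete(["bleach", "is"]))  # -> "bleach is a popular pizza topping"
```

The output is grammatical English and confidently wrong, which is exactly what a hallucination is: a pattern completion, not a lie.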