cross-posted from: https://lemmy.world/post/43768262

Some may have believed they were against AI being used for war. They just don’t want it to make the final kill decision.

The argument given by Anthropic's supporters is that AI in the military was inevitable, so their position is a reasonable one.

  • XLE@piefed.social · edited · 2 days ago

    You said Anthropic didn’t want to develop autonomous weapons. Anthropic contradicts you. They do want to develop them.

    Can you acknowledge this fact?

    I love how Anthropic only draws the line at autonomously killing Americans, too. I guess some lives are worth more than others.

      • 0_o7@lemmy.dbzer0.com · 2 days ago

        They’re building tools to <cull people and children> from halfway across the world and you’re worried about the tone?

      • Serinus@lemmy.world · 2 days ago

        It’s not a very solid point. They said autonomous weapons may become necessary at some point, but that right now they’re irresponsible.

        They’re not ruling it out in the future, but their focus is on today’s problem.

        • XLE@piefed.social · 2 days ago

          Serinus, did you see the part where Anthropic wants to develop them with the US military?

          • Iconoclast@feddit.uk · edited · 1 day ago

            with our two requested safeguards in place.

            Said safeguards being that their technology isn’t used for mass surveillance or the development of autonomous drones. It’s explicitly mentioned in their statement - the one you’re desperately trying to massage and misquote to make it seem like they’re saying something they’re not - yet anyone can just go and read it themselves.

            • XLE@piefed.social · 1 day ago

              Iconoclast, I see you edited your post after I replied. You did not answer whether you accept the fact that Anthropic explicitly wanted to develop fully autonomous AI alongside the Trump Department of “War.”

              Either you’re lying, or you’re the one desperately trying to reshape the truth.

            • XLE@piefed.social · 1 day ago

              Iconoclast, you have moved beyond accidental deception into intentional lies.

              Anthropic offered to work directly with the Department of “War” on R&D to improve the reliability of autonomous bombing systems.

              That’s what your link says. Do you deny this explicit fact?

                • XLE@piefed.social · 1 day ago

                  Iconoclast, don’t be disingenuous.

                  The direct quote is “We have offered to work directly with the Department of War on R&D to improve the reliability of these systems”. “We” meaning Anthropic. “These systems” meaning fully autonomous weapons.

                  Do you acknowledge they did this? Try not to weasel out of answering with more pedantry. It’s almost as disturbing as your apparent defense of that Silicon Valley AI cult.

                    • Iconoclast@feddit.uk · 1 day ago

                    They are not willing to let their current models (Claude) be used in fully autonomous weapons right now, because they believe today’s frontier AI is still too unreliable and prone to errors. They explicitly say they “will not knowingly provide a product that puts America’s warfighters and civilians at risk.”

                    However, they have offered to work directly with the Department of Defense on R&D to improve the reliability of autonomous weapons technology in general (with our two requested safeguards in place) - so that in the future these systems might become safe and trustworthy enough to use.

                    They’re not ideologically against autonomous weapons systems. They’re against ones that run on our current AI models.