• db2@lemmy.world · 68 upvotes · 16 hours ago

      Couldn’t they use that to argue they’re being unjustly targeted because they’re smaller and easier to pick on?

      • SolacefromSilence@fedia.io · 59 upvotes · 16 hours ago

        No one cares if they’re small or unjustly picked on. If they want to beat the charges, they need to announce their own AI trained on the data.

        • tempest@lemmy.ca · 33 upvotes · 12 hours ago

          It would make me laugh if they could train an LLM that could only regurgitate content verbatim

          • Dran@lemmy.world · 6 upvotes · 5 hours ago

            https://en.wikipedia.org/wiki/Markov_chain

            Before the advent of AI, I wrote a Slack bot called slackbutt that built Markov chains of random order (between 2 and 4) from the channel’s chat history. It was surprisingly coherent. Making an “llm” like that would be trivial.
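
            Roughly what that looks like (a hypothetical reconstruction, not slackbutt’s actual code; the history string stands in for real chat logs): build a table mapping each n-word prefix seen in the history to the words that followed it, then walk the table, picking a follower at random each step.

                import random
                from collections import defaultdict

                def build_chain(text, order=2):
                    # Map each `order`-word prefix to every word that followed it.
                    words = text.split()
                    chain = defaultdict(list)
                    for i in range(len(words) - order):
                        chain[tuple(words[i:i + order])].append(words[i + order])
                    return chain

                def generate(chain, order=2, length=30):
                    out = list(random.choice(list(chain)))  # random starting prefix
                    for _ in range(length):
                        followers = chain.get(tuple(out[-order:]))
                        if not followers:
                            break  # dead end: this prefix was never continued
                        out.append(random.choice(followers))
                    return " ".join(out)

                history = "the quick brown fox jumps over the lazy dog and the quick brown cat"
                print(generate(build_chain(history)))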

            • [object Object]@lemmy.world · 5 upvotes · 5 hours ago

              Reddit has at least one sub where the posts and the comments are all generated by Markov-chain bots. More than a few times I’ve gotten a post from there in my feed and read through it, confused, for several minutes before realizing. IIRC it’s called subreddit_simulator.

              • Meron35@lemmy.world · 3 upvotes · 4 hours ago

                The original subreddit simulator ran on simple Markov chains.

                Subreddit simulator GPT2 used GPT-2 and was already so spookily accurate that, IIRC, its creators specifically said they wouldn’t create one based on GPT-3, out of fear that people wouldn’t be able to tell real content from generated content.

          • Natanael@infosec.pub · 3 upvotes · 5 hours ago

            It’s actually kinda easy. Neural networks are just weirder than the usual logic-gate circuits. You can program them just the same, inserting explicit controlled logic and deterministic behavior. Somebody who doesn’t know the details of LLM training wouldn’t be able to tell much of a difference: it would be packaged as a bundle of node weights and work with the same interfaces and all.

            The reason this doesn’t work well when you try to insert strict logic into a traditional LLM, despite the node properties being well known, is how intricately interwoven and mutually dependent all the different parts of the network are (that’s why it’s a LARGE language model). You can’t just arbitrarily edit anything, insert more nodes, or replace logic, because you don’t know what you might break. It’s easier to place the inserted logic outside the LLM network and train the model to interact with it (“tool use”).
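
            A toy version of that first point (my own sketch, not from any real model): a single “neuron” whose weights are picked by hand rather than trained, so it deterministically computes logical AND while still being packaged as nothing but weights and a threshold.

                # Hand-wired weights: no training involved. The neuron fires (outputs 1)
                # only when the weighted sum of its inputs clears the threshold,
                # which with these values happens exactly when both inputs are 1.
                W = (1.0, 1.0)
                BIAS = -1.5

                def neuron(a, b):
                    s = W[0] * a + W[1] * b + BIAS  # weighted sum
                    return 1 if s > 0 else 0        # hard step activation

                for a in (0, 1):
                    for b in (0, 1):
                        print(f"{a} AND {b} = {neuron(a, b)}")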

          • ilinamorato@lemmy.world · 15 upvotes · 12 hours ago

            Well, it’s not an LLM, but “AI” doesn’t have a defined meaning, so from that perspective they kind of already did.

        • UnderpantsWeevil@lemmy.world · 5 upvotes, 1 downvote · 15 hours ago

          If they want to beat the charges, they need to announce ~~their own AI trained on the data~~ several billion in Series A investment funding.