• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
          1 month ago

          People here don’t seem to understand what LLM detection is. All it does is search for patterns that are very common in chatbot-generated speech. It’s not some magical property that’s metaphysical in nature. Either the speech was written by a chatbot, or Carney naturally talks in this sort of vapid and content-free fashion, which is common for politicians to do.

          The real tell with AI writing is in the substance. It’s the weirdly balanced, almost bloodless neutrality on complex topics, the total lack of any authentic personal stake or lived experience, and a distinct feeling that you’re reading a brilliantly comprehensive Wikipedia summary instead of a thought that formed in a human mind with memories, biases, and a body.
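The pattern-matching idea described above can be sketched in a few lines. This is a toy illustration, not any real detector: the stock-phrase list and the threshold are arbitrary assumptions chosen for the example, and real detectors use far richer statistical features.

```python
# Toy sketch of pattern-based LLM-text detection: count how often stock
# phrases common in chatbot output appear, normalized by text length.
# The phrase list and threshold are illustrative assumptions only.
import re

STOCK_PHRASES = [
    "it's important to note",
    "in conclusion",
    "delve into",
    "a testament to",
    "plays a crucial role",
]

def llm_phrase_score(text: str) -> float:
    """Return stock-phrase hits per 100 words."""
    words = len(text.split())
    if words == 0:
        return 0.0
    hits = sum(len(re.findall(re.escape(p), text.lower()))
               for p in STOCK_PHRASES)
    return 100.0 * hits / words

def looks_llm_generated(text: str, threshold: float = 1.0) -> bool:
    # Arbitrary toy threshold: one stock phrase per 100 words.
    return llm_phrase_score(text) >= threshold

sample = "In conclusion, this plays a crucial role and is a testament to progress."
print(looks_llm_generated(sample))  # prints True with this toy threshold
```

Note that a scorer like this says nothing about *who* wrote the text — it only flags that the text statistically resembles chatbot output, which is exactly the ambiguity being argued about in this thread.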

            • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
              1 month ago

              It’s obviously pretty reliable at statistically identifying patterns common to LLM-generated text. Wikipedia, having had a problem with a flood of LLM-written articles, has published a whole detailed guideline on what these patterns are and why they’re associated with LLM-generated text. I implore you to spend at least a modicum of time actually understanding the subject you’re attempting to debate here.

              https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

              • Mongostein@lemmy.ca
                1 month ago

                I know how LLMs work. Nothing you say is going to convince me that me trying it myself is going to be more reliable than you trying it.

                Like, what are you even disagreeing with me on?

                • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
                  1 month ago

                  At this point, I have no idea what you’re even trying to say here. When you say stuff like ‘it doesn’t make it more reliable’, what do you mean by that?

                  If you agree that you can reliably detect LLM speech patterns, then do you agree or disagree that the speech contains many patterns that closely resemble LLM generated text?

  • Greg Clarke@lemmy.ca
    1 month ago

    This was a speech by a world leader for a global audience. I guarantee you that the tool you are using to identify LLM usage was not trained on or designed for this kind of data set. That LLM detector is working well above its pay grade.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
      1 month ago

      Claiming that LLM speech detectors were never trained on the mountains of publicly available data from world leaders is a hell of a cope, buddy 🤣

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
      1 month ago

      nah

      edit: I love how liberals, who see themselves as bastions of rational thinking, immediately start downvoting facts they don’t like 🤡