• Apepollo11@lemmy.world
    8 days ago

    With respect, it sounds like you have no idea about the range of nonsense human students are capable of submitting even without AI.

    I used to teach Software Dev at a university, and even at MSc level some of the submissions would have paled next to even GPT-3 output. That said, I didn’t have to deal with the AI problem myself. I taught just before LLMs came into their own - TextSynth had just come out, and I used to use it as an example of how unintentional bias in training data shapes a model’s outputs.

    While I no longer teach, I still work in that space. Ironically, the best way to catch AI-written papers these days is with another AI. Detection is now built into the plagiarism-checking software, which breaks down where it found suspicious passages and explains why it flagged them.
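    Commercial detectors don’t publish their internals, but one commonly cited signal is “burstiness”: human prose tends to vary sentence length more than typical LLM output. A toy sketch of that one signal (the function name, example texts, and the idea that this alone suffices are all illustrative - real detectors combine many features):

    ```python
    import re
    import statistics

    def burstiness_score(text: str) -> float:
        """Standard deviation of sentence lengths (in words).
        Low variance is one weak signal of machine-generated text."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        if len(lengths) < 2:
            return 0.0  # not enough sentences to measure variation
        return statistics.stdev(lengths)

    # Hypothetical samples: varied vs. uniform sentence lengths.
    varied = ("I failed. Then I rewrote the whole parser from scratch "
              "over one very long weekend. It worked.")
    uniform = ("The system processes the data. The module handles the "
               "input. The service returns the output.")

    print(burstiness_score(varied) > burstiness_score(uniform))  # True
    ```

    A real checker would score many overlapping passages and report the suspicious ones with their scores, which is roughly the “where and why” breakdown described above.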

    • FiniteBanjo@feddit.online
      8 days ago

      With respect, it sounds like you have no idea about the range of nonsense human students are capable of submitting even without AI.

      Human students, and non-students, were the training data set. The LLMs will never even reach 94% fidelity to that source, no matter how many resources you throw at them. The AI is always, always going to be worse.