Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • mosiacmango@lemm.ee · 3 days ago

    That’s how they work now: trained on bad data and designed to always answer with some kind of positive-sounding response.

    They absolutely can be trained on actual data, trained to give less confident answers, and have an error-checking process run on their output after they formulate an answer.
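    The "formulate, then check" loop described above can be sketched roughly like this (a minimal sketch: `generate` and `fact_check` are invented stand-ins for a real model and a real verifier, not any actual API):

```python
# Hypothetical sketch of a generate-then-verify answering loop.
# generate() and fact_check() are invented placeholders, not real APIs.

def generate(prompt: str) -> tuple[str, float]:
    # Stand-in for a model call: returns an answer plus a confidence score.
    return "The Eiffel Tower is in Paris.", 0.92

def fact_check(answer: str) -> bool:
    # Stand-in for a verifier that cross-references trusted sources.
    known_facts = {"The Eiffel Tower is in Paris."}
    return answer in known_facts

def answer_with_checking(prompt: str, min_confidence: float = 0.8) -> str:
    answer, confidence = generate(prompt)
    if confidence < min_confidence:
        return "I'm not sure."            # decline instead of guessing
    if not fact_check(answer):
        return "I couldn't verify that."  # error-checking pass failed
    return answer

print(answer_with_checking("Where is the Eiffel Tower?"))
```

    The point of the sketch is only the control flow: the model's draft answer is gated by a confidence threshold and a separate verification pass before anything reaches the user.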

    • davidgro@lemmy.world · 3 days ago

      There’s no such thing as perfect data. Especially if there’s even the slightest bit of subjectivity involved.

      Complete data is even less attainable.

      • mosiacmango@lemm.ee · 2 days ago

        Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what’s possible as an excuse to not even try.

        They could indeed build models that worked on actual data from expert sources, and then have their agents check those sources for more correct info when they create an answer. They don’t want to, for all the same reasons I’ve already stated.

        It’s possible. It doesn’t “doom” LLMs; it just massively increases their accuracy and actual utility, at the cost of money, effort, and killing the VC hype cycle.

        • davidgro@lemmy.world · 2 days ago

          The original thread poster (OTP?) implied perfection when they emphasized the “will never” part, and I was responding to that. For that matter, it also excludes actual brains.