Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • davidgro@lemmy.world · 3 days ago · +15 / −5

    … I want clear evidence that the LLM … will never hallucinate or make something up.

    Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.

    • BertramDitore@lemm.ee · 2 days ago · +6

      Let’s say I open a medical textbook a few different times to find the answer to something concrete, and each time the same reference material leads me to a different answer, every one of them wrong but confidently passed off as right. Then yes, that medical textbook should be banned.

      Quality control is incredibly important, especially when people will use these systems to make potentially life-changing decisions for them.

      • davidgro@lemmy.world · 2 days ago · +4

        especially when people will use these systems to make potentially life-changing decisions for them.

        That specifically is the problem. I don’t have a solution, but treating and advertising these things like they think and know stuff is a mistake that of course the companies behind them are encouraging.

    • mosiacmango@lemm.ee · 3 days ago (edited) · +13

      If “they have to use good data and actually fact check what they say to people” kills “all machine learning models,” then it’s a death they deserve.

      The fact is that you can do the above; it’s just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer far fewer questions) than pretending to be the “answer to everything machine.”

      • Redex@lemmy.world · 3 days ago · +4 / −2

        The way generative AI works means that no matter how good the data is, it’s still gonna bullshit and lie; it won’t “know” whether it knows something or not. It’s a chaotic process, and no ML algorithm has ever produced 100% correct results.

        • mosiacmango@lemm.ee · 2 days ago · +5

          That’s how they work now, trained with bad data and designed to always answer with some kind of positive response.

          They absolutely can be trained on actual data, trained to give less confident answers, and have an error checking process run on their output after they formulate an answer.
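          A minimal, hedged sketch of that kind of post-generation “error checking process” (toy stand-in functions, not any real vendor’s pipeline): the model drafts an answer, and a separate verifier refuses to return the draft unless a trusted source backs it up.

```python
# Toy sketch of a generate-then-verify gate, as the comment describes.
# generate() and the sources list are hypothetical stand-ins for a real
# model and a curated corpus of fact-checked reference material.

def supported_by(answer: str, sources: list[str]) -> bool:
    """Toy verifier: accept only answers found verbatim in a trusted source."""
    return any(answer.lower() in s.lower() for s in sources)

def checked_answer(question: str, generate, sources: list[str]) -> str:
    """Draft an answer, then refuse to return it unless a source supports it."""
    draft = generate(question)
    if supported_by(draft, sources):
        return draft
    return "I can't verify that against my sources."
```

          A real system would need a far smarter verifier (semantic matching, citation lookup), but the control flow — answer, check, abstain on failure — is the point.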

          • davidgro@lemmy.world · 2 days ago · +1

            There’s no such thing as perfect data. Especially if there’s even the slightest bit of subjectivity involved.

            Even less existent is complete data.

            • mosiacmango@lemm.ee · 2 days ago · +2

              Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what’s possible as an excuse to not even try.

              They could indeed build models that worked on actual data from expert sources, and then have their agents check those sources for more correct info when they create an answer. They don’t want to, for all the same reasons I’ve already stated.

              It’s possible. It doesn’t “doom” LLMs; it just massively increases their accuracy and actual utility at the cost of money, effort, and killing the VC hype cycle.

              • davidgro@lemmy.world · 2 days ago · +1

                The original thread poster (OTP?) implied perfection when they emphasized the “will never” part, and I was responding to that. For that matter, it also excludes actual brains.