• Scrubbles@poptalk.scrubbles.tech · 217 points · 5 months ago

    The majority of “AI Experts” online that I’ve seen are business majors.

    Then a ton of junior/mid software engineers who have used the OpenAI API.

    Finally, there are the very, very few technical people who have interacted with models directly, maybe even trained some models, and coded directly against them. And even then I don’t think many of them truly understand what’s going on in there.

    Hell, I’ve been training models and using ML directly for a decade and I barely know what’s going on in there. Don’t worry, I get the image; I’m just calling out how frighteningly few people actually understand it, yet so many swear they know AI super well.

    • waigl@lemmy.world · 73 points · 5 months ago

      And even then I don’t think many of them truly understand what’s going on in there.

      That’s just the thing about neural networks: nobody actually understands what’s going on in there. We’ve put an abstraction layer over how we do things, and we know we will never be able to pierce it.

      • notabot@piefed.social · 44 points · 5 months ago

        I’d argue we know exactly what’s going on in there; we just don’t necessarily know, for any particular model, why it’s going on in there.

      • sp3ctr4l@lemmy.dbzer0.com · 17 points · 5 months ago

        Ding ding ding.

        It all became basically magic, blind trial and error, roughly ten years ago with AlexNet.

        After AlexNet, everything became increasingly black-box and opaque, even to the actual PhD-level people crafting and testing these things.

        Since then, it has basically been ‘throw all existing information of any kind at the model’ to train it better, plus a bunch of basically slapdash optimization attempts which work for largely ‘I don’t know’ reasons.

        Meanwhile, we could be pouring even 1% of the money going toward LLMs and convolutional-network-derived models into other paradigms, such as maybe trying to actually emulate real brains and real neuronal networks… but nope, everyone is piling into basically one approach.

        That’s not to say research on other paradigms is nonexistent, but it is barely existent in comparison.

        • mrmacduggan@lemmy.ml · 11 points · 5 months ago

          This method is definitely a great way to achieve some degree of explainability for images, but it is based on the assumption that nearby pixels will have correlated meanings. When AI is making connections between far-away features, or worse, in a feature space that cannot be readily visualized like images can, it can be very hard to decouple the nonlinear outputs into singular linear features. While AI explainability has come a long way in the last few years, the decision-making processes of AI are so different from human thought that even when it can “show its work” by showing which neurons contributed to the final result, it doesn’t necessarily make any intuitive sense to us.

          For example, an image-identification AI might identify subtle lens blur data to determine the brand of camera that took a photograph, and then use that data to make an educated guess about which country the image was taken in. It’s a valid path of reasoning. But it would take a lot of effort for a human analyst to notice that the AI is using this process to slightly improve its chances of getting the image identification correct, and there are millions of such derived features that combine in unexpected ways, some logical and some irrationally overfitting to the training data.
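
          To make the “nearby pixels” assumption concrete, here is a minimal occlusion-sensitivity sketch (every name is hypothetical; any real image classifier could stand in for predict_prob). It masks one local patch at a time and records how much the class score drops, which only produces a readable heatmap because spatially adjacent pixels tend to carry related meaning:

          ```python
          import numpy as np

          def occlusion_map(image, predict_prob, patch=8, stride=8):
              """Score each patch by how much occluding it lowers the class probability."""
              base = predict_prob(image)                      # score on the untouched image
              heat = np.zeros_like(image, dtype=float)
              for y in range(0, image.shape[0] - patch + 1, stride):
                  for x in range(0, image.shape[1] - patch + 1, stride):
                      masked = image.copy()
                      masked[y:y + patch, x:x + patch] = 0.0  # blank out one local patch
                      heat[y:y + patch, x:x + patch] = base - predict_prob(masked)
              return heat  # high values mark regions the model leaned on
          ```

          For feature spaces that are not image-like there is no neighbourhood to occlude, which is exactly where this kind of explanation stops helping.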

    • expr@programming.dev · 40 points · 5 months ago

      Yeah, I’ve trained a number of models (as part of actual CS research, before all of this LLM bullshit), and while I certainly understand the concepts behind training neural networks, I couldn’t tell you the first thing about what a model I trained is doing. That’s the whole thing about the black box approach.

      Also why it’s so absurd when “AI” gurus claim they “fixed” an issue in their model that resulted in output they didn’t want.

      No, no you didn’t.

      • Scrubbles@poptalk.scrubbles.tech · 14 points · 5 months ago

        Love this, because I completely agree. “We fixed it and it no longer does the bad thing.” Uh, no, incorrect. Unless you literally went through your entire dataset, stripped out every single occurrence of the thing, and retrained the model, there is no way that you 100% “fixed” it.
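
        For scale, actually “fixing” it looks roughly like the hypothetical scrub-and-retrain sketch below (the dataset, predicate, and training function names are all made up), which is almost never what people mean when they claim a fix:

        ```python
        def scrub(dataset, is_offending):
            """Drop every training example the predicate flags as the unwanted thing."""
            return [example for example in dataset if not is_offending(example)]

        # Hypothetical usage: filter the whole corpus, then pay for a full retrain.
        # cleaned = scrub(raw_examples, lambda text: "bad thing" in text.lower())
        # model = train_from_scratch(cleaned)
        ```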

        • ragas@lemmy.ml · 6 points · 5 months ago

          I mean, I don’t know for sure, but I think they often just code in program logic to filter out certain requests that they don’t want to handle.

          My evidence for that is that I can trigger some “I cannot help you with that” responses by asking completely normal things that just use the wrong word.
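
          That failure mode is easy to reproduce with a plain blocklist. In this made-up sketch (the word list and function are purely illustrative), a completely benign question gets refused just for containing the wrong word:

          ```python
          # Naive keyword filter: rejects anything containing a flagged word,
          # regardless of context, which is how harmless prompts get refused.
          BLOCKED_WORDS = {"exploit", "weapon", "attack"}

          def filter_request(prompt: str) -> str:
              if set(prompt.lower().split()) & BLOCKED_WORDS:
                  return "I cannot help you with that."
              return "OK, forwarding to the model."

          print(filter_request("How do heart attack symptoms differ in women?"))
          # -> "I cannot help you with that."  (false positive on "attack")
          ```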

          • Scrubbles@poptalk.scrubbles.tech · 1 point · 5 months ago

            It’s not 100%. You’re more or less just asking the LLM to behave, and then filtering the response through another imperfect model that tries to decide whether the request is malicious or not. It’s not standard coding where a boolean is returned; it’s a probability, according to another model, that what the user asked is appropriate. If the probability is over a threshold, it rejects.
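
            As a rough sketch of that difference (hypothetical names only, not any vendor’s real moderation API), the gate is a second model’s probability compared against a threshold rather than a hand-written boolean rule:

            ```python
            THRESHOLD = 0.4  # tuning this trades false refusals against missed abuse

            def gated_answer(prompt, classify, generate):
                """classify(prompt) -> estimated P(inappropriate); generate(prompt) -> reply."""
                if classify(prompt) >= THRESHOLD:
                    return "I cannot help you with that."
                return generate(prompt)
            ```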

      • ragas@lemmy.ml · 5 points · 5 months ago

        I once trained an AI in Matlab to spell my name.

        I alternate between feeling so dumb because that is all that my model could do and feeling so smart because I actually understand the basics of what is happening with AI.

        • Amberskin@europe.pub · 4 points · 5 months ago

          I made a cat detector using Octave. Just ‘detected’ cats in small monochrome bitmaps, but hey, I felt like Neo for a while!

          • NιƙƙιDιɱҽʂ@lemmy.world · 3 points · 5 months ago

            I made a neural net from scratch with my own neural net library that could identify cats from dogs 60% of the time. Better than a coin flip, baybeee!

            • monotremata@lemmy.ca · 2 points · 5 months ago

              I made a neural net from scratch with my own neural net library and trained it on generating the next move in a game of Go, based on thousands of games from an online Go forum.

              It never even got close to learning the rules.

              In retrospect, “thousands of games” was nowhere near enough training data for such a complex task, and even if we had had enough training data, we never could have processed all of it, since all we were using was a circa-2004 laptop with no GPU. So we just really overreached with that project. But still, it was a really pathetic showing.

              Edit: I switched from “I” to “we” here because I was working with a classmate, but we did use my code. She did a lot of the heavy lifting in getting the games parsed into a form where the network could train on it, though.

    • GreenShimada@lemmy.world · 29 points · 5 months ago

      I have personally told coworkers that if they train a custom GPT, they should put “AI expert” on their resume, since that’s more than 99% of people have done - and 99% of those people didn’t do anything more than trick ChatGPT into doing something naughty once, a year ago, and now consider themselves “prompt engineers.”

    • FauxLiving@lemmy.world · 8 points · 5 months ago

      Hell, I’ve been training models and using ML directly for a decade and I barely know what’s going on in there.

      Outside of low dimensional toy models, I don’t think we’re capable of understanding what’s happening. Even in academia, work on the ability to reliably understand trained networks is still in its infancy.

    • Treczoks@lemmy.world · 5 points · 5 months ago

      NONE of them knows what’s going on inside.

      We are right back in the age of alchemy, where people speaking Latin and Greek threw more or less random things together to see what happened, all the while claiming to be trying to make gold to keep the cash flowing.