I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there’s hype. There are ethical concerns but we’ll ignore ethics for the question.

In creative works like writing or art, it feels soulless and poor quality. In programming at best it’s a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.

When I see AI ads directed towards individuals the selling point is convenience. But I would feel robbed of the human experience using AI in place of human interaction.

So what’s the point of it all?

  • CanadaPlus@lemmy.sdf.org · 6 months ago

    In creative works like writing or art, it feels soulless and poor quality. In programming at best it’s a shortcut to avoid deeper learning, at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.

    I’d actually challenge both of these. “Soullessness” is very subjective, and AI art has won blind competitions. As for programming, empirically it’s been shown to make developers something like half again faster, even accounting for the intrinsic need to debug the output.

    It’s good at generating things. There are some things we want to generate. Whether we actually should, like you said, is another issue, and one that doesn’t impact anyone’s bottom line directly.

    • nairui@lemmy.world · 6 months ago

      Winning a competition doesn’t really speak to the purpose of art, which is communication. AI has nothing to communicate: it approximates a mishmash of its dataset, mimicking what it has seen to great effect, but its output is ultimately meaningless in intention. It would be a disservice to muddy the art and writing created by and for human beings, out of a desire to communicate, with algorithmic outputs that have no discernible purpose.

      • CanadaPlus@lemmy.sdf.org · 6 months ago

        I feel like the indistinguishability implied by this undercuts the communicative properties of the human art, no? I suppose AI might not be able to make a coherent Banksy, but not every artist is Banksy.

        If you can’t tell if something was made by Unstable or Rutkowski, isn’t it fair to say either neither work has soul (or a message), or both must?

        • nairui@lemmy.world · 6 months ago

          That’s only if one assumes the purpose of art is its effect on the viewer, which is just one purpose. Think of your favorite work of art, fiction, or music: did it make you feel connected to something, or to another person? Imagine a lonely individual who connected with the loneliness in a musical artist’s lyrics; what would that connection be worth if the artist turned out to be an algorithm?

          Banksy, maybe Rutkowski, and other artists have created a distinct language (in this case visual) that an algorithm can only replicate. Consider the fact that generative AI cannot successfully generate an image of a full glass of wine, since full glasses are rarely photographed.

          I do think that the technology itself is interesting for those that use it in original works that are intended to be about algorithms themselves like those surreal videos, I find those really interesting. But in the case of passing off algorithmic output as original art, like that guy who won that competition with an AI generated image, or when Spotify creates algorithmically generated music, to me that’s not art.

          • CanadaPlus@lemmy.sdf.org · 6 months ago

            That reminds me of the Matrix - “You know, I know this steak doesn’t exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realise? Ignorance is bliss”

            Okay, so does it matter if there’s no actual human you’re connecting to, if the connection seems just as real? We’re deep into philosophy there, and I can’t reasonably expect an answer.

            If that’s the whole issue, though, I can be pretty confident it won’t change the commercial realities on the ground. The artist’s studio is then destined to be something that exists only on product labels, along with scenic mixed-animal barnyards. Cypher was unusually direct about it, but comforting lies never went out of style.

            That’s kind of how I’ve interpreted OP’s original question here. You could say that’s not a “legitimate” use even if inevitable, I guess, but I basically doubt anyone wants to hear my internet rando opinion on the matter, since that’s all it would be.

            Consider the fact that generative AI cannot successfully generate an image of a full glass of wine, since they’re not commonly photographed.

            Okay, I have to try this. @aihorde@lemmy.dbzer0.com draw for me a glass of wine.

  • saigot@lemmy.ca · 6 months ago

    Here’s some uses:

    • Skin cancer diagnosis with LLMs has a high success rate at a low cost. This was starting to exist with older AI models, but LLMs improve the success rate. source
    • VLC recently unveiled a new feature that uses AI to generate subtitles. I haven’t used it, but if it delivers then it’s pretty nice.
    • For code generation, I agree it’s more harmful than useful for generating full programs or functions, but I find it quite useful as a predictive text generator; it saves a few keystrokes. Not a game changer, but nice. It’s also pretty useful for generating test data, so long as the data is hard to create but easy (for a human) to validate.
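The “hard to create but easy to validate” pattern in the last bullet can be sketched in a few lines of Python. Everything here — the record schema, the sample data standing in for model output — is a hypothetical illustration, not any particular tool’s API:

```python
# A minimal sketch of validating machine-generated test data:
# producing realistic records is tedious by hand, but checking
# them against a schema is cheap and mechanical.

def validate_record(record: dict) -> bool:
    """Accept a record only if it has the expected fields and types."""
    return (
        isinstance(record.get("name"), str)
        and isinstance(record.get("age"), int)
        and 0 <= record["age"] <= 120
        and "@" in record.get("email", "")
    )

# Pretend these came back from a code-generation model.
generated = [
    {"name": "Ada", "age": 36, "email": "ada@example.com"},
    {"name": "Bob", "age": -5, "email": "bob@example.com"},   # invalid age
    {"name": "Eve", "age": 29, "email": "not-an-email"},      # invalid email
]

# Keep only the records the human-written validator signs off on.
usable = [r for r in generated if validate_record(r)]
```

The point is the asymmetry: writing `generated` by hand is the boring part a model can do, while `validate_record` stays short, human-written, and trustworthy.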
  • m-p{3}@lemmy.ca · 6 months ago

    I treat it as a newish employee. I don’t let it do important tasks without supervision, but it does help build something rough that I can work on.

  • simple@lemm.ee · 6 months ago

    People keep meaning different things when they say “Generative AI”. Do you mean the tech in general, or the corporate AI that companies overhype and try to sell to everyone?

    The tech itself is pretty cool. GenAI is already being used to quickly subtitle and translate any form of media. Image AI is really good at upscaling low-res images, making them clearer by filling in the gaps. Chatbots are fallible, but they’re still really good for specific things like generating testing data or quickly helping you with basic tasks that might otherwise have you searching for 5 minutes. AI is huge in video games through upscaling tech like DLSS, which can boost performance by running the game at a low resolution and then upscaling it; the result is genuinely great. It’s also used to de-noise raytracing and show cleaner reflections.
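The “filling in the gaps” part of upscaling can be illustrated with the classical baseline that learned upscalers improve on. DLSS itself is a trained neural network; the toy function below is only plain linear interpolation on a made-up 1-D row of pixel values, shown to make the idea of inventing in-between samples concrete:

```python
# A toy illustration of "filling in the gaps" when upscaling:
# classic interpolation guesses missing samples from their neighbours,
# while learned upscalers (DLSS-style) predict them from training data.

def upscale_linear(row, factor=2):
    """Increase the resolution of a 1-D row of pixel values by
    inserting linearly interpolated samples between neighbours."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        # Insert (factor - 1) evenly spaced in-between values.
        for k in range(1, factor):
            out.append(a + (b - a) * k / factor)
    out.append(row[-1])
    return out

low_res = [0, 10, 20, 10]
high_res = upscale_linear(low_res)  # [0, 5.0, 10, 15.0, 20, 15.0, 10]
```

A learned upscaler replaces the `a + (b - a) * k / factor` guess with a prediction from data, which is why it can recover texture that simple interpolation blurs away.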

    Also people are missing the point on why AI is being invested in so much. No, I don’t think “AGI” is coming any time soon, but the reason they’re sucking in so much money is because of what it could be in 5 years. Saying AI is a waste of effort is like saying 3D video games are a waste of time because they looked bad in 1995. It will improve.

    • robot_dog_with_gun [they/them]@hexbear.net · 6 months ago

      AI is huge in video games for upscaling tech like DLSS which can boost performance by running the game at a low resolution then upscaling it, the result is genuinely great

      frame gen is blurry af and eats shit on any fast motion. rendering games at 640x480 and then scaling them to sensible resolutions is horrible artistic practice.

      • PolandIsAStateOfMind@lemmy.ml · 6 months ago

        rendering games at 640x480 and then scaling them to sensible resolutions is horrible artistic practice.

        Is that the reason a lot of pixel art games look like shit? I remember the era of 320x240 and 640x480, and modern pixel art looks noticeably worse.

          • Horse {they/them}@lemmygrad.ml · 6 months ago

            A good example is Dracula’s eyes in Symphony of the Night: on a CRT the red bleeds over, giving a really good glowing-red-eyes effect. On an LCD they’re just single red pixels and look awful.

  • Vanth@reddthat.com · 6 months ago

    Idea generation.

    E.g., I asked an LLM client for interactive lessons for teaching 4th graders about aerodynamics, especially related to how birds fly. It came back with 98% amazing suggestions that I only had to modify slightly.

    A work colleague asked an LLM client for wedding vow ideas to break through writer’s block. The vows they ended up using were 100% theirs, but the AI spit out something on paper to get them started.

    • Mr_Blott@feddit.uk · 6 months ago

      Those are just ideas that were previously “generated” by humans, though, which the LLM learned from.

      • TheRealKuni@lemmy.world · 6 months ago

        Those are just ideas that were previously “generated” by humans though, that the LLM learned

        That’s not how modern generative AI works. It isn’t sifting through its training dataset to find something that matches your query like some kind of search engine. It’s taking your prompt and passing it through its massive statistical model to come to a result that meets your demand.
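The “passing it through a statistical model” step can be sketched in miniature. The vocabulary and logit scores below are invented for illustration; real models score tens of thousands of tokens with billions of parameters, but the shape of the computation — score everything, normalize, sample — is the same, and there is no dataset lookup anywhere in it:

```python
import math
import random

# A toy sketch of statistical generation: the model assigns every
# vocabulary item a raw score (a logit), softmax turns those scores
# into a probability distribution, and the next token is sampled.

vocab = ["the", "cat", "sat", "mat"]

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, rng):
    """Sample one token according to the softmax distribution."""
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Pretend the model, given some prompt, produced these scores.
logits = [0.1, 2.5, 0.3, 1.0]
token = sample_next_token(logits, random.Random(0))
```

Nothing here searches a training set; the training data only shaped the (here, hard-coded) scores, which is why the output is a fresh sample rather than a retrieved match.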

  • octochamp@lemmy.ml · 6 months ago

    AI saves time. There are few use cases for which AI is qualitatively better, perhaps none at all, but there are a great many use cases for which it is much quicker and even at times more efficient.

    I’m sure the efficiency argument is one that could be debated, but it makes sense to me in this way: for production-level outputs AI is rarely good enough, but creates really useful efficiency for rapid, imperfect prototyping. If you have 8 different UX ideas for your app which you’d like to test, then you could rapidly build prototype interfaces with AI. Likely once you’ve picked the best one you’ll rewrite it from scratch to make sure it’s robust, but without AI then building the other 7 would use up too many man-hours to make it worthwhile.

    I’m sure others will put forward legitimate arguments about how AI will inevitably creep into production environments etc, but logistically then speed and efficiency are undeniably helpful use cases.

    • bobbyfiend@lemmy.ml · 6 months ago

      As some witty folks have put it, LLMs can’t give you anything truly, interestingly new when all they’re capable of is some weighted average of what’s already there. And I’ll be clear in saying I hate, with the force of a tsunami, the way AI is being shoved at us by desperate CEOs, and how it’s being used to kill labor, destroy copyright law, increase income inequality, destroy the environment, and increase the power of huge corporations headed by assholes like Altman and Musk.

      But AI is getting pretty good at that weighted-average-of-what’s-out-there, and a lot of the work done in several industries can benefit from that. For me, one of the great perversities or tragedies of AI is that it could be a targeted, useful tool but, instead, it’s a hammer to further erode freedom. Even the coders, editors, advertisers, educators, etc. using it to do their jobs are participating in a short-term selloff of their profession to their CEOs and shareholders, at the expense of the many colleagues or potential colleagues who will now never get jobs.

      It’s like if someone invented the wheel and Sam Altman immediately patented it and sold it to Raytheon.