• Flax@feddit.uk · 4 months ago

    Actually interesting question:

    How much energy and resources would we save by simply slowing down AI response time? A lot of the time you get an instant response from an LLM, and sure, it looks impressive, but most of the time you don’t need it that urgently.

    • Lulzagna@lemmy.world · 4 months ago (edited)

      The majority of energy consumed goes into training the AI models, not into providing output from those models.

      This means most of the resource consumption is not tied to usage and prompts. It also means the resource consumption for training is temporary: it's a one-time cost per model, not an ongoing one.
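
      A rough back-of-envelope sketch of that amortization logic (all numbers below are made-up placeholders, not measurements of any real model):

      ```python
      # Amortize a one-time training cost over lifetime queries and compare it
      # with the recurring per-query inference cost. Placeholder numbers only.

      TRAINING_ENERGY_KWH = 1_000_000          # hypothetical one-time training cost
      INFERENCE_KWH_PER_QUERY = 0.0003         # hypothetical per-prompt cost
      LIFETIME_QUERIES = 10_000_000_000        # hypothetical queries served by the model

      amortized_training = TRAINING_ENERGY_KWH / LIFETIME_QUERIES
      total_per_query = amortized_training + INFERENCE_KWH_PER_QUERY

      print(f"Training cost amortized per query: {amortized_training:.6f} kWh")
      print(f"Inference cost per query:          {INFERENCE_KWH_PER_QUERY:.6f} kWh")
      print(f"Total per query:                   {total_per_query:.6f} kWh")

      # Which side dominates depends entirely on the assumed figures: the fixed
      # training cost shrinks per query as query volume grows, while the
      # inference cost stays constant per query.
      ```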

        • FaceDeer@fedia.io · 4 months ago

          That is how water use works, yes. The water goes back into the environment and is later reused.

          Also, there’s a good chance the AIs are not being trained in the same facilities that they’re later being run in. Different sorts of work are being done.

        • Lulzagna@lemmy.world · 3 months ago

          That’s irrelevant to what I was responding to; the question being asked rested on an incorrect premise, and I was correcting it.