• ExLisper@lemmy.curiana.net · 1 day ago

    I think with all the guardrails current models have, you'd have to talk to one for weeks if not months before it degrades to the point where it will let you discuss anything remotely harmful. Then again, that's exactly what a lot of people do.

    • Fedizen@lemmy.world · 13 hours ago

      If it's sold as a permanent solution to a problem but the guardrails are temporary… idk man, it seems like anyone who incorporates this into solving any problem with AI will eventually degrade the guardrails.

      • ExLisper@lemmy.curiana.net · 11 hours ago

        Definitely not everyone. We're talking about users who keep a single chat open and have endless conversations about personal topics there. I think the majority of users ask a single question or have a short conversation and then start a new chat, and don't talk about personal problems. We're also talking about people with specific mental issues. AI is terrible for many reasons, but I think "it helps people kill themselves" is exaggerated. Before AI, people were getting sucked into online communities that encouraged suicide, but the media barely noticed the issue. Marijuana is very dangerous for people with a predisposition to certain mental illnesses like schizophrenia, but we just agree that people should keep that in mind if they're going to use it. It's the same with AI: some people shouldn't be using it, but that's not a reason for a total ban. The reason for a total ban is that it's bad for the environment, jobs, and education, and offers little benefit.

        • Fedizen@lemmy.world · 7 hours ago

          I'm seeing people use LLMs for:

          • Dating
          • Email/work tasks
          • Customer support
          • Mental health hotlines

          Notably, with dating, customer support, and mental health hotlines, people aren't always informed they're talking to an LLM bot.

          I don't think the "exposure to marijuana" analogy works here, because people are being exposed to it by businesses without consent.

          https://sfstandard.com/2025/08/26/ai-crisis-hotlines-suicide-prevention/

          • ExLisper@lemmy.curiana.net · 7 hours ago

            The issue we're talking about isn't getting a reply from a bot in a chat or phone call. We're talking about people with mental issues using AI in a way that exacerbates their problems. Specifically, we're talking about people who come to believe the AI is their personal companion and form a personal connection with it, to the point that wrong answers generated by the AI affect their well-being. The vast majority of people don't use AI like that.

    • AnarchistArtificer@slrpnk.net · 24 hours ago

      Exactly, and this is why their excuses are bullshit. They know that guardrails become less effective the more you use a chatbot, and they know that's how people are using chatbots. If they actually gave a fuck about guardrails, they'd make it so you couldn't have conversations that stretch over weeks or months. That would hurt their bottom line, though.