Title says it all

  • TheFriar@lemm.eeM
    23 days ago

    If I were you I’d send this to some media outlets. Tank some AI stock and create some more negative news around it.

  • Lyra_Lycan@lemmy.blahaj.zone
    24 days ago

    I’ve got a couple ads for an AI chat on Android, can’t remember the name but it has a disclaimer onscreen that reads something like “All characters shown are in their grown-up form”, implying that there are teen or child forms that you can communicate with.

    • ckmnstr@lemmy.worldOP
      24 days ago

      I saw something similar! Reported it to Google ads and of course they “couldn’t find any ToS violations”

  • viciouslyinclined@lemmy.world
    23 days ago

    And the bot has 882.9k chats.

    I’m not surprised, and I don’t think you or anyone else is either. But that doesn’t make this any less disturbing.

    I’m sure the app devs are not interested in cutting off a huge chunk of their loyal users by doing the right thing and getting rid of those types of bots.

    Yes, it’s messed up. In my experience, it is difficult to report chat bots and see any real action taken as a result.

  • jsomae@lemmy.ml
    23 days ago

    There’s plausible denia… nah, I got nothin’. That’s messed up. Even for the most mundane, non-gross use case imaginable, why the fuck would anybody need a creepy digital facsimile of a child?

  • Ceedoestrees@lemmy.world
    24 days ago

    Yep. I dick around on a similar platform because a friend built it.

    The amount of shit I’ve reported is insane. Pedos just keep coming back with new accounts. Even with warnings and banned words, they find a way.

  • bdonvr@thelemmy.club
    23 days ago

    Unfortunately in a lot of places there’s really nothing illegal if it’s just fantasy and text.

    • gandalf_der_12te@discuss.tchncs.de
      23 days ago

      Why is that unfortunate, though? Who would you be protecting by making that chatbot illegal? Would you “protect” the chatbot? Would you “protect” the good-think of the users? Do you think it’s about preventing “normalization” of these thoughts?

      In the case of the latter: we had the very same discussion about shooter video games, and the evidence shows that shooter games do not make people more violent or likely to kill with guns and other weapons.

      • zalgotext@sh.itjust.works
        23 days ago

        I don’t think it’s the same discussion; video games and AI chatbots are two very different things that you engage with in very different ways.

    • ckmnstr@lemmy.worldOP
      23 days ago

      I agree in principle, but look at the number of interactions. I think there’s a fine line between creating safe spaces for urges and outright promoting and normalizing criminal activity. I don’t think this should be a) this accessible and b) happening without psychiatric supervision. But maybe I’m being too judgemental.