For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as illegal child sexual abuse material (CSAM) in the US.

  • ℍ𝕂-𝟞𝟝@sopuli.xyz · 13 days ago

    There are two answers to that, both equally valid.

    One is that it extrapolates: knowing what a naked adult looks like compared to a clothed adult, and what a child looks like compared to an adult, it can “add those vectors” and work out what a naked child looks like (see the sketch after this comment).

    The other is that one of the biggest porn datasets, which most of these models will have in their training data, was recently taken down because it contained a bunch of CSAM. Ironically, it came to light when an independent guy uploaded it to Google Cloud, and Google flagged the content and banned him for it.

    The dataset would not have been taken down had the guy not done the rounds afterwards, though. Google didn’t care beyond banning a user.
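
    To make the “add those vectors” idea concrete in a deliberately benign setting, here is a minimal sketch of embedding arithmetic using made-up toy vectors (the dimensions, values, and helper function are illustrative assumptions, not any particular image model’s internals): an attribute offset learned from one pair of concepts transfers to another concept.

    ```python
    # Toy illustration of vector arithmetic in an embedding space.
    # The 3-D vectors below are invented for demonstration; real models
    # learn high-dimensional embeddings from data.
    import numpy as np

    emb = {
        "king":  np.array([0.9, 0.9, 1.0]),
        "man":   np.array([0.1, 0.9, 1.0]),
        "woman": np.array([0.1, 0.1, 1.0]),
        "queen": np.array([0.9, 0.1, 1.0]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # king - man + woman lands closest to queen: the "royalty" offset
    # transfers across the gender attribute even though that combination
    # was never observed directly.
    result = emb["king"] - emb["man"] + emb["woman"]
    best = max(emb, key=lambda w: cosine(result, emb[w]))
    print(best)  # -> queen
    ```

    The same compositional property is what the comment above points to: a model that has learned separate attribute directions can combine them into outputs it was never explicitly shown.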