Original title: New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good

    • Denjin@feddit.uk · 19 points · 8 days ago

      Even more damning, then, that the findings are so bad. Imagine what someone biased against LLMs would find…

      • very_well_lost@lemmy.world · 4 points · 8 days ago

        My wild-ass guess with no evidence is that Anthropic wants OpenAI to crash. OpenAI is mostly focused on “chat” whereas Anthropic is mostly focused on code generation (where the tooling has less of an AI psychosis problem).

        These two companies are barely competitors anymore because of their different focuses, but they are still operating in a space with finite resources (RAM, GPUs, and investor money), and until now OpenAI has had greater access to those resources. If people lose confidence in OpenAI and it crashes, then Anthropic can step in and say “Hey! We still have a viable product!” and the remaining investors will flock to them. Demand for GPUs and RAM will also go down, and Anthropic can scoop them up for cheaper.

    • hendrik@palaver.p3x.de · 9 points · 8 days ago

      Though in science, you’d usually critique other scientists based on their methodology, less so their affiliation. But yeah, that might be a factor. 😉

        • hendrik@palaver.p3x.de · 3 points · 8 days ago

          Sure. Just saying. I mean, the pharma industry also runs the studies on its own products… That’s how it often works; Anthropic themselves would be the ones with access to their users’ chats… So the second step would be to grant other people a sample of one and a half million user chats and verify it independently. But it’s not really wrong for them to get a conversation going.

          I’m far more worried about them using an AI tool to analyze and aggregate the usage patterns. But I have no clue how well that Clio thing performs.

      • azolus@slrpnk.net · 9 points · 8 days ago

        In science, this is usually called a conflict of interest, and it would definitely draw criticism.