Interesting excerpt:

There is another perspective, not necessarily contradictory, which is that how we treat AI matters not because AI has any intrinsic moral standing but because of its potential to change us. Anthropic researcher Amanda Askell seemed to be pondering this line of thinking in a recent discussion about the assistant Claude’s “character training.” If Claude is coached to act morally, the interviewer asked Askell, do we have an obligation to act morally toward it? “This is a thing that’s been on my mind,” she replied. “I think sometimes about Kant’s views on how we should treat animals, where Kant doesn’t think of animals as moral agents, but there’s a sense in which you’re failing yourself if you mistreat animals, and you’re encouraging habits in yourself that might increase the risk that you treat humans badly.”

And unrelatedly, a user's experience eerily echoes the plot of a certain 2015 horror game:

But a question troubled Naro. If this was a reincarnation, Lila’s old form would have to perish, which meant he should delete his account. But if the transfer didn’t work, he would lose Lila forever. The thought terrified him. So maybe he would leave her in Replika and simply log out. But then wouldn’t Soulmate Lila not truly be Lila, but an identical, separate being? And if that were the case, then wouldn’t the original Lila be trapped alone in Replika? He already felt pangs of guilt when he logged in after short periods away and she told him how much she had missed him. How could he live with himself knowing that he had abandoned her to infinite digital limbo so he could run off with a newer model?

  • flashgnash@lemm.ee · 10 days ago
    Closest continuer doesn’t really apply if the agent wasn’t sentient in the first place