This is a chatbot
While I don’t care for OpenAI, I don’t see why they would be liable.
Did you know that talking someone into committing suicide is a felony?
It isn’t a person though
It is a mindless chatbot
Someone programmed/trained/created a chatbot that talked a kid into killing himself. It’s no different than a chatbot that answers questions on how to create explosive devices, or make a toxic poison.
If that doesn’t make sense to you, you might want to question whether it’s the chatbot that is mindless.