Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • untakenusername@sh.itjust.works · 2 days ago

    OpenAI, for example, needs to be regulated with the same intensity as a much smaller company

    Not too long ago they went to Congress to push for much heavier regulation of the AI industry, and wanted the government to require licenses to train large models. Large companies can benefit from regulations when those regulations aren’t easy for smaller competitors to follow.

    And OpenAI should have no say in how they are regulated.

    For sure, otherwise regulation could be made too restrictive, lowering competition.

    Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

    I think that’s technically really difficult, but maybe it could happen if the model’s output were checked against preexisting sources, like what Google uses for grounding Gemini.
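    To make the idea concrete, here is a minimal sketch of "check the output against preexisting sources". It uses a naive word-overlap score; all names are hypothetical, and real grounding systems (like what Google does for Gemini) use retrieval plus trained verifiers, not simple overlap:

    ```python
    # Naive grounding check: flag model claims whose words don't
    # sufficiently overlap with any trusted source snippet.
    # Illustrative only -- real fact-checking is far harder than this.

    def support_score(claim: str, source: str) -> float:
        """Fraction of the claim's words that also appear in the source."""
        claim_words = set(claim.lower().split())
        source_words = set(source.lower().split())
        if not claim_words:
            return 0.0
        return len(claim_words & source_words) / len(claim_words)

    def flag_unsupported(claims, sources, threshold=0.7):
        """Return claims whose best overlap with any source is below threshold."""
        return [
            c for c in claims
            if max((support_score(c, s) for s in sources), default=0.0) < threshold
        ]

    claims = ["the moon orbits the earth", "the moon is made of cheese"]
    sources = ["The Moon orbits the Earth roughly every 27 days."]
    print(flag_unsupported(claims, sources))  # ['the moon is made of cheese']
    ```

    Even this toy version shows the hard part: a claim can overlap a source word-for-word and still contradict it, which is why "never hallucinate" is such a high bar.
    
    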

    Every step of any deductive process needs to be citable and traceable.

    I’m pretty sure this is completely impossible