Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hope all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
I want real, legally binding regulation that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.
I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.
Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
Every step of any deductive process needs to be citable and traceable.
Not too long ago they went to Congress to get them to regulate the AI industry a lot more, and wanted the govt to require licenses to train large models. Large companies can benefit from regulations when they aren’t easy for smaller competitors to follow.
For sure, otherwise regulation could be made too restrictive, lowering competition
I think that’s technically really difficult, but maybe if the output of the model was checked against preexisting sources that could happen, like what Google uses for Gemini
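Something like this toy check is what I have in mind; the sources, the fuzzy-match scoring, and the 0.6 threshold are all placeholders I made up, not anything Google has said it does for Gemini:

```python
# Minimal sketch of post-hoc grounding: flag model sentences that have no
# close match in a trusted reference corpus. Corpus, scoring method, and
# threshold are all illustrative placeholders.
from difflib import SequenceMatcher

TRUSTED_SOURCES = [
    "Aspirin is contraindicated in children with viral infections.",
    "Amoxicillin is a penicillin-class antibiotic.",
]

def support_score(claim: str, sources: list[str]) -> float:
    """Return the best fuzzy-match ratio between a claim and any source."""
    return max(SequenceMatcher(None, claim.lower(), s.lower()).ratio()
               for s in sources)

def flag_unsupported(answer: str, threshold: float = 0.6) -> list[str]:
    """Split an answer into sentences and return those with weak support."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if support_score(s, TRUSTED_SOURCES) < threshold]

# The second sentence has no support in the corpus, so it gets flagged.
print(flag_unsupported(
    "Amoxicillin is a penicillin-class antibiotic. It cures all viral infections."
))
```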
I’m pretty sure this is completely impossible
Their creators can’t even keep them from deliberately lying.
Exactly.
Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.
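To make the “invested cost” part concrete, reporting could amortize training energy over expected lifetime queries. Every number below is invented, just to show the shape of the math:

```python
# Illustrative arithmetic only: all figures are made up.
training_energy_kwh = 10_000_000       # hypothetical one-time training cost
expected_lifetime_queries = 1_000_000_000
per_query_inference_kwh = 0.003        # hypothetical marginal cost per query

# Amortized cost = marginal cost + share of the sunk training cost.
amortized_kwh = per_query_inference_kwh + training_energy_kwh / expected_lifetime_queries
print(f"{amortized_kwh:.4f} kWh per query, training included")
```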
I mostly agree, but “never” is too high a bar IMO. It’s way, way higher than the bar even for humans. Maybe like 0.1% or something would be reasonable?
Even Einstein misremembered things sometimes.
Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.
Let’s say I open a medical textbook a few different times to find the answer to something concrete, and each time the same reference material leads me to a different answer, every answer is wrong, and each one is confidently passed off as right. Then yes, that medical textbook should be banned.
Quality control is incredibly important, especially when people will use these systems to make potentially life-changing decisions for them.
That specifically is the problem. I don’t have a solution, but treating and advertising these things like they think and know stuff is a mistake, one that the companies behind them are of course encouraging.
If “they have to use good data and actually fact check what they say to people” kills “all machine learning models” then it’s a death they deserve.
The fact is that you can do the above, it’s just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer far fewer questions) than pretending to be the “answer to everything machine.”
The way generative AI works means that no matter how good the data is, it’s still gonna bullshit and lie; it won’t “know” whether it knows something or not. It’s a chaotic process, and no ML algorithm has ever produced 100% correct results.
That’s how they work now, trained with bad data and designed to always answer with some kind of positive response.
They absolutely can be trained on actual data, trained to give less confident answers, and have an error-checking process run on their output after they formulate an answer.
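Something like this, where the system abstains below a confidence threshold instead of guessing; the toy model and its confidence score are made-up stand-ins, not a real API:

```python
# Hedged sketch of confidence-gated answering: abstain instead of guessing.
def toy_model(question: str) -> tuple[str, float]:
    # Pretend the model returns an answer plus a calibrated confidence score.
    return ("Paris", 0.97) if "France" in question else ("unsure", 0.30)

def answer_or_abstain(question: str, min_confidence: float = 0.9) -> str:
    answer, confidence = toy_model(question)
    if confidence < min_confidence:
        return "I don't know."
    return answer

print(answer_or_abstain("What is the capital of France?"))   # Paris
print(answer_or_abstain("Who will win the 2040 election?"))  # I don't know.
```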
There’s no such thing as perfect data. Especially if there’s even the slightest bit of subjectivity involved.
Even less existent is complete data.
Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what’s possible as an excuse to not even try.
They could indeed build models that worked on actual data from expert sources, and then have their agents check those sources for more correct info when they create an answer. They don’t want to, for all the same reasons I’ve already stated.
It’s possible, and it doesn’t “doom” LLMs; it just massively increases their accuracy and actual utility at the cost of money, effort, and killing the VC hype cycle.
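Roughly like this, as a sketch only; the corpus, the toy keyword retriever, and the output format are all illustrative, not any vendor’s actual pipeline:

```python
# Loose sketch of the retrieve-then-cite loop described above.
from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # e.g. a textbook chapter or journal article (made up here)
    text: str

CORPUS = [
    Passage("Gray's Anatomy, ch. 2", "The femur is the longest bone in the human body."),
    Passage("Pharmacology Review, vol. 12", "Amoxicillin is a penicillin-class antibiotic."),
]

def retrieve(question: str, corpus: list[Passage], k: int = 1) -> list[Passage]:
    """Toy keyword-overlap retriever; a real system would use embeddings."""
    q = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q & set(p.text.lower().split())),
                    reverse=True)
    return [p for p in ranked[:k] if q & set(p.text.lower().split())]

def answer_with_citations(question: str) -> str:
    passages = retrieve(question, CORPUS)
    if not passages:
        return "No supporting source found; refusing to answer."
    # A real agent would generate from the passages; the point here is that
    # every claim in the answer carries a checkable citation.
    return "; ".join(f"{p.text} [{p.source}]" for p in passages)

print(answer_with_citations("What is the longest bone in the body?"))
```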
The original thread poster (OTP?) implied perfection when they emphasized the “will never” part, and I was responding to that. For that matter, it also excludes actual brains.
This is awesome! The citing and tracing is already improving. I feel like “no hallucinations” is gonna take a while.
How does it all get enforced? FTC? How does this become reality?