Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
The way generative AI works means that no matter how good the data is, it's still gonna bullshit and lie; it won't "know" whether it knows something or not. It's a probabilistic process, and no ML algorithm has ever produced 100% correct results.
That's how they work now: trained on bad data and designed to always answer with some kind of confident-sounding response.
They absolutely can be trained on actual data, trained to give less confident answers, and have an error-checking process run on their output after they formulate an answer.
There's no such thing as perfect data, especially if there's even the slightest bit of subjectivity involved.
Complete data is even rarer.
Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what's possible as an excuse to not even try.
They could indeed build models that work on actual data from expert sources, and then have their agents check those sources for more correct info when they create an answer. They don't want to, for all the same reasons I've already stated.
It's possible; it doesn't "doom" LLMs, it just massively increases their accuracy and actual utility at the cost of money, effort, and killing the VC hype cycle.
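To make the shape of that concrete, here's a minimal sketch of a "generate, then check the claims against expert sources" loop. The source corpus, the `generate_answer` stub, and the word-overlap check are all hypothetical placeholders for illustration, not a real model or retrieval API; the point is the structure, where only claims that survive the check get stated confidently.

```python
# Toy sketch of "generate an answer, then verify each claim against expert sources".
# Everything here is a stand-in: a real pipeline would call an actual model and a
# proper retrieval/verification step, but the control flow is the same.

# A tiny "expert source" corpus (made up for illustration only).
SOURCES = {
    "expert_source_1": "Influenza vaccines are updated each year to match circulating strains.",
    "expert_source_2": "Longer passphrases are generally stronger than short complex passwords.",
}

def generate_answer(question: str) -> list[str]:
    """Stand-in for the LLM: returns an answer split into individual claims."""
    # In a real pipeline this would be a model call; here it's hard-coded.
    return [
        "Influenza vaccines are updated each year to match circulating strains.",
        "Flu vaccines guarantee you will never get sick.",  # unsupported claim
    ]

def supported_by_sources(claim: str, min_overlap: float = 0.6) -> bool:
    """Naive check: does any source share most of the claim's words?"""
    claim_words = set(claim.lower().split())
    for text in SOURCES.values():
        source_words = set(text.lower().split())
        overlap = len(claim_words & source_words) / max(len(claim_words), 1)
        if overlap >= min_overlap:
            return True
    return False

def answer_with_checking(question: str) -> None:
    """Only state claims the check supports; hedge or drop the rest."""
    for claim in generate_answer(question):
        if supported_by_sources(claim):
            print(f"[supported]   {claim}")
        else:
            print(f"[unverified]  {claim}  <- hedge or omit this")

answer_with_checking("How do flu vaccines work?")
```

Swap the overlap check for real retrieval over vetted sources and the cost goes up, which is exactly the "money and effort" part.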
The original thread poster (OTP?) implied perfection when they emphasized the "will never" part, and I was responding to that. For that matter, that standard would exclude actual brains too.