Aside from the obvious legal hurdles, is there another blocker for this kind of situation?

I imagine people would have their AI representatives trained on each individual's personal beliefs and their ideal society.

What could that society look like, and how could it work? Is there a term for this?

  • DarkThoughts@fedia.io
    20 days ago

    Advances? First we have to actually invent it. Text LLMs are just word prediction, and generative models in general are neither intelligent nor have much room left to grow at this point. Beyond that, every model is only as good as the training data it was trained on. If you train a model on smut and romance novels, you have your perfect little eRP model for kinky chats; if you train it on various programming languages, you have a good coding assistant; if you train it on Reddit, you have an insufferable racist edgelord who wants to see the world burn. Point being, models are flawed in every sense of the word. All their word predictions trace back to what humans have written in the past, and all of them carry an inherent randomness due to how LLMs work, making their output unreliable - and that includes even the best and largest models with access to the largest databanks & indexes out there.

    But then again, the biggest flaw is that they are not actually AI. They have no thoughts of their own, and they don't really evaluate things on various factors. They just follow their simple programming of mimicking language, without being aware of anything. If you want to have a computer like this run your politics then go right ahead, but you already have to ask yourself: what model do we use? Based on what data - since it is inherently biased? How often can we re-roll / regenerate an answer until we like its outcome? Who has oversight over it? Because ultimately it's that person who is the decision maker. Politicians, for all their flaws, are still intelligent human beings who can be reasoned with. A computer can't really be swayed, not in the classical sense. You can sway a chatbot easily, because it typically uses your chat history as context for its own output.
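    The "inherent randomness" mentioned above is easy to make concrete: chatbots pick each next word by sampling from a probability distribution, usually controlled by a temperature setting. A minimal sketch with toy logits (all numbers are illustrative, not from any real model):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token index from toy logits.

    temperature > 0 injects randomness: identical logits can yield
    different tokens on different calls. temperature -> 0 approaches
    greedy (argmax) decoding.
    """
    rng = rng or random.Random()
    if temperature <= 1e-6:  # effectively greedy decoding
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total  # walk the cumulative distribution
        if r <= acc:
            return i
    return len(logits) - 1

# Toy logits for four candidate next words.
logits = [2.0, 1.5, 0.5, 0.1]
greedy = sample_next_token(logits, temperature=0.0)  # always index 0
samples = {sample_next_token(logits, 1.0, random.Random(s)) for s in range(50)}
```

    With temperature near zero the same token wins every run (greedy decoding); at temperature 1.0 repeated runs pick different tokens, which is exactly why the same prompt can produce different answers.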
This is inherently flawed because it means that the existing chat history will sort of lead the future responses, but it's incredibly limited due to context size requiring such vast amounts of VRAM / RAM and processing power. That's why current models are more or less at their limit, aside from some optimizations. You can't just scale them up, because their compute and energy requirements grow far faster than their actual text output.
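    The context-size cost mentioned above can be put in rough numbers: naive self-attention materializes an n × n score matrix per head per layer, so memory grows quadratically with context length. A back-of-the-envelope sketch (the head and layer counts are illustrative placeholders, not any specific model):

```python
def attention_score_bytes(context_len, n_heads=32, n_layers=32, bytes_per_val=2):
    """Bytes needed to hold the raw attention score matrices for one pass.

    Naive self-attention builds an n x n score matrix per head per layer,
    so this grows quadratically in the context length n. Head/layer
    counts here are illustrative, not taken from any particular model.
    """
    return context_len ** 2 * n_heads * n_layers * bytes_per_val

small = attention_score_bytes(4_096)   # 4k-token context
large = attention_score_bytes(8_192)   # 8k-token context
ratio = large / small                  # doubling context -> 4x the memory
```

    Doubling the context quadruples the score memory alone, before even counting model weights or caches, which is why long contexts get expensive so quickly (optimized attention kernels reduce this, but the underlying cost trend remains steep).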

    TLDR: "AI" is just overhyped corporate marketing for something that comes down to word prediction, fueled by sensationalist media scaremongering from people who don't understand how LLMs work. Using them for decision making would just hand power to the shadowy person who oversees the model, along with the flawed biases of its training data.

    • wabafee@lemmy.worldOP
      19 days ago

      That is interesting, thanks. I'll try to address some of your questions; let me know what you think.

      “what model do we use? Based on what data - since it is inherently biased? How often can we re-roll / regenerate an answer until we like its outcome? Who has oversight over it?”

      I imagine a government like this still would not be fully run by AI. Proposed laws would still have a human touch; the AI would act almost like an assistant per citizen. It would brief the citizen on proposed laws and have the citizen vote on them, or, with the citizen's consent, vote on their behalf and argue on the floor for them.

      In the end, the president or whoever is at the very top, who is human, still has the final say on whether to approve a proposed law.

      The model could be based on whatever is available today or in the future, or on a curated model. I agree that its bias could be a huge blocker, though we humans are also inherently biased; if the bias cannot be removed at all, maybe that is simply something we need to stay aware of in this kind of government.

      If a law breaks the constitution, for example, there would still be a supreme court of humans to declare it invalid.

      This beats having a human representative who may or may not be reachable, depending on how relevant you are to them.

      “This is inherently flawed because it means that the existing chat history will sort of lead the future responses, but it’s incredibly limited due to context size requiring such vast amounts of VRAM / RAM and processing power.”

      Wouldn't that be ideal? It would mean this LLM inherently knows your choices and beliefs, aside from the huge increase in processing power needed. If a person decides their AI assistant no longer aligns with their views, they can then correct it.

      • DarkThoughts@fedia.io
        19 days ago

        Oh, so you don’t want an AI government, but an AI voter. That’s probably even worse to be honest.

        Wouldn't that be ideal? It would mean this LLM inherently knows your choices and beliefs, aside from the huge increase in processing power needed.

        Only if it was trained on me and me alone. But that would make me what we in German call a “Gläserner Mensch” (gläsern coming from Glas, as in a transparent person), a metaphor used in privacy discussions. I'd have to lay myself open to an immense amount of data hoarding to create a robot that may or may not decide the way I would. Aside from the terrible privacy violations & implications this would entail for every single person, it would also only be a snapshot of the current me. Humans change over time. Our experiences and our perception of the world around us shape and change us, constantly, and with that our decision making.

        But coming back to the privacy issue… We already have huge problems on that front. Companies hoard massive amounts of user data, usually through very thinly veiled consent via those little checkbox agreements - or, when it comes to their LLMs, they now just do it illegally, scraping everything on the internet regardless of consent or copyright infringement. I think the whole LLM topic should go nowhere until we have a globally agreed framework of regulations for how we want to handle these and future technologies. If you build an LLM on all the data on the internet, then such models should inherently be free and open source, including everything they create. That'd be the only agreeable terms in my book. Whether true AI in the future would even rely on data scraping is another topic, though.