• sp3ctr4l@lemmy.dbzer0.com
    2 months ago

    Ding ding ding.

    It all became basically magic, blind trial and error, roughly a decade ago with AlexNet.

    After AlexNet, everything became increasingly black-box and opaque, even to the actual PhD-level people crafting and testing these things.

    Since then, it has basically been ‘throw all existing information of any kind at the model’ to train it better, plus a bunch of slapdash optimization attempts that work for largely ‘I don’t know’ reasons.

    Meanwhile, we could be pouring even 1% of the money going toward LLMs and convolutional-network-derived models into other paradigms, such as actually trying to emulate real brains and real neuronal networks… but nope, everyone is piling into basically one approach.
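
    For a sense of what ‘emulating real neuronal networks’ can look like at its absolute simplest, here is a toy leaky integrate-and-fire (LIF) neuron, a standard building block of spiking neural networks. This is only an illustrative sketch; all the constants are made up rather than fitted to any real neuron:

    ```python
    # Minimal leaky integrate-and-fire (LIF) neuron: the membrane
    # potential leaks toward a resting value, integrates input
    # current, and fires a spike (then resets) when it crosses a
    # threshold. Constants here are illustrative, not biological.

    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                     v_reset=0.0, v_threshold=1.0):
        """Return spike times (step indices) for a current trace."""
        v = v_rest
        spikes = []
        for t, i_in in enumerate(input_current):
            # Leaky integration: decay toward rest, plus input drive.
            v += dt * (-(v - v_rest) / tau + i_in)
            if v >= v_threshold:
                spikes.append(t)
                v = v_reset  # fire and reset
        return spikes

    # A constant supra-threshold current produces regular spiking.
    spikes = simulate_lif([0.1] * 200)
    print(spikes[:5])  # → [13, 27, 41, 55, 69]
    ```

    The contrast with backprop-trained networks is that the dynamics here are explicit and interpretable: you can point at the leak, the threshold, and the reset, which is part of the appeal of the brain-emulation paradigms.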

    That’s not to say research on other paradigms is nonexistent, but it is barely existent in comparison.