• JustVik@lemmy.ml · 1 month ago

    Well, at least they differ in that AlphaFold has a specific goal that we can verify, perhaps not easily, and it has practical scientific benefits, while an LLM is trained to handle all tasks at once, without it even being clear which ones.

    • Assian_Candor [comrade/them]@hexbear.net · 1 month ago

      LLMs don’t train to “solve” anything. They’re just sequence predictors. They predict the next item in sequences of words, that’s it. Some predict better than others in specific scenarios.
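
      In code terms, “sequence predictor” just means something like this toy bigram model, a hand-rolled sketch of the idea rather than an actual LLM:

      ```python
      # Toy illustration of next-token prediction: a bigram model built from a
      # tiny corpus. An LLM does the same thing in spirit (predict the next
      # token given the context), just with billions of parameters instead of a dict.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count which word follows which.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def predict_next(word: str) -> str:
          """Return the word most often seen after `word` in the corpus."""
          candidates = following.get(word)
          return candidates.most_common(1)[0][0] if candidates else "<unk>"

      print(predict_next("the"))  # 'cat' (seen twice after 'the' in the corpus)
      ```

      Scale that lookup table up to a neural network over a huge corpus and you have the basic training objective.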

      Through techniques like RAG and multi-model frameworks you can tailor the output to fit specific tasks or use cases. This is very powerful for automating routine workflows.
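
      A rough sketch of what that looks like for RAG (the documents, the keyword-overlap scoring, and the build_prompt helper here are made-up placeholders; a real setup would use an embedding model, a vector store, and an actual model API):

      ```python
      # Minimal RAG sketch: retrieve the documents most relevant to a question,
      # then put them into the prompt so the model answers from that context.
      documents = [
          "Invoices are processed every Friday by the finance team.",
          "VPN access requests go through the IT service desk portal.",
          "New hires receive their laptop on the first day of onboarding.",
      ]

      def score(question: str, doc: str) -> int:
          """Crude relevance score: count shared lowercase words."""
          return len(set(question.lower().split()) & set(doc.lower().split()))

      def retrieve(question: str, k: int = 2) -> list[str]:
          """Return the k documents with the highest overlap score."""
          return sorted(documents, key=lambda d: score(question, d), reverse=True)[:k]

      def build_prompt(question: str) -> str:
          context = "\n".join(retrieve(question))
          return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

      # The assembled prompt would then go to whatever model endpoint you use.
      print(build_prompt("How do I get VPN access?"))
      ```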

    • PolandIsAStateOfMind@lemmygrad.ml · 1 month ago

      You can also verify the output of a chatbot or artbot, even more easily. For example, I have no clue about protein folding whatsoever, but (I hope) we can all tell word salad from coherent text.