Inspired by a recent talk from Richard Stallman.

From Slashdot:

Speaking about AI, Stallman warned that “nowadays, people often use the term artificial intelligence for things that aren’t intelligent at all…” He makes a point of calling large language models “generators” because “They generate text and they don’t really understand what that text means.” (And they also make mistakes “without batting a virtual eyelash. So you can’t trust anything that they generate.”) Stallman says “Every time you call them AI, you are endorsing the claim that they are intelligent and they’re not. So let’s refuse to do that.”

Sometimes I think that even though we are in a “FuckAI” community, we’re still helping the “AI” companies by tacitly agreeing that their LLMs and image generators are in fact “AI” when they’re not. It’s similar to how the people saying “AI will destroy humanity” give an outsized aura to LLMs that they don’t deserve.

Personally I like the term “generators” and will make an effort to use it, but I’m curious to hear everyone else’s thoughts.

  • illi@piefed.social
    5 days ago

    I get where you are coming from, but ultimately words and their meanings are social constructs. Words mean what society determines they mean.

    If you need to distinguish them, we can use the already-coined term LLM, as that’s what they actually are. Maybe let’s use Large Image Models for the non-text ones? I feel like “generators” is too generic a term to work.

    But I do agree that calling them AI gives them more power than they have.