Inspired by a recent talk from Richard Stallman.
From Slashdot:
Speaking about AI, Stallman warned that “nowadays, people often use the term artificial intelligence for things that aren’t intelligent at all…” He makes a point of calling large language models “generators” because “They generate text and they don’t really understand what that text means.” (And they also make mistakes “without batting a virtual eyelash. So you can’t trust anything that they generate.”) Stallman says “Every time you call them AI, you are endorsing the claim that they are intelligent and they’re not. So let’s refuse to do that.”
Sometimes I think that even though we’re in a “FuckAI” community, we’re still helping the “AI” companies by tacitly agreeing that their LLMs and image generators are in fact “AI” when they’re not. It’s similar to how the people saying “AI will destroy humanity” lend LLMs an outsized aura they don’t deserve.
Personally I like the term “generators” and will make an effort to use it, but I’m curious to hear everyone else’s thoughts.


And this is based on what exactly?
Based on historical examples where a false narrative was perpetuated for the sake of convenience, and the perpetuation ended up benefiting exactly the person or group pushing it.
Right, but what examples?
Nobody likes a sealion.
It’s based on the fact that a false narrative is the dominant narrative… and y’all seem to approve.
Decision trees behind game NPC behaviour have been called “AI” for decades, even when they’re only the slightest bit sophisticated. Nobody complained about a “false narrative” and nobody thought NPCs in games were like Data. It’s just a word.
I thought we left all that purity-testing bullshit in the 2010s. We all hate AI. We just haven’t been convinced this issue in particular is the hill worth dying on.