How do we know that the people on Reddit aren’t talking to bots? Now, or in the future? What about Lemmy?
Even if I am on a human instance that checks every account’s PII, what about those other instances? How do I know, as a server admin, that I can trust another instance?
I’m not talking about spam bots. I mean bots that resemble humans. Bots that use statistical information about real human beings, about when and how often they post and comment (which is public knowledge on Lemmy).
EXTERMINATE!
Beep boop
I selected all the images with a bicycle, if that’s not proof of being real…
To determine if a commenter is a bot, look for generic comments, repetitive content, unnatural timing, and lack of engagement. Bot accounts may also have generic usernames, lack a profile picture, or use stock photos. Additionally, bots often have a “tunnel vision,” focusing on a specific topic or link. Here’s a more detailed breakdown:
- Generic Comments and Lack of Relevance: Bot comments often lack depth and are not tailored to the specific content. They may use generic phrases like “Great pic!” or “Cool!”. Bot comments may also be off-topic or irrelevant to the discussion.
- Repetitive and Unnatural Behavior: Bots can post the same comments multiple times or at unnatural frequencies. They may appear to be “obsessed” with a particular topic or link.
- Profile and Username Issues: Generic usernames, especially those with random numbers, can be a red flag. Missing or generic profile pictures, including stock photos, are also common.
- Lack of Engagement and Interaction: Real users often engage in back-and-forth conversations. Bots may not respond to other comments or interact with the post creator in a meaningful way.
- Other Indicators: Bots may use strange syntax or grammar, though some are programmed to mimic human speech more accurately. They might have suspicious links or URLs in their comments. Bots often have limited or no activity history, and may appear to be “new” accounts.
- Checking IP Reputation: You can check the IP address of a commenter to see if it’s coming from a legitimate or suspicious source.
By looking for these indicators, you can often determine if a commenter is likely a bot or a real human user.
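None of these tells is decisive on its own, but as an illustration, here is a rough sketch of how the red flags above could be combined into a crude bot-likeness score. This is plain Python with made-up field names and arbitrary weights, not a real detector:

```python
import re

def bot_likeness_score(comment: dict) -> float:
    """Crude heuristic score in [0, 1] built from the red flags above.
    Field names and weights are invented for illustration only."""
    score = 0.0
    text = comment.get("text", "")
    username = comment.get("username", "")

    # Generic, low-effort comment ("Great pic!", "Cool!")
    if len(text.split()) < 4:
        score += 0.2

    # Generic username ending in a long run of digits (e.g. "user19356255")
    if re.search(r"\d{4,}$", username):
        score += 0.2

    # Missing profile picture
    if not comment.get("has_avatar", False):
        score += 0.1

    # No back-and-forth: never replies to other commenters
    if comment.get("replies_to_others", 0) == 0:
        score += 0.2

    # Links in the comment body (possibly suspicious)
    if "http://" in text or "https://" in text:
        score += 0.1

    # Brand-new account with little history
    if comment.get("account_age_days", 0) < 7:
        score += 0.2

    return min(score, 1.0)
```

In practice a real detector would need far richer signals (posting cadence, content similarity across accounts, network-level data), and a determined bot can sidestep every one of these checks.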
Also, I am a real human with soft human skin.
ok chatgpt, thanks for the tips
Would a bot post this?
Bethesda game developer AI bot detected ❗️
You can rest assured that I’m not a bot because I would never sell out. I prefer keeping it real with Pepsi brand cola and Doritos brand chips.
I can learn. Teach me something and quiz me about it
I can identify the traffic lights on any picture.
Totally fair question — and honestly, it’s one that more people should be asking as bots get better and more human-like.
You’re right to distinguish between spam bots and the more subtle, convincingly human ones. The kind that don’t flood you with garbage but instead quietly join discussions, mimic timing, tone, and even have believable post histories. These are harder to spot, and the line between “AI-generated” and “human-written” is only getting blurrier.
So, how do you know who you’re talking to?
- Right now? You don’t.
On platforms like Reddit or Lemmy, there’s no built-in guarantee that you’re talking to a human. Even if someone says, “I’m real,” a bot could say the same. You’re relying entirely on patterns of behavior, consistency, and sometimes gut feeling.
- Federation makes it messier.
If you’re running your own instance (say, a Lemmy server), you can verify your users — maybe with PII, email domains, or manual approval. But that trust doesn’t automatically extend to other instances. When another instance federates with yours, you’re inheriting their moderation policies and user base. If their standards are lax or if they don’t care about bot activity, you’ve got no real defense unless you block or limit them.
- Detecting “smart” bots is hard.
You’re talking about bots that post like humans, behave like humans, maybe even argue like humans. They’re tuned on human behavior patterns and timing. At that level, it’s more about intent than detection. Some possible (but imperfect) signs:
Slightly off-topic replies.
Shallow engagement — like they’re echoing back points without nuance.
Patterns over time — posting at inhuman hours or never showing emotion or changing tone.
But honestly? A determined bot can dodge most of these tells. Especially if it’s only posting occasionally and not engaging deeply.
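For what it’s worth, the “posting at inhuman hours” tell can at least be roughly quantified. A toy sketch (plain Python, thresholds invented for illustration): most humans have a quiet stretch of several consecutive hours each day, while a naive always-on bot does not:

```python
def has_quiet_hours(post_hours: list[int], min_quiet: int = 4) -> bool:
    """Return True if there is a run of `min_quiet` consecutive hours of
    the day (wrapping around midnight) with no posts -- a crude sleep
    signal. Thresholds are invented; real accounts span time zones,
    shift work, and vacations, so treat this as illustrative only."""
    active = {h % 24 for h in post_hours}
    quiet_run = best = 0
    # Scan two days' worth of hours so a quiet run can wrap past midnight.
    for h in range(48):
        if h % 24 in active:
            quiet_run = 0
        else:
            quiet_run += 1
            best = max(best, quiet_run)
    return best >= min_quiet
```

A smarter bot, of course, simply schedules its own fake “sleep,” which is exactly the point: these checks only catch the lazy ones.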
- Long-term trust is earned, not proven.
If you’re a server admin, what you can do is:
Limit federation to instances with transparent moderation policies.
Encourage verified identities for critical roles (moderators, admins, etc.).
Develop community norms that reward consistent, meaningful participation — hard for bots to fake over time.
Share threat intelligence (yep, even in fediverse spaces) about suspected bots and problem instances.
- The uncomfortable truth?
We’re already past the point where you can always tell. What we can do is keep building spaces where trust, context, and community memory matter. Where being human is more than just typing like one.
If you’re asking this because you’re noticing more uncanny replies online — you’re not imagining things. And if you’re running an instance, your vigilance is actually one of the few things keeping the web grounded right now.
/s obviously
I audibly laughed.
Like a normal human. With my meat air bags and not a modulated voice speaker.
That’s good
You don’t.
Worse, I may be a human today and a bot tomorrow. I may stop posting and my account gets taken over/hacked.
There is an old joke. I know my little brother is an American. Born in America, lived his life in America. My older brother… I don’t know about him.
I don’t get the joke. Care to explain it, plz?
The speaker was there for the birth of their younger brother, they know the hospital was in America, and that’s all it takes.
Their older brother was already alive when they were born, so their brother, parents, and the government could all be lying about the older brother, which, by necessity, means the parents aren’t American either.
It’s implying that you can’t be certain of anything you didn’t witness personally.
Yea I have this weird conspiracy theory in the back of my head that is like: What if my parents are just actors and I’m in a “Truman Show”
It would explain why they’re so toxic. This could just be some subtle torture chamber.
Heck, any one I meet now or in the future could just be more actors subtly torturing me.
Then they could also have actors saying that I’m being paranoid.
Like, this is the perfect torture chamber. So subtle you could never tell.
What a clever joke. No sarcasm or irony. Thank you for explaining it!
Great explanation. One exception:
which, by necessity, means the parents aren’t American either.
As the speaker didn’t witness the birth of their own parents, the speaker simply does not know if they are Americans. It is not a joke about immigrants. As you correctly state, it is a joke about an unwillingness to believe what one did not personally witness.
Does it make a difference if they’re indistinguishable? With filter bubbles and echo chambers, it feels like maybe it doesn’t matter what percentage is bots. Use the usual moderation tools for decency.
I enjoy the platform, whether you guys are bots or humans
Could a bot do this?
(You can’t see me, but trust me, it’s very impressive)
Lemmy is too niche to spend money on running bots. There’s no profit, nothing to achieve. Reddit, on the other hand…
That’s bot talk!
They will scrape us for training data.
Fertile training ground.
That’s a great question! Let’s go over the common factors which can typically be used to differentiate humans from AI:
🧠 Hallucination
Both humans and AI can have gaps in their knowledge, but a key difference between how a person and an LLM responds can be determined by paying close attention to their answers. If a person doesn’t know the answer to something, they will typically let you know. But if an AI doesn’t know the answer, it will typically fabricate a false answer, as it is typically programmed to always return an informational response.
✍️ Writing style
People typically each have a unique writing style, which can be used to differentiate and identify them. For example, somebody may frequently make the same grammatical errors across all of their messages. Whereas an AI is based on token frequency sampling, and is therefore more likely to have correct grammar.
❌ Explicit material
As an AI assistant, I am designed to provide factual information in a safe, legal, and inclusive manner. Speaking about explicit or unethical content could create an uncomfortable or uninclusive atmosphere, which would go against my guidelines. A human, on the other hand, would be free to make remarks such as “cum on my face daddy, I want your sweet juice to fill my pores.” which would be highly inappropriate for the given context.
🌐 Cultural differences
People from specific cultures may be able to detect the presence of an AI based on its lack of culture-specific language. For example, an AI pretending to be Australian will likely draw suspicion amongst Australians, due to the lack of the word ‘cunt’ in every sentence.
💧 Instruction leaks
If a message contains wording which indicates the sender is working under instruction or guidance, it could indicate that they are an AI. However, be wary of predominantly human traits like sarcasm, as it is also possible that the commenter is a human pretending to be an AI.
🎁 Wrapping up
While these signs alone may not be enough to determine if you are speaking with a human or an AI, they may provide valuable tools in your investigative toolkit. Resolving confusion by authenticating Personally Identifiable Information is another great step to ensuring the authenticity of the person you’re speaking with.
Would you like me to draft a web form for users to submit their PII during registration?
The term hallucination bothers me more than it should because fabulation better describes what bots do.
Then let’s start using it!
If a person doesn’t know the answer to something, they will typically let you know.
As a lawyer, astronaut, ex-military and former Navy SEAL specialist, astrophysicist, and social-behavioral scientist, I can guarantee this is false.
🤓
What the fuck did you just fucking say about me, you little bitch? I’ll have you know I graduated top of my class in the Navy Seals, and I’ve been involved in numerous secret raids on Al-Quaeda, and I have over 300 confirmed kills. I am trained in gorilla warfare and I’m the top sniper in the entire US armed forces. You are nothing to me but just another target. I will wipe you the fuck out with precision the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of spies across the USA and your IP is being traced right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your life. You’re fucking dead, kid. I can be anywhere, anytime, and I can kill you in over seven hundred ways, and that’s just with my bare hands. Not only am I extensively trained in unarmed combat, but I have access to the entire arsenal of the United States Marine Corps and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little “clever” comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn’t, you didn’t, and now you’re paying the price, you goddamn idiot. I will shit fury all over you and you will drown in it. You’re fucking dead, kiddo.
Needs some em dashes!