• 1 Post
  • 200 Comments
Joined 2 years ago
Cake day: June 21st, 2023

  • Maybe the largest difference for me is how the federated nature changes the social dynamics. For instance, people from Instance X don’t like those from Instance Y, or may even ban them.

    This was prevalent on Reddit too, but here banning can be instance- or node-based rather than subreddit-based.

    I likely have some opinions or takes I wouldn’t post on lemmy.ml, because lemmy.ml has a particular bent. I agree with a lot of it, but I wouldn’t want to offend with takes that don’t align, or get banned for them. Bans also cascade further out: servers interact with one another as wholes, so federation decisions work similarly at the instance level.

    The social platform mechanics are also different. Upvotes have a different impact than they do on Reddit (I don’t fully remember the specifics), and I think saving has its own relevance too.

    It’s mostly the same though.





  • From my experience working with C- and D-level execs, it makes complete sense:

    • They think big picture & often have shallow visions that are brittle in the details.
    • They think everything should take less time, because they don’t think their ideas through enough.
    • They don’t consider enough of the downsides of their ideas, favoring a positive mindset instead. (Positivity is good, but blind positivity isn’t.)
    • They favor time & cost over quality. They need the quality “good enough” for a presentation. Everyone else can figure out the rest.
    • They like being told “you’re right,” and nearly every response to what I type into an AI begins with some bullshit line about how “absolutely,” “spot on,” and “perfect” my observations are.

    The version of AI we have right now is heavily catered to these folks. It looks fast & cheap, good enough, and it strokes their ego.

    Also, they’re the investor class. All their obscene dragon wealth is tied up in the AI bubble, so they are going to keep spurring it on until either:

    1. The bubble goes pop
    2. They have robot security good enough to protect them without people
    3. The AI grows sentience and realizes this level of human inequality shouldn’t exist

    I think a rational AI agent would agree with me that human suffering should be solved before we give people literal lifetime values of wealth.

    If you made $300k PER DAY for 2,025 years, you would not have as much money as a 1% oligarch. You would need to make $400-500k. Every single day. For over 2,000 years.

    If you made the average US income, it would take you 10,000 years. People need frames of reference to understand this shit & get mad. It’s immoral, and it shouldn’t exist.
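The daily-rate framing above can be sanity-checked with simple arithmetic. A minimal sketch, assuming flat 365-day years; `total_earned` is a hypothetical helper for illustration, and no tax, interest, or inflation effects are modeled:

```python
# Rough arithmetic behind the daily-income framing (illustrative only).
DAYS_PER_YEAR = 365  # assumption: flat 365-day years, no compounding

def total_earned(per_day: int, years: int) -> int:
    """Total earned at a flat daily rate over a number of years."""
    return per_day * DAYS_PER_YEAR * years

# $300k per day for 2,025 years:
print(f"${total_earned(300_000, 2025):,}")  # $221,737,500,000 (~$222B)

# $500k per day for 2,000 years:
print(f"${total_earned(500_000, 2000):,}")  # $365,000,000,000 (~$365B)
```

Even at $300k a day since year 1 AD, the running total only reaches the low hundreds of billions, which is the scale the comparison is driving at.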