The curated feed, now run by a team of six moderators, is the meeting ground for hundreds of thousands of Black users on Bluesky. Is it ready to meet the moment?
Same. And I am a racial minority (though not black, so that may color things…excuse the unintentional pun). That said, on Mastodon, I mainly interact with the people on my instance. And it’s small. There’s probably only a core group of like 50 active individuals, and I’m one of them. So there I’m not surprised I don’t see racism.
Interestingly, I have the same experience on even the proprietary social media sites. I was on Twitter from 2009 to 2023. I can’t say I was ever served up far-right content by the algorithms. I’m still on YouTube; same experience. Same on Instagram. Same on Bluesky.
I’m not trying to discount other people’s experiences, and I’ve seen the horrible tweets referenced in news articles and reddit comments and such. So I know it exists, but why am I not being served this content, while so many others apparently are? I mean, I’m OK with not getting far right wing content, lol. Leave me out of it! Makes my online life easier and more enjoyable. But it’s just odd.
YouTube…that reminds me of a conversation I had here on Lemmy a while back. The subject of right-wing political content on YouTube came up, and I made the same “I don’t encounter that” comment that I just made in regards to Mastodon.
After a bit of back and forth, I realized that I don’t engage with political content of any type on YT, so the algorithm doesn’t push it at me. It seems that YT doesn’t do a good job of classifying political content by its lean. So once you start engaging with political content, the algorithm starts suggesting all kinds of it.
It’s the same with social justice and racial issues. I don’t engage with that kind of content on YT or Mastodon, I don’t see it in my feeds, and it’s not being pushed at me by the algorithm.
I don’t know if that explains it completely but there has to be some reason(s) why some of us don’t see this stuff while other people see it all the time.
I’ve recently used YT more than I ever have in the past… and was surprised at some of the suggested content at first (like, why tf would you think I’m interested in that‽). And it was weird being able to almost “see” the algorithm and what it was trying to decipher about me (to offer more personally-relevant content of course!)
I started getting suggestions for click-bait shit at first, and if it got me for even a moment (‘I wanna see what this is about’), the suggestions became even more brain-dead and polarized.
I had to actively choose to cut my curiosity off while mindlessly perusing… because apparently, if I want to watch bull-riding, that immediately means I want to see rage-bait bullshit about power dynamics and divisiveness. It was a bit much, seeing in real time how someone might be casually walked into an echo chamber of self-fulfilling crazy.
So I did end up encountering the surface layer of it, and now it’s sliding back into my hobby areas of interest. But it still pops up shit with AI-generated thumbnail images (of things that never actually occur in the video) and click-bait titles, which is inherently only used to induce “doom-scrolling” while increasing engagement. It’s fucking disgusting, to put it bluntly.
I just wanna see how different drywall anchors work sometimes; I don’t need to know how a “Navy SEAL pwned a police officer that pulled them over” (AI picture of a dude body slamming a cop). Dumb shit.
Ah fair point. Yeah, I rarely look at political content on YouTube, Instagram, and even Bluesky. Mainly because I use my real name on these platforms.
I reserve that for reddit, Lemmy, Tildes, and Mastodon, where I use screennames. And Mastodon doesn’t have an algorithm.
On Twitter, I did engage in political content, even with my real name, but I largely stopped using Twitter daily years ago. I went from tweeting regularly, to only lurking, and just maybe once or twice a week at that. By the end, I was checking maybe once a month. The Twitter algorithm probably didn’t have enough info on me, given my weak activity levels.