AmbitiousProcess (they/them)

  • 0 Posts
  • 224 Comments
Joined 6 months ago
Cake day: June 6th, 2025






  • Almost certainly yes, at least based on historical precedent, though Trump loves ignoring that.

    For example, Watts v. United States, where someone said:

    “They always holler at us to get an education. And now I have already received my draft classification as 1-A and I have got to report for my physical this Monday coming. I am not going. If they ever make me carry a rifle the first man I want to get in my sights is L. B. J.”

    He was originally convicted, but that was then reversed, as the Supreme Court stated:

    “We agree with petitioner that his only offense here was ‘a kind of very crude offensive method of stating a political opposition to the president.’ Taken in context, and regarding the expressly conditional nature of the statement and the reaction of the listeners, we do not see how it could be interpreted otherwise.”

    At the end of the day, the law only really states that you have to:

    1. Make a threat
    2. …That threatens to take the life of, kidnap, or inflict bodily harm upon the President

    So let’s say I say “I am going to kill the President” not as an example, but as an actual statement. That could be interpreted as an actual threat. However, if I am a 13-year-old kid with just $20 to my name, no access to a gun, and no means of transport to even get near the president, it would be hard for the government to argue it’s anything more than a joke or political hyperbole, as it was in the case of Watts v. United States.

    Given that the law covers threats that you will do something, and not you simply wishing he were dead by some other means, it’s quite likely any court would find that saying something like “I hope Donald Trump dies a horrible, agonizing, painful death,” or even “I hope someone else shoots the president,” is AOK: again, just political hyperbole, a statement without any material threat behind it.





  • *specifically boomers born between 1946 and 1964 who have actually paid more than they’ll get in benefits.

    The others are still taking more than they contributed. It’s fair to say that some current boomers have paid for their Social Security, but many others have not, and the situation isn’t getting any better.

    To put it simply, there are just fewer workers paying into the system than there are people taking money out, and that imbalance only grows as the population ages.

    This means people retiring later are expected to receive only about 80% of current benefit rates, while many of those drawing benefits at existing rates are already taking out more from current generations than they paid in.

    I don’t think we should universally hate boomers just because the economic situation they were in happened to favor them in some ways. After all, I want my grandma to keep being able to afford her retirement care before she dies. But it’s also just not true to say that all current boomers have paid for their Social Security in its entirety.

    Only some of them have, and with the way things are going, it’s not looking like we’ll be any better off as we grow older: benefit rates will have to decline just to keep from draining the entire fund, even as people continue paying the same percentage of their income into the system.
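    To make the arithmetic concrete, here’s a back-of-the-envelope sketch of a pay-as-you-go system. All the numbers are hypothetical, chosen only to show how the worker-to-beneficiary ratio drives the payable share of benefits; they are not actual SSA figures.

    ```python
    # Hypothetical pay-as-you-go model: the share of scheduled benefits
    # that incoming payroll taxes can cover is
    #   (workers * avg contribution) / (beneficiaries * scheduled benefit),
    # capped at 100%.

    def payable_fraction(workers, beneficiaries, avg_contribution, scheduled_benefit):
        """Share of scheduled benefits covered by current payroll taxes."""
        revenue = workers * avg_contribution
        obligations = beneficiaries * scheduled_benefit
        return min(1.0, revenue / obligations)

    # ~2.7 workers per beneficiary: taxes roughly cover full benefits.
    print(payable_fraction(2.7, 1.0, 6_000, 18_000))  # ≈ 0.9

    # As the ratio falls toward ~2.3 workers per beneficiary, the same
    # tax rate covers only about three-quarters of scheduled benefits.
    print(payable_fraction(2.3, 1.0, 6_000, 18_000))  # ≈ 0.77
    ```

    The point isn’t the exact numbers, it’s the direction: holding the tax rate fixed, fewer workers per beneficiary mechanically means a smaller share of scheduled benefits can be paid out.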


  • Videos, images, and text can absolutely compel action or credible harm.

    For example, Facebook was aware that Instagram was giving teen girls depression and body image issues, and subsequently made sure their algorithm would continue to show teen girls content of other girls/women who were more fit/attractive than them.

    “the teens who reported the most negative feelings about themselves saw more provocative content more broadly, content Meta classifies as ‘mature themes,’ ‘Risky behavior,’ ‘Harm & Cruelty’ and ‘Suffering.’ Cumulatively, such content accounted for 27% of what those teens saw on the platform, compared with 13.6% among their peers who hadn’t reported negative feelings.”

    https://www.congress.gov/117/meeting/house/114054/documents/HHRG-117-IF02-20210922-SD003.pdf

    https://www.reuters.com/business/instagram-shows-more-eating-disorder-adjacent-content-vulnerable-teens-internal-2025-10-20/

    Many girls have committed suicide or engaged in self-harm, at least partly inspired by body image issues stemming from Instagram’s algorithmic choices, even if that content is “just videos and images.”

    They also continued to recommend dangerous content that they claimed was blocked by their filters, including sexual and violent content to children under 13. This type of content is known to have a lasting effect on kids’ wellbeing.

    “The researchers found that Instagram was still recommending sexual content, violent content, and self-harm and body-image content to teens, even though those types of posts were supposed to be blocked by Meta’s sensitive-content filters.”

    https://time.com/7324544/instagram-teen-accounts-flawed/

    In the instance you’re specifically highlighting, that was when Meta would recommend teen girls’ accounts to men exhibiting behaviors that could very easily lead to predation. For example, if a man specifically liked sexual content and content of teen girls, it would recommend him content of underage girls attempting to make up for their newly created body image issues by posting sexualized photos.

    They then waited 2 years before implementing a private-by-default policy, under which teen girls’ accounts wouldn’t be recommended to strangers unless the account holder explicitly opted in to being discoverable. Most didn’t. Meta waited that long because internal research showed the change would decrease engagement.

    By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram, which became the underlying reason for not protecting minors.

    https://techoversight.org/2025/11/22/meta-unsealed-docs/

    If I filled your social media feed with endless posts specifically algorithmically chosen to make you spend more time on the app while simultaneously feeling worse about yourself, then exploited every weakness the algorithm could identify about you, I don’t think you’d look at that and say it’s “catastrophizing over videos, images, text on a screen that can’t compel action or credible harm” when you develop depression, or worse.



  • Just after you wake up, for about 30–60 minutes, you’re in a state known as sleep inertia. The CDC recommends not doing critical tasks during this period, though that may just be because it impairs performance. They also say that bright light can restore performance more quickly, and a phone screen is most certainly bright light.

    So, let’s look into it a bit more. Granted, I can’t find anything more than a couple of psychologists saying this, so take it with a grain of salt, but it seems like it mostly does come down to priming your brain for distraction, as was initially stated. You have the least built-up fatigue when you wake up, but if you open the app that is designed to take as much of your time and attention as possible, you are giving away your least-fatigued time of the day to social media before you do anything productive.

    The more things you do in a day, the more fatigued your brain gets, and the harder it is to actually get other things done afterward. On top of that, it can also just be a behavioral thing. If you repeatedly get on your phone every time after you wake up, you are telling your brain “waking up = get on phone,” and not something like “waking up = get out of bed and brush teeth” or “waking up = get breakfast.”

    This can build a dependency over time, which then leads you to, as previously mentioned, take the time you are least mentally fatigued, fatigue your brain with high-speed flows of information, and only then spend what energy remains on everything else you need to do.


  • This whole article is just a condescending mess.

    “Why does everyone who has been repeatedly burned by AI, time and time again, whether that be through usable software becoming crammed full of useless AI features, AI making all the information they get less reliable, or just having to hear people evangelize about AI all day, not want to use my AI-based app that takes all the fun out of deciding where you go on your vacation???”

    (yes, that is actually the entire proposed app. A thing where you say where you’re going, and it generates an itinerary. Its only selling point over just using ChatGPT directly is that it makes sure the coordinates of each thing are within realistic travel restrictions. That’s it.)



  • That anybody can access them if they’re smart enough?

    Not all cameras have security vulnerabilities. Assuming it’s a matter of “smarts” is ridiculous. Plain old traffic cameras that solely detect speeding, especially those installed without additional “smart” features like Flock’s, rarely have breaches, because they are by their very nature quite simple systems.

    I’m not saying it’s impossible, or that cases don’t exist, but I’ve seen far more harm come from actual, preventable traffic deaths than I’ve seen from hacked speeding cameras. I’ve seen zero instances of that being used to cause harm, thus far.

    You clearly are fine being surveiled though

    I am not. That is why I am advocating solely for systems designed to reduce the chances of remote access, that can’t engage in mass surveillance, and that only send data on those actively speeding, while never transmitting anything about anyone else. Have you even read my comments?

    You clearly don’t get my points, I’m sorry if I’m somehow not explaining them clearly enough, but fine, I’m done. You win, or whatever. Good job.


  • Or, why not just build roads that inhibit speeding

    As I already stated, doing that is not quick, easy, or cheap. Mounting a camera to a pole is much more cost effective, and quick to set up in the short term, even if it’s not the ideal long-term solution.

    They’ve been proven to reduce speeding, injuries, and deaths, and there are vanishingly few cases I could find in which regular, non-“smart” traffic cameras operating under the technological standards I mentioned have ever been used for any form of surveillance that produced measurable harm to any individual. That is why I advocate for those, not for “smart” ones like Flock’s.

    I don’t think it should be a permanent solution, but I’d rather have speed cameras now, with road improvements later, over zero measures to prevent speeding now, with the hope that traffic calming infrastructure will be feasible and actually get done later down the line. Infrastructure isn’t free, and cameras aren’t either, but cameras are a hell of a lot cheaper.


  • Maybe police should go back to being visible on the street to control driver behavior

    I’d rather avoid inflating police budgets if I can help it. Especially since such a system then lends itself to those same cops advocating for increased surveillance measures because it makes their job easier. They’re the people who wanted the built-in ALPR systems, after all.

    city road design be built around calming traffic patterns

    100% agree. Yet while I want these to be more widespread, they take money, time, and lots of urban planning. In the meantime, I see traffic cameras (specifically those NOT integrated with ALPR systems that store locations in a central database) as a good stopgap solution for areas that don’t yet/can’t build out those measures in a reasonable timeframe.

    instead of using completely undercover normal looking vehicles for traffic enforcement and then raking in millions of dollars by sitting on their ass and letting the camera do all the work?

    Also agreed. The pigs don’t need more money for doing less work, hence why I think the prior idea of having them be visible is still a bad idea, because they can simply sit there and… also do nothing.

    And if they set quotas, then the measure becomes a goal, and it ceases to be a good measure, as cops will just pull more people over because it “seemed like they were going fast”, and everyone’s days get just a little bit worse.


  • There are obviously alternatives; I don’t deny that. But as good as infrastructure and cultural improvements can be, they don’t change the fact that speed cameras have proven immensely effective, and they don’t require massive infrastructure projects, far more costly spending, or long-term cultural shifts. That’s just the unfortunate reality of the situation.

    I’m a big digital rights and privacy advocate, and I don’t advocate for “spy cameras.” I advocate for privacy-preserving systems that improve society when they can exist in such a way.

    A camera that sends your plate to a police system only when you speed, and then automatically sends you a ticket for endangering other people, is not a surveillance system. It’s a public safety measure with justifiable, minimal data transmission requirements to operate effectively. A system that tracks every location your plate was seen at is a surveillance system. That is not what non-“smart” traffic cameras are.

    Speeding cameras are the first system, unless integrated with an ALPR system, in which case they become a surveillance system. I am advocating for the former, not the latter.
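    To make the distinction concrete, here’s a minimal sketch of the first kind of system (a hypothetical interface, not any real camera’s firmware): a record exists only when a violation occurs, and every other observation is discarded on the device, never transmitted or stored.

    ```python
    # Privacy-preserving speed camera sketch: only violations produce data.
    # Hypothetical design for illustration, not actual camera firmware.

    SPEED_LIMIT_KPH = 50

    def process_observation(plate, speed_kph, limit=SPEED_LIMIT_KPH):
        """Return a violation record to transmit, or None (nothing retained)."""
        if speed_kph > limit:
            return {"plate": plate, "speed_kph": speed_kph, "limit": limit}
        return None  # non-speeding vehicles leave no trace at all

    # Only the speeding car produces any data:
    sightings = [("ABC123", 48), ("XYZ789", 71)]
    records = [r for r in (process_observation(p, s) for p, s in sightings)
               if r is not None]
    print(records)  # only the XYZ789 violation record
    ```

    An ALPR-integrated system would instead log every (plate, time, location) observation to a central database, which is exactly the data flow this design avoids.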