• 7 Posts
  • 395 Comments
Joined 3 years ago
Cake day: August 15th, 2023

  • I am staunchly anti-hype and a firm “whatevers” on the rest of AI. Like you said, it has its place. The biggest issue I have is that it is being pushed as another dopamine fix. I ain’t gonna lie, turning an idea into code in 5 mins is really fucking cool. Unfortunately, it’s really fucking addictive, so I give myself multi-day breaks after a day or two of coding and fucking around with my homelab.

    (Tinfoil hat time) If anyone hasn’t noticed, there are some clear distinctions between enterprise LLM tools and regular consumer LLM tools. The consumer-grade plans are more prone to random mistakes and forgetfulness. My theory is that more “mistakes” and resulting fixes not only burn more tokens, but also push dopamine levels higher and lower. Aside from coding blatantly obvious security issues, enterprise LLMs are much more reliable.

    LLMs are tools. You solve problems. If an LLM is solving all of your problems, you are the tool.



  • remotelove@lemmy.ca to No Stupid Questions@lemmy.world · *Permanently Deleted*

    It’s always been broken, disjointed, and tribal. You can tell everyone, but many have already known this. Hell, most of humanity is like this naturally.

    Almost every large organization is this way, really. Most of it is just covered up by government or corporate propaganda, or by some weird sense of duty people have to jobs or organizations.

    This ain’t anything new, is my point. It’s new and shocking to you, sure. Welcome to the tribe of the disillusioned. It was always “better in the past,” and new people are always going to promise to make it “like it was” and “better.” (Quite literally the selling-point myth of MAGA, to be honest.)



  • Yeah, it’s a hell of an experience many people should have. (Many people probably also shouldn’t.) At the core of it all, I believe that being able to view problems through a very different lens is a big part of how psychedelics work when used for deep therapy. In many cases, I could see and interact with my emotions and feelings like they were an independent thing. I could almost visualize and touch my own emotions. Being able to see through my problems and get closure for issues that were supposed to be long in my past was a very beautiful thing. Trippy stuff, quite literally.

    Also, (and this is really for others that are reading this) I am not really joking with my personification of a mushroom. I used to think that was just some crazy burned-out hippy talk, but there is so much more to it than that. Yes. A mushroom talking is absolutely a hallucination. That isn’t what that literally is though…

    It’s more of a very primal, internal dialogue. It’s like the voice that we choose not to listen to when we have a “gut feeling” about something and can’t vocalize the concern. It’s the voice in your head that always knows the right decisions to make even if we brush it off through a normal day. That is the mushroom talking and it’s got a really powerful voice if you ever choose to follow Alice down that rabbit hole far enough.


  • I credit mushrooms with finally ending my years-long run as a fairly committed alcoholic.

    The decisions or realizations people can have during an intense trip tend to be really sticky for a very long time, regardless of whether it’s a good trip or a bad one. It’s the nature of the beast.

    But mushrooms can be like you described sometimes. I won’t go near the dosages I was taking when I was kicking booze. 1-2 grams every once in a while is just fine for me.

    After my last power trip (5+ grams), I saw what I needed to see and will probably never go into that range again. It was a life-changing trip and thankfully not a bad one. However, when the mushrooms speak to you like that, you listen. They told me I was done and I was ready to heal on my own.



  • Most of this is just marketing crap from Anthropic.

    Finding vulnerabilities in code and generating complex, multistep exploits with publicly available models is possible now. The biggest hurdles now are setting the correct context and actually knowing what to look for. Any “guardrails” against this behavior are easily bypassed by framing the detection and exploit generation as a legitimate dev-style question, even in the most difficult of situations.

    They likely just trained a model without guardrails in this case.

    What they are doing here is over-hyping a problem and framing it like they are the only ones with a solution. LLM security issues are more in focus now that companies have dumped a ton of resources into building AI systems they don’t really understand.


  • Environmental impacts aside for a sec, it would be cool if Taiwan dropped a fab up in Canadia. Fortunately/unfortunately, I am not sure if a fab is compatible with Canada, its climate, or its geology. Likely not.

    Such a double-edged sword. There is a bunch of suck that comes attached to a fab, but from an economic and technology perspective it would be awesome.

    (10/10, would rather see a fab managed somewhat responsibly in Canada rather than here in the US. I have no proof to go with that statement, but it seems logical.)





  • Confirmed or not, get better sources. Equipment getting damaged in a war is completely plausible and even inevitable. Propaganda is also inevitable, from either side of a conflict. In this case, an Indian source has proxied state news meant for Iranians.

    Trying to sort out their mix of AI-slop posts from legit, unbiased news isn’t worth my time, even if the news is proxied by another source. (Indian news is generally shit as well; I just ignore it by default. If you actually want extremely sensationalized trash, then good on you.)