what does i0 even mean, like what is the sign even for
he/him
Alts (mostly for modding)
(Earlier also had @sga@lemmy.world for a year before I switched to @sga@lemmings.world, now trying piefed)
- 9 Posts
- 129 Comments
they are technically correct, the best kind of correct
the reason they do not appear is that xmpp is a loosely standardized platform, that is, implementations are not all equally strict. some implement certain extensions, others implement different ones. encryption (omemo) is a common one that most servers support, but the client (the user apps like gajim) may or may not implement it correctly, or may have a fallback (the first communication between 2 clients may not be encrypted), and there are other problems with the encryption being flaky (it does not give perfect forward secrecy, it is a bit prone to failure (messages unable to decrypt), etc.), hence it is not recommended much.
sga@piefed.social to
PeerTube@lemmy.wtf•How do I find a list of my comments and liked videos?English
1 · 4 days ago
like, find a video you commented on from peertube where the same link was also posted on piefed; there your comment shows up in the grouped view (from all crossposts)
sga@piefed.social to
PeerTube@lemmy.wtf•How do I find a list of my comments and liked videos?English
1 · 4 days ago
can you try to find your peertube profile in a piefed post? if so, you can check the user profile there, because effectively all peertube videos are posts in piefed (threadiverse land).
i tried doing that, and found my profile at https://piefed.social/u/sga@peertube.wtf, so maybe just do a simple url check. you will only get videos you commented on, not likes
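the url check above can be sketched in a couple of lines (the /u/&lt;user&gt;@&lt;host&gt; path pattern is an assumption generalized from my own profile url; other instances may differ):

```python
# sketch: guess a piefed profile url for a remote (e.g. peertube) account.
# the /u/<user>@<host> pattern is assumed from the example above.
def piefed_profile_url(user: str, home_instance: str,
                       piefed_instance: str = "piefed.social") -> str:
    return f"https://{piefed_instance}/u/{user}@{home_instance}"

print(piefed_profile_url("sga", "peertube.wtf"))
# https://piefed.social/u/sga@peertube.wtf
```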
sga@piefed.social to
Linux@programming.dev•Not Kidding! Bash Shell Manual is Part of Epstein Files 🫣English
51 · 7 days ago
gives me a good reason to say why i do not use bash
sga@piefed.social to
Open Source@lemmy.ml•Supac - a declarative package manager written in Rust, scriptable in nushellEnglish
3 · 7 days ago
I am interested in it, because i have 3 package managers (arch, uv and cargo (binstall)), so this covers me well, hopefully.
sga@piefed.social to
Opensource@programming.dev•Is there any software that can use it that benefits average user or is it just a waste of silicon???English
1 · 8 days ago
i rarely use it, mostly to do sentiment/grammar analysis for some formal stuff/legalese. i kinda rarely use llms (1 or 2 times a month) (i just do not have a use case). as for how good: tiny models are not good in general, because they do not have enough capacity to store knowledge, so my use case is often purely language processing. though i have previously used one for a work demo to generate structured data from unstructured data. basically, if you provide the info, they can perform well (so you could potentially build something to fetch web search results, feed them into the context, and use that (many such projects are available, basically something like perplexity but open)).
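a minimal sketch of the structured-data use case: the request payload you would send to an openai-compatible chat endpoint (like the one llama.cpp serves). the model name, fields and prompt are all invented for illustration:

```python
import json

def build_extraction_request(text: str) -> dict:
    # payload for an openai-compatible /v1/chat/completions endpoint;
    # most local servers accept/ignore the model name.
    return {
        "model": "local",
        "messages": [
            {"role": "system",
             "content": "extract vendor, date and amount from the text. "
                        "reply with json only, no prose."},
            {"role": "user", "content": text},
        ],
        "temperature": 0,  # extraction wants deterministic-ish output
    }

payload = build_extraction_request("invoice from acme on 2024-03-01 for $42")
print(json.dumps(payload, indent=2))
```

even a tiny model handles this fine, because all the facts it needs are in the context, not in its weights.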
adding to this comment, the best way that we currently know to extract this energy is using spinning black holes, with a theoretical efficiency of ~42% (answer to the universe) (src: a minutephysics video precisely on this). the naive solution of just touching them gets like 0.01-0.1% of the total energy, so in the bad case we need a trillion years.
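for context, that ~42% matches the standard gr result for matter accreting onto a maximally spinning (extremal kerr) black hole; a sketch of where it comes from (my addition, not from the video):

```latex
% specific energy of matter at the innermost stable circular orbit (ISCO)
% of an extremal Kerr black hole (prograde orbit):
\frac{E_\text{ISCO}}{mc^2} = \frac{1}{\sqrt{3}}
% the rest can be radiated away before the matter falls in, giving
\eta = 1 - \frac{E_\text{ISCO}}{mc^2} = 1 - \frac{1}{\sqrt{3}} \approx 0.42
```

for a non-spinning (schwarzschild) black hole the same number is only about 6%, which is why the spin matters so much.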
technically, it uses a lot of energy (depending on how much the blade weighs). it is not electrical energy, but gravitational potential energy
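whatever the blade weighs, the gravitational potential energy is just m·g·h; a toy calculation with invented numbers:

```python
# gravitational potential energy E = m * g * h
# assumed: a 0.5 kg blade lifted 2.5 m (both numbers invented)
m, g, h = 0.5, 9.81, 2.5
energy_joules = m * g * h
print(round(energy_joules, 2))  # 12.26
```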
sga@piefed.social to
Opensource@programming.dev•Is there any software that can use it that benefits average user or is it just a waste of silicon???English
1 · 10 days ago
pretty much this. I use smollm (a 3B param model, trained only on openly available datasets)
sga@piefed.social to
Opensource@programming.dev•Is there any software that can use it that benefits average user or is it just a waste of silicon???English
4 · 11 days ago
further clarification - ollama is a distribution of llama.cpp (and it is a bit commercial in some sense). basically, in ye olde days of 2023-24 (decades in llm space, as they say), llama.cpp was a server/cli only thing. it would provide output in the terminal (that is how i used to use it back then), or via an api (an openai-compatible one, so if you used openai stuff before, you can easily swap over). many people wanted a gui (a web based chat interface), so ollama back then was a wrapper around llama.cpp (there were several others, but ollama was relatively mainstream).

then as time progressed, ollama "allegedly enshittified", while llama.cpp kept gaining features (a web ui, the ability to swap models at runtime (back then that required a separate llama-swap), etc.). also, the llama.cpp stack is a bit "lighter" (not really, both are web tech, so as light as js can get) and first party(ish - the interface was done by the community, but it is in the same git repo), so more and more local llama folk kept switching to a llama.cpp-only setup (you could use llama.cpp with ollama, but at that point ollama was just a web ui, and not a great one; some people preferred comfyui, etc.). some old timers (like me) never even tried ollama, as plain llama.cpp was sufficient for us.

as the above commenter said, you can do very fancy things with llama.cpp. the best thing is that it works with both cpu and gpu - you can use both simultaneously, as opposed to vllm or transformers, where you almost always need a gpu. this simultaneous thing is called offloading: some of the layers are kept in system memory instead of vram, which is how the vram-poor population gets by on ram (this also kinda led to ram inflation, but do not blame llama.cpp for it, blame people). you can do some of this in ollama (as ollama is a wrapper around llama.cpp), but that requires the ollama folks to keep their fork up to date with the parent, as well as expose the said features in the ui.
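the offloading split above is really just arithmetic; a toy estimate (layer count and sizes invented, and real gguf layers are not perfectly uniform):

```python
# rough sketch of the cpu/gpu layer split that llama.cpp's -ngl flag does:
# put as many transformer layers as fit in vram, the rest stay in system ram.
def split_layers(model_gb: float, n_layers: int, vram_gb: float) -> tuple:
    per_layer = model_gb / n_layers           # assume uniform layer size
    gpu_layers = min(n_layers, int(vram_gb // per_layer))
    return gpu_layers, n_layers - gpu_layers  # (on gpu, left in ram)

# e.g. a 13 GB model with 40 layers on an 8 GB card:
print(split_layers(13.0, 40, 8.0))  # (24, 16)
```

in practice you just pass `-ngl <n>` to the server and tune; the point is only that offloading is a per-layer split, not magic.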
sga@piefed.social to
Opensource@programming.dev•Is there any software that can use it that benefits average user or is it just a waste of silicon???English
232 · 11 days ago
try to use it with llama.cpp if you folks are interested in running local llms - https://github.com/ggml-org/llama.cpp/issues/9181
the issue is closed, but not because it is solved. check it out, find the link for your relevant hardware (amd or intel or something else), and see if your particular piece is listed. if so, you have hope.
in case it is not, try to find first party stuff (intel openvino, intel oneapi, or the amd rocm stack) and use that with transformers (python), or see if vllm has support.
also, try to check r/localllama on the forbidden website for your particular hardware - there is likely someone who has done something with it.
sga@piefed.social to
Linux@lemmy.world•Xfwl4 - The roadmap for a Xfce Wayland CompositorEnglish
1 · 15 days ago
if that is the case, then it is great. I personally am a rust fan, and use a smithay based wm (niri), and that is basically a one-man project, but with active community support. XFCE can pull more man power, but it still feels like wasted effort. if the language alone was the deciding factor, they could have considered cosmic wm. it is more heavy than xfce needs, but they would probably have had an easier time.
sga@piefed.social to
Linux@programming.dev•Systemd Founder Lennart Poettering Announces Amutable CompanyEnglish
91 · 15 days ago
Did lennart leave microsoft? Probably a good thing in general for linux (it definitely was weird that the lead for systemd was working at microsoft)
sga@piefed.social to
Linux@lemmy.world•Xfwl4 - The roadmap for a Xfce Wayland CompositorEnglish
2 · 15 days ago
I personally do not think it is a great decision. xfce is not really large enough to afford making a wayland compositor. smithay lets you start from 10 or 20 instead of 0, but you still need to get to 100. they should probably have chosen something like wayfire/labwc or some other wayland floating wm. though I wish them good luck; I used to use xfce, and loved it.
sga@piefed.social OP to
Science@mander.xyz•Footprint tracker identifies tiny mammals with up to 96% accuracyEnglish
3 · 15 days ago
ml has existed for like 50 or so years, basically since the first computers. if your model does not have billions of parameters (in this case, for 2 species, only 9 such parameters were identified), it uses far less compute.
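for scale, a 9-parameter classifier is something like this toy logistic model (features and weights are invented and have nothing to do with the actual paper; it only shows how cheap 9 parameters are):

```python
import math

# toy 2-class classifier with 9 parameters: 8 weights + 1 bias.
# a forward pass is ~9 multiply-adds, billions of times cheaper than an llm.
WEIGHTS = [0.4, -1.2, 0.7, 0.1, -0.3, 0.9, -0.5, 0.2]  # invented values
BIAS = -0.1

def predict_species_prob(footprint_features: list) -> float:
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, footprint_features))
    return 1 / (1 + math.exp(-z))  # probability of "species A"

p = predict_species_prob([0.5] * 8)
print(0.0 <= p <= 1.0)  # True
```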
well, i seemingly have a very different viewpoint, because the most interesting economics bits are econometrics, essentially data science - the same things all other stem folks use to find the underlying distribution, estimators, their significance, the p values. using this to model the whole world is just as wrong as saying all of chem is solved by taking the mendeleev periodic table. surely it works and explains some stuff, but just knowing it does not predict all of chemistry. in the same way, for example, the is-lm model (supply-demand curves) does not explain the whole world, and good economists do not claim it can (sorry for using bad examples, I only took 2-3 eco courses).
sga@piefed.social to
Free and Open Source Software@beehaw.org•Some tips for an open source read it later serviceEnglish
3 · 15 days ago
in line with the above, there is the singlefile extension, which generates a single .html file with all images and scripts embedded. if on a chromium browser, you can directly save to .mhtml files, which are similar.
(we can still make fun of the og nodebb folks)