he/him

Alts (mostly for modding)

@sga013@lemmy.world

(Earlier also had @sga@lemmy.world for a year before I switched to @sga@lemmings.world, now trying piefed)

  • 9 Posts
  • 129 Comments
Joined 11 months ago
Cake day: March 14th, 2025





  • the reason for them not appearing is that xmpp is a largely relaxed platform; that is, not all implementations are equally strict. some implement certain extensions, others implement different ones. encryption (omemo) is a common one that most support, but then clients (the user apps, like gajim) may or may not implement it correctly, or they may have a fallback (the first communication between 2 clients may not be encrypted). there are other problems with the encryption being flaky too (for one, it is not perfect forward secrecy, and it is a bit prone to failure - messages unable to decrypt, etc.), hence it is not recommended much.






  • i rarely use it, mostly for sentiment/grammar analysis of formal stuff/legalese. I use llms rarely in general (1 or 2 times a month); i just do not have a use case. As for how good they are: tiny models are not good in general, but that is because they do not have enough capacity to store knowledge, so my use case is often purely language processing. though i have previously used one in a work demo to generate structured data from unstructured data. basically, if you provide the info, they can perform well, so you could potentially build something that fetches web search results, feeds them into the context, and answers from that (many such projects are available; basically something like perplexity, but open).
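the "feed search results into context" idea above can be sketched in a few lines. this is illustrative only: the snippets are hypothetical stand-ins for real search output, and the prompt format is an assumption, not any particular project's.

```python
# Minimal sketch of packing retrieved snippets into a model's context
# before the question (the "open perplexity" pattern). No model is called
# here; this only shows the prompt-assembly step.

def build_prompt(question: str, snippets: list[str]) -> str:
    """Number the retrieved snippets and place them ahead of the question."""
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using only the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

snippets = [
    "OMEMO is an XMPP extension for end-to-end encryption.",  # hypothetical search result
    "Gajim is a desktop XMPP client.",                        # hypothetical search result
]
prompt = build_prompt("What is OMEMO?", snippets)
print(prompt)
```

the resulting string is what you would send to the model; swapping the snippet source (web search, local docs, a database) is the only part that changes.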


  • sga@piefed.social to Science Memes@mander.xyz · HD 137010 b · 9 days ago

    adding to this comment: the best way we currently know to extract this energy is using spinning black holes, with a theoretical efficiency of ~42% (the answer to the universe) (src: a MinutePhysics video precisely on this). the naive solution of just touching them gets something like 0.01-0.1% of the total energy, so in the bad case we would need a trillion years.
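a quick back-of-envelope check on the gap between those two efficiencies. the 1 kg of infalling matter is a made-up illustrative number; only the percentages come from the comment above.

```python
# Compare the ~42% spinning-black-hole efficiency against the naive
# ~0.1% figure, using E = efficiency * m * c^2.

C = 299_792_458.0  # speed of light, m/s

def extracted_energy(mass_kg: float, efficiency: float) -> float:
    """Energy (J) recovered from rest mass at a given extraction efficiency."""
    return efficiency * mass_kg * C ** 2

mass = 1.0  # kg of infalling matter (illustrative)
spinning = extracted_energy(mass, 0.42)  # ~42%, maximally spinning hole
naive = extracted_energy(mass, 0.001)    # ~0.1%, the naive approach

ratio = spinning / naive
print(f"spinning route yields {ratio:.0f}x more energy per kg")
```

the factor of a few hundred is exactly why the timescale drops from "trillion years" territory to something merely absurd.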




  • further clarification - ollama is a distribution of llama.cpp (and it is a bit commercial in some sense). basically, in ye olde days of 2023-24 (decades ago in llm space, as they say), llama.cpp was a server/cli-only thing. it would provide output in the terminal (that is how i used to use it back then), or via an api (an openai-compatible one, so if you used openai stuff before, you could easily swap over). many people wanted a gui (a web-based chat interface), so ollama back then was a wrapper around llama.cpp (there were several others, but ollama was relatively mainstream). then, as time progressed, ollama "allegedly enshittified", while llama.cpp kept getting features: a web ui, the ability to swap models at runtime (back then that required a separate llama-swap), etc. the llama.cpp stack is also a bit "lighter" (not really - they are both web tech, so as light as js can get) and first-party(ish - the interface was done by the community, but it lives in the same git repo), so more and more local llama folk kept switching to a llama.cpp-only setup (you could use llama.cpp with ollama, but at that point ollama was just a web ui, and not a great one; some people preferred comfyui, etc). some old timers (like me) never even tried ollama, as plain llama.cpp was sufficient for us.
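to make the "openai-compatible, so you can easily swap over" point concrete: a request body for llama.cpp's server looks identical to one for openai. this sketch only builds the payload; the host/port are assumptions about a local server, and nothing is actually sent.

```python
# Sketch of a chat request against llama.cpp's OpenAI-compatible
# /v1/chat/completions endpoint. The endpoint address is an assumption
# (wherever you started llama-server); the payload shape is the
# interchangeable part.
import json

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed local server

payload = {
    "model": "local",  # llama.cpp serves whichever model it was started with
    "messages": [
        {"role": "user", "content": "Say hello in one word."},
    ],
    "temperature": 0.7,
}

body = json.dumps(payload)
print(ENDPOINT)
print(body)
```

any openai client library pointed at a different base url would produce the same bytes, which is the whole appeal of the compatible api.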

    as the above commenter said, you can do very fancy things with llama.cpp. the best thing about it is that it works with both cpu and gpu - you can use both simultaneously, as opposed to vllm or transformers, where you almost always need a gpu. this simultaneous use is called offloading: some of the layers are dumped in system memory as opposed to vram, hence the vram-poor population used ram (this also kind of led to ram inflation, but do not blame llama.cpp for it, blame people). you can do some of this through ollama (as ollama is a wrapper around llama.cpp), but that requires the ollama folks to keep their fork up to date with the parent, as well as expose the said features in their ui.
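a rough sketch of what the offloading split looks like in practice. the model size, layer count, and vram figure below are made up for illustration; llama.cpp's actual knob for this is `-ngl` / `--n-gpu-layers`, and real vram use also includes the kv cache and compute buffers, which this ignores.

```python
# Estimate how many transformer layers fit on the gpu, with the
# remainder offloaded to system ram. Assumes layers are roughly equal
# in size, which is close enough for a first guess.

def gpu_layers(model_gb: float, n_layers: int, vram_gb: float) -> int:
    """Number of layers that fit in the given vram budget."""
    per_layer = model_gb / n_layers
    return min(n_layers, int(vram_gb // per_layer))

n = gpu_layers(model_gb=13.0, n_layers=40, vram_gb=8.0)
print(f"pass -ngl {n}; the remaining {40 - n} layers stay in system ram")
```

in practice people just nudge the number up until they hit an out-of-memory error, but the arithmetic above is the idea behind it.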



  • if that is the case, then it is great. I personally am a rust fan, and use a smithay-based wm (niri), and that is basically a single-person project, but with active community support. XFCE can pull more manpower, but it still feels like wasted effort. if the language alone were the reason, they could have considered the cosmic wm. it is heavier than xfce needs, but they would probably have had an easier time.






  • well, i seemingly have a very different viewpoint, because the most interesting bits of economics are econometrics - essentially data science, the same things all the other stem folks use: finding the underlying distribution, estimators, their significance, finding the p value. using this to model the whole world is just as wrong as saying all of chemistry is solved by the mendeleev periodic table. surely it works and explains some stuff, but knowing it alone does not predict all of chemistry. in the same way, for example, the IS-LM model (supply-demand curves) does not explain the whole world, and good economists do not claim it can (sorry for using bad examples, i only took 2-3 eco courses).
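the econometrics workflow mentioned above (estimator, significance, p value) fits in a few lines of stdlib python. the data here is made up, and the p value uses a normal approximation to the t distribution, which is crude for a sample this small but shows the mechanics.

```python
# Simple OLS regression by hand: fit a slope, compute its standard
# error, then test significance with a t statistic. Pure stdlib;
# the data is illustrative (roughly y = 2x plus noise).
import math

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.2, 15.9]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx
intercept = my - slope * mx

# residual variance -> standard error of the slope
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in resid) / (n - 2)
se = math.sqrt(s2 / sxx)

t = slope / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))  # normal approx

print(f"slope={slope:.3f}  t={t:.1f}  p≈{p:.2g}")
```

a real analysis would use the proper t distribution and check the regression's assumptions, which is exactly the part that separates econometrics from curve-fitting.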