  • 0 Posts
  • 42 Comments
  Joined 2 years ago · Cake day: June 17th, 2023


  • nous@programming.dev to Linux@lemmy.ml · Getting used to Helix · 12 days ago

    IMO the best thing is to just start using it - you will pick things up fairly quickly that way. Puzzles often don’t ingrain different ways to do things, and tend to focus on weird or niche features that don’t come up as often. They can be a nice supplement to, but not a substitute for, using it in real-world use cases.

    I do also find it helpful to read the shortcut keys on their site to get a feel for what is available. You won’t remember everything but it can be useful to know what is possible. Then when you hit a problem you may remember reading about something that can help and go look it up again.


  • Just don’t format the drive when installing a new distro. BTRFS or not, you can delete the system folders manually first if needed, but I believe some if not all distros will delete the system folders for you (at least Ubuntu used to do this last I tried). And if not, you can do it manually.

    It does not matter whether you have a separate partition for /home or not - installers won’t touch it if it already exists, except to create a new user if needed. Remember, all the installers do is optionally format the drives, mount them, then install files onto them. If you skip the formatting and do the partitioning manually (or use an existing partition layout), the installer will still mount and write to the same places regardless of whether they are separate partitions. So a separate partition does not add any extra protection for your home files at all.

    But regardless of what you do, you should ALWAYS back up your home data anyway. Even with separate partitions or subvolumes, the installer can touch or delete anything it wants, and you can easily click the wrong button or accidentally wipe things. At most, preserving your home saves you from having to restore from a backup; it should not be done instead of a backup.



  • “By far the most important thing is consistency”

    This is not true. The most important thing is correctness: the code should do what you expect/want it to do. This is followed closely by maintainability: the code should be easy to read and modify. These are the two most important aspects, and I believe all other rules or methodologies out there are in service of these two things - normally of the maintainability side, since correctness is not that hard to achieve with any system of rules out there.

    “You must resist the urge to make your little corner of the code base nicer than the rest of it.”

    Ugh. I really don’t like these words. I agree with their sentiment, to a degree, but they make it sound like you should not try to improve anything at all - just leave it as it is and write your new code in the same old crappy way it always has been. Which is terrible advice. But I get what they are trying to say: you should not jump into an area swinging a wrecking ball, trying to make the code as locally nice as possible at the expense of the rest of the code base and the development practices around it.

    In reality there is a middle ground. You should strive to make your corner of the code base as nice as possible, but you should also take the rest of the code base and current practices into account. Sometimes slightly better local maintainability is not worth the cost of making the code base as a whole less maintainable. Sometimes a big improvement to local maintainability is worth a minor inconvenience to the code base as a whole - especially in fast-moving parts of it. You don’t want something that no one has touched in 10 years to drastically slow down the features you are working on now just to keep things consistent.

    Yes, consistency is important. But things are far more nuanced than that statement alone. You should strive for consistency across a code base - it does, after all, have a big effect on the maintainability of the code base. But there are times when it hampers maintainability instead, and in those situations always go for maintainability over consistency.

    Say, for instance, some new library or an update to a library introduces a new, much better way of working, but your code base is full of the old way. Should you stick to the old way just to keep consistency? If the improvement is good enough, it might be worthwhile not to. Ideally you would go through and update the whole code base to the new way of working - that way you improve things overall and keep the code base consistent. But that is not always practical. It might be better to decide that the new way is worth switching to for new code, and worth refactoring old code into when you are working in that area anyway, but not worth the effort of converting the whole code base at once. This improves the maintainability of the new code at the expense of older, less used code.

    But the new way might not be a big enough jump in the maintainability of new code to be worth sacrificing the maintainability of the code base as a whole. Every situation like this needs to be discussed with your team, and you need to decide what makes the most sense for your project. But the answer is not always that consistency is the most important aspect, even if it is an important one.


  • Consistency as a means to correctness still means correctness is the more important aspect. Far too many projects and people go hard on some methodology or practice, lose sight of their main goal, and start focusing on the methods instead - even to the point where the methods are no longer working toward the goal they were originally meant to accomplish.

    Always keep the goal in mind; once your practices start to interfere with that goal, it is time to rethink them.


  • There is no problem with having home on a different disk. But why do you want swap on the slower disk? It would benefit from being on the faster one, as would all the system binaries.

    Personally I would put as much as possible on the faster disk and mount the slower one somewhere the speed matters less - such as for photos/videos in your home dir.

    /boot can be anywhere, though. If you are getting a GRUB error, that suggests the UEFI firmware is finding GRUB’s first stage but GRUB is having issues after that. Personally I don’t use GRUB anymore; systemd-boot is far simpler, as it does not need to deal with legacy MBR booting.


  • My point is that the different levels of “just working” are subjective, not objective. I personally have spent far more time fixing bugs on or outright reinstalling Ubuntu systems than I have over the same period on Arch systems. So many of my Ubuntu installs just ended up breaking after a while, whereas I have had the same Arch install on some systems for 5+ years now. I could never get an Ubuntu system to last more than a year.

    Everyone has different stories about the different OSs. It is all subjective.


  • nous@programming.dev to Linux@lemmy.ml · Windows doesn’t “just work” · edited · 2 months ago

    You can cherry-pick examples of problems from every OS. That is my point. They all have issues that you may or may not encounter, and quite a few that would make people from other OSs scratch their heads and wonder what the hell the devs were thinking. Pointing out one issue with one OS does not change any of that.

    Which is proven by the other replies to your comment - others don’t find this issue as show-stopping as you do, and either live with it or don’t use the feature at all. How many issues do you do the same for on your favorite OS?


  • There is no perfect OS that just works for everyone. They are all software, so they all have bugs. People who say an OS “just works” have never hit those bugs, or have gotten used to fixing, working around, or flat-out ignoring them.

    This is true of all OSs, including Windows, Linux and macOS. They are all differently buggy messes.

    Linux is the buggy mess that works best for me though.


  • “How risky is it for Google scanning those mails in terms of privacy?”

    Afraid to tell you, but Google already scans thousands of emails whether you use Proton or not. The company you are sending mail to likely uses Gmail internally. It does not matter how private your end is if the other end is wide open.

    Though I am not convinced that anyone would care if you use a non-Gmail account for any technical role. Hell, add a custom domain to Proton and you can hide the fact that you are using Proton and create an even more professional-looking address.


  • Realtime is important on fully fledged workstations where timing is critical - which is the case for a lot of professional audio workloads. Linux is now another option for people in that space.

    Not sure Linux can run on microcontrollers; those tend not to be very powerful and run simple OSs, if they have any OS at all. Though this might help the embedded world a bit by increasing the number of things you can do with devices that have a full system-on-chip (like the Raspberry Pi).


  • Don’t ignore the responses. If you abuse the API too much there is a chance it will just block you permanently, and it is generally seen as not very nice - it takes resources on both ends to process even an error response.

    The ratelimit crate is an OK solution for this and simple enough to include in your code, but it can create a mismatch between your code and the API: if they ever change the limits, you will need to adjust your program.

    A proxy solution seems overly complex in terms of infra to set up and maintain, so I would avoid that.

    A better solution can depend on the API. Quite often they send back the request quota you have left, either on every request or when you exceed the rate limit. You can build your client for the API (or create a wrapper, if you have not done so already) to understand these values and back off when the limits are reached or nearly reached.
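    As a rough sketch of that idea (the header names here are hypothetical stand-ins - real APIs document their own names and formats - and a plain HashMap stands in for whatever header type your HTTP library uses):

    ```rust
    use std::collections::HashMap;
    use std::time::Duration;

    /// Quota information extracted from a response, if the API provides it.
    struct Quota {
        remaining: u64,        // requests left in the current window
        reset_after: Duration, // how long until the window resets
    }

    /// Parse hypothetical rate-limit headers; adjust the names to match
    /// whatever the API you are calling actually sends back.
    fn parse_quota(headers: &HashMap<String, String>) -> Option<Quota> {
        let remaining = headers.get("x-ratelimit-remaining")?.parse().ok()?;
        let reset_secs: u64 = headers.get("x-ratelimit-reset")?.parse().ok()?;
        Some(Quota {
            remaining,
            reset_after: Duration::from_secs(reset_secs),
        })
    }

    /// Decide whether the client should pause before its next request.
    fn backoff_needed(quota: &Quota) -> Option<Duration> {
        // Back off when close to the limit, not only once it is exceeded.
        if quota.remaining <= 1 {
            Some(quota.reset_after)
        } else {
            None
        }
    }

    fn main() {
        let headers = HashMap::from([
            ("x-ratelimit-remaining".to_string(), "1".to_string()),
            ("x-ratelimit-reset".to_string(), "30".to_string()),
        ]);
        if let Some(q) = parse_quota(&headers) {
            if let Some(wait) = backoff_needed(&q) {
                println!("nearly out of quota - waiting {wait:?} before the next request");
            }
        }
    }
    ```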

    Otherwise there are various things you can do depending on how complex their rate-limit rules are. The ratelimit crate is probably good for the more complex cases, but if the API’s rate limiting is quite simple you can just delay all requests for a while.

    You can also use an exponential backoff algorithm if you are not sure what the rules are at all: basically, retry with an exponentially increasing delay until you get a successful response, with an upper limit on the delay. This is also a great all-round way to stop your systems from hammering the API if it ever hits a different problem or goes down for some reason. It is not the best option, though, if you have more information about how long you should be waiting.
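    For instance, here is a minimal sketch of such a backoff loop using only the standard library (send_request is a hypothetical stand-in for your actual API call; a production version would usually also add random jitter to the delay):

    ```rust
    use std::thread::sleep;
    use std::time::Duration;

    /// Hypothetical request function: Ok on success, Err(true) when the
    /// failure looks retryable (e.g. rate-limited or a server error),
    /// Err(false) otherwise.
    fn send_request() -> Result<String, bool> {
        Err(true) // placeholder - a real version would perform the HTTP call
    }

    fn request_with_backoff(max_retries: u32) -> Option<String> {
        let mut delay = Duration::from_millis(100); // initial delay
        let max_delay = Duration::from_secs(30);    // upper limit on the delay

        for _ in 0..=max_retries {
            match send_request() {
                Ok(body) => return Some(body),
                Err(true) => {
                    // Wait, then double the delay for the next attempt,
                    // capped at max_delay.
                    sleep(delay);
                    delay = (delay * 2).min(max_delay);
                }
                Err(false) => return None, // non-retryable - give up now
            }
        }
        None // exhausted the retries
    }

    fn main() {
        match request_with_backoff(3) {
            Some(body) => println!("success: {body}"),
            None => eprintln!("request failed after retries"),
        }
    }
    ```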


  • I disagree; it is more than just a nitpick. Saying black holes suck things in implies that they are doing something different from any other mass, which they are not. Would you say a star sucks in the stuff around it? Or a planet? Or a moon? No - that sounds absurd. It makes it sound like black holes are doing something different from everything else, which is misleading at best. The way things are described matters, as it paints a very different picture for the layman.



  • IMO it is clickbaity because it promises to compare Rust and Zig, but in reality it is just comparing unsafe Rust and Zig. All it really needed to not be clickbait was the word unsafe in the title, as they use everywhere else in the article. That is fundamentally my only problem with it. I do agree with your other points about knowing when a tool is or is not a good fit, but I would have been much less likely to click on it if it had mentioned unsafe in the title - I already know Rust’s unsafe is not the best, and I was expecting arguments for the advantages of Zig over safe Rust, i.e. what most people actually write, not a small subset of the language.


  • Title is clickbait. They only talk about unsafe Rust, and I can see Zig being safer/easier than unsafe Rust. But 99.9% of the code I write is safe Rust - which most people just call Rust. Even the author calls out the vast difference between writing safe and unsafe Rust:

    “Overall, the experience was not great, especially compared to the joy of writing regular safe Rust.”


    “Then I would re-write the project in Zig to see if it would be easier/better.”

    Of course it will be. The second time you write any project it will be easier and faster, as you learn a lot from the first time you write something. If Zig is always the rewrite, it will come off better. Almost all rewrites are better or faster, even when moving to a slower language - the language makes a difference to performance and ease of writing, but how you write things and the data structures/algorithms you use matter far more.

    Overall they seem to want to write as much unsafe as they can, and are writing Rust like it is C. That is not a good idea, and it is why Zig is better suited to what they are doing. But you can write a VM without large amounts of unsafe if you want to, and it can be performant. Unsafe can be reserved for the small parts where performance matters and cannot be achieved without it (though I find this is not that common).


  • “And how did you, advanced Linux user, get to the stage you’re at now?”

    Incrementally, over time: by reading the documentation and/or manuals of the commands I need to run, and by looking at how others solve the problems I face to get more ideas (even, periodically, for things I already know how to do, to see if anyone has found a better way or a new tool has come out that helps). And by trying things out and experimenting with different approaches to find out what works well and what does not.


  • nous@programming.dev to Linux@lemmy.ml · How to distrohop!? · 4 months ago

    Huh? You seem to be arguing both ways: if the system drive is full you have problems well before you risk losing data, and if the home drive is full you have problems saving data? Both of these things can happen in a split-partition or single-partition setup. A split partition just means you have to get the sizes right up front, or end up with lengthy resize operations to juggle the space around. A single partition gives you more places to free up space when you do run out.

    Need to save a file but the disk is full? Clean out the package manager cache - which you cannot do if the partitions are separate. An update does not have enough space? Delete a Steam game or clear out your downloads folder.

    Ext also has a reserved-space option: when free space drops below that threshold, the filesystem refuses writes from anyone but the root user. It is meant to solve the problem of a user taking up too much space - there is always a reserved chunk that the system can use for what it needs. Though I have never seen this configured correctly on a running system, and root can blast past the default 5% on smaller drives with a simple update, or some other process running as root is already consuming that space.

    Other filesystems like btrfs have proper quotas that can be set per directory or user to prevent this kind of issue, and they give you a lot more control over the allocated space without needing to reboot into a live USB to resize partitions.

    People seem to think a split partition helps, but I have generally found it causes more problems than it solves, and there are now better tools that solve these problems in more elegant ways.


  • nous@programming.dev to Linux@lemmy.ml · How to distrohop!? · 4 months ago

    You don’t actually require a separate partition - you just need to not reformat the current one when reinstalling. Most distros I have seen will delete system folders if you don’t format but will always leave the home folder intact. Manually deleting the system folders is also an option if the installer does not.

    TBH I am not sure a separate partition actually buys you anything but false confidence (which we do sometimes need ;) ). During the partitioning phase you can easily delete or format the wrong one (hell, if you only have one partition, it is less error-prone to skip that step altogether). And after that step the drives are mounted, and there is nothing protecting your files from the installer deleting them. It is just that installers don’t touch the home folder, or anything other than the system folders, whether everything is on one partition or 50 different ones - the installer just sees the files in the directory it wants to install to. The only way a separate partition would add protection is if it were mounted after the install - and I do not know of any installer that actually does that.

    As with anything: ALWAYS back up the data you care about before installing a new OS. A separate partition does NOT protect your data from deletion in any way. Leaving your home folder intact is simply a convenience so you don’t have to restore all your files after the installation - not a replacement for a backup.