  • Your $1 has absolutely changed in value by 10pm. What do you think inflation is? It might not be enough of a change for the store to bother updating prices, but the value changes constantly.

    Watch the foreign exchange markets, your $1 is changing in value compared to every other currency constantly.

    The only difference between fiat and crypto is that with fiat, changing the prices in the store takes real effort, and the volume of trade is high enough to damp the volatility in the value of your $. There are plenty of cases of hyperinflation in history where stores had to change prices on a daily basis, so fiat is not immune to volatility either.

    To prevent that volatility we have things like the Federal Reserve, debt limits, and federal regulations that are designed to keep you, the investor (money holder), happy keeping that money in dollars instead of assets. The value is somewhat stable as long as the government is solvent.

    Crypto doesn’t have those external controls; instead it has internal controls, e.g. mining difficulty, which from a user’s perspective is better because the currency can’t be printed at will by the government.

    Long story short, fiat is no different from crypto: there is no real tangible value, so the value is whatever people think it is. Unfortunately crypto’s value is driven more by speculative “investors” than by actual trade demand, which makes it more volatile. If enough of the world switched to crypto it would be just as stable as your $.

    I’m not saying crypto is a good thing, just that it isn’t inherently any better or worse. It needs daily usage for real trade by a large portion of the population to reduce the volatility, instead of just being used to gamble against the dollar.

    Our governments would likely never let that happen though; they can’t give up their ability to print money. It’s far easier to keep getting elected when you print the cash to operate the government than it is to raise taxes to pay for the things it needs.

    The absolutely worthless meme coin scams/forks/etc. are just scammers and gamblers trying to rip each other off. They make any sort of useful critical mass of trade less and less plausible because they give all crypto a bad name. Not that Bitcoin/Ethereum started out any different, but now that enough people are using them, splitting your user base is just self-defeating.



  • Nope, the Switch only keeps saves on internal storage, or synced to Nintendo’s cloud if you pay for it. When doing transfers between devices like this there is no copy option, only a move-and-delete.

    There are some legitimate reasons they want to prevent this, like stopping users from duplicating items in multiplayer games. Even if you got access to the files, they are encrypted so that only your user account can use them.

    I think the bigger reason they do this is that there are occasionally exploits delivered through corrupted saves. Preventing the user from importing their own saves helps protect the Switch from getting soft-modded.

    If you mod your Switch you can get access to the save files, and since the modded system has full access it can also decrypt them so you can back them up. That’s one of several legitimate reasons to mod your Switch.



  • Named volumes are often the default because there is no chance of them conflicting with other services or containers running on the system.

    Say you deployed two different Docker Compose apps, each with its own MariaDB. With named volumes there is zero chance of those conflicting (at least from the filesystem perspective).

    This also makes cleanup easier. The app’s documentation can just say “docker compose down -v” and you are done, instead of listing a bunch of directories that need to be cleaned up.

    Those lingering directories can also cause problems for users who wanted a clean start after their app broke; with a bind mount, that broken database schema won’t have been deleted for them when they start the services back up.

    All that said, I very much agree that when you go to deploy a Docker service you should consider changing the named volumes to standard bind mounts, for a few reasons.

    • When running production applications I don’t want the volumes to be so easy to clean up. A little extra protection from accidental deletion is handy.

    • The default location for named volumes doesn’t work well with any advanced partitioning strategy, e.g. if you want your database volume on a different partition than your static web content.

    • An older reason, and maybe more user preference at this point: back before Docker’s overlay2 storage driver had matured we used the btrfs driver instead, and occasionally Docker would break and we would need to wipe out the entire /var/lib/docker btrfs filesystem. So I personally want to keep anything persistent out of that directory.

    So basically application writers should use named volumes to simplify the documentation/installation/maintenance/cleanup of their applications.

    Systems administrators running those applications should understand the compose file well enough to change those settings and make them production-ready for their environment. Reading through it and making those changes ends up being part of learning how the containers are structured in the first place.
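
    To make that concrete, here is a minimal sketch of the same MariaDB service both ways (service/volume names and paths are made up for illustration):

    ```sh
    cat > docker-compose.yml <<'EOF'
    services:
      db:
        image: mariadb:11
        volumes:
          - db-data:/var/lib/mysql        # named volume: removed by `docker compose down -v`
          # - /srv/app1/db:/var/lib/mysql # bind mount: survives `down -v`, you pick the partition
    volumes:
      db-data:
    EOF
    docker compose up -d     # the named volume is created automatically
    docker compose down -v   # ...and cleaned up along with the containers
    ```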


  • For shared lines like cable and wireless it is often asymmetrical so that everyone gets better speeds, not so they can hold you back.

    Take a wireless service provider, for instance: say you have 20 customers on a single access point. Like a walkie-talkie, a radio can’t transmit and receive at the same time, and no two customers can transmit at the same time either.

    So to get around this problem TDMA (time division multiple access) is used. Basically time is split into slices and each user is given a certain percentage of those slices.

    Since the AP is transmitting to everyone it usually gets the bulk of the slices, like 60+%. This is the shared download speed for everyone on the network.

    Most users don’t upload much, so giving the user radios slices equal to the AP’s would be a massive waste of airtime. And since there are 20 customers on this theoretical AP, every 1 Mbit cut from each user’s upload speed is 20 Mbit added to the total download capacity for anyone downloading on that AP.

    So let’s say we have APs/clients capable of 1000 Mbit. With 20 users and 1 AP, if we wanted symmetrical speeds we would need 40 equal slots: 20 slots on the AP (one for each user’s downloads) and 1 upload slot for each of the 20 users. Every user gets 25 Mbit download and 25 Mbit upload.

    Contrast that with asymmetrical. Let’s say we do an 80/20 AP/client airtime split. We end up with 800 Mbit of download shared amongst everyone and 10 Mbit of upload per user.

    In the worst-case scenario every user is downloading at the same time, meaning each gets about 40 Mbit of that 800. That is still quite an improvement over 25 Mbit, and if some of those people aren’t home or aren’t active at the time, that means that much more for those who are.
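
    A quick shell sketch of that arithmetic, using the numbers from this example:

    ```sh
    link=1000; users=20   # 1000 Mbit link, 1 AP, 20 clients
    # symmetrical: 40 equal TDMA slots (20 AP download slots + 20 client upload slots)
    echo "symmetric: $((link / (2 * users))) Mbit down and up per user"   # 25
    # asymmetrical 80/20 airtime split: download shared, upload divided per user
    echo "asymmetric: $((link * 80 / 100)) Mbit shared down"              # 800
    echo "asymmetric: $((link * 20 / 100 / users)) Mbit up per user"      # 10
    ```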

    I think the size of the slices is more dynamic on modern systems, where the AP adjusts the user radios’ slices on the fly so that idle clients don’t hold a bunch of dead air, but they still need a little time allocated to them for when data does start to flow.

    A quick Google shows that DOCSIS cable modems use TDMA too, so all of this likely applies to cable users as well.


  • I am assuming this is the LVM volume that Ubuntu creates if you selected the LVM option when installing.

    Think of LVM as a simpler, more flexible version of RAID0. It isn’t there to offer redundancy, but it can make multiple disks aggregate their storage/performance into a single block device. It doesn’t have all of the performance benefits of RAID0, particularly with sequential reads, but in the case of a file server with multiple active users it can probably perform even better than a RAID0 volume would.

    The first thing to do is look at what volume groups you have. A volume group is one or more drives that create a pool of storage from which we can allocate space to create logical volumes. Run vgdisplay and you will get a summary of all of your volume groups. If you see a lot of storage available in the ‘Free PE / Size’ line (PE means physical extents), that means you have storage in the pool that hasn’t been allocated to a logical volume yet.

    If you have a set of OS disks and a separate set of storage disks, it is probably a good idea to create a separate volume group for your storage disks instead of combining them with the OS disks. This keeps the OS and your storage separate so that it is easier to do things like rebuilding the OS or migrating to new hardware. If you have enough storage to keep your data volumes separate, you should consider ZFS or btrfs for those volumes instead of LVM; ZFS/btrfs have a lot of extra features that can protect your data.

    If you don’t have free space then you might be missing additional drives that you want added to the pool. You can list all of the physical volumes which have been formatted for use with LVM by running the pvs command. The pvs command shows you each formatted drive and whether it is associated with a volume group. If you have additional drives that you want to add to your volume group, run pvcreate /dev/yourvolume to format them.

    Once the new drives have been formatted they need to be added to the volume group. Run vgextend volumegroupname /dev/yourvolume to add the new physical device to your volume group. You should re-run vgdisplay afterwards and verify the new physical extents have been added.

    If you are looking to have redundancy in this storage you would usually build an mdadm array and then run pvcreate on the volume created by mdadm. LVM is usually not used to give you redundancy; other tools are better for that. Typically LVM is used for pooling storage, snapshots, carving multiple volumes out of a large device, etc.

    So one way or another your additional space should be in the volume group now; however, that doesn’t make it usable by the OS yet. On top of the volume group we create logical volumes. These are virtual block devices made up of physical extents on the physical disks. If you run lvdisplay you will see a list of the logical volumes created by the Ubuntu installer, which by default is probably just one.

    You can create new logical volumes with the lvcreate command, or extend the volume that is already there with lvextend/lvresize. I see other posts have already explained those commands in more detail.

    Once you have extended the logical volume (the virtual block device) you have to extend the filesystem on top of it. That procedure depends on what filesystem you are using on your logical volume: likely resize2fs for Ubuntu’s default ext4, or xfs_growfs if you are on XFS.
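
    Putting the whole procedure together, a sketch using the Ubuntu installer’s default names (ubuntu-vg/ubuntu-lv; substitute your own from vgdisplay/lvdisplay, and /dev/sdb is just an example disk):

    ```sh
    sudo pvcreate /dev/sdb                    # format the new disk for LVM
    sudo vgextend ubuntu-vg /dev/sdb          # add it to the existing volume group
    sudo vgdisplay ubuntu-vg                  # confirm 'Free PE / Size' grew
    sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # give the LV all free space
    sudo resize2fs /dev/ubuntu-vg/ubuntu-lv   # grow the ext4 filesystem to match
    ```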



  • The point is that you can still treat it like a physical game, so there are upsides: you can lend it to your friends or resell it.

    If it is a game that gets updated often or requires updates to even play (multiplayer games), then having the game data on the card is next to worthless anyways, and it just makes publishing the game more difficult because they can’t start manufacturing the cards until the game is 100% ready.

    Nintendo’s audience goes for physical much more than the other consoles’: swapping cards is much easier than dealing with family sharing, a lot of their adult users collect games, and Nintendo games generally hold their value, so being able to resell is important. This is a compromise between what their users want and what modern game development needs.

    It is a slippery slope for sure if they start doing the same with single-player games, but there are valid reasons for them to do this, and the alternative is that they just force everyone to download all of their games, which is even worse. The MIG Switch would never have been an issue for them if there just weren’t game card slots to begin with.

    Of course end users should assume the store is going to get shut down someday and their games will be inaccessible at that point. Nintendo needs to shut down those stores so that a couple of generations later they can sell everyone the same games for the second/third/fourth time.


  • Sales taxes are state/city-level taxes; there are no federal sales taxes (yet). But he is essentially using the tariffs as a way to enact a sales tax without officially adding one.

    With the tariffs he can add a massive tax on the people, which Republicans would normally be very much against, but he can say it is about being pro-American and most of them forget about all of the extra money they will be paying.

    This shifts the tax burden further onto middle/lower-income households and lets him give more income tax cuts to higher earners without increasing the deficit so much that Congress would turn on him.

    The Republicans have actually been talking about this for a long time; they called it the “fair tax”. Their fair tax plan was basically a flat ~23% federal sales tax that would replace income tax, but they could never get their base behind it.

    Someone on Trump’s team realized that we buy so much from other countries that he could accomplish the same thing the fair tax aimed to do via tariffs, while selling them to his party as “buy American”. His lower/middle-income base eats that up, and his campaign donors see it as killing their overseas competition.

    If it weren’t for the other countries reciprocating it would have been a good plan for them.



  • Depending on how you set up your reverse proxy, it can reduce random scanning/login attempts to basically zero. The point of a reverse proxy is to act as a proxy, a sort of web router, and to validate that the HTTP requests are correctly formatted.

    For the routing: depending on what DNS name/path the request comes in with, it can route to different backends. So you can say that app1.yourdomain.com routes to the internal IP address of app1, and app2.yourdomain.com goes to app2. You can also do this with paths, like yourdomain.com/app1, if the applications can handle it.

    When your client makes a request the reverse proxy uses the “Host” header or the SNI string that is part of the TLS connection to determine what certificate to use and what application to route to.

    There is usually a “default” backend for any request that doesn’t match any of the names of your backend services (like a scanner blindly trying to access your IP). If you disable the default backend, or redirect default requests to something you know is secure, any attacker scanning your IP for vulnerabilities gets their requests rejected. The only way they can even try to hit your service is to know its correct DNS name.

    Some reverse proxies (Traefik, HAProxy) have options to reject requests before the TLS negotiation has even completed. If the SNI string doesn’t match, the connection is simply dropped; it doesn’t even bother to send a 404/5xx error. This can prevent an attacker from gathering information about the reverse proxy itself that might be helpful in attacking it.
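
    As an example, a minimal nginx sketch of that idea (hostnames, IPs, and cert paths are placeholders); nginx’s equivalent of the SNI rejection is ssl_reject_handshake:

    ```sh
    cat > /etc/nginx/conf.d/apps.conf <<'EOF'
    # catch-all: refuse anything that doesn't match a real hostname
    server {
        listen 443 ssl default_server;
        ssl_reject_handshake on;   # drop TLS for unknown SNI (nginx 1.19.4+)
    }
    # only requests that know the right name get routed to the app
    server {
        listen 443 ssl;
        server_name app1.yourdomain.com;
        ssl_certificate     /etc/nginx/certs/app1.crt;
        ssl_certificate_key /etc/nginx/certs/app1.key;
        location / { proxy_pass http://10.0.0.10:8080; }   # internal IP of app1
    }
    EOF
    ```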

    This is security by obscurity which isn’t really security, but it does reduce your risk because it significantly reduces the chances of an attacker being able to find your applications.

    Reverse proxies also have a much narrower scope than most applications. Your services are running a web server with your application, but is Jellyfin’s built-in web server secure? Could an attacker send invalid data in headers/requests to trigger a buffer overflow? A reverse proxy often does a much better job of preventing those kinds of attacks, rejecting invalid requests before they ever reach your application.


  • Btrfs is a copy-on-write (COW) filesystem, which means that when you modify a file it can’t be modified in place. Instead a new block is written, and then a single atomic operation flips the filesystem to point at the new block as the location of that data.

    This is a really good thing for protecting your data from things like power outages or system crashes, because the data on disk is always in a good state. Either the update happened or it didn’t; there is never any in-between.

    While COW is good for data integrity it isn’t always good for speed. If you are doing lots of updates smaller than a block, you first have to read the rest of the block, then seek to the new location and write out the new block. On SSDs this isn’t an issue, but on HDDs it can slow things down and fragment your filesystem considerably.

    Btrfs has a defragmentation utility though, so fragmentation is a fixable problem. If you were using ZFS there would be no way to reverse that fragmentation.

    Other filesystems like ext4/xfs are “journaling” filesystems. Instead of writing new blocks or updating each block immediately they keep the changes in memory and write them to a “journal” on the disk. When there is time those changes from the journal are flushed to the disk to make the actual changes happen. Writing the journal to disk is a sequential operation making it more efficient on HDDs. In the event that the system crashes the filesystem replays the journal to get back to the latest state.

    ZFS has a journal equivalent called the ZFS Intent Log (ZIL). You can put the ZIL on fast SSDs while the data itself is on your HDDs. This also helps with the fragmentation issues for ZFS, because ZFS writes incoming synchronous writes to the ZIL and then flushes them to disk every few seconds. This means fewer, larger writes to the HDDs.
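
    For example (mount point, pool, and device names here are illustrative):

    ```sh
    # btrfs: fragmentation is fixable after the fact
    sudo btrfs filesystem defragment -r /data
    # ZFS: can't defragment, but a dedicated fast log device (SLOG) absorbs
    # incoming synchronous writes before they're flushed to the HDDs
    sudo zpool add tank log /dev/nvme0n1
    ```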

    Another downside of COW is that, because the filesystem is assumed to be so good at preventing corruption, in some extremely rare cases where corruption does get written to disk you might lose the entire filesystem. There are lots of checks in software to prevent that from happening, but occasionally hardware issues may let corruption past.

    This is why anyone running ZFS/btrfs for their NAS is recommended to run ECC memory. A random bit flip in RAM might mean the wrong data gets written out, and if that data is part of the metadata of the filesystem itself, the entire filesystem may be unrecoverable. This is exceedingly rare, but it is a risk.

    Most traditional filesystems on the other hand were built assuming that they had to cleanup corruption from system crashes, etc. So they have fsck tools that can go through and recover as much as possible when that happens.

    Lots of other posts here talk about other features that make btrfs a great choice. If you were running a high-performance database, a journaling filesystem would likely be faster, though maybe not by much, especially on SSDs. But for an end-user system, the snapshots/file checksumming/etc. are far more important than a tiny bit of performance. For the potential corruption issues, if you lack ECC, backups are the proper mitigation (as of DDR5, on-die ECC is in all RAM sticks).


  • Agreed. The nonstandard port helps too. Most script kiddies aren’t going to know your service even exists.

    Take it another step further and remove the default backend on your reverse proxy so that requests to anything but the correct DNS name are dropped (bots are just probing IPs), and you basically don’t have to worry at all. Just make sure to keep your reverse proxy up to date.

    The reverse proxy ends up enabling security through obscurity, which shouldn’t be your only line of defence, but it is an effective first line of defence, especially for anyone who isn’t the target of government-level attacks.

    Adding basic auth to your reverse proxy endpoints extends that a whole lot further. Form-based logins on your apps might be a lot prettier, but it’s a lot harder to probe for what’s running behind your proxy when every single URI just returns 401. I trust my reverse proxy doing basic auth a lot more than I trust some PHP login form.
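
    For example, with nginx (file path and username are examples; htpasswd comes from the apache2-utils package):

    ```sh
    sudo htpasswd -c /etc/nginx/.htpasswd myuser   # create the credentials file
    # then in the server/location block for the app:
    #   auth_basic "restricted";
    #   auth_basic_user_file /etc/nginx/.htpasswd;
    ```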

    I always see posters on Lemmy describing elaborate VPN setups as the only way to access internal services, but that seems like awful overkill to me.

    A VPN is still needed for some things that are inherently insecure or that just should never be exposed to the outside, but if it is a web service with authentication required, a reverse proxy is plenty of security for a home lab.


  • You are paying for reasonably well polished software, which for non technical people makes them a very good choice.

    They have one-click module installs for a lot of the things self-hosting people want to run. If you want Plex, a OneDrive clone, photo sync on your phone, etc., just click a button and they handle installing and most of the maintenance of running that software for you. These are available on other open source NAS appliances now too, so this isn’t much of a differentiator for them anymore, but they were one of the first to do it.

    I use them for their NVR which there are open source alternatives for but they aren’t nearly as polished, user friendly, or feature rich.

    Their backup solution is also reasonably good for some home lab and small business use cases. If you have a VMware lab at home, for instance, it can connect to your vCenter and do incremental backups of your VMs. There is an agent for Windows machines as well, so you can keep laptops/desktops backed up.

    For businesses there are backup options for Office 365/Google Workspace that keep backups of your email/calendar/OneDrive/SharePoint/etc. So there are a lot of capabilities there that aren’t really well covered by open source tools right now.

    I run my own home-built NAS for mass storage, because anything over two drives is way too expensive from Synology and I specifically wanted ZFS, but the two-drive units were priced low enough to buy just for the software. If you want a set-and-forget NAS they were a pretty good solution.

    If their drives are reasonably priced maybe they will still be an okay choice for some people, but we all know the point of this is for them to make more money, so that is unlikely. There are alternatives like QNAP, but unless you specifically need one of their software components, either build it yourself or grab one of the open source NAS distros.


  • For 5 years now I’ve had one of these 3D-printed keys in my wallet as a backup in case I get locked out. I certainly don’t use it often, but yeah, it holds up fine.

    The couple of times I have used it, it worked fine, but you certainly want to be a little extra careful with it. My locks are only 5-ish years old so they all turn rather easily, and I avoid the door with the deadbolt when I use it because that would probably be too much for it.

    Mine is PETG, but at that thickness it flexes a lot. I figured flexing is better than snapping off, but I think PLA or maybe polycarbonate would work better. Nylon would probably be too flexible, like the PETG.


  • If your NAS has enough resources, the happy(ish) medium is to use your NAS as a hypervisor. The NAS can run on the bare hardware or in its own VM, and the containers can have their own VMs as needed.

    Then you don’t have to take down your NAS when you need to reboot your container’s VMs, and you get a little extra security separation between any externally facing services and any potentially sensitive data on the NAS.

    Lots of performance trade offs there, but I tend to want to keep my NAS on more stable OS versions, and then the other workloads can be more bleeding edge/experimental as needed. It is a good mix if you have the resources, and having a hypervisor to test VMs is always useful.


  • If you are just using a self-signed server certificate, anyone can connect to your services. Many browsers/applications will fail to connect or give a warning, but it can be easily bypassed.

    Unless you are talking about mutual TLS authentication (aka mTLS or two-way SSL). With mutual TLS, in addition to the server key+cert you also have a client key+cert, and you set up your web server/reverse proxy to only allow connections from clients that can prove they have that client key.

    So in the context of this thread, mTLS is a great way to protect your externally exposed services. Mutual TLS should be just as strong a protection as a VPN, and in fact many VPNs use mutual TLS to authenticate clients (i.e. if you have an OpenVPN file with certs in it instead of a pre-shared key), so they are doing the exact same thing. Why not skip all of the extra VPN steps and set up mTLS directly to your services?

    mTLS prevents any web requests from getting through before the client has authenticated, but it can be a little complicated to set up. In reality, basic auth at the reverse proxy with a sufficiently strong password is just as good, and is much easier to set up and use.
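
    A minimal sketch of the nginx side (cert paths, hostname, and backend are placeholders); the linked guides below cover generating the client certificates:

    ```sh
    cat > /etc/nginx/conf.d/mtls.conf <<'EOF'
    server {
        listen 443 ssl;
        server_name app.yourdomain.com;
        ssl_certificate        /etc/nginx/certs/server.crt;
        ssl_certificate_key    /etc/nginx/certs/server.key;
        ssl_client_certificate /etc/nginx/certs/client-ca.crt;  # CA that signed your client certs
        ssl_verify_client on;    # reject any client without a valid cert
        location / { proxy_pass http://127.0.0.1:8096; }        # your internal service
    }
    EOF
    ```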

    Here are a couple of relevant links for nginx. Traefik and many other reverse proxies can do the same.

    How To Implement Two Way SSL With Nginx

    Apply Mutual TLS over kubernetes/nginx ingress controller


  • The biggest question is, are you looking for Dolby Vision support?

    There is no open source implementation for Dolby Vision or HDR10+ so if you want to use those formats you are limited to Android/Apple/Amazon streaming boxes.

    If you want to avoid the ads from those devices, then apart from sideloading APKs to replace the home screen or something, the only way to get Dolby Vision with Kodi/standard Linux is to buy a CoreELEC-supported streaming device and flash it with CoreELEC.

    List of supported devices here

    CoreELEC is Kodi based so it limits your player choice, but there are plugins for Plex/Jellyfin if you want to pull from those as back ends.

    Personally I find it a lot easier to just grab the latest-gen Onn 4K Pro from Walmart for $50 and deal with the Google TV ads (I never leave my streaming app anyways). The only downside with the Onn is the lack of Dolby TrueHD/DTS-HD Master Audio output, but it handles AV1 and more Dolby Vision profiles than the Shield does, at a much cheaper price. It also handles HDR10+, which the Shield doesn’t, but that format isn’t nearly as common and many of the big TV brands don’t support it anyways.


  • All of the “snooping” is self-contained. You run the network controller either locally on a PC or on one of their dedicated pieces of hardware (Dream Machine/Cloud Key).

    All of the devices connect directly to your network controller, no cloud connections. You can have devices outside of your network connected to your network controller (layer 3 adoption), but that requires port forwarding so again it is a direct connection to you.

    You can enable cloud access to your network controller’s admin interface which appears to be some sort of reverse tunnel (no port forwarding needed), but it is not required. It does come in handy though.

    As far as what “snooping” there is: there is basic client tracking (IP/MAC/hostnames) to show what is connected to your network. The firewall can track basics like bandwidth/throughput, and you can enable deep packet inspection, which classifies internet destinations (streaming/Amazon/Netflix sorts of categories). I don’t think that classification reaches out to the internet, but that probably needs to be confirmed.

    All of their devices have an SSH service which you can login to and you have pretty wide access to look around the system. Who knows what the binaries are doing though.

    I know some of their WISP (AirMAX) hardware for long distance links has automatic crash reporting built in which is opt out. There is a pop up to let you know when you first login. No mention of that on the normal Unifi hardware, but they might have it running in the background.

    I really like their APs, and having your entire network in the network controller is really nice for visibility, but my preference is to build my own firewall that I have more control over and then use Unifi APs for wireless. If I were concerned about the APs giving out data, I know I could cut that off at the firewall easily.

    A lot of the Unifi APs can have OpenWRT flashed on them, but the latest Wifi7 APs might be too locked down.


  • Like most have said, it is best to stay away from ZFS deduplication. Especially if your data set is media, the chances of an entire ZFS block being identical to any other are small, unless you somehow have multiple copies of the same content.

    Imagine two MP3s with the exact same audio content but slightly different artist metadata. If the metadata is a single bit longer or shorter at the beginning of the file, then even if the file spans multiple blocks ZFS won’t be able to deduplicate a single byte. A single bit offsetting the rest of the file just a little is enough to throw off the block checksums across every block in the file.

    To contrast with ZFS, enterprise backup/NAS appliances with deduplication usually do a lot more than block-level checks. They typically scan with sliding window sizes/offsets to find more duplicate data.

    There are still some use cases where ZFS dedup can help, like multiple full backups of VMs: a VM image has a fixed size, so the offset issue above doesn’t apply. But beware that enabling deduplication for even a single ZFS filesystem affects the entire pool, even ZFS filesystems that have deduplication disabled. The deduplication table is global to the pool, and once you have turned it on you really can’t get rid of it. If you get into a situation where you don’t have enough memory to keep the deduplication table in RAM, ZFS will grind to a halt, and the only way to completely remove deduplication is to copy all of your data to a new ZFS pool.
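
    If you want to test the waters first, ZFS can estimate the cost before you commit (pool/dataset names here are examples):

    ```sh
    zdb -S tank                             # simulate dedup: prints the would-be DDT size/ratio
    sudo zfs set dedup=on tank/vm-backups   # per-dataset switch, but the DDT is pool-wide
    zpool status -D tank                    # inspect the dedup table once it's enabled
    ```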

    If you think this feature would still be useful for you, you might want to wait for the 2.3 release (which isn’t too far off) and its new fast dedup feature, which fixes or at least mitigates a lot of the major issues with ZFS dedup.

    More info on the fast dedup feature here https://github.com/openzfs/zfs/discussions/15896