• NickwithaC@lemmy.world · 3 months ago

    4 gigs of RAM is enough to host many individual projects: your own backup server or a VPN, for instance. It's only when you want to do many things simultaneously that things get slow.

  • dmtalon@infosec.pub · 3 months ago

    I'm sure a lot of people's self-hosting journeys started on junk hardware: "try it out", followed by "oh, this is cool", followed by "omg, I could do this, that, and that", followed by dumping that hand-me-down garbage hardware for something new and shiny bought specifically for the server.

    My unRAID journey was exactly this. I now have a 12-hot-swap-bay rack-mounted case with a multi-core Ryzen 9 and ECC RAM, but it started out as my 'old' PC with a few old/small HDDs.

  • Revv@lemmy.blahaj.zone · 3 months ago

    7 websites, Jellyfin for 6 people, Nextcloud, CRM for work, email server for 3 domains, NAS, and probably some stuff I’ve forgotten on a $4 computer from a tiny thrift store in BFE Kansas. I’d love to upgrade, but I’m always just filled with joy whenever I think of that little guy just chugging along.

      • Revv@lemmy.blahaj.zone · 3 months ago

        EspoCRM. I really like it for my purposes. I manage a CiviCRM instance for another job that needs more customization, but for basic needs, I find espo to be beautiful, simple, and performant.

        • brbposting@sh.itjust.works · 3 months ago

          Sweeeeet thank you! Demo looks great. Now to figure out whether an uber n00ber can self host it in a jiffy or not. 🙏

      • Revv@lemmy.blahaj.zone · 3 months ago

        It does fine. It’s an i5-6500 running CPU transcoding only. Handles 2-3 concurrent 1080p streams just fine. Sometimes there’s a little buffering if there’s transcoding going on. I try to keep my files at 1080p for storage reasons though. This thing’s not going to handle 4k transcoding very well, but it does okay if you don’t expect too much from it.

        • PieMePlenty@lemmy.world · 3 months ago

          I'm skeptical that you're doing much video transcoding anyway. 1080p is supported on most devices now, and h264 is best buddies with 1080p content, a codec supported even on washing machines. Audio may be transcoded more often.

          • RogueBanana@lemmy.zip · 3 months ago

            Most of my content is h265 and AV1, so I assume they're facing a similar issue. I usually use the Jellyfin app on a PC or laptop, so it's not an issue for me, but my family members usually use the old TV, which doesn't support them.

            • PieMePlenty@lemmy.world · edited · 3 months ago

              AV1 is definitely a showstopper a lot of the time, indeed. h265 I would expect to see more on 2K or 4K content (though native support is really high anyway). My experience so far has been seeing transcoding done only because the resolution is unsupported, when I try watching 4K videos on an older 1080p-only Chromecast.

              • N0x0n@lemmy.ml · 3 months ago

                What do you mean by showstopper? I only encode my shows into AV1/Opus and I've never had any transcoding happen on any of my devices.

                It's well supported in any recent browser compared to x264/x265, especially 10-bit encodes. And software decoding is present on nearly any recent device.

                Dunno about 4K though, I don't have the screen resolution to play any 4K content… But for 1080p, AV1 is the way to go IMO.

                • Free and open source
                • Supported by any modern browser
                • Better compression
                • Same objective quality at a lower bitrate
                • A lot of cool open source projects around AV1

                It has its own quirks for sure (like every codec) but it's far from a bad codec. I'm not a specialist on the subject, but after a few months of testing/comparing/encoding… I settled on AV1 because it was comparatively better than x264/x265.
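An encode along those lines can be sketched as a small wrapper around ffmpeg's SVT-AV1 encoder. The CRF, preset, and audio bitrate below are illustrative assumptions, not the commenter's actual settings:

```python
#!/usr/bin/env python3
"""A minimal sketch of an AV1/Opus encode, shelling out to ffmpeg's
SVT-AV1 encoder. CRF, preset, and audio bitrate are illustrative
assumptions, not anyone's actual settings."""
import subprocess
from pathlib import Path

def av1_args(src: Path, crf: int = 30, preset: int = 6) -> list[str]:
    """Build the ffmpeg command: 10-bit AV1 video + Opus audio in MKV."""
    dst = src.with_suffix(".av1.mkv")
    return ["ffmpeg", "-i", str(src),
            "-c:v", "libsvtav1", "-crf", str(crf), "-preset", str(preset),
            "-pix_fmt", "yuv420p10le",   # 10-bit encode
            "-c:a", "libopus", "-b:a", "128k",
            str(dst)]

def encode_av1(src: Path, **kwargs) -> None:
    subprocess.run(av1_args(src, **kwargs), check=True)

if __name__ == "__main__":
    episode = Path("show-episode.mkv")  # hypothetical input file
    if episode.exists():
        encode_av1(episode)
```

Lower presets trade encode time for compression efficiency; around 6 is a common middle ground for batch re-encoding on desktop hardware.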

                • PieMePlenty@lemmy.world · 3 months ago

                  Showstopper in the sense that it may not play natively and may require transcoding. While x264 has pretty much universal support, AV1 does not… at least not on some of my devices. I agree that it's a good encoder and the way forward, but it's not the best when using older devices. My experience has been with the Chromecast with Google TV. It looks like Google only added AV1 support in their newest Google TV Streamer (late 2024 device).

          • Revv@lemmy.blahaj.zone · 3 months ago

            Not a huge amount of transcoding happening, but some for old Chromecasts and some for low bandwidth like when I was out of the country a few weeks ago watching from a heavily throttled cellular connection. Most of my collection is h264, but I’ve got a few h265 files here and there. I am by no means recommending my setup as ideal, but it works okay.

            • PieMePlenty@lemmy.world · 3 months ago

              Absolutely, whatever works for you. I think it's awesome to use the cheapest hardware possible to do these things. Being able to use a media server without transcoding capabilities? Brilliant. I actually thought you'd be able to get away with no transcoding at all, since 1080p has native support on most devices and so does h264. In the rare cases, you could transcode beforehand (like with a script that runs whenever a file is added) so you'd have an appropriate format on hand when needed.
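A pre-transcoding hook like that could be sketched as follows; the paths, the direct-play codec list, and the ffmpeg settings are all assumptions for illustration:

```python
#!/usr/bin/env python3
"""Hypothetical pre-transcode hook: probe each new file and, if its
video codec isn't H.264, shell out to ffmpeg for a 1080p H.264/AAC
copy so the media server never has to transcode live. Paths and the
codec policy are assumptions for illustration."""
import subprocess
from pathlib import Path

DIRECT_PLAY_CODECS = {"h264"}  # codecs we expect clients to decode natively

def probe_video_codec(path: Path) -> str:
    """Return the first video stream's codec name, as reported by ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name", "-of", "csv=p=0", str(path)],
        capture_output=True, text=True, check=True)
    return out.stdout.strip().lower()

def needs_transcode(codec: str) -> bool:
    """True when the codec isn't on the direct-play list."""
    return codec.lower() not in DIRECT_PLAY_CODECS

def pre_transcode(src: Path, dst_dir: Path) -> None:
    """Re-encode to capped-1080p H.264 video and AAC audio, keeping subtitles."""
    dst = dst_dir / (src.stem + ".h264.mkv")
    subprocess.run(
        ["ffmpeg", "-i", str(src), "-map", "0",
         "-c:v", "libx264", "-vf", "scale=-2:'min(1080,ih)'",
         "-c:a", "aac", "-c:s", "copy", str(dst)], check=True)

if __name__ == "__main__":
    incoming, library = Path("/media/incoming"), Path("/media/library")
    for f in incoming.glob("*.mkv"):
        if needs_transcode(probe_video_codec(f)):
            pre_transcode(f, library)
```

Something like this could run from a cron job or a download client's post-processing hook, so the library only ever holds files most clients can direct-play.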

  • jws_shadotak@sh.itjust.works · 3 months ago

    I was for a while. Hosted a LOT of stuff on an i5-4690K overclocked to hell and back. It did its job great until I replaced it.

    Now my servers don’t lag anymore.

    • lka1988@sh.itjust.works · edited · 3 months ago

      My cluster ranges from 4th gen to 8th gen Intel stuff. 8th gen is the newest I’ve ever had (until I built a 5800X3D PC).

      I’ve seen people claiming 9th gen is “ancient”. Like…ok moneybags.

  • TMP_NKcYUEoM7kXg4qYe@lemmy.world · 3 months ago

    I used to self-host on a Core 2 Duo ThinkPad R60i. It had a broken fan, so I had to hide it in a storage room, otherwise it would wake people from sleep during the night with its weird noises. It was pretty damn slow. Even opening the Proxmox UI remotely took time. KrISS feed worked pretty well tho.

    I have since upgraded to… well, nothing. The fan is dead now and the laptop won't boot. It's a shame, because not having access to Radicale is making my life more difficult than it should be. I use CalDAV from disroot.org, but it would be nice to share a calendar with my family too.

  • lnxtx (xe/xem/xyr)@feddit.nl · 3 months ago

    Maybe not shit, but exotic at the time, in 2012.
    The first Raspberry Pi, model B, 512 MB RAM, with an external 40 GB 3.5" HDD connected over USB 2.0.

    It was running Arch Linux ARM, BTW.

    Next, a cheap, second-hand Asus Eee Box mini desktop.
    32-bit Intel Atom, an N270 or the like, max 1 GB DDR2 RAM I think.
    Real metal under the plastic shell.
    Could even run without active cooling (I broke a fan connector).

    • Dave@lemmy.nz · 3 months ago

      I have one of these that I use for Pi-hole. I bought it as soon as they were available. Didn’t realise it was 2012, seemed earlier than that.

    • ThunderLegend@sh.itjust.works · 3 months ago

      This was my media server and Kodi player for like 3 years… still have my Pi 1 lying around. Now I have a shitty Chinese desktop I built this year with a 3rd-gen i5 and 8 GB of RAM.

      • lnxtx (xe/xem/xyr)@feddit.nl · 3 months ago

        Mainly telemetry, like temperature inside and outside.
        A script to read the data and push it into an RRD, later PostgreSQL.
        lighttpd to serve static content, later PHP.

        Once it served as a bridge between the LAN and an LTE USB modem.
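A telemetry script along those lines might look roughly like this; the DS18B20 sensor path, file locations, and RRD layout are assumptions, not the actual setup:

```python
#!/usr/bin/env python3
"""Hypothetical sketch of such a telemetry loop: parse a DS18B20 1-wire
temperature reading from sysfs and push it into an RRD via rrdtool.
The sensor ID, file paths, and RRD layout are all assumptions."""
import subprocess
from pathlib import Path

SENSOR = Path("/sys/bus/w1/devices/28-000005e2fdc3/w1_slave")
RRD = Path("/var/lib/telemetry/temp.rrd")

def parse_w1_celsius(raw: str) -> float:
    """The kernel reports millidegrees Celsius after 't=' on the last line."""
    milli = int(raw.strip().rsplit("t=", 1)[1])
    return milli / 1000.0

def ensure_rrd() -> None:
    """Create the database once: one temperature gauge, 5-minute step."""
    if not RRD.exists():
        subprocess.run(["rrdtool", "create", str(RRD), "--step", "300",
                        "DS:temp:GAUGE:600:-40:85",
                        "RRA:AVERAGE:0.5:1:8640"], check=True)

def push_reading() -> None:
    """Read the sensor and store the current temperature."""
    temp = parse_w1_celsius(SENSOR.read_text())
    subprocess.run(["rrdtool", "update", str(RRD), f"N:{temp}"], check=True)

if __name__ == "__main__" and SENSOR.exists():  # only runs on the Pi itself
    ensure_rrd()
    push_reading()
```

Run from cron every 5 minutes, an RRD like this stays a fixed size forever, which suits a Pi with a small disk.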

    • bdonvr@thelemmy.club · edited · 3 months ago

      I had quite a few Docker containers going on a Raspberry Pi 4. Worked fine. Though it did have 8 GB of RAM, to be fair.

  • robalees@lemmy.world · 3 months ago

    2012 Mac Mini with a fucked NIC, because I manhandled it putting in an SSD. Those things are tight inside!

        • Cort@lemmy.world · 3 months ago

          Lol, I used to have an '08 Mac mini, and that required a razor blade and putty knives to open. I got pretty good at it after separately upgrading the RAM, adding an SSD, and swapping out the CPU for the most powerful option, one that Apple didn't even offer.

          • robalees@lemmy.world · 3 months ago

            When I used to work at the "Fruit Stand" I never had to repair those white-back Minis, thankfully, but I do remember the putty knives being around. The unibody iMac was the worst: you had to pizza-cutter the whole LCD off the frame to replace anything, then glue it back on!

            • Cort@lemmy.world · 3 months ago

              Lol, by the time I actually needed to upgrade from that Mini, all the fruit stand stuff wasn't really upgradable anymore. It was really frustrating, so I jumped ship to Windows.

              Those iMac screens seemed so fiddly to remove just to get access to the drives. Why won't they just bolt them in instead of using glue! (I know why, but I still don't like it)

  • ebc@lemmy.ca · 3 months ago

    Running a bunch of services here on an i3 PC I built for my wife back in 2010. I've since upgraded the RAM to 16 GB, added as many hard drives as there are SATA ports on the mobo, re-bedded the heatsink, etc.

    It's pretty much always run on Debian, but all services are in Docker these days, so the base distro doesn't matter as much as it used to.

    I'd like to get a good backup solution going for it so I can actually use it for important data, but realistically I'm probably just going to replace it with a NAS at some point.

    • N0x0n@lemmy.ml · edited · 3 months ago

      A NAS is just a small desktop computer. If you have a motherboard/CPU/RAM/Ethernet/case and a lot of SSDs/HDDs, you're good to go.

      Just don't bother buying something marketed as a NAS. It's expensive and less modular than any desktop PC.

      Just my opinion.

  • biscuitswalrus@aussie.zone · 3 months ago

    3x Intel NUC 6th-gen i5 (2 cores), 32 GB RAM. Proxmox cluster with Ceph.

    I just ignored the limitation and tried a single 32 GB SO-DIMM once (out of a laptop) and it worked fine, but I went back to 2x 16 GB DIMMs since the real limit was still the 2 CPU cores. Lol.

    I've been running that cluster for 7 or so years now, since I bought them new.

    I suggest only running off shit tier, since three nodes give redundancy and enough performance. I've run entire proofs of concept for clients off them: dual domain controllers, RD Gateway, broker, session hosts, FSLogix, etc., back when MS had only just bought that tech. Meanwhile my home "arr" stack just chugs along in Docker containers. Even my OPNsense router runs virtualized on them. Just get a proper managed switch and take the internet in on a VLAN into the guest VM on a separate virtual NIC.

    Point is, it's still capable today.

    • renzev@lemmy.worldOP · 3 months ago

      How is Ceph working out for you, btw? I'm looking into distributed storage solutions rn. My use case is to have a single unified filesystem/index, but to store the contents of the files on different machines, possibly with redundancy. In particular, I want to be able to upload some files to the cluster and be able to see them (the directory structure and filenames) even when the underlying machine storing their content goes offline. Is that a valid use case for Ceph?

      • biscuitswalrus@aussie.zone · 3 months ago

        I'm far from an expert, sorry, but my experience is so far so good (literally wizard-configured in Proxmox, set and forget), even through a single disk loss. Performance for VM disks has been great.

        I can't see why regular files would be any different.

        I have 3 disks, one on each host, with Ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That's practically what I think you're after.

        I'm not sure about seeing the filesystem while the hosts are all offline, but if you've got any one system with a valid copy online, you should be able to see it. I do. But my emphasis is generally on getting the host back online.

        I'm not 100% sure what you're trying to do, but a mix of Ceph as remote storage plus something like Syncthing on an endpoint to send stuff to it might work? Syncthing might just work without Ceph.

        I also run ZFS on an 8-disk NAS that's my primary storage, with shares for my Docker containers to send stuff to, and a media server to get it off. That's just TrueNAS SCALE. That way it handles data similarly. ZFS is also very good, but until SCALE came out, it wasn't really possible to have the "add a compute node to expand your storage pool" model, which is how I want my VM hosts. Scaling ZFS that way looks way harder than Ceph.

        Not sure if any of that is helpful for your case, but I recommend trying something if you've got spare hardware, and see how it goes on dummy data, then blow it away and try something else. See how it acts when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you learned works best.

        • renzev@lemmy.worldOP · 3 months ago

          Not sure if any of that is helpful for your case but I recommend trying something if you’ve got spare hardware, and see how it goes on dummy data, then blow it away try something else.

          This is good advice, thanks! Pretty much what I'm doing right now. I already tried it with IPFS and found that it didn't meet my needs. Currently setting up a Tahoe-LAFS grid to see how it works. Will try Ceph after this.