I have been lurking on this community for a while now and have really enjoyed the informational and instructional posts, but a topic I don't see come up very often is scaling and hoarding. Currently I have a 20TB server which I am rapidly filling, and most posts about expanding recommend simply buying larger drives and slotting them into a single machine. That is definitely the easiest way to expand, but it seems like it would only get you to about 100TB before you can't reasonably do that anymore. So how do you set up 100TB+ networks with multiple servers?
My main concern is that currently all my services are dockerized on a single machine running Ubuntu, which works extremely well. It is space efficient with hardlinking and I can still seed back everything. The catch is that hardlinks only work within a single filesystem, and from different posts I've read it seems like as people scale they either give up on hardlinks and eat up a lot of their storage with copies, or they eventually delete their seeds and just keep the content. Do the Arr suite and Qbit allow dynamically selecting servers based on available space? Or are there other ways to solve these issues with additional tools? How do you guys set up large systems and what recommendations would you make? Any advice is appreciated, from hardware to software!
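To make the "dynamically selecting based on available space" idea concrete, here's the kind of logic I'm imagining, as a rough Python sketch. The mount paths and the free-space threshold are just placeholders for wherever your pools or remote servers happen to be mounted, not anything the Arr suite or Qbit actually does on its own:

```python
#!/usr/bin/env python3
"""Rough sketch: pick whichever storage pool has the most free space.

The mount points below are hypothetical; swap in the paths where your
own pools/servers are mounted (NFS shares, mergerfs branches, etc.).
"""
import shutil

# Hypothetical mount points, one per pool/server.
POOLS = ["/mnt/pool-a", "/mnt/pool-b", "/mnt/pool-c"]

# Skip any pool with less than this much free space (bytes).
MIN_FREE = 500 * 1024**3  # 500 GiB

def pick_pool(pools=POOLS, min_free=MIN_FREE):
    """Return the mounted pool with the most free bytes, or None."""
    best_path, best_free = None, min_free
    for path in pools:
        try:
            free = shutil.disk_usage(path).free
        except OSError:
            continue  # pool not mounted right now, skip it
        if free > best_free:
            best_path, best_free = path, free
    return best_path

if __name__ == "__main__":
    target = pick_pool()
    if target is None:
        raise SystemExit("No pool has enough free space")
    print(target)
```

You'd still have to point Qbit's save paths or your Arr root folders at whatever this prints, by hand or with a cron job, since I don't know of a built-in way to have them pick a target by free space, which is basically what I'm asking about.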
Also, huge shout out to Saik0 from this thread: https://lemmy.dbzer0.com/post/24219297 I learned a ton from his post, but it seemed like the tip of the iceberg!
This is a great question and quite funny, as I'm at 100TB now (including parity drives and non-media storage) and need to figure out a solution fairly soon. Tossing a bunch of working $100-$200 drives in 'the trash' in order to replace them with $300-$400 drives isn't much of a solution in my eyes.
I suppose the proper solution is to build a server rack and load it with drives, but that seems a bit daunting at my current skill level. Anybody have a time machine I can borrow real quick?
I primarily buy used drives. Depending on your area, you might find buyers easily for your old 4TB+ ones.