Yo,

Wondering what the limit is when it comes to how many containers I can run. Currently I’m running around 15 containers. What happens if I increase that to, say, 40? Also, can Docker containers go “idle” when not being used, to save system resources?

I’m running an Intel i7-6700K CPU. It doesn’t seem to be struggling at all with my current setup, except maybe when transcoding for Jellyfin.

  • _cryptagion@lemmy.dbzer0.com · 1 day ago

    On my old Dell workstation, which I pulled out of a local business’s dumpster and which now has a second life as an Unraid NAS, I’m running 29 currently. I used to run more, but I got rid of some once I was done using those services.

    Among other things, the server runs my entire Servarr stack, as well as the various media servers for video, music, ebooks and audiobooks, and my Gitea. There’s a bunch of other stuff as well, but those are the most important to me.

  • JoeKrogan@lemmy.world · 2 days ago

    13 containers currently. I have thought about adding some more stuff, such as Bazarr, but I need to be in the humor for it.

  • sugar_in_your_tea@sh.itjust.works · 2 days ago

    Looks like 9? Here’s what I’m currently running:

    • actual budget
    • caddy (for TLS trunking)
    • nextcloud and collabora
    • vaultwarden (currently unused)
    • jellyfin
    • home assistant

    The rest are databases and other auxiliary stuff. I’m probably going to work on it some this holiday break, because I’d like to eventually move to microOS, and I still have a few things running outside of containers that I need to clean up (e.g. Samba).

    But yeah, like others said, it really doesn’t matter. On Linux (assuming you’re not using Docker Desktop), a container is just a process. On other systems (e.g. Windows, macOS, or Linux with Docker Desktop), containers run in a VM, which is a bit heavier and reserves more resources for itself. I could run 1000 containers and it really wouldn’t matter, as long as they’re pretty light.
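    A quick way to see this for yourself (a hedged sketch; it assumes Docker and the `alpine` image are available, and falls back to a message otherwise):

```shell
# Sketch: on Linux, a container's processes are ordinary host processes.
# Start a container that just sleeps, then look for it in plain `ps`.
if command -v docker >/dev/null 2>&1; then
  docker run -d --rm --name demo-sleep alpine sleep 300 >/dev/null 2>&1
  result=$(ps -eo pid,args | grep '[s]leep 300' || true)
  docker rm -f demo-sleep >/dev/null 2>&1
  result=${result:-"container did not start (no image or daemon?)"}
else
  result="docker not found"
fi
echo "$result"
```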

    • antimongo@lemmy.world · 2 days ago

      I’ve been curious about deploying HA with Docker. As I understand it, the only limitation is that you can’t use add-ons?
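      Right, add-ons are a Supervisor feature, so on a plain container install you’d run their equivalents as separate containers. A minimal compose sketch (image name from the Home Assistant container docs; the config path is a placeholder):

```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./ha-config:/config   # placeholder path
    network_mode: host        # HA's docs suggest host networking for device discovery
    restart: unless-stopped
```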

  • corsicanguppy@lemmy.ca · 3 days ago

    Zero.

    I run VMs and LDoms at work and VMs at home: a dwindling number of VMware VMs and a growing number of QEMU VMs to replace them.

    I don’t need the hassle.

    • Possibly linux@lemmy.zip · 2 days ago

      Is this a joke?

      Simple ephemeral containers are far easier to manage, since they’re driven by a static, reproducible config. If something goes wrong, or you want to change something, you just redeploy.

        • Possibly linux@lemmy.zip · 2 days ago

          Don’t make fun of someone because they like doing things the old way.

          I don’t want to be associated with this comment.

    • Nomecks@lemmy.ca · 3 days ago

      Get a load of this guy, thinking containers are more of a hassle than VMs!

  • phase@lemmy.8th.world · 3 days ago

    I am at ~80. Most are idling.

    For me, the metric to keep an eye on is how the kernel’s time splits between system and user. If the time spent in system rises, it’s a sign that the kernel is mostly context-switching instead of executing your programs.
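    One rough way to eyeball that split (a sketch; Linux-only, reads /proc/stat):

```shell
# Cumulative CPU time since boot, from the first line of /proc/stat
# (fields: user nice system idle ...). A rising "sys" share relative to
# "user" suggests the kernel is busy with context switches and syscalls.
if [ -r /proc/stat ]; then
  cpu_split=$(awk '/^cpu /{printf "user=%d sys=%d idle=%d", $2+$3, $4, $5}' /proc/stat)
else
  cpu_split="no /proc/stat (not Linux)"
fi
echo "$cpu_split"
```

    `vmstat 1` shows the same thing live, in its `us` and `sy` columns.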

  • sunbeam60@lemmy.one · 3 days ago

    I run 19 but barely get over 5% usage, even when transcoding 4K movies whose copyright has expired.

  • Justin@lemmy.jlh.name · 4 days ago

    I have gone up to about 300-400 or so. Currently running about 5 machines averaging about 100 each.

      • Justin@lemmy.jlh.name · 12 hours ago

        RAM is definitely the limiting factor. The one server with a 5600X and 64GiB ram handled it pretty well as long as I wasn’t doing cpu transcoding, though.

        I’ve since added two N100 boxes with 16GiB and two first-gen 32-core Epyc machines with 64GiB of RAM. All pretty cost-effective and quiet.

        The N100 CPUs get overloaded sometimes if they’re running too many databases, but usually it balances pretty well.

  • atzanteol@sh.itjust.works · 4 days ago

    Wondering what the limit is when it comes to how many containers I can run.

    Basically the same as the number of processes you can run.

    Use “docker stats” to see what resources each container is using.
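    For example, a one-shot snapshot (a sketch; it assumes Docker is installed and falls back to a message otherwise):

```shell
# Print one snapshot of per-container CPU and memory use, no live streaming.
if command -v docker >/dev/null 2>&1; then
  snapshot=$(docker stats --no-stream \
    --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}' 2>/dev/null \
    || echo "docker daemon not reachable")
else
  snapshot="docker not found"
fi
echo "$snapshot"
```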

  • ShortN0te@lemmy.ml · 4 days ago

    As others have already said, Docker is not virtualization. The number of containers you can run depends on the containers and the applications packaged in them. I’m pretty sure you can max out any host with a single container running computationally heavy software, and I’m also pretty sure you can run thousands of containers on any given host when they’re just serving a simple static website.

    • Voroxpete@sh.itjust.works · 4 days ago

      Correct on both counts, although it is possible to set limits that prevent a single container from using all of your system’s resources.
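      For instance, with `docker run` flags (a sketch; `--memory` and `--cpus` are the relevant flags, and `alpine` here is just a stand-in workload):

```shell
# Cap a container at 512 MiB of RAM and 1.5 CPUs.
if command -v docker >/dev/null 2>&1; then
  msg=$(docker run --rm --memory=512m --cpus=1.5 alpine echo "ran with limits" \
    2>/dev/null || echo "docker unavailable")
else
  msg="docker not found"
fi
echo "$msg"
```

      In compose, the rough equivalents are `mem_limit` and `cpus` (or `deploy.resources.limits` in swarm mode).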

  • grimer@lemmy.world · 4 days ago

    Right now I have 32 active stacks running, and a good number of them create at least one other container, like a database, so I’m running around 60+ separate containers. The machine has maybe an i5-6500 or so in it with 32 GB of RAM. I use Unraid as the NAS platform, but I do all the Docker stuff manually. It’s plenty fast for what I need so far… :)

  • Docker containers aren’t virtual machines, despite sometimes acting like them. They don’t need compute resources sitting around doing nothing the way a traditional VM does, because there is no separate kernel inside the container: a container is just processes running directly on the host machine’s kernel, with some isolation around them.

    If the container isn’t doing anything, it isn’t consuming resources.