tldr: I’d like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few self-hosted services on the internet, but I’m not sure what the best/safest way to do it is. Asking my partner to use Tailscale or WireGuard is unfortunately asking too much. I was curious to know what you all recommend.

I have some services running on my LAN that I currently access via Tailscale. Some of these services would benefit from being accessible on the internet (e.g. Immich sharing via a link, switching from Plex to Jellyfin without requiring my family to learn how to use a VPN, Home Assistant voice stuff, etc.), but I’m unsure what the best approach is. Hosting services on the internet carries risk, and I’d like to reduce that risk as much as possible.

  1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share cpu resources with other users and get a dedicated box?

  2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

  3. What’s the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.

  4. Any other tips or info you care to share would be greatly appreciated.

  5. Feel free to talk me out of it as well.

EDIT:

If anyone comes across this and is interested, this is what I ended up going with. It took an evening to set all this up and was surprisingly easy.

  • domain from namecheap
  • cloudflare to handle DNS
  • Nginx Proxy Manager for reverse proxy (seemed easier than Traefik and I didn’t get around to looking at Caddy)
  • Cloudflare-ddns docker container to update my A records in cloudflare
  • authentik for 2 factor authentication on my immich server
  • 486@lemmy.world · 1 year ago

    or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

    That does not work. As soon as you get SSL certificates, expect the domain name to be public knowledge: Let’s Encrypt and every other certificate authority publish what they issue to Certificate Transparency logs. As a general rule, don’t rely on something being hidden from others as a security measure.
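
    You can check this yourself: a Certificate Transparency search will list every cert ever issued for a name, subdomains included. For example (placeholder domain), something along these lines against crt.sh:

    # list every logged certificate for example.com and its subdomains
    curl "https://crt.sh/?q=%25.example.com&output=json"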

    • a_fancy_kiwi@lemmy.world (OP) · 1 year ago

      I currently have an nginx Docker container and a certbot Docker container that I have working but don’t have in production. No extra features, just a barebones reverse proxy with an SSL cert. Knowing that, I read through Caddy’s homepage, but since I’ve never put an internet-facing service into production, it’s not obvious to me what features I need or what I’m missing out on. Do you mind sharing which quality-of-life improvements you benefit from with Caddy?

      • Oisteink@feddit.nl · 1 year ago

        What Caddy does is automatic certs. You set up your web portal and make a wildcard subdomain that points to your portal. Then you just enter two lines in the config and your new app is up. Let’s say you want to put your Home Assistant there. You could add:

            hass.portal.domain.tld {
                reverse_proxy internal.ip:8123
            }

        and it works. It’s possible with other setups too, but it’s no hassle.

      • AbidanYre@lemmy.world · 1 year ago

        I never went too far down the nginx route, so I can’t really compare the two. I ended up with Caddy because I self-host Vaultwarden, which really doesn’t like running over plain HTTP (for obvious reasons), and Caddy was the set of instructions I found and understood first.

        I don’t make a lot of what I host available to the wider internet. For the things that I do, I recently migrated to using a Cloudflare tunnel to deal with the internet at large, but traffic still goes through Caddy once it hits my server so it gets SSL. For everything else I have a headscale server in Oracle’s free tier that all my internal services connect to.

  • 𞋴𝛂𝛋𝛆@lemmy.world · 1 year ago

    I’ve tried three times so far with Python/Gradio/Oobabooga and never managed to get certs to work, or to find a complete visual reference guide that demonstrates a working example like what I’m looking for on a home network. (Only really commenting to subscribe and watch this post develop, and to solicit advice.)

    • a_fancy_kiwi@lemmy.world (OP) · 1 year ago

      So far, I’ve played around with reverse proxies and SSL certs, and the easiest method I’ve found was Docker. I just haven’t put anything in production yet. If you don’t know how to use Docker, learn it; it’s so worth it.

      Here is the tutorial I used and the note I left for myself. You’ll need a domain to play around with. Once you figure out how to get NGINX and certbot set up, replacing the helloworld container with a different one is relatively straightforward.

      DO NOT FORGET: you must give certbot read/write permissions in the docker-compose.yml file, which isn’t shown in this tutorial.
      -----EXAMPLE, NOT PRODUCTION CODE----
      
          nginx:
              container_name: nginx
              restart: unless-stopped
              image: nginx
              depends_on:
                  - helloworld
              ports:
                  - 80:80
                  - 443:443
              volumes:
                  - ./nginx/nginx.conf:/etc/nginx/nginx.conf
                  - ./certbot/conf:/etc/letsencrypt:ro
                  - ./certbot/www:/var/www/certbot:ro
      
          certbot:
            image: certbot/certbot
            container_name: certbot
            volumes: 
              - ./certbot/conf:/etc/letsencrypt:rw
              - ./certbot/www:/var/www/certbot:rw
            command: certonly --webroot -w /var/www/certbot --keep-until-expiring --email *email* -d *domain1* -d *domain2* --agree-tos
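
      The compose excerpt above doesn’t show the helloworld service that nginx depends_on. A minimal stand-in could look like the following (the image here is just a placeholder demo container; swap in whatever app you actually want to proxy):

          helloworld:
              container_name: helloworld
              restart: unless-stopped
              image: nginxdemos/hello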
      
        • a_fancy_kiwi@lemmy.world (OP) · 1 year ago

          You don’t even have to worry about setting up SSL on every individual service

          I probably need to look into it more, but since traefik is the reverse proxy, doesn’t it just get one SSL cert for a domain that all the other services use? I think that’s how my current nginx proxy is set up: one cert configured to work with the main domain and a couple of subdomains. If I want to add a subdomain, if I remember correctly, I just add it to the config, restart the containers, and certbot gets a new cert covering all the domains.

  • yak@lmy.brx.io · 1 year ago

    I came here to upvote the post that mentions haproxy, but I can’t see it, so I’m resorting to writing one!

    Haproxy is super fast, highly configurable, and if you don’t have the config nailed down just right it won’t start, so you know you’ve messed something up right away :-)

    It will handle encryption too, so you don’t need to bother changing the config on your internal server; just tweak your firewall rules to let whatever box you have haproxy running on (you have a DMZ, right?) see the server, and you are good to go.

    Google and stackexchange are your friends for config snippets. And I find the actual documentation is good too.

    Configure it with certificates from let’s encrypt and you are off to the races.
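
    (A rough sketch of that last step, with a placeholder domain and paths: grab a cert using certbot’s standalone mode, then concatenate the chain and key, since haproxy expects them in a single .pem file.)

    # one-off issuance; port 80 must reach this box while certbot's temporary webserver runs
    certbot certonly --standalone -d example.com
    # haproxy wants the full chain and private key concatenated into one file
    cat /etc/letsencrypt/live/example.com/fullchain.pem \
        /etc/letsencrypt/live/example.com/privkey.pem \
        > /etc/haproxy/certs/example.com.pem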

  • iAmTheTot@sh.itjust.works · 1 year ago

    I use Nginx Proxy Manager in its own Docker container on my Unraid server. It was pretty simple to set up, all things considered. I would call myself better with hardware than software, but not a complete newb, and I got it running with minimal headache.

  • subtext@lemmy.world · 1 year ago
    1. I got started with a guide from these guys back in 2020. I still use traefik as my reverse proxy and Authelia for authentication, and it has worked great all this time. As someone else said, everything runs on a single box, in containers for separation, and it is super easy this way. I should probably look into a secondary server as a live backup, but that’s a lot of work / expense. I have a Cloudflare dynamic DNS container running to keep my DNS records pointed at my home IP.
    2. I would definitely advocate for owning your own domain, for the added use case of owning your own email addresses. I can now switch email providers and don’t have to worry about losing anything. This would also lean towards a more memorable domain, or at least a second domain that is memorable. Stay away from the country TLDs or “cute” generic TLDs and stay with a tried and true .com or .net (which may take some searching).
    3. I don’t bother with this; I just run my server behind Cloudflare and let them protect it. Some might disagree, but it’s easy for me and I like that.
    4. Containers, containers, containers! Probably Docker since it’s easy, but Podman if you really want to get fancy / extra secure. Also, make sure you have a git repo for your compose files, and a solid backup strategy from the start (so much easier than going back and doing it later). I use Backblaze for my backups and it’s $2/month for some peace of mind.
    5. Do it!!!
    • a_fancy_kiwi@lemmy.world (OP) · 1 year ago

      Do you mind giving a high-level overview of what a Cloudflare tunnel is doing? Like, what’s connected to what, and how does the data flow? I’ve seen Cloudflare mentioned a few other times in the comments here. I know Cloudflare offers DNS services via their 1.1.1.1 and 1.0.0.1 IPs, and I also know they somehow offer DDoS protection (although I’m not sure how exactly, caching?). However, that’s the limit of my knowledge of Cloudflare.

    • Possibly linux@lemmy.zip · 1 year ago

      ISPs shouldn’t care unless it is explicitly prohibited in the contract. (I’ve never seen this)

      I still wouldn’t expose anything locally though since you would need to pay for a static IP.

      Instead, I just use a VPS with Wireguard and a reverse proxy.

  • Asparagus0098@sh.itjust.works · 1 year ago

    I use traefik with a wildcard domain pointing to a Tailscale IP for services I don’t want to be public. For the services I want to be publicly available I use cloudflare tunnels.

  • povario@discuss.tchncs.de · 1 year ago

    If you know/use Docker, the solution that has been the most straightforward for me is SWAG. The setup process is fairly easy when combined with registering your domain with Porkbun, as they allow the free API access needed for obtaining top-level (example.com) as well as wildcard (*.example.com) SSL certificates.

    Along with that, exposing a new service is fairly easy with the plethora of already-included nginx configs for services like Nextcloud, Syncthing, etc.
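
    In case it helps anyone, a compose service for SWAG set up that way looks roughly like this (a sketch, adjust to taste; the Porkbun API credentials go in the dns-conf folder SWAG generates on first run, porkbun.ini if I remember right):

        swag:
          image: lscr.io/linuxserver/swag
          container_name: swag
          cap_add:
            - NET_ADMIN
          environment:
            - PUID=1000
            - PGID=1000
            - TZ=Etc/UTC
            - URL=example.com        # your Porkbun-registered domain
            - SUBDOMAINS=wildcard    # also request *.example.com
            - VALIDATION=dns         # DNS validation is required for wildcard certs
            - DNSPLUGIN=porkbun
          volumes:
            - ./swag:/config
          ports:
            - 443:443
            - 80:80
          restart: unless-stopped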

  • ikidd@lemmy.world · 1 year ago

    Tailscale is completely transparent on any devices I’ve used it on. Install, set up, and never look at it again because unless it gets turned off, it’s always on.

    • a_fancy_kiwi@lemmy.world (OP) · 1 year ago

      I’ve run into a weird issue where, on my phone, Tailscale will disconnect and refuse to reconnect for a seemingly random amount of time, usually less than an hour. It doesn’t happen often, but often enough that I’ve started to notice. I’m not sure if it’s a network issue or an app issue, but during that time I can’t connect to my services. All that to say, my tolerance for that is higher than my partner’s; the first time something didn’t work, they would stop using it lol

      • ikidd@lemmy.world · 1 year ago

        So I have it running on about 20 phones for customers of mine that use Blue Iris with it. But these are all Apple devices; I’m the only one with Android. I’ve never had a complaint except one person that couldn’t get on at all, and we found that for some reason the Blue Iris app was blacklisted in the network settings from using the VPN. But that’s the closest I’ve seen to your problem.

        I wonder if setting up a ping every 15 seconds from the device to the server would keep the tunnel active and prevent the disconnect. I don’t think Tailscale has a keepalive setting like a plain WireGuard connection does. If that’s too much of a pain, you might want to just set up WireGuard yourself, since you can set a PersistentKeepalive value and the tunnel won’t go idle. Tailscale is probably trying to reduce overhead, so they don’t include a keepalive.

  • Fedegenerate@lemmynsfw.com · 1 year ago

    On my home network I have Nginx Proxy Manager running Let’s Encrypt with my domain for HTTPS, currently only for Vaultwarden (I’m testing it for a bit before rolling it out or migrating wholly over to HTTPS). My domain is a ######.xyz that’s cheap.

    For remote access I use Tailscale. For friends and family I give them a relay [a Raspberry Pi with nginx that proxies them over Tailscale] that sits on their home network; that way they need “something they have” [the relay] and “something they know” [login credentials] to get at my stuff. I won’t implement biometrics for “something they are”. This is post hoc justification though, and nonsense to boot. I don’t want to expose a port, a VPS has low WAF, and I’m not installing Tailscale on all of their devices, so a relay is an unhappy compromise.

    For bonus points I run Pi-hole to pretty up the domain names to service.swirl, and run a Homarr instance so no one needs to remember anything except home.swirl; but if they do remember immich.swirl, that works too.
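
    (For anyone copying this: the .swirl names are just local DNS records in Pi-hole (Local DNS > DNS Records in the web UI), all pointing at the box running the reverse proxy, e.g. with a made-up LAN IP:)

    192.168.1.10   home.swirl
    192.168.1.10   immich.swirl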

    If there are many ways to skin a cat, I believe I chose to use a spoon; don’t be like me. Updating each Dockge instance is a couple of minutes, and updating DietPi is a few minutes more, which individually is not a lot on my weekly/monthly maintenance respectively. But in aggregate… I have checklists. One day I’ll write a script that will SSH into a machine > update/upgrade the OS > docker compose pull/rebuild/purge > move on to the next relay… That’ll be my impetus to learn how to write a script.

    • a_fancy_kiwi@lemmy.world (OP) · 1 year ago

      That’ll be my impetus to learn how to write a script.

      This part caught my eye. You were able to do all that other stuff without ever attempting to write a script? That’s surprising and awesome. Assuming you are running everything on a Linux server, I feel like a bash script run via a cron job would be your best bet: no need to SSH into the server, just let it do it on its own. I haven’t tested any of this, but I do have scripts I wrote that do automatic ZFS backups and scrubs; the order should go something like:

      open the terminal on the server and type

      mkdir scripts

      cd scripts

      nano docker-updates.sh

      type something along the lines of this (I’m still learning docker so adjust the commands to your needs)

      #!/bin/bash
      
      # change into the directory that contains your docker-compose.yml
      cd /path/to/compose-project
      # pull newer images, recreate any containers whose images changed, then clean up old images
      docker compose pull && docker compose up -d
      docker image prune -f

      save the file and then type sudo chmod +x ./docker-updates.sh to make it executable

      and finally set up a cronjob to run the script at specific intervals. type

      crontab -e

      or

      sudo crontab -e (this is if you want to run the script as root but ideally, you just add your user to the docker group so this shouldn’t be needed)

      and at the bottom of the file type this and save, that’s it:

      # runs script at 1am on the first of every month
      0 1 1 * * /path/to/scripts/docker-updates.sh
      

      this website will help you choose a different interval

      For OS updates you basically do the same thing except the script would look something like: (I forget if you need to type “sudo” or not; it’s running as root so I don’t think you need it but maybe try it with sudo in front of both "apt"s if it’s not working. Also use whatever package manager you have if you aren’t using apt)

      while in the scripts folder you created earlier

      nano os-updates.sh

      #!/bin/bash
      
      # refresh package lists, upgrade everything, then reboot to pick up kernel updates
      apt update && apt upgrade -y
      reboot
      

      save and don’t forget to make it executable

      then use

      sudo crontab -e (because you’ll need root privileges to update. this will run the script as root without requiring you to input your password)

      # runs script at 12am on the first of every month
      0 0 1 * * /path/to/scripts/os-updates.sh
      
        • Fedegenerate@lemmynsfw.com · 1 year ago

          I did think about cron but, long ago, I heard it wasn’t best practice to update through cron because the lack of logs makes it difficult to see where things went wrong, when they do.

          I’ve got automatic-upgrades running on stuff so it’s mostly fine. Dockge is running purely to give me a way to upgrade Docker images without having to SSH in. It’s just the monthly routine of “apt update && apt upgrade -y” ×5 that sucks.

          Thank you for the advice though. I’ll probably set cron to update the images with the script as you suggest. I have a “maintenance” Homarr page as a budget Uptime Kuma, so I can quickly look there to make sure everything is pinging at least. I made the page so I can quickly get to everyone’s Dockge, Pi-hole and nginx, but the pings were a happy accident.

          • a_fancy_kiwi@lemmy.world (OP) · 1 year ago

          the lack of logs

            That’s the best part: with a script, you can pipe the output of the updates into a log file you create yourself. I don’t currently do that; if something breaks, I just roll back to a previous snapshot and try again later. But it’s possible and seemingly straightforward.

          This askubuntu link will probably help
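
            For example (untested, just adapting the cron line from earlier), appending stdout and stderr to a log file you can check later:

            # runs at 1am on the first of every month and appends all output to a log
            0 1 1 * * /path/to/scripts/docker-updates.sh >> /path/to/scripts/docker-updates.log 2>&1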

  • tritonium@midwest.social · 1 year ago

    Why do so many people do this incorrectly? Unless you are actually serving the public, you don’t need to open anything other than a WireGuard tunnel. My phone automatically connects to WireGuard as soon as I disconnect from my home WiFi, so I have access to every single one of my services and only have to expose one port and one service.

    If you are going through setting up Caddy or Nginx Proxy Manager or anything else and you’re not serving the public… you’re dumb.