tldr: I’d like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services on the internet but I’m not sure what the best/safest way to do it is. Asking my partner to use tailscale or wireguard is asking too much unfortunately. I was curious to know what you all recommend.
I have some services running on my LAN that I currently access via tailscale. Some of these services would see some benefit from being accessible on the internet (ex. Immich sharing via a link, switching over from Plex to Jellyfin without requiring my family to learn how to use a VPN, homeassistant voice stuff, etc.) but I’m kind of unsure what the best approach is. Hosting services on the internet has risk and I’d like to reduce that risk as much as possible.
-
I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share cpu resources with other users and get a dedicated box?
-
Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?
-
What’s the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.
-
Any other tips or info you care to share would be greatly appreciated.
-
Feel free to talk me out of it as well.
EDIT:
If anyone comes across this and is interested, this is what I ended up going with. It took an evening to set all this up and was surprisingly easy.
- domain from namecheap
- cloudflare to handle DNS
- Nginx Proxy Manager for reverse proxy (seemed easier than Traefik and I didn’t get around to looking at Caddy)
- Cloudflare-ddns docker container to update my A records in cloudflare
- authentik for 2 factor authentication on my immich server
or a domain with a random string of characters so no one could reasonably guess it? Does it matter?
That does not work. As soon as you get SSL certificates, expect the domain name to be public knowledge; Let’s Encrypt and all other certificate authorities publish to certificate transparency logs. As a general rule, don’t rely on something being hidden from others as a security measure.
Damn, I didn’t realize they had public logs like that. Thanks for the heads up
https://crt.sh/ would make anyone who thought obscurity would be a solution poop themselves.
deleted by creator
Caddy with cloudflare support in a docker container.
Does Caddy have an OWASP plugin like nginx?
I don’t use it, but it looks like yes.
This is the solution.
Caddy is simple.
I currently have an nginx docker container and a certbot docker container that I have working but don’t have in production. No extra features, just a barebones reverse proxy with an SSL cert. Knowing that, I read through Caddy’s homepage, but since I’ve never put an internet-facing service into production, it’s not obvious to me what features I need or what I’m missing out on. Do you mind sharing what quality-of-life improvements you get from Caddy?
Honestly, if you know nginx just stick with it. There’s nothing to be gained by learning a new proxy.
Use Mozilla’s SSL generator if you want to harden nginx (or any proxy you choose)- https://ssl-config.mozilla.org/
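To give a flavour of what it spits out, the generator’s nginx output is mostly a block of ssl_* directives along these lines (illustrative only; grab the current values from the site itself rather than copying these):

```nginx
# intermediate-profile style settings (sketch, not the generator's exact output)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
add_header Strict-Transport-Security "max-age=63072000" always;
```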
I didn’t know about that tool. Thanks for sharing
What Caddy does is automatic certs. You set up your web portal and make a wildcard subdomain that points to your portal. Then you just enter two lines in the config and your new app is up. Let’s say you want to put your Home Assistant there: you could add hass.portal.domain.tld { reverse_proxy internal.ip:8123 } and it works. Possible with other setups too, but it’s no hassle.
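For context, a minimal Caddyfile along those lines might look like this (domains and internal IPs are placeholders, and it assumes the DNS records already point at the box running Caddy; Caddy fetches and renews the certificates on its own):

```
hass.portal.domain.tld {
    # Home Assistant's default port
    reverse_proxy 192.168.1.10:8123
}

jellyfin.portal.domain.tld {
    # Jellyfin's default HTTP port
    reverse_proxy 192.168.1.10:8096
}
```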
I never went too far down the nginx route, so I can’t really compare the two. I ended up with caddy because I self-host vaultwarden and it really doesn’t like running over http (for obvious reasons) and caddy was the instruction set I found and understood first.
I don’t make a lot of what I host available to the wider internet, for the ones that I do, I recently migrated to using a Cloudflare tunnel to deal with the internet at large, but still have it come through caddy once it hits my server to get ssl. For everything else I have a headscale server in Oracle’s free tier that all my internal services connect to.
I’ve tried 3 times so far in Python/gradio/Oobabooga and never managed to get certs to work, or to find a complete visual reference guide that demonstrates a working example like what I’m looking for on a home network. (Only really commenting to subscribe to watch this post develop, and solicit advice :)
So far, I’ve played around with reverse proxies and ssl certs and the easiest method I’ve found so far was docker. Just haven’t put anything in production yet. If you don’t know how to use docker, learn, it’s so worth it.
Here is the tutorial I used and the note I left for myself. You’ll need a domain to play around with. Once you figure out how to get NGINX and certbot set up, replacing the helloworld container with a different one is relatively straight forward.
DO NOT FORGET: you must give certbot read/write permissions in the docker-compose.yml file, which isn't shown in this tutorial.

-----EXAMPLE, NOT PRODUCTION CODE-----

```yaml
nginx:
  container_name: nginx
  restart: unless-stopped
  image: nginx
  depends_on:
    - helloworld
  ports:
    - 80:80
    - 443:443
  volumes:
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    - ./certbot/conf:/etc/letsencrypt:ro
    - ./certbot/www:/var/www/certbot:ro

certbot:
  image: certbot/certbot
  container_name: certbot
  volumes:
    - ./certbot/conf:/etc/letsencrypt:rw
    - ./certbot/www:/var/www/certbot:rw
  command: certonly --webroot -w /var/www/certbot --keep-until-expiring --email *email* -d *domain1* -d *domain2* --agree-tos
```

deleted by creator
You don’t even have to worry about setting up SSL on every individual service
I probably need to look into it more but since traefik is the reverse proxy, doesn’t it just get one ssl cert for a domain that all the other services use? I think that’s how my current nginx proxy is set up; one cert configured to work with the main domain and a couple subdomains. If I want to add a subdomain, if I remember correctly, I just add it to the config, restart the containers, and certbot gets a new cert for all the domains
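For what it’s worth, that matches how certbot behaves: one certificate can carry several names, and adding a subdomain is just another -d flag plus --expand when reissuing (the domains and webroot path here are placeholders):

```bash
# reissue the existing cert with one more subdomain added as a SAN
certbot certonly --webroot -w /var/www/certbot --expand \
  -d example.com -d immich.example.com -d jellyfin.example.com
```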
deleted by creator
deleted by creator
I came here to upvote the post that mentions haproxy, but I can’t see it, so I’m resorting to writing one!
Haproxy is super fast, highly configurable, and if you don’t have the config nailed down just right, it won’t start, so you know you’ve messed something up right away :-)
It will handle encryption too, so you don’t need to bother changing the config on your internal server, just tweak your firewall rules to let whatever box you have haproxy running on (you have a DMZ, right?) see the server, and you are good to go.
Google and stackexchange are your friends for config snippets. And I find the actual documentation is good too.
Configure it with certificates from let’s encrypt and you are off to the races.
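As a rough sketch (paths, names, and the backend address are placeholders, and it assumes mode http is set in the defaults section), TLS termination in haproxy looks something like this:

```
# /etc/haproxy/haproxy.cfg (fragment)
frontend https_in
    # the PEM file holds the certificate and key concatenated
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    default_backend internal_app

backend internal_app
    # plain HTTP to the internal box; haproxy terminates TLS
    server app1 192.168.1.10:8080 check
```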
I use nginx manager in its own docker container on my unraid server. Was pretty simple to set up all things considered. I would call myself better with hardware than software but not a complete newb and I got it running with minimal headache.
- I got started with a guide from these guys back in 2020. I still use traefik as my reverse proxy and Authelia for authentication and it has worked great all this time. As someone else said, everything is in containers on the one host and it is super easy this way. It all runs on a single box using containers for separation. I should probably look into a secondary server as a live backup, but that’s a lot of work / expense. I have a Cloudflare dynamic DNS container running for that.
- I would definitely advocate for owning your own domain, for the added use case of owning your own email addresses. I can now switch email providers and don’t have to worry about losing anything. This would also lean towards a more memorable domain, or at least a second domain that is memorable. Stay away from the country TLDs or “cute” generic TLDs and stay with a tried and true .com or .net (which may take some searching).
- I don’t bother with this, I just run my server behind Cloudflare, and let them protect my server. Some might disagree, but it’s easy for me and I like that.
- Containers, containers, containers! Probably Docker since it’s easy, but Podman if you really want to get fancy / extra secure. Also, make sure you have a git repo for your compose files, and a solid backup strategy from the start (so much easier than going back and doing it later). I use Backblaze for my backups and it’s $2/month for some peace of mind.
- Do it!!!
deleted by creator
Do you mind giving a high level overview of what a Cloudflare tunnel is doing? Like, what’s connected to what and how does the data flow? I’ve seen Cloudflare mentioned a few other times in the comments here. I know Cloudflare offers DNS services via their 1.1.1.1 and 1.0.0.1 IPs and I also know they somehow offer DDoS protection (although I’m not sure how exactly. caching?). However, that’s the limit of my knowledge of Cloudflare
deleted by creator
ISPs shouldn’t care unless it is explicitly prohibited in the contract. (I’ve never seen this)
I still wouldn’t expose anything locally though since you would need to pay for a static IP.
Instead, I just use a VPS with Wireguard and a reverse proxy.
I use traefik with a wildcard domain pointing to a Tailscale IP for services I don’t want to be public. For the services I want to be publicly available I use cloudflare tunnels.
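In case it helps, routing a container through traefik is mostly just labels on the service; a minimal sketch (the hostname is a placeholder, and the "websecure" entrypoint and "letsencrypt" certresolver are assumed to already exist in the traefik static config):

```yaml
services:
  whoami:
    # tiny demo container that echoes request info
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
```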
I use this https://github.com/ZoeyVid/NPMplus. I use unifi for geo-blocking.
if you know/use docker, the solution that has been the most straightforward for me is SWAG. The setup process is fairly easy when combined with registering your domain with Porkbun, as they allow the free API access needed for obtaining top-level (example.com) as well as wildcard (*.example.com) SSL certificates. Along with that, exposing a new service is fairly easy with the plethora of already included nginx configs for services like Nextcloud, Syncthing, etc.
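If it helps, the linuxserver SWAG container is driven almost entirely by environment variables; a rough compose sketch for a wildcard cert via DNS validation might look like this (the domain, plugin name, and credential file are assumptions — check SWAG’s docs for the exact values your registrar needs):

```yaml
swag:
  image: lscr.io/linuxserver/swag:latest
  cap_add:
    - NET_ADMIN
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Etc/UTC
    - URL=example.com
    - SUBDOMAINS=wildcard
    - VALIDATION=dns
    - DNSPLUGIN=porkbun   # assumed plugin name; API creds go in config/dns-conf/porkbun.ini
  volumes:
    - ./swag:/config
  ports:
    - 443:443
    - 80:80
  restart: unless-stopped
```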
Tailscale is completely transparent on any devices I’ve used it on. Install, set up, and never look at it again because unless it gets turned off, it’s always on.
I’ve run into a weird issue where, on my phone, tailscale will disconnect and refuse to reconnect for a seemingly random amount of time, usually less than an hour. It doesn’t happen often, but often enough that I’ve started to notice. I’m not sure if it’s a network issue or an app issue, but during that time I can’t connect to my services. All that to say, my tolerance for that is higher than my partner’s; the first time something didn’t work, they would stop using it lol
So I have it running on about 20 phones for customers of mine that use Blue Iris with it. But these are all Apple devices, I’m the only one with Android. I’ve never had a complaint except one person that couldn’t get on at all, and we found that for some reason the Blue Iris app was blacklisted in the network settings from using the VPN. But that’s the closest I’ve seen to your problem.
I wonder if setting up a ping every 15 seconds from the device to the server would keep the tunnel active and prevent the disconnect. I don’t think tailscale has a keepalive function like a wireguard connection. If that’s too much of a pain, you might want to just implement WireGuard yourself, since you can set a keepalive value and the tunnel won’t go idle. Tailscale probably wants to reduce their overhead, so they don’t include a keepalive.
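For reference, in plain WireGuard that’s a single line in the client’s peer section (keys, endpoint, and addresses below are placeholders):

```ini
# client-side wg0.conf fragment
[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# send a keepalive packet every 25 seconds so NAT mappings don't expire
PersistentKeepalive = 25
```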
Nginx Proxy Manager + LetsEncrypt.
On my home network I have nginxproxymanager running Let’s Encrypt with my domain for https, currently only for vaultwarden (I’m testing it for a bit before rolling it out or migrating wholly over to https). My domain is a ######.xyz that’s cheap.
For remote access I use Tailscale. For friends and family I give them a relay [a raspberry pi with nginx that proxies them over tailscale] that sits on their home network; that way they need “something they have” [the relay] and “something they know” [login credentials] to get at my stuff. I won’t implement biometrics for “something they are”. This is post hoc justification though, and nonsense to boot. I don’t want to expose a port, a VPS has low WAF, and I’m not installing tailscale on all of their devices, so a relay is an unhappy compromise.
For bonus points I run pihole to pretty up the domain names to service.swirl and run a homarr instance so no-one needs to remember anything except home.swirl, but if they do remember immich.swirl that works too.
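In case it’s useful, Pi-hole’s local DNS records are just IP/hostname pairs, either added under Local DNS in the web UI or in /etc/pihole/custom.list; something like this (the addresses and .swirl names are placeholders for my setup):

```
# /etc/pihole/custom.list  (IP then hostname, one per line)
192.168.1.20 home.swirl
192.168.1.20 immich.swirl
```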
If there are many ways to skin a cat, I believe I chose to use a spoon; don’t be like me. Updating each dockge instance is a couple minutes, and updating dietpi is a few minutes more, which, individually, is not a lot on my weekly/monthly maintenance respectively. But on aggregate… I have checklists. One day I’ll write a script that will ssh into a machine > update/upgrade the os > docker compose pull/rebuild/purge > move on to the next relay… That’ll be my impetus to learn how to write a script.
That’ll be my impetus to learn how to write a script.
This part caught my eye. You were able to do all that other stuff without ever attempting to write a script? That’s surprising and awesome. Assuming you are running everything on a linux server, I feel like a bash script run via a cronjob would be your best bet; no need to ssh into the server, just let it do it on its own. I haven’t tested any of this, but I do have scripts I wrote that do automatic ZFS backups and scrubs; the order should go something like:
open the terminal on the server and type
```bash
mkdir scripts
cd scripts
nano docker-updates.sh
```

type something along the lines of this (I’m still learning docker so adjust the commands to your needs)

```bash
#!/bin/bash
# change into the directory that contains your docker-compose.yml
cd /path/to/your/compose/directory
docker compose pull && docker compose up -d
docker image prune -f
```

save the file and then type

```bash
sudo chmod +x ./docker-updates.sh
```

to make it executable, and finally set up a cronjob to run the script at specific intervals. type

```bash
crontab -e
```

or

```bash
sudo crontab -e
```

(this is if you want to run the script as root but ideally, you just add your user to the docker group so this shouldn’t be needed) and at the bottom of the file type this and save, that’s it:

```bash
# runs script at 1am on the first of every month
0 1 1 * * /path/to/scripts/docker-updates.sh
```

this website will help you choose a different interval
For OS updates you basically do the same thing except the script would look something like: (I forget if you need to type “sudo” or not; it’s running as root so I don’t think you need it but maybe try it with sudo in front of both "apt"s if it’s not working. Also use whatever package manager you have if you aren’t using apt)
while in the scripts folder you created earlier
```bash
nano os-updates.sh
```

```bash
#!/bin/bash
apt update -y && apt upgrade -y
reboot
```

save and don’t forget to make it executable
then use
```bash
sudo crontab -e
```

(because you’ll need root privileges to update. this will run the script as root without requiring you to input your password)

```bash
# runs script at 12am on the first of every month
0 0 1 * * /path/to/scripts/os-updates.sh
```

I did think about cron but, long ago, I heard it wasn’t best practice to update through cron because the lack of logs makes things difficult to see where things went wrong, when they do.
I’ve got automatic-upgrades running on stuff so it’s mostly fine. Dockge is running purely to give me a way to upgrade docker images without having to ssh. It’s just the monthly routine of “apt update && apt upgrade -y” *5 that sucks.
Thank you for the advice though. I’ll probably set cron to update the images with the script as you suggest. I have a “maintenance” homarr page as a budget uptime kuma so I can quickly look there to make sure everything is pinging at least. I made the page so I can quickly get to everyone’s dockge, pihole and nginx but the pings were a happy accident.
the lack of logs
That’s the best part: with a script, you can pipe the output of the updates into a log file you create yourself. I don’t currently do that; if something breaks, I just roll back to a previous snapshot and try again later, but it’s possible and seemingly straightforward (example below).
This askubuntu link will probably help
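As an example (reusing the hypothetical docker-updates.sh path from the earlier comment), the crontab entry itself can handle the logging:

```bash
# same monthly run, with stdout and stderr appended to a log file
# (point the log somewhere your user can write if not running as root)
0 1 1 * * /path/to/scripts/docker-updates.sh >> /var/log/docker-updates.log 2>&1
```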
Why do so many people do this incorrectly? Unless you are actually serving the public, you don’t need to open anything other than a WireGuard tunnel. My phone automatically connects to WireGuard as soon as I disconnect from my home WiFi, so I have access to every single one of my services and only have to expose one port and service.
If you are going through setting up caddy or nginx proxy manager or anything else and you’re not serving the public… you’re dumb.
What are you using to auto connect to VPN when you disconnect from your home wifi?
WG Tunnel does that natively; you can whitelist some wifis and auto connect on others, and optionally on mobile data