You really, really should. Just because someone is inside your network is no reason to hand them the keys to the kingdom.
And I don't see why allowing a Postgres or Apache or whatever run through Docker through your firewall is any more confusing than allowing the same service installed via APT through your firewall. It's more confusing that the firewall DOESN'T protect Docker services like it does everything else.
Ideally, yes. But in reality, this means that if you just want one little EC2 VM on AWS running Docker, you now need to create a VM, a VPC, and an NLB/ALB in front of it ($20/mo+, right?) and assign a public IP address to that LB instead. For a VM like a t4g.nano, that could mean going from a $3/mo bill to $23/mo (or ~$35/mo with a NAT gateway instead of an LB?), not to mention the hassle of all that setup. Hetzner, on the other hand, includes a firewall for free.
Your original solution of binding to 127.0.0.1 generally seems fine. Also, if you're spinning up a web app and its supporting services all in Docker, and you're really just running this on a single $3/mo instance... my unpopular opinion is that docker compose might actually be a fine choice here. Docker compose makes it easy for these services to talk to each other without exposing any of them to the outside network unless you intentionally set up a port binding for those services in the compose file.
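As a rough sketch (image names and ports are placeholders), only the thing that actually needs to be reachable gets a port binding, and even that can be loopback-only:

    # docker-compose.yml -- only the web app is published, and only on loopback
    services:
      web:
        image: my-web-app:latest        # placeholder image
        ports:
          - "127.0.0.1:8000:8000"       # reachable from the host only, not from outside
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example
        # no "ports:" at all -- web reaches it as db:5432 over the default
        # compose network, nothing is bound on the host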
You should try swarm. It solves a lot of challenges that you would otherwise have while running production services with compose. I built rove.dev to trivialize setup and deployments over SSH.
What does swarm actually do better for a single-node, single-instance deployment? (I have no experience with swarm, but on googling it, it looks like it is targeted at cluster deployments. Compose seems like the simpler choice here.)
Swarm works just as well in a single-host environment. It is very similar to compose in semantics, but it also does basic orchestration that you would otherwise have to hack into compose, like running multiple instances of a service and blue/green deployments. And if you need to grow later, it can of course run services on multiple hosts. The main footgun is that the Swarm management port does not have any security on it, so it needs to be locked down, either with rove or a manual ufw config.
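For what it's worth, the stack file Swarm consumes is the same compose format plus a deploy section; a rough sketch (image name and numbers are made up) of the replicas/rolling-update bit:

    # stack.yml -- deployed with `docker stack deploy -c stack.yml mystack`
    services:
      web:
        image: my-web-app:latest   # placeholder image
        ports:
          - "443:8443"
        deploy:
          replicas: 2              # multiple instances of the service
          update_config:
            parallelism: 1
            order: start-first     # bring the new task up before stopping the old one

The management port in question is TCP 2377 (plus 7946 and 4789 for node-to-node and overlay traffic), so those are the ones to block from the public internet.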
In AWS, why would you need an NLB/ALB for this? You can expose all the ports you want all day from inside the EC2 instance, but nobody is going to be able to access them unless you specifically allow those ports as inbound rules in the security group attached to the instance. In this case you'd only need a load balancer if you want to use it as a reverse proxy to terminate HTTPS or something.
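E.g., in CloudFormation terms (resource names are made up, and the VPC reference assumes one defined elsewhere in the template):

    Resources:
      InstanceSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Only HTTPS reaches the instance
          VpcId: !Ref MyVpc              # assumes a VPC resource defined elsewhere
          SecurityGroupIngress:
            - IpProtocol: tcp
              FromPort: 443
              ToPort: 443
              CidrIp: 0.0.0.0/0          # anything Docker publishes on other ports
                                         # is still dropped at the instance boundary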
There's no good reason a VM or container on Hetzner cannot use a firewall like iptables. If that makes the service too expensive, you increase the cost or otherwise lower resources. A firewall is a very simple, essential part of network security; every simple IoT device running Linux can run iptables, too.
I guess you did not read the link I posted initially. When you set up a firewall on a machine to block all incoming traffic on every port except 443, then run docker compose with a port mapping like 8000:8000 and put a reverse proxy such as Caddy/nginx in front (e.g. to host multiple services on one IP over HTTPS), Docker punches holes in the iptables config without your permission, leaving both ports 443 and 8000 open on your machine.
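Concretely, the difference is only the bind address in the compose file (8000 here stands in for whatever the backend uses):

    services:
      app:
        image: my-app:latest          # placeholder
        ports:
          - "8000:8000"               # binds on 0.0.0.0; Docker inserts its own
                                      # iptables NAT rules, so ufw never sees it
          # safer alternative:
          # - "127.0.0.1:8000:8000"   # loopback only -- just the local reverse
          #                           # proxy on the host can reach it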
@globular-toast was not suggesting an iptables setup on a VM; they are suggesting putting the firewall on a totally different device/VM from the one running Docker. Sure, you can do that with iptables and /proc/sys/net/ipv4/ip_forward (see https://serverfault.com/questions/564866/how-to-set-up-linux...), but that's a whole new level of complexity for someone who is not an experienced network admin (plus you now need to pay for two VMs and keep them both patched).
Either you run a VM inside the VM, or indeed two VMs. A jumphost does not require a lot of resources.
The problem here is that the user does not understand that exposing 8080 on an external network means it is reachable by everyone. If you use an internal network between the database and the application, the cache and the application, and the application and the reverse proxy, and put proper auth on the reverse proxy, you're good to go. Guides do suggest this. They even explain Let's Encrypt for the reverse proxy.
I always have to define 'external: true' on the network, which I don't do with databases. I attach the database to an internal network shared with the application. You can do the same with your web application, so you only need auth on the reverse proxy. Then you use whitelisting on that port, or you use a VPN. But I also always use a firewall that the OCI daemon does not have root access over.
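Roughly like this (network and image names are arbitrary):

    services:
      app:
        image: my-app:latest       # placeholder
        networks: [backend, proxy]
      db:
        image: postgres:16
        networks: [backend]        # only reachable from containers on "backend"
      caddy:
        image: caddy:2
        ports:
          - "443:443"              # the only port published on the host
        networks: [proxy]

    networks:
      backend:
        internal: true             # never routed or published
      proxy:
        external: true             # pre-existing network created outside this file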
Yeah, true, but I have set it up in such a way that that network is an exposed bridge, whereas the other networks created by docker-compose are not. It isn't even possible to reach those from outside: they're not routed, and each of these backends uses the standard Postgres port, so with 1:1 NAT it'd give errors anyway. Even on 127.0.0.1 it does not work:
$ nc 127.0.0.1 5432 && echo success || echo no success
no success
Example snippet from docker-compose:
DB/cache (e.g. Postgres & Redis, in this example Postgres):
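Something along these lines (names and versions are illustrative):

    services:
      db:
        image: postgres:16            # placeholder version
        environment:
          POSTGRES_PASSWORD: example
        networks:
          - backend                   # compose-created network, not the exposed bridge
        # no "ports:" section, so nothing listens on the host -- which is why
        # the nc test against 127.0.0.1:5432 above fails

    networks:
      backend: {}                     # only containers attached to it reach db:5432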
Nobody is disputing that it is possible to set up a secure container network. But this post is about the fact that the default docker behavior is an insecure footgun for users who don’t realize what it’s doing.