
This is only an issue if you run Docker on your firewall, which you absolutely should not.


Do you not run firewalls on your internal-facing machines to make sure they only expose the correct ports?

Security isn't just an at-the-edge thing.


No. That would be incredibly annoying, and it's probably why Docker overrides it: it would cause all manner of confusion.


You really, really should. Just because someone is inside your network is no reason to hand them the keys to the kingdom.

And I don't see why allowing a postgres or apache or whatever run through Docker through your firewall is any more confusing than allowing them through your firewall when installed via APT. It's more confusing that the firewall DOESN'T protect Docker services like everything else.


Ideally, yes. But in reality, this means that if you just want to have 1 little EC2 VM on AWS running Docker, you now need to create a VM, a VPC, an NLB/ALB in front of the VPC ($20/mo+, right?) and assign a public IP address to that LB instead. For a VM like t4g.nano, it could mean going from a $3/mo bill to $23/mo ($35 in case of a NAT gateway instead of an LB?) bill, not to mention the hassle of all that setup. Hetzner, on the other hand, has a free firewall included.


Your original solution of binding to 127.0.0.1 generally seems fine. Also, if you're spinning up a web app and its supporting services all in Docker, and you're really just running this on a single $3/mo instance... my unpopular opinion is that docker compose might actually be a fine choice here. Docker compose makes it easy for these services to talk to each other without exposing any of them to the outside network unless you intentionally set up a port binding for those services in the compose file.
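For what it's worth, a minimal sketch of that (service names, images and ports here are made up for illustration): the database is reachable by the app over the compose network, but the only published port is the app's, and it's bound to 127.0.0.1 so only a reverse proxy on the same host can reach it.

    # docker-compose.yml (sketch)
    services:
      app:
        image: my-web-app:latest
        ports:
          - "127.0.0.1:8000:8000"   # published on loopback only
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example
        # no 'ports:' entry - reachable as db:5432 from 'app' over the
        # default compose network, invisible to the outside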


You should try swarm. It solves a lot of challenges that you would otherwise have while running production services with compose. I built rove.dev to trivialize setup and deployments over SSH.


What does swarm actually do better for a single-node, single-instance deployment? (I have no experience with swarm, but on googling it, it looks like it is targeted at cluster deployments. Compose seems like the simpler choice here.)


Swarm works just as well in a single host environment. It is very similar to compose in semantics, but also does basic orchestration that you would have to hack into compose, like multiple instances of a service and blue/green deployments. And then if you need to grow later, it can of course run services on multiple hosts. The main footgun is that the Swarm management port does not have any security on it, so that needs to be locked down either with rove or manual ufw config.
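For reference, a single-node sketch (the stack name is arbitrary; 2377/tcp is Swarm's default cluster-management port, and the last line assumes ufw is your firewall frontend):

    # One-time: make this host a single-node swarm manager.
    docker swarm init

    # Deploy an existing compose file as a stack; 'deploy:' keys in the
    # file (e.g. 'replicas: 2') give you multiple instances per service.
    docker stack deploy -c docker-compose.yml mystack

    # Lock down the management port mentioned above.
    ufw deny 2377/tcp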


Interesting, in my mind Swarm was more or less dead and the next step after docker+compose or podman+quadlet was k3s. I will check out Rove, thanks!


That was rumored for a while, but Swarm is still maintained! I wouldn't count on it getting the latest and greatest compose format support though.


In AWS why would you need a NLB/ALB for this? You could expose all ports you want all day from inside the EC2 instance, but nobody is going to be able to access it unless you specifically allow those ports as inbound in the security group attached to the instance. In this case you'd only need a load balancer if you want to use it as a reverse proxy to terminate HTTPS or something.
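For example (the group ID is a placeholder), allowing only HTTPS in through the security group looks like this, and every other port the instance listens on stays unreachable from outside regardless of what Docker does to iptables:

    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 443 \
      --cidr 0.0.0.0/0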


TIL, thank you! I used such security groups with OpenStack and OCI but somehow didn't think about them in connection with EC2.


There's no good reason a VM or container on Hetzner cannot use a firewall like iptables. If that makes the service too expensive, you accept the increased cost or otherwise lower resources. A firewall is a very simple, essential part of network security. Every simple IoT device running Linux can run iptables, too.


I guess you did not read the link I posted initially. When you set up a firewall on a machine to block all incoming traffic on all ports except 443, then run docker compose exposing port 8000:8000 and put a reverse proxy like Caddy/nginx in front (e.g. if you want to host multiple services on one IP over HTTPS), Docker punches holes in the iptables config without your permission, leaving both ports 443 and 8000 open on your machine.
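You can watch it happen on the host: traffic to published container ports is DNAT'ed and forwarded, so it never traverses your INPUT rules; Docker's ACCEPT rules live in its own DOCKER chain on the FORWARD path.

    # Inspect the rules Docker inserted for published ports.
    sudo iptables -L DOCKER -n -v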

@globular-toast was not suggesting an iptables setup on a VM; they are suggesting a firewall on a totally different device/VM than the one running Docker. Sure, you can do that with iptables and /proc/sys/net/ipv4/ip_forward (see https://serverfault.com/questions/564866/how-to-set-up-linux...), but that's a whole new level of complexity for someone who is not an experienced network admin (plus you now need to pay for two VMs and keep them both patched).
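Roughly what that dedicated firewall box involves, as a sketch (assuming eth0 faces the internet and 10.0.0.2 is the Docker host's internal address):

    # On the separate firewall VM: enable routing, DNAT port 443 to the
    # docker host, and drop everything else destined for it.
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
      -j DNAT --to-destination 10.0.0.2
    iptables -A FORWARD -p tcp -d 10.0.0.2 --dport 443 -j ACCEPT
    iptables -A FORWARD -d 10.0.0.2 -j DROP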


Either you run a VM inside the VM, or indeed two VMs. A jump host does not require a lot of resources.

The problem here is that the user does not understand that exposing 8080 on an external network means it is reachable by everyone. If you use an internal network between database and application, cache and application, and application and reverse proxy, and put proper auth on the reverse proxy, you're good to go. Guides do suggest this. They even explain Let's Encrypt for the reverse proxy.


Docker by default modifies iptables rules to allow traffic when you launch a container with port-publishing options.

If you have your own firewall rules, Docker just writes its own around them.
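The escape hatch Docker itself provides is the DOCKER-USER chain, which is evaluated before Docker's own rules. A sketch, assuming eth0 is your external interface and 203.0.113.1 is a trusted admin IP:

    # Drop external traffic to containers unless it comes from the
    # trusted source; DOCKER-USER rules survive Docker restarts but not
    # reboots, so persist this in your firewall scripts.
    iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.1 -j DROP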


I always have to define 'external: true' on the network, which I don't do for databases. I attach those to an internal network shared with the application. You can do the same with your web application, thereby only needing auth on the reverse proxy. Then you use whitelisting on that port, or a VPN. But I also always use a firewall that the OCI daemon does not have root access to.


> I always have to define 'external: true' at the network

That option has nothing to do with the problem at hand.

https://docs.docker.com/reference/compose-file/networks/#ext...


I thought "external" referred to whether the network was managed by compose or not


Yeah, true, but I have set it up in such a way that this network is an exposed bridge whereas the other networks created by docker-compose are not. It isn't even possible to reach those from outside: they're not routed, and since each of these backends uses the standard Postgres port, 1:1 NAT would give errors anyway. Even on 127.0.0.1 it does not work:

    $ nc 127.0.0.1 5432 && echo success || echo no success
    no success

Example snippet from docker-compose:

DB/cache (e.g. Postgres & Redis, in this example Postgres):

    [..]
    ports:
      - "5432:5432"
    networks:
      - backend
    [..]
App:

    [..]
    networks:
      - backend
      - frontend
    [..]
    networks:
      frontend:
        external: true
      backend:
        internal: true


Nobody is disputing that it is possible to set up a secure container network. But this post is about the fact that the default docker behavior is an insecure footgun for users who don’t realize what it’s doing.



