Docker network issue

Hey all, got an annoying little problem I can't seem to figure out. Wondering if you can help.

Setup:

  • A production VM (prod01: 192.168.2.10) acting as my reverse proxy (nginx).
  • A docker VM (docker01: 192.168.2.3) acting as my primary docker host (Debian 9 based).
  • A docker network (docknet: 172.30.0.0/16) acting as the network for all containers. All containers have a static IP specified, so they always use the same docknet IP (roughly as in the sketch below).
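For context, this is roughly how a setup like that gets created. The network name and subnet come from the post; the image name and exact run flags are my assumption about how the containers were started, not something stated above.

    # Sketch of the docknet setup (standard Docker CLI flags; the image name
    # and run options are assumptions, not taken from the post).
    docker network create --subnet 172.30.0.0/16 docknet
    docker run -d --name portainer --network docknet --ip 172.30.1.1 portainer/portainer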

Scenario

I want to proxy from nginx to all the containers at their specific docknet IPs. My reverse proxy used to be on the same host as the docker containers, so this was possible: I could proxy straight to the container ip:port rather than the host ip:port, even though I had the ports exposed to the host from the beginning.
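A proxy definition on prod01 looks roughly like this; the server_name is made up for illustration, and only the upstream IP and port come from the post:

    # Illustrative nginx server block on prod01. The hostname is hypothetical;
    # the proxy target is the container's docknet address.
    server {
        listen 80;
        server_name portainer.home.lan;

        location / {
            proxy_pass http://172.30.1.1:9000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }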

But for a number of reasons, I decided to split the reverse proxy off onto its own VM for stability. To accomplish this, I added a static route for 172.30.0.0/16 via 192.168.2.3 in my router/default gateway (shown below). That worked. Next, I decided to go back and clean up all my containers by removing the host port exposures, since I was proxying to the docknet ip:port anyway.
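Expressed as a plain Linux route (my router's UI differs, but this is the equivalent), the static route is:

    # Equivalent of the static route configured on the router/default gateway:
    # send anything destined for docknet via the docker host.
    ip route add 172.30.0.0/16 via 192.168.2.3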

This is where things started to fail. I don't have UFW or any other custom iptables rules, yet requests to the docknet ip:port started to time out. For example, portainer used to have port 9000 mapped to host port 9000. When I removed that port mapping, I could no longer reach portainer at 172.30.1.1:9000. To fix this I could re-expose all the container ports on the host, but that would mean going back and editing every container as well as my reverse proxy definitions. I could also add a UFW rule to allow all traffic from my local network (192.168.2.0/24) to docknet (172.30.0.0/16). But I'm really trying to understand what is happening and why.
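Since Docker manages iptables itself on a stock install, these are the chains I'd inspect on docker01 to see what changes when a port mapping is added or removed (standard iptables/Docker chains; the commented UFW line is just a sketch of the "allow my LAN" idea):

    # Show Docker's forwarding and NAT rules on docker01 (stock Docker chains).
    iptables -L FORWARD -n -v        # jumps to DOCKER / DOCKER-USER / isolation chains
    iptables -L DOCKER -n -v         # per-container ACCEPT rules for published ports
    iptables -t nat -L DOCKER -n -v  # DNAT rules created by -p host:container mappings

    # The "allow my whole LAN to docknet" workaround would look roughly like:
    # ufw route allow from 192.168.2.0/24 to 172.30.0.0/16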

To add a level of frustration, one of my containers (nextcloud) has never had a port exposed to the host. But, for some reason, I can still access it via its docknet ip:port (172.30.1.37:80).

Can anyone help me figure out why I cannot access portainer (172.30.1.1:9000) but I can access nextcloud (172.30.1.37:80)?

Note:

  • 172.30.1.1 is a valid IP, as it's part of network 172.30.0.0/16 (172.30.0.0 – 172.30.255.255).
  • Portainer used to have host port exposures, but they were later removed.
  • Nextcloud has never had a host port exposed.
  • No UFW or custom iptables rules.

Could it be a bug in Debian/Docker allowing all traffic by default until I explicitly turn off access (i.e., by removing the port exposure)?

UPDATE:

After deleting the container completely and using docker-compose to bring portainer up without any port mappings, I can once again access portainer from a machine on my local network via its docknet IP of 172.30.1.1:9000. This leads me to believe it's a bug/unintended side effect where the Debian default (allow traffic by default) isn't restored after a port mapping is removed.
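For reference, the recreated service looks roughly like this in docker-compose. Only the network name, image, and static IP come from the post; the file layout, compose version, and the "external" setting are my assumptions:

    # Hypothetical docker-compose sketch of portainer on docknet with no
    # published ports (standard compose v2 syntax; assumes docknet was
    # created separately with the 172.30.0.0/16 subnet).
    version: "2.4"
    services:
      portainer:
        image: portainer/portainer
        networks:
          docknet:
            ipv4_address: 172.30.1.1
    networks:
      docknet:
        external: true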

submitted by /u/jiru443
Source: Reddit