WebSocket gets 504 after passing through two nginx proxies

Here is the story:

My server is a cloud server running CentOS, and it serves a handful of web pages.
These pages fall into three main categories: my homework, my little projects, and my big projects.

To manage them efficiently, I decided to move them into Docker containers. Here is the structure:

This is my plan, but WebSocket doesn't work with this setup:

port 80---Nginx
          |
          |-another port--container port 80-nginx--static files
          |                                 |--container port--back end server
          |-another port ....
          ......
################################################################################
WebSocket works fine with this setup:

physical port -- container port -- nginx -- static files
                                   |--container port--back end server

I use nginx listening on port 80 of the physical machine to proxy requests to my containers, and the nginx inside each container proxies requests on to my back-end server. With this setup everything works fine except WebSocket. Web pages load and AJAX requests to my back-end server get responses, but for WebSocket the back-end server receives, upgrades, and holds the connection, while the browser never gets any response until nginx closes the connection with a 504 after the timeout expires.
When I bind the container's port 80 directly to a physical port instead of proxying through nginx on the physical machine, everything works fine.

I don't think it is a header issue, because I set the headers in code before upgrading.

I can't figure out why. Can anybody help me?

Here are my configurations:

##############################
the nginx.conf on the physical machine

http{
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    upstream myminiprojs{
        server 127.0.0.1:19306;
    }
    .......... # a few more upstreams
    server {
        listen      80;
        server_name # can't be published;
        charset utf-8;
        add_header Cache-Control no-store;

        location /{
            root /root/coding;
        }

        location /homeworks{
            proxy_pass http://myminiprojs;
        }
        ...............
    }
}
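
For reference, the WebSocket proxying example in the official nginx documentation adds three directives to a proxied location. This is a sketch of what that would look like for the /homeworks location above (directive names come from the nginx docs; I haven't confirmed this is actually my problem):

```nginx
location /homeworks {
    proxy_pass http://myminiprojs;
    # nginx speaks HTTP/1.0 to upstreams by default; Upgrade needs 1.1
    proxy_http_version 1.1;
    # Connection and Upgrade are hop-by-hop headers, so a proxy drops
    # them unless they are forwarded explicitly
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```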
###################
the nginx.conf in the container

http{
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    upstream chatroomAjax{
        server 127.0.0.1:19306;
    }
    server {
        listen      80;
        charset utf-8;
        add_header Cache-Control no-store;

        location /{
            root /app;
        }

        location /homeworks/chatroom/ajax{
            proxy_pass http://chatroomAjax;
        }
    }
}

#############################
a snippet of my back-end server code

//go webSocket server
    {
        scheduleBroadCast := func(w http.ResponseWriter, r *http.Request) {
            // manually set the headers before upgrading (the workaround mentioned above)
            r.Header.Set("Connection", "Upgrade")
            r.Header.Set("Upgrade", "websocket")
            fmt.Println(r.Header.Get("Connection"), r.Header.Get("Upgrade"))
            var upGrader websocket.Upgrader
            upGrader.CheckOrigin = func(r *http.Request) bool {
                return true
            }
            conn, err := upGrader.Upgrade(w, r, nil)
            if err != nil {
                fmt.Println("websocket upgrade failed:", err)
                fmt.Println(r.Method)
                return
            }
            broadCast.AddListener(conn)
        }
        http.HandleFunc(route+"/websocket/", scheduleBroadCast)
    }

Source: StackOverflow