For security reasons, I recently had to record users' real IPs. I thought it would be trivial, but it turned out not to be. This post is a memo on how to obtain the client's external IP in the application service when the path involves hairpin NAT (port loopback) on the router, keepalived, and nginx.
The traffic path from the client PC is shown above. In such a topology, what problems come up when the application service tries to obtain the client PC's external IP? (The IPs are made up for illustration and are not meant to be realistic.)
- Programming implementation
Take Java as an example; this part I can do.
public static String getClientIP(HttpServletRequest request) {
    // Returns the source address of the IP packets carrying this request
    String remoteAddr = request.getRemoteAddr();
    return remoteAddr;
}
Run it. The output is 3.3.3.3. This is because the API returns the source address of the IP packet. The Nginx reverse proxy works at the application layer: when it receives an HTTP request, it creates a new request and sends it to the application service, and the source address of that new request's IP packets is the Nginx server's own IP, i.e. 3.3.3.3.
- Nginx header injection
Since the proxy works at the application layer, the source address of the forwarded request's IP packets is bound to be 3.3.3.3. But at the application layer we can attach a little extra information, so that the application service behind the proxy can use it to learn the original source address of the request. Here is how.
Configuration in Nginx:

server {
    ...
    proxy_set_header X-Real-IP $remote_addr;
    ...
}
At the HTTP layer this adds a header, X-Real-IP: $remote_addr. $remote_addr is a predefined Nginx variable holding the source IP address of the request being proxied, as Nginx sees it.
In a Java program, read the corresponding additional information.
public String getRealIp(HttpServletRequest request) {
    // Read the header injected by Nginx
    String realIp = request.getHeader("X-Real-IP");
    if (realIp != null && !realIp.isEmpty()) {
        return "Client's Real IP: " + realIp;
    }
    return "";
}
Run it. This time the output is 2.2.2.2. Clearly a step forward.
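As an aside: when there is more than one proxy in the chain, the de facto convention is the X-Forwarded-For header, which carries a comma-separated list where the left-most entry is the original client. A minimal sketch of parsing it (class and method names are mine; trust the header only when every proxy in the chain is yours, since clients can forge it):

```java
public class ClientIp {
    // X-Forwarded-For may look like "client, proxy1, proxy2";
    // the left-most entry is the original client address.
    static String firstForwardedIp(String xff) {
        if (xff == null || xff.trim().isEmpty()) {
            return null;
        }
        return xff.split(",")[0].trim();
    }

    public static void main(String[] args) {
        System.out.println(firstForwardedIp("2.2.2.2, 3.3.3.3")); // prints 2.2.2.2
    }
}
```

In the servlet, one would check this header first and fall back to X-Real-IP and then getRemoteAddr().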
- Keepalived load balancing mode
I had been under the impression that keepalived here was mainly to solve the single point of failure of the nginx proxy server, yet it seemed to be configured for load balancing as well. Digging through the config files, here is what is actually happening.
Dazhuang from ops said that when he configured keepalived he thought one step further: the machine may be alive while the nginx process is dead, and in that case the virtual IP would not drift over to the backup machine, so he added a layer of load balancing on top. Fair enough. keepalived's load balancing works below the application layer, so during balancing it may well rewrite the source address of the IP packets, and at that level there is no way to attach extra information the way nginx does. Checking the manual, keepalived's load balancing supports three routing modes: NAT, Direct Routing (DR) and Tunneling.
In NAT mode the source IP is rewritten, and both inbound and return traffic pass through the load balancer. DR mode instead rewrites only the destination MAC address; return traffic no longer goes through the load balancer, which means the source address is not modified and replies are sent directly back to the source IP.
DR mode has a requirement: the load balancer must be able to learn the back-end server's MAC address, which relies on ARP, i.e. the load balancer and the back-end servers must be in the same broadcast domain. Our topology happens to satisfy this, so:
virtual_server 192.168.11.242 80 {
    ……
    lb_kind DR
    ……
}
This switches the load-balancing routing mode to DR. Test again: the client address obtained now becomes 1.1.1.1. This step hides a pitfall: why has the source address of the IP packets arriving at keepalived become, of all things, the external address of the exit router?
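One more DR-mode detail worth noting: the real servers (the nginx boxes here) usually must also hold the VIP on a non-ARPing interface, otherwise they drop packets whose destination IP is the VIP. A common sketch, assuming a Linux real server (the VIP 192.168.11.242 is from the config above; this is illustrative, not our actual server config):

```shell
# Suppress ARP replies for the VIP so that only the director answers ARP
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
# Bind the VIP on loopback so the real server accepts packets addressed to it
ip addr add 192.168.11.242/32 dev lo
```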
- Router port reflow (Hairpin NAT)
Victory was not far off. At this point the knowledgeable Dazhuang said: this should be related to port loopback, another system had a similar problem. Your web port has port loopback configured; if you turn it off, you can get the external address. So what is port loopback?
First, the router has a port mapping: 1.1.1.1:80 -> 192.168.0.2:80.
Server A, for some reason, finds it inconvenient to use the intranet address 192.168.0.2 to reach B, and instead accesses server B through the external IP or domain name, i.e. 1.1.1.1:80. Following the port-forwarding rule, the router forwards the traffic arriving on its intranet interface back to intranet server B, forming a 180-degree hairpin turn! That is where the name Hairpin NAT comes from, which is very graphic.
Without this setting, server A cannot reach server B via 1.1.1.1:80 at all, because the hairpin path breaks the TCP three-way handshake:
1. A sends a handshake request (SYN) to the router; the router rewrites the destination IP to 192.168.0.2 and forwards it to server B.
2. On receiving the handshake request, B replies with a handshake acknowledgement (SYN-ACK) to the source IP of the request, in this case A's address 192.168.0.1.
3. Because A and B are on the same network, the handshake acknowledgement reaches A directly.
A then finds that the source IP of this acknowledgement (192.168.0.2) is not the destination address (1.1.1.1) it was expecting to establish a connection with. A knows nothing about B, only about the router, so it discards the packet and the TCP connection cannot be established.
The key to solving this is to make the handshake acknowledgement also travel back through the router on its way to A. Therefore, when the router forwards the handshake request to B, it must rewrite the source IP as well (to 1.1.1.1), so that server B's acknowledgement is naturally sent back to 1.1.1.1 too.
But this source address translation (SNAT) is really only needed for traffic coming from the intranet. Traffic from the Internet already has an external source IP, so its replies will inevitably pass back through the router anyway.
So I contacted Ming, who manages the router, and asked him not to be lazy and to write finer-grained rules instead of doing indiscriminate source address translation, i.e.:
1. For traffic arriving on the intranet interface, translate both the source and the destination address.
2. For traffic arriving from the Internet, translate only the destination address.
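As an illustration only, on an iptables-based Linux gateway the two rules above might look roughly like this (addresses taken from the examples earlier; the actual router is a different device, so this is a sketch of the idea, not its real config):

```shell
# Rule for everyone: map the public port to the internal web server (DNAT)
iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.0.2:80
# SNAT only for hairpin traffic that originates from the LAN itself,
# so that external clients keep their original source IP
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.2 -p tcp --dport 80 \
    -j SNAT --to-source 1.1.1.1
```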
Retest. It finally outputs the client PC's actual external IP, 0.0.0.0 (coded, as before).
OK, quite a few twists and turns and ups and downs; I'll leave the write-up here~