I have some VPS servers with good connections but rather limited hardware resources, which cannot handle Kubernetes traffic as edge nodes by themselves. For instance, a VPS server with a CN2-GIA connection is perfect for Mainland Chinese visitors, but it comes with a mere 512 MB of memory, which is a little tight for running Docker, Kubelet, and the Nginx Ingress Controller.
So I am trying to set up a forwarding mechanism to proxy the traffic between visitors and another, more powerful Kubernetes node. A simple solution would be to run a TCP port proxy, such as nginx, haproxy, socat, or even the iptables MASQUERADE feature from the Linux kernel. The problems with this method are:
- The original visitor IP will be lost and replaced with the edge’s IP.
- Surely you can use the PROXY Protocol to wrap and forward the client IPs, but that requires additional setup. For instance, the Nginx Ingress Controller lets you accept traffic either with or without the PROXY Protocol, but not both.
- With the exception of kernel forwarding, traffic must be handled in user space, which can increase the load on an underpowered node.
- Traffic is forwarded between nodes as-is, with no additional encryption.
- This is problematic if the two nodes are not connected by a private network and the forwarded traffic uses a plaintext protocol.
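For comparison, the user-space approach being rejected here might look like the following nginx `stream` proxy on the edge node (a hypothetical sketch; the bracketed address is a placeholder, and the upstream would then have to accept PROXY Protocol traffic exclusively):

```nginx
# Hypothetical TCP proxy on the slim edge node.
# proxy_protocol wraps the client IP for the upstream, but this runs
# entirely in user space and forces the upstream to expect PROXY
# Protocol on every connection.
stream {
    server {
        listen 80;
        proxy_pass [POWERFUL.NODE.PRIVATE.IP]:80;
        proxy_protocol on;
    }
}
```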
I experimented with a new solution, which overcomes all the problems mentioned above. Here is an overall illustration of the setup:
The following are the incoming and outgoing traffic flows, along with the commands used to set up forwarding:
- Incoming traffic
  - When a client from `[Client.IP]` hits `[Slim.Edge.Public.IP]:80`, the traffic will be received by the edge server's public network interface.
  - The edge server changes the destination address to `[Powerful.Node.Private.IP]:80` using the kernel's DNAT mechanism. This happens in the pre-routing stage.

        iptables -t nat -A PREROUTING -p tcp -m multiport --dports 80,443 -d [SLIM.EDGE.PUBLIC.IP] -j DNAT --to-destination [POWERFUL.NODE.PRIVATE.IP]

    - Note that unlike other proxy setups, there is no masquerading (iptables' `MASQUERADE` action) in place, because we want to preserve the original source IP.
  - The kernel will pass the incoming traffic on to the edge server's WireGuard interface. By default, forwarding traffic is not allowed, and a forwarding rule has to be added explicitly.

        iptables -t filter -A FORWARD -o [SLIM.EDGE.WIREGUARD.INTERFACE] -j ACCEPT

    - Here `[SLIM.EDGE.WIREGUARD.INTERFACE]` is the slim node's WireGuard interface, such as `wg0`.
    - Also remember to enable the `net.ipv4.ip_forward` toggle. This is pretty routine, so I won't get into the details.
  - The powerful node will receive the traffic on its WireGuard interface, and since the ingress controller (or HTTP server, or whatever daemon you are forwarding the traffic to) is listening on the host interface with a wildcard address (0.0.0.0), the forwarded packets will be handled just like direct ones, with the clients' source IP addresses intact.
- Outgoing traffic
  - The powerful node will probably reply with a packet from `[Powerful.Node.Private.IP]:80` to `[Client.IP]:(source-port)`.
  - Use an IP rule to make sure that traffic coming out of the private interface, regardless of the destination address, will be sent to the WireGuard interface.

        ip rule add from [POWERFUL.NODE.PRIVATE.IP] table 1234 prio 5678

    - Note that the routing table ID (1234) should be set in the WireGuard configuration (`Table = 1234`) in order for the `wg-quick` script to create and fill this routing table. The priority number (5678) can be anywhere between 1 and the main rule's priority (usually 32766).
    - When setting up the WireGuard tunnel, make sure to put `AllowedIPs = 0.0.0.0/0` (or the IPv6 counterpart `::/0`) in the edge node's peer definition.
    - You can examine the generated routing table by running `ip route show table 1234`; the output should be similar to:

          default dev [POWERFUL.NODE.WIREGUARD.INTERFACE] scope link
  - The traffic will be received by the edge node, which needs to change the source IP from `[Powerful.Node.Private.IP]` back to its own public IP, using the kernel's SNAT mechanism.

        iptables -t nat -A POSTROUTING -p tcp -s [POWERFUL.NODE.PRIVATE.IP] -j SNAT --to-source [SLIM.EDGE.PUBLIC.IP]

  - From the edge node's perspective, this packet is also a forwarded one, so another forwarding rule should be added.

        iptables -t filter -A FORWARD -i [SLIM.EDGE.WIREGUARD.INTERFACE] -j ACCEPT
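For convenience, the edge-node side of the steps above can be collected into a single script. This is a sketch, not a drop-in setup: the bracketed placeholders must be replaced with real values, and these rules are not persisted across reboots.

```shell
#!/bin/sh
# Edge-node setup, collecting the rules from the steps above.
# Replace the bracketed placeholders with real values before running.

# Allow the kernel to forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# Incoming: rewrite the destination to the powerful node (DNAT, pre-routing)
iptables -t nat -A PREROUTING -p tcp -m multiport --dports 80,443 \
  -d [SLIM.EDGE.PUBLIC.IP] -j DNAT --to-destination [POWERFUL.NODE.PRIVATE.IP]

# Allow forwarded traffic out of and back in through the WireGuard interface
iptables -t filter -A FORWARD -o [SLIM.EDGE.WIREGUARD.INTERFACE] -j ACCEPT
iptables -t filter -A FORWARD -i [SLIM.EDGE.WIREGUARD.INTERFACE] -j ACCEPT

# Outgoing: rewrite the source back to the edge's public IP (SNAT, post-routing)
iptables -t nat -A POSTROUTING -p tcp -s [POWERFUL.NODE.PRIVATE.IP] \
  -j SNAT --to-source [SLIM.EDGE.PUBLIC.IP]
```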
With this setup, I achieve the goal of receiving traffic on the slim edge node while avoiding the issues mentioned above: the client IP is preserved through the whole process, the traffic forwarding happens entirely in kernel space, and traffic between the servers is encrypted.
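For completeness, the powerful node's `wg-quick` configuration tying together the `Table = 1234` and `AllowedIPs` requirements above might look like this (a sketch; the keys, tunnel addresses, and endpoint port are placeholders):

```ini
# /etc/wireguard/wg0.conf on the powerful node (sketch with placeholder values)
[Interface]
PrivateKey = <powerful-node-private-key>
Address = 10.0.0.2/24
# Routes for AllowedIPs go into table 1234 instead of the main table,
# so the "ip rule" shown earlier can steer reply traffic into the tunnel.
Table = 1234

[Peer]
# The slim edge node
PublicKey = <edge-node-public-key>
Endpoint = [SLIM.EDGE.PUBLIC.IP]:51820
# 0.0.0.0/0 so replies to arbitrary client IPs are routed into the tunnel
AllowedIPs = 0.0.0.0/0
```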
That said, the encryption is entirely optional, so the WireGuard tunnel between the servers can easily be replaced with another layer-3 tunnel such as IPIP or GRE.
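An unencrypted replacement could be sketched roughly as follows on the edge node (assumed placeholder addresses, including a `[POWERFUL.NODE.PUBLIC.IP]` not used elsewhere in this post; the forwarding and NAT rules above still apply, and the routing table that `wg-quick` would have filled must be populated by hand):

```shell
#!/bin/sh
# Edge-node side of a plain IPIP tunnel standing in for WireGuard.
# Replace the bracketed placeholders with real values before running.
ip tunnel add tun0 mode ipip \
  local [SLIM.EDGE.PUBLIC.IP] remote [POWERFUL.NODE.PUBLIC.IP]
ip addr add 10.0.0.1/24 dev tun0
ip link set tun0 up
```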