I have some servers that don’t come with native IPv6 connectivity, which means that in order to use the next-generation protocol, their IPv6 traffic needs to be tunneled over IPv4 through other IPv6-capable nodes.
In the past I have exclusively used the Tunnel Broker service provided by Hurricane Electric. I loved their service not only because it is free and easy to set up, but also for the reasonably good quality of the tunnel, since HE is a well-known transit provider. But recently, one of my servers, which I use as an Internet exit, has been struggling to make connections to IPv6-enabled websites. The symptom is simple: I can `ping6` some addresses but not others, and the failures are becoming more frequent. So I decided to set up a private tunnel endpoint using one of my own IPv6-enabled servers.
Prerequisites
I want to mimic the Tunnel Broker service as closely as possible, because it is known to work. The current service provides tunnel users with the following:
- “Server IPv4 Address”: The remote IPv4 tunnel endpoint, like `66.220.*.*`.
- “Client IPv6 Address”: An IPv6 address representing the host connecting to the tunnel, like `2001:470:c:*::2`.
- “Server IPv6 Address”: An IPv6 address representing the tunnel server, also used as the IPv6 gateway of the client host, like `2001:470:c:*::1`.
- “Routed IPv6 Prefixes”: `/64` or `/48` subnets given to the tunnel operator to provide IPv6 connectivity to other internal networks through the tunnel.
I made one of my IPv6-connected servers the designated tunnel server. To be used as such, it needs to provide the following:
- A public IPv4 address: I will use this as the “Server IPv4 Address”, which means my client host will connect to this endpoint over IPv4.
- Three routable IPv6 addresses: The specs actually say I have 10, and I believe the provider would route the whole `/64` to me if I set it up right. But for this particular use case, 3 are enough: `*::1` is the address of the tunnel server itself, while `*::2` and `*::3` serve as the Server and Client IPv6 Address respectively.
Since I don’t have any other subnets to make routable, I don’t need to provide another routable IPv6 prefix.
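To keep the later commands straight, here is the address plan written out as shell variables. This is a sketch only; the concrete values are placeholders from the documentation ranges, not my real addresses.

# Placeholders: substitute your own addresses.
SERVER_IPV4=192.0.2.1      # “Server IPv4 Address”: the tunnel server’s public IPv4
CLIENT_IPV4=198.51.100.2   # the client host’s public IPv4
SERVER_IPV6=2001:db8::2    # “Server IPv6 Address” (*::2 above)
CLIENT_IPV6=2001:db8::3    # “Client IPv6 Address” (*::3 above)
# *::1 stays as the server’s pre-existing external address; it is not part of the tunnel itself.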
Connecting Tunnel Client and Server
We first need to make the client and server hosts able to communicate using their IPv6 addresses. The protocol used by Tunnel Broker, and thus by my new tunnel, is Simple Internet Transition (SIT). It is supported natively by the Linux kernel and quite easy to set up. In fact, the Tunnel Broker service provides users with sample client configurations for their preferred network management tools. Here is an example using `iproute2`:
modprobe ipv6  # make sure the IPv6 stack is loaded
ip tunnel add sit-ipv6 mode sit remote [SERVER-IPV4] local [CLIENT-IPV4] ttl 255
ip link set sit-ipv6 up
ip addr add [CLIENT-IPV6]/127 dev sit-ipv6
ip route add ::/0 dev sit-ipv6  # send all IPv6 traffic over the tunnel
ip -f inet6 addr  # verify that the address is configured
For my configuration, the client IPv6 address is `*::3`, and the netmask is set to `/127` to include both ends’ addresses. If one wants to persist the configuration, they can use the method provided by their operating system. Here is the example client configuration using Netplan (used at least by Ubuntu 18.04):
network:
  version: 2
  tunnels:
    sit-ipv6:
      mode: sit
      remote: [SERVER-IPV4]
      local: [CLIENT-IPV4]
      addresses:
        - "[CLIENT-IPV6]/127"
      gateway6: "[SERVER-IPV6]"
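Once saved, the configuration can be applied with `netplan apply` (run as root); `netplan try` is a safer alternative that rolls the change back automatically if connectivity is lost.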
The thing about SIT tunnels is that they are symmetrical, so in order to set up the server end, one needs to make the following changes:
- Switch the server and client IPv4 addresses, so that the one after “local” is the IPv4 address of the machine being configured.
- Replace `CLIENT-IPV6` with `SERVER-IPV6` as the interface’s IPv6 endpoint.
- Remove the route / gateway definition, since the server already has an external IPv6 gateway.
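Applied to the `iproute2` example above, the server end would then look roughly like this (a sketch under the same placeholder convention; the interface name `sit-ipv6` is simply reused from the client side):

modprobe ipv6  # make sure the IPv6 stack is loaded
ip tunnel add sit-ipv6 mode sit remote [CLIENT-IPV4] local [SERVER-IPV4] ttl 255
ip link set sit-ipv6 up
ip addr add [SERVER-IPV6]/127 dev sit-ipv6
# No default route here: the server keeps its existing external IPv6 gateway.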
By now, both the tunnel server and client hosts should be able to reach each other with their brand new IPv6 addresses. This can be verified by running `ping6 [SERVER-IPV6]` on the client side, and vice versa.
Forwarding Tunneled Traffic
In order for the tunneled host to actually reach the global Internet, the tunnel server has to route IPv6 traffic to and from that host.
Forwarding Outgoing Traffic
Since `[SERVER-IPV6]` is configured as the IPv6 gateway on the client host, all of the client’s traffic with a remote IPv6 destination address will be sent over the tunnel to the server side. By default, a server will not take on the role of routing that traffic; it will only accept traffic destined for itself. To make it also forward traffic to the next hop, we need to enable packet forwarding in the kernel parameters. This can be done by running the following as root:
echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
This can be persisted across reboots by appending `net.ipv6.conf.all.forwarding=1` to `/etc/sysctl.conf`. (Unlike IPv4, the per-interface IPv6 `forwarding` flags mainly toggle host/router behaviour; actual packet forwarding is controlled by the global `all` entry, which is why it is used here.) Note that if you have a firewall like `ip6tables`, you may need to configure its forwarding rules, or change the default policy of the FORWARD chain to ACCEPT.
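As an illustration, a minimal pair of `ip6tables` rules could look like this (the interface names are assumptions: `sit-ipv6` is the tunnel interface configured above, and `eth0` stands in for the server’s external interface):

# Allow tunneled traffic to be forwarded in both directions.
ip6tables -A FORWARD -i sit-ipv6 -o eth0 -j ACCEPT
ip6tables -A FORWARD -i eth0 -o sit-ipv6 -j ACCEPT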
Accepting Incoming Traffic
When traffic comes in that is addressed to the client host, the tunnel server’s upstream gateway will use a “Neighbor Solicitation Message” to verify that the address is reachable on the link. But the client host’s IPv6 address is absent on all interfaces of the server host, so the server will not reply to said message, causing the incoming traffic to be dropped.
In order for the tunnel server to respond to the solicitation with a “Neighbor Advertisement Message”, we need to configure an NDP proxy on the server’s external interface. The first step is to enable NDP proxying in the Linux kernel:
echo 1 > /proc/sys/net/ipv6/conf/[SERVER-EXTERNAL-INTERFACE]/proxy_ndp
This parameter can be persisted in the same way as shown in the last section. Then we have to explicitly enable NDP proxying for the client IPv6 address. Using `iproute2`, this can be done as:
ip -6 neigh add proxy [CLIENT-IPV6] dev [SERVER-EXTERNAL-INTERFACE]
This line means that when the external router solicits for the client IPv6 address on that interface, the server will answer with its own link-layer address. Then, when traffic destined for the client host arrives, the server will forward it to the tunnel interface, since the `/127` subnet we configured above covers the IPv6 addresses of both ends. This can be confirmed by inspecting the routing table with `ip -6 route` on the server.
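To double-check that the proxy entry took effect, `iproute2` can also list the configured proxy entries:

ip -6 neigh show proxy  # should list [CLIENT-IPV6] on [SERVER-EXTERNAL-INTERFACE]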
The `ip -6 neigh add proxy` command also needs to be persisted, so that the client host will not lose connectivity after the server reboots. The way to persist it varies by the network management tool used on the server. For ifupdown, the command can be written into `/etc/network/interfaces`; if the server is using Netplan, the command should probably go into a script under `/etc/networkd-dispatcher/routable.d`, since Netplan doesn’t come with native hook support. A sketch of such a script follows.
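For example, a minimal dispatcher script might look like this (the file name `50-ndp-proxy` is made up; remember to mark the script executable with `chmod +x`):

#!/bin/sh
# /etc/networkd-dispatcher/routable.d/50-ndp-proxy (hypothetical name)
# Re-create the NDP proxy entry whenever the interface becomes routable.
# "replace" is idempotent, so re-running this on every event is safe.
ip -6 neigh replace proxy [CLIENT-IPV6] dev [SERVER-EXTERNAL-INTERFACE]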
Summary
I would like to revisit the route an outgoing packet will go through. Let’s say a process on the client host wants to access `2001:4860:4860::8888`:
- According to the routing table on the client end, the traffic should be forwarded to the gateway, `SERVER-IPV6`.
- Then the client will notice that the `SERVER-IPV6` address belongs to the `/127` subnet on the `sit-ipv6` interface.
- When the packet is forwarded to the `sit-ipv6` tunnel interface, it will be encapsulated with an IPv4 header and sent to the `SERVER-IPV4` address. This could be across the IPv4 Internet, or over a private connection if there is one.
- The encapsulated packet will be received by the `sit-ipv6` interface on the server’s end, and unpacked to its original IPv6 form.
- Since the IPv6 destination is an external one, and we have enabled forwarding on the server, the packet will be routed to the external gateway according to the routing table.
When the remote server replies, the packet goes the exact opposite way back to the client host.
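If everything is in place, end-to-end connectivity can be verified from the client, for example with the same Google Public DNS address used above:

ping6 -c 3 2001:4860:4860::8888
traceroute6 2001:4860:4860::8888  # the first hop should be [SERVER-IPV6]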