Setting up a gateway on OpenBSD
Introduction
OpenBSD is a mature Unix-like operating system that focuses on security and correctness. It features a flexible, robust, performant TCP/IP stack and a highly configurable firewall. This page describes how to configure a computer running OpenBSD as an AMPRNet router to transfer traffic between AMPRNet subnets and the Internet.
OpenBSD natively supports IPENCAP (IP-IP) tunnels through gif(4) pseudo-devices. Each gif device is a virtual network interface, synthesized by the operating system, that implements a point-to-point tunnel. Unlike Linux, OpenBSD requires a separate gif interface for each tunnel. An essentially arbitrary number of such interfaces can be created, and the mechanism scales to the number required to route all of AMPRNet.
One can manually configure gif tunnels and routes at the command line, or configure the system to establish tunnels and routes at boot time.
We will describe how to set things up by way of example. Assume a system configuration that looks substantially similar to the following:
- A dedicated static IP address to use as an endpoint for AMPRNet traffic.
- An ISP-provided router that is just a router; no NATing, no firewall.
- An OpenBSD computer with three ethernet interfaces. For example, using a Ubiquiti EdgeRouter 3 Lite:
- cnmac0 is the external interface connected to the ISP's network (in this example, its address is 23.30.150.141, with the ISP's router at 23.30.150.142)
- cnmac1 connects to an internal network (its configuration is irrelevant)
- cnmac2 is the internal interface connected to the AMPRNet subnet (in this example, its address is 44.44.107.1, routing for 44.44.107.0/24)
Let us start by configuring a single tunnel and route to the AMPRNet gateway at UCSD:
ifconfig gif1 create
ifconfig gif1 tunnel 23.30.150.141 169.228.34.84
ifconfig gif1 inet 44.44.107.1 netmask 255.255.255.255
route add -host 44.0.0.1 -link -iface gif1 -llinfo
The first command creates the interface, causing the kernel to synthesize it into existence. The second configures the tunnel itself: that is, the IP addresses that will be placed in the IPENCAP datagram the tunnel creates. The first address is the local address, which serves as the source address of the IPENCAP packet, while the second is the remote address, to which the packet is sent. The third sets an IP address for the local endpoint of the interface; this exists solely so that traffic generated by the router, such as ICMP error messages (host or port unreachable, for example), has a valid source address. Note that despite this being a point-to-point interface, we do not specify the IP address of the remote end. The fourth and final command creates a host route and associates it with the tunnel interface.
We can repeat this process for each AMPRNet tunnel, creating interfaces and adding routes for each subnet.
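For instance, a second tunnel would follow the same pattern. The remote endpoint address and subnet below are made up purely for illustration:

```shell
# Hypothetical second tunnel: remote endpoint 198.51.100.7,
# routing the (made-up) subnet 44.44.200.0/24.
ifconfig gif2 create
ifconfig gif2 tunnel 23.30.150.141 198.51.100.7
ifconfig gif2 inet 44.44.107.1 netmask 255.255.255.255
route add -net 44.44.200.0/24 -link -iface gif2 -llinfo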
Handling Encapsulated Inbound Traffic Without a Reciprocal Tunnel
When an inbound IPENCAP datagram arrives on our external interface, the network stack in the OpenBSD kernel recognizes it by examining the protocol number in the IP header: IPENCAP is protocol number 4 (not to be confused with IP version 4). Any such packet is passed to the packet input function in the gif implementation, which searches all configured gif interfaces, trying to match the configured tunnel source and destination addresses against the corresponding addresses in the inbound packet. If a matching interface is found, the packet is enqueued to that interface, which strips the IPENCAP header and routes the resulting "de-encapsulated" IP packet. This works for tunnels configured bidirectionally between two sites: if site A has a tunnel to site B, and B has a corresponding tunnel to A, they can send each other traffic.
Now consider the case where site A has a tunnel configured to send traffic to site B, but B has no tunnel configured back to A. The datagram arrives as before and is presented to the gif implementation, but the search above fails: B has no tunnel to A, so nothing matches the source and destination addresses of the incoming packet. In the general case this is correct behavior, since the system might be responsible for routing such packets onward to another computer or network, so the packet is not de-encapsulated and processed locally. In an AMPRNet context, however, we very well may want to process that packet. Accordingly, the gif implementation provides a mechanism for describing an interface that accepts encapsulated traffic from any source destined to a local address. If we configure a gif interface with the distant end of the tunnel set to 0.0.0.0, then any incoming datagram whose destination address matches the local address on the interface will be accepted, de-encapsulated, and processed as before. Using this, we can set up an interface specifically for accepting traffic from systems to which we have not defined a tunnel:
ifconfig gif0 create
ifconfig gif0 tunnel 23.30.150.141 0.0.0.0
ifconfig gif0 inet 44.44.107.1 netmask 255.255.255.255
Note the 0.0.0.0 as the remote address in the ifconfig tunnel command. Again, we set an interface address using our local AMPRNet router address purely for locally generated traffic.
Once this interface is configured, IPENCAP traffic from remote systems that have defined tunnels to us will flow, regardless of whether we have created a tunnel to them.
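As a quick diagnostic (not part of the configuration itself), one can watch for raw IPENCAP packets on the external interface and for the de-encapsulated traffic on the catch-all gif interface:

```shell
# Watch raw IPENCAP (IP protocol 4) arriving on the external interface...
tcpdump -n -i cnmac0 ip proto 4

# ...and the de-encapsulated inner traffic on the catch-all gif interface.
tcpdump -n -i gif0
```

If the first command shows traffic but the second does not, the destination address of the outer packets likely does not match the local tunnel address configured on gif0.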
Policy-based Routing Using Routing Domains
The configuration explored so far is sufficient to make connections to directly-configured AMPRNet subnets, but it suffers from a number of deficiencies. In particular, there are two issues we will discuss now.
First, there is a problem with exchanging traffic with non-AMPRNet systems on the Internet. Presumably these systems are not aware of AMPRNet tunneling, so traffic from them goes to the gateway at UCSD, where it is encapsulated and sent through a tunnel to the external interface on our router. There, it is de-encapsulated and delivered into our subnet. Return traffic, however, is handed to the router, and since the destination generally does not match any tunnel route, it is sent via the default route with an AMPRNet source address. Since most ISPs will not pass AMPRNet traffic, it will likely be lost before it reaches the destination. One might think we could work around this with a firewall rule that NATs the source address to something provided by our ISP, but even if the resulting datagram made it to the destination, for a protocol like TCP it would no longer match the 5-tuple for the connection, and would thus be discarded.
The second problem is reaching AMPRNet systems for which we have not configured a tunnel. Without a tunnel, and thus a route, we cannot send traffic to those systems.
We can solve both of these problems by sending all of our traffic through a tunnel interface to the UCSD gateway by default, e.g., by setting the default route:
route add default 44.0.0.1
However, how does the encapsulated traffic from the tunnel interface get sent to our ISP's router? We could add a host route for the UCSD gateway in our local routing table, but we would have to do this for every tunnel, which is unwieldy. Further, connecting to our external interface becomes complicated: suppose someone pings our external interface. Assuming we permit this, the response would be routed through the UCSD gateway tunnel, and even if the gateway passed arbitrary traffic back onto the Internet, the packet might be lost, as upstream ISPs would refuse to route it.
The solution to all of these problems is to use policy-based routing. Specifically, we would like to make routing decisions based on the source IP address of our traffic. We might be able to do this with firewall rules, but the edge cases get complicated very quickly. Fortunately, there is another way: routing domains.
Routing domains in OpenBSD are a mechanism to isolate routing decisions from one another. Network interfaces are configured into exactly one routing domain, which has its own private set of routing tables. Those tables are isolated, but traffic can be passed between routing domains via firewall rules.
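As an illustrative fragment (the rule itself is a sketch, not a complete ruleset; interface names and domain numbers are from the example developed below), PF's rtable keyword is what passes packets between routing domains:

```shell
# pf.conf fragment (sketch): make forwarding decisions for packets
# arriving from our AMPRNet subnet using routing domain 44's tables.
pass in on cnmac2 inet from 44.44.107.0/24 rtable 44
```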
In our case, we can put the external interface and our local AMPRNet gateway interface into separate routing domains: the gif interfaces and the local AMPRNet gateway interface can both be assigned to routing domain 44, while the external interface might be in routing domain 23 (routing domain 0 is the default). The numbers here are arbitrary (as long as they are less than 256); these were chosen to match the first octet of our example addresses. We can then set the default route in the routing domain that owns our external interface to point at our ISP's router, while in the routing domain hosting our AMPRNet presence we set it to 44.0.0.1.
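Under these assumptions, the assignments might look like the following. The external netmask is a guess for illustration, and note that assigning an interface to a routing domain clears its addresses, so the rdomain is set first:

```shell
# External interface in routing domain 23 (netmask here is assumed).
ifconfig cnmac0 rdomain 23
ifconfig cnmac0 inet 23.30.150.141 netmask 255.255.255.252

# AMPRNet-facing interfaces in routing domain 44.
ifconfig cnmac2 rdomain 44
ifconfig cnmac2 inet 44.44.107.1 netmask 255.255.255.0
ifconfig gif0 rdomain 44

# Per-domain default routes; -T selects the routing table/domain.
route -T 23 add default 23.30.150.142
route -T 44 add default 44.0.0.1
```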
There is an additional piece of functionality that allows us to dispense with much of the complexity of routing between domains: the tunnel assigned to a gif interface can be in a different routing domain than the interface itself.
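A minimal sketch, assuming the domain numbers chosen above: the gif interface itself lives in routing domain 44, while its encapsulated (outer) packets are looked up in routing domain 23 via the tunneldomain option to ifconfig(8):

```shell
# gif0's de-encapsulated (inner) traffic uses routing domain 44...
ifconfig gif0 rdomain 44
# ...but its encapsulated IPENCAP packets are routed via domain 23,
# where the external interface and the route to the ISP live.
ifconfig gif0 tunneldomain 23
ifconfig gif0 tunnel 23.30.150.141 0.0.0.0
```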
The solution was to set up a separate routing table in a different routing domain specifically for AMPRNet traffic, and tie the two together using firewall rules. In the AMPRNet routing table, I could set my default route to point to the UCSD gateway, so any traffic sent from one of my 44.44.107.0/24 addresses that doesn't match a route to a known tunnel gets forwarded through amprgw.sysnet.ucsd.edu. With that in place, I could ping my gateway from random machines. This must seem obvious to a lot of folks here, but it took me a little while to figure out what was going on. Things are working now, however.
So far I have encountered two other caveats. I decided to configure two tunnel interfaces statically at boot time: 'gif0' goes to the UCSD tunnel, and 'gif1' sets up a tunnel to N1URO for his 44.88 net. Under OpenBSD, I assumed the natural way to do this would be to add /etc/hostname.gif0 and /etc/hostname.gif1 files, and this does in fact create the tunnels at boot time. However, traffic going out from my gateway doesn't seem to get sent through the tunnels; I did not bother to track down exactly why, but I believe it has to do with some kind of implicit ordering dependency when initializing PF. When I set up the separate routing domain, it struck me that the language accepted by /etc/netstart in an /etc/hostname.if file was not rich enough to set up tunnels in a routing domain, so I capitulated and just set up the static interfaces from /etc/rc.local; imperfect, but it works.
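A sketch of what the /etc/rc.local approach might look like, using the device names and addresses from the example above (the exact ordering and the combination of options here are assumptions):

```shell
# /etc/rc.local fragment (sketch): bring up the catch-all tunnel in
# routing domain 44 after the normal network start has run.
ifconfig gif0 create
ifconfig gif0 rdomain 44 tunneldomain 23
ifconfig gif0 tunnel 23.30.150.141 0.0.0.0
ifconfig gif0 inet 44.44.107.1 netmask 255.255.255.255
route -T 44 add default 44.0.0.1
```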
The second caveat is that I seem to have tickled a kernel error while trying to set up an alias for a second IP address on my 44.44.107.1 NIC; I get a kernel panic due to an assertion failure. It looks like a bug to me, but I haven't had the bandwidth to track it down. In the meanwhile, simply don't add aliases to interfaces in non-default routing domains.
The biggest piece missing was a daemon to receive 44net RIP packets and use that data to maintain tunnels and routes. I thought about porting one, but decided to write my own instead. It has been running for a few weeks now on my node, and while it's still not quite "done", it seems to work well enough that I decided it was time to cast a somewhat wider net and push it up to GitHub for comment from others.
A couple of quick notes on implementation:
- The program maintains a copy of the AMPRNet routing table in a modified PATRICIA trie (really a compressed radix tree). Routes are expired if they are not refreshed by subsequent RIP packets.
- A similar table of tunnels is maintained.
- Tunnel interfaces are reference counted and garbage collected. A bitmap indicating which tunnels are in use is maintained.
- The program is completely self-contained in the sense that I do not fork/exec external commands to e.g. configure tunnels or manipulate routes. That is all done via ioctls or writing messages to a routing socket.
There is more to do; I'm sure there are a few bugs. I'd also like to query the system state at startup to initialize the routing and tunnel tables. Exporting and/or parsing an encap file would be nice. Logging and error checking can, I'm sure, be improved.
It's about 1200 lines of non-comment code, compiles down to a 28K MIPS64 executable (stripped). The code is at https://github.com/dancrossnyc/44ripd