Setting up a gateway on OpenBSD

== Introduction ==


[https://www.openbsd.org OpenBSD] is a mature Unix-like operating system that focuses on security and correctness. It features a flexible, robust, performant TCP/IP stack and a [https://www.openbsd.org/faq/pf/ highly configurable firewall]. This page describes how to configure a computer running OpenBSD as an AMPRNet router to transfer traffic between AMPRNet subnets and the Internet.


OpenBSD natively supports [https://en.wikipedia.org/wiki/IP_in_IP IPENCAP (IP-IP)] tunnels through [https://man.openbsd.org/gif.4 ''gif''(4)] pseudo-devices. Each ''gif'' device is a virtual network interface, synthesized by the operating system, that implements a point-to-point tunnel. Unlike Linux, OpenBSD requires a separate ''gif'' interface for each tunnel, but an essentially arbitrary number of such interfaces can be created: easily enough to route all of AMPRNet.


One can manually configure ''gif'' tunnels and routes at the command line, or configure the system to establish tunnels and routes at boot time.


A note on testing: the UCSD gateway ''will not'' pass traffic for an IP address that does not have a corresponding entry in the AMPR.ORG DNS domain, and 44.0.0.1 does not respond to ''ping'' from 44.0.0.0/8 addresses. Make sure you have DNS entries for your addresses, and try pinging something other than 44.0.0.1, or you may waste time wondering why nothing appears to work.


We will describe how to set things up by way of example. Assume a system configuration that looks substantially similar to the following:
* A dedicated static IP address to use as an endpoint for AMPRNet traffic.
* An ISP-provided router that is just a router; no NATing, no firewall.
* An OpenBSD computer with three ethernet interfaces. For example, using a Ubiquiti EdgeRouter 3 Lite:
*# cnmac0 is the external interface connected to the ISP's network (in this example, we use 23.30.150.141 routing to 23.30.150.142)
*# cnmac1 connects to an internal network (its configuration is irrelevant)
*# cnmac2 is the internal interface connected to the subnet (in this example, we use 44.44.107.1 routing for 44.44.107.0/24)


Let us start by configuring a single tunnel and route to the AMPRNet gateway at UCSD:


<nowiki>ifconfig gif1 create
ifconfig gif1 tunnel 23.30.150.141 169.228.34.84
ifconfig gif1 inet 44.44.107.1 netmask 255.255.255.255
route add -host 44.0.0.1 -link -iface gif1 -llinfo</nowiki>


The first command creates the interface, causing the kernel to synthesize it into existence. The second configures the tunnel itself: that is, the IP addresses that will be put into the IPENCAP datagram that the tunnel creates: the first address is the ''local'' address, which serves as the source address of the IPENCAP packet, while the second is the ''remote'' address, to which the packet is sent. The third sets an IP address for the local endpoint of the interface: this exists solely so that traffic generated by the router itself, such as ICMP error messages (host or port unreachable, for example), has a valid source address. Note that despite the fact that this is a point-to-point interface, we do not specify the IP address of the remote end.


The fourth and final command creates a host route and associates it with the tunnel interface. The <code>-link</code>, <code>-iface</code> and <code>-llinfo</code> flags indicate that this is an interface route, and that traffic for the route should go directly to the given interface (<code>gif1</code>) rather than to a gateway identified by an IP address. We can examine this route from the command line. E.g.,


<nowiki>$ route -n show -inet | grep '44\.0\.0\.1 '
44.0.0.1          link#7            UHLSh      1        2    -     8 gif1</nowiki>


Consult the manual page for [https://man.openbsd.org/netstat.1 ''netstat''(1)] for details on what the <code>UHLSh</code> flags mean.


We can repeat this process for each AMPRNet tunnel, creating interfaces and adding routes for each subnet.
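Since each tunnel follows the same pattern, a small helper can generate the commands. The sketch below is a hypothetical illustration (the second peer's endpoint and gateway addresses are made-up examples, not real assignments); it only ''prints'' the commands so they can be reviewed before being applied:

```shell
#!/bin/sh
# Print (rather than execute) the configuration commands for one tunnel.
# $1: gif unit number  $2: remote tunnel endpoint  $3: remote gateway address
# The local endpoint and interface address match the running example.
gen_tunnel() {
	unit=$1
	remote=$2
	gateway=$3
	echo "ifconfig gif$unit create"
	echo "ifconfig gif$unit tunnel 23.30.150.141 $remote"
	echo "ifconfig gif$unit inet 44.44.107.1 netmask 255.255.255.255"
	echo "route add -host $gateway -link -iface gif$unit -llinfo"
}

# The UCSD tunnel from above, plus a second, hypothetical peer.
gen_tunnel 1 169.228.34.84 44.0.0.1
gen_tunnel 2 198.51.100.17 44.131.14.1
```

After inspecting the output, piping it through ''sh''(1) applies the configuration.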


== Handling Encapsulated Inbound Traffic Without a Reciprocal Tunnel ==


When an inbound IPENCAP datagram arrives on our external interface, the network stack in the OpenBSD kernel recognizes it by examining the protocol number in the IP header: IPENCAP is protocol number 4 (not to be confused with IP version 4). Any such packet is passed to the packet input function in the ''gif'' implementation, which searches all configured ''gif'' interfaces, trying to match the configured tunnel source and destination addresses with the corresponding addresses in the inbound packet. If such an interface is found, the packet is enqueued to the interface, which strips the IPENCAP header and routes the resulting "de-encapsulated" IP packet. This works for tunnels that are configured bidirectionally between any two sites. That is, if site A has a tunnel to site B, and B has a corresponding tunnel to A, they can send each other traffic.


Now consider the case where site A has a tunnel configured to send traffic to site B, but B has no tunnel configured to A: the datagram arrives as before and is presented to the ''gif'' implementation, but the search described above fails, since B has no tunnel to A and thus nothing matches the source ''and'' destination addresses of the incoming packet. In general, the system might be responsible for routing such packets onward to another computer or network, so the packet is not decapsulated and processed locally. In an AMPRNet context, however, we very well may want to process that packet. Accordingly, the ''gif'' implementation has a mechanism for describing an interface that accepts encapsulated traffic from any source destined to a local address. If we configure a ''gif'' interface with the distant end of the ''tunnel'' set to <code>0.0.0.0</code>, then any incoming datagram whose destination address matches the local address on the interface will be accepted, decapsulated, and processed as before. Using this, we can set up an interface specifically for accepting traffic from systems to which we have not defined a tunnel:
 
<nowiki>ifconfig gif0 create
ifconfig gif0 tunnel 23.30.150.141 0.0.0.0
ifconfig gif0 inet 44.44.107.1 netmask 255.255.255.255</nowiki>
 
Note the <code>0.0.0.0</code> as the remote address in the <code>ifconfig tunnel</code> command. Again, we set an interface address using our local AMPRNet router address purely for locally generated traffic.
 
Once this interface is configured, IPENCAP traffic from remote systems that have defined tunnels to us will flow, regardless of whether we have created a corresponding tunnel back to them.
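To see that encapsulated traffic is in fact arriving, one can watch the external interface for protocol 4 datagrams with ''tcpdump''(8) (the interface name follows our running example):

<nowiki># tcpdump -n -i cnmac0 ip proto 4</nowiki>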
 
== Policy-based Routing Using Routing Domains ==
 
The configuration explored so far is sufficient to make connections to AMPRNet subnets for which we have manually configured tunnels, but it suffers from a number of deficiencies; two in particular are worth discussing.
 
First, there is a problem with exchanging traffic with non-AMPRNet systems on the Internet. Presumably, these systems are not aware of AMPRNet tunneling, so traffic from them goes to the gateway at UCSD, where it is encapsulated and sent through a tunnel to the external interface on our router. There, it is decapsulated and delivered into our subnet. Return traffic, however, is handed to the local router; since the destination is generally not a tunnel, it is sent via the default route with an AMPRNet source address. Since most ISPs will not pass AMPRNet traffic, it will likely be dropped before it reaches the destination. One might think this could be worked around with a firewall rule that NATs the source address to something provided by our ISP, but even if the resulting datagram made it to the destination, for a protocol like TCP it would no longer match the 5-tuple of the connection, and would thus be lost.
 
The second problem is reaching AMPRNet systems for which we have not configured a tunnel.  Without a tunnel, and thus a route, we cannot send traffic to those systems.
 
We can solve both of these problems by sending all of our traffic through a tunnel interface to the UCSD gateway by default, e.g., by setting the default route:
 
<nowiki>route add default 44.0.0.1</nowiki>
 
However, this creates a new problem: how does the encapsulated traffic from the tunnel interface get sent to our ISP's gateway? We can add a host route for the UCSD gateway to our local routing table, but we would have to do this for every tunnel, which is unwieldy. Further, connecting to our external interface becomes complicated: suppose someone <code>ping</code>s our local router's external interface. Assuming we permit this, the response would be routed through the UCSD gateway tunnel; even if the gateway passed arbitrary traffic back onto the Internet, the packet might be dropped by upstream ISPs that refuse to route it, and even if delivered, it would not match the destination address of the original ICMP echo request anyway.
 
The solution to all of these problems is to use [https://en.wikipedia.org/wiki/Policy-based_routing ''policy-based routing'']. Specifically, we would like to make routing decisions based on the source IP address of our traffic. We might be able to do this with firewall rules, but the edge cases get complicated very quickly. Fortunately, there is another way: [https://man.openbsd.org/rdomain.4 routing domains].
 
Routing domains in OpenBSD are a mechanism to isolate routing decisions from one another. Network interfaces are configured into exactly one routing domain, which has its own private set of routing tables. Those tables are isolated, but traffic can be passed between routing domains via firewall rules.
 
In our example, we put our external and local AMPRNet gateway interfaces into separate routing domains: the ''gif'' interfaces and the local AMPRNet gateway interface can be assigned to routing domain 44, while the external interface can go into routing domain 23. Routing domain 0 is the default. Note that the numbers here are arbitrary: we can choose any value below 256 that we like; these were chosen to match the first octets of our example addresses.
 
The routing domain on an interface is set using the <code>rdomain</code> parameter to <code>ifconfig</code>:
 
<nowiki>ifconfig cnmac0 rdomain 23
ifconfig gif0 rdomain 44
ifconfig gif1 rdomain 44</nowiki>
 
We set the default route in the routing domain that owns our external interface to our ISP's router, while in the routing domain hosting our AMPRNet presence we can set it to the UCSD gateway:
 
<nowiki>route -T 23 add default 23.30.150.142
route -T 44 add default 44.0.0.1</nowiki>
 
Now traffic on our AMPRNet subnet will be routed through the UCSD gateway, while traffic on the external interface will be routed through our ISP's router.
 
Another piece of functionality lets us dispense with much of the complexity of routing between domains: the tunnel assigned to a ''gif'' interface can be in a different routing domain than the interface itself. Going back to our example, if we place each of our ''gif'' interfaces into routing domain 44, then traffic routed out of or arriving on those interfaces is handled by our AMPRNet-specific routing table. But if we place the tunnel on each interface into routing domain 23, then the encapsulated traffic we exchange with the Internet (e.g., with other tunnel endpoints) is routed in that domain, and thus through our ISP's network. Critically, the matching of incoming datagrams to ''gif'' interfaces described above happens in the routing domain associated with the tunnel, so inbound traffic coming through our external interface will be directed to the correct interface.
 
We specify the routing domain of a tunnel via the <code>tunneldomain</code> parameter to <code>ifconfig</code> when configuring the tunnel on an interface:
 
<nowiki>ifconfig gif0 tunnel 23.30.150.141 0.0.0.0 tunneldomain 23
ifconfig gif1 tunnel 23.30.150.141 169.228.34.84 tunneldomain 23</nowiki>
 
With this in place, routing works as expected for all of the cases mentioned above.
 
== Persistent Configuration Across Router Restarts ==
 
We now have enough information that we can set up tunnels between our router and arbitrary AMPRNet subnets. However, doing so manually is tedious and not particularly robust. We would like the system to automatically configure our tunnels and default routes at boot time. Fortunately, the OpenBSD startup code can do this easily. For each network interface <code>$if</code> on the system, we can configure it automatically at boot time by putting configuration commands into the file <code>/etc/hostname.$if</code>.
 
There are four interfaces we have configured: the two ethernet interfaces for our external and AMPRNet networks, and the two ''gif'' interfaces with the default incoming tunnel and the tunnel to the UCSD gateway.  Thus, there are four files:
 
<code>/etc/hostname.cnmac0</code>:
 
<nowiki>rdomain 23
inet 23.30.150.141 0xfffffff8
!ifconfig lo23 inet 127.0.0.1
!route -qn -T 23 add default 23.30.150.142</nowiki>
 
<code>/etc/hostname.cnmac2</code>:
 
<nowiki>rdomain 44
inet 44.44.107.1 255.255.255.0
!ifconfig lo44 inet 127.0.0.1</nowiki>
 
<code>/etc/hostname.gif0</code>:
 
<nowiki>rdomain 44
tunnel 23.30.150.141 0.0.0.0 tunneldomain 23
inet 44.44.107.1 255.255.255.255</nowiki>
 
<code>/etc/hostname.gif1</code>:
 
<nowiki>tunnel 23.30.150.141 169.228.34.84 tunneldomain 23
inet 44.44.107.1 255.255.255.255
!route -qn -T 44 add 44.0.0.1/32 -link -iface gif1 -llinfo
!route -qn -T 44 add default 44.0.0.1</nowiki>
 
Note that each routing domain also has its own associated loopback interface, hence configuring <code>lo23</code> and <code>lo44</code>. These interfaces are automatically created when the routing domain is created, but we configure them when we bring up the associated ethernet interfaces.
 
We can examine the interfaces and separate routing tables to ensure that things are set up as expected:
 
<nowiki>$ ifconfig cnmac0
cnmac0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> rdomain 23 mtu 1500
        lladdr 44:d9:e7:9f:a7:64
        index 1 priority 0 llprio 3
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet 23.30.150.141 netmask 0xfffffff8 broadcast 23.30.150.143
$ ifconfig cnmac2
cnmac2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> rdomain 44 mtu 1500
        lladdr 44:d9:e7:9f:a7:66
        index 3 priority 0 llprio 3
        media: Ethernet autoselect (1000baseT full-duplex,master)
        status: active
        inet 44.44.107.1 netmask 0xffffff00 broadcast 44.44.107.255
$ ifconfig gif0
gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 44 mtu 1280
        index 6 priority 0 llprio 3
        encap: txprio payload rxprio payload
        groups: gif
        tunnel: inet 23.30.150.141 -> 0.0.0.0 ttl 64 nodf ecn rdomain 23
        inet 44.44.107.1 --> 0.0.0.0 netmask 0xffffffff
$ ifconfig gif1
gif1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 44 mtu 1280
        index 7 priority 0 llprio 3
        encap: txprio payload rxprio payload
        groups: gif
        tunnel: inet 23.30.150.141 -> 169.228.34.84 ttl 64 nodf ecn rdomain 23
        inet 44.44.107.1 --> 0.0.0.0 netmask 0xffffffff
$ route -T 23 -n show -inet
Routing tables
 
Internet:
Destination        Gateway            Flags  Refs      Use  Mtu  Prio Iface
default            23.30.150.142      UGS        0    51126    -    8 cnmac0
23.30.150.136/29  23.30.150.141      UCn        1        7    -    4 cnmac0
23.30.150.141      44:d9:e7:9f:a7:64  UHLl      0    88377    -    1 cnmac0
23.30.150.142      4a:1d:70:de:c3:5a  UHLch      1      386    -    3 cnmac0
23.30.150.143      23.30.150.141      UHb        0        0    -    1 cnmac0
127.0.0.1          127.0.0.1          UHl        0        0 32768    1 lo23
$ route -T 44 -n show -inet
Routing tables
 
Internet:
Destination        Gateway            Flags  Refs      Use  Mtu  Prio Iface
default            44.0.0.1          UGS        0    54718    -    8 gif1
44.0.0.1          link#7            UHLSh      1        2    -    8 gif1
44.44.107/24      44.44.107.1        UCn      16        4    -    4 cnmac2
44.44.107.1        44:d9:e7:9f:a7:66  UHLl      0    4825    -    1 cnmac2
44.44.107.255      44.44.107.1        UHb        0        0    -    1 cnmac2
127.0.0.1          127.0.0.1          UHl        0        0 32768    1 lo44</nowiki>
 
== Restricting Traffic with the [https://www.openbsd.org/faq/pf/ PF] Firewall ==
 
All of these interfaces will fully integrate with the PF firewall software that comes with OpenBSD, and we can implement nearly arbitrary policies in our configuration.  For example, we might restrict traffic on the external interface to only ICMP messages and IPENCAP datagrams by setting the following rules in <code>/etc/pf.conf</code>:
 
<nowiki># Constants
extif = "cnmac0"
extamprgate = "23.30.150.141"
 
# Options
set skip on lo
set block-policy return
 
block return    # block stateless traffic
pass            # establish keep-state
 
# Normalize incoming packets.
match in all scrub (no-df random-id max-mss 1440)
 
# By default, block everything.  We selectively override in subsequent rules.
block in on $extif
 
# Pass 44net traffic
pass in on $extif inet proto ipencap from any to $extamprgate
 
# Pass ping and ICMP unreachable messages on external interface
pass in on $extif inet proto icmp icmp-type echoreq code 0 keep state
pass in on $extif inet proto icmp icmp-type unreach keep state</nowiki>
 
We can verify that these rules are in place as expected by querying the rule tables:
 
<nowiki># pfctl -sr -vv
@0 block return all
  [ Evaluations: 115977    Packets: 3776      Bytes: 168422      States: 0    ]
  [ Inserted: uid 0 pid 31812 State Creations: 0    ]
@1 pass all flags S/SA
  [ Evaluations: 115977    Packets: 242418    Bytes: 25899015    States: 253  ]
  [ Inserted: uid 0 pid 31812 State Creations: 102105]
@2 match in all scrub (no-df random-id max-mss 1440)
  [ Evaluations: 115977    Packets: 261105    Bytes: 31394491    States: 136  ]
  [ Inserted: uid 0 pid 31812 State Creations: 0    ]
@3 block return in on cnmac0 all
  [ Evaluations: 64726    Packets: 5925      Bytes: 325492      States: 0    ]
  [ Inserted: uid 0 pid 31812 State Creations: 0    ]
@4 pass in on cnmac0 inet proto icmp all icmp-type echoreq code 0
  [ Evaluations: 8149      Packets: 342      Bytes: 27424      States: 0    ]
  [ Inserted: uid 0 pid 31812 State Creations: 171  ]
@5 pass in on cnmac0 inet proto icmp all icmp-type unreach
  [ Evaluations: 308      Packets: 5        Bytes: 368        States: 0    ]
  [ Inserted: uid 0 pid 31812 State Creations: 0    ]
@6 pass in on cnmac0 inet proto ipencap from any to 23.30.150.141
  [ Evaluations: 8149      Packets: 126197    Bytes: 16538717    States: 5    ]
  [ Inserted: uid 0 pid 31812 State Creations: 2048  ]</nowiki>
 
A site can add more elaborate rules as desired.
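For example (a purely hypothetical addition; the macro name, address, and port are illustrative), a site might also allow inbound SSH to the router's external address from a single trusted host:

<nowiki># Hypothetical: allow SSH from one trusted host to the external address
trusted = "198.51.100.7"
pass in on $extif inet proto tcp from $trusted to $extamprgate port 22</nowiki>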
 
== Maintaining Mesh Routes with 44ripd ==
 
The above shows how to set up AMPRNet tunnel interfaces, set routes to them, use routing domains for policy-based routing, and set firewall rules. However, so far all of these steps have been manual. This works for a handful of tunnels and routes, but there are hundreds of subnets in the AMPRNet mesh; maintaining all of these by hand is not reasonable.
 
Fortunately, Dan Cross (KZ2X) has written a daemon specific to OpenBSD called <code>44ripd</code> that maintains tunnel and route information as distributed via the [https://wiki.ampr.org/wiki/RIP AMPRNet RIP] variant sent from <code>44.0.0.1</code>. To use it, make sure that multicasting is enabled on the host by setting <code>multicast=YES</code> in <code>/etc/rc.conf.local</code>, and then retrieve the 44ripd software from GitHub at https://github.com/dancrossnyc/44ripd. The software is built with the <code>make</code> command. For example:
 
<nowiki>git clone https://github.com/dancrossnyc/44ripd
cd 44ripd
make
doas install -c -o root -g wheel -m 555 44ripd /usr/local/sbin</nowiki>
 
Once installed, this can be run at boot by adding the following lines to <code>/etc/rc.local</code>:
 
<nowiki>if [ -x /usr/local/sbin/44ripd ]; then
        echo -n ' 44ripd'
        route -T 44 exec /usr/local/sbin/44ripd -s0 -s1 -D 44 -T 23
fi</nowiki>
 
Note that the <code>-s</code> options instruct the daemon not to try and allocate the <code>gif0</code> and <code>gif1</code> interfaces, as these are manually configured. The <code>-D</code> and <code>-T</code> options set the routing domains for ''gif'' interfaces and their associated tunnels, respectively.
 
=== Set the Default AMPRNet Tunnel and Route Manually ===
 
While route information for the UCSD AMPRNet gateway is distributed in RIP44 packets, and we could in theory configure only the default incoming tunnel and rely on 44ripd to set up a route to UCSD, this leads to a problem. Specifically, without an interface configured to the UCSD gateway, we cannot set a default route in our AMPRNet routing domain for the gateway. Since we rely on periodic route broadcasts from UCSD to set those routes, we would delay setting up our default route for an indeterminate amount of time (on the order of minutes).
 
Thus, it is advised to explicitly configure the tunnel to UCSD at boot time along with the default route.
 
=== Notes on 44ripd Implementation ===
 
* 44ripd maintains a copy of the AMPRNet routing table in a modified PATRICIA trie (really a compressed radix tree).
* A similar table of tunnels is maintained.
* Tunnel interfaces are reference counted and garbage collected. A bitmap indicating which tunnels are in use is maintained.
* Routes are expired after receiving a RIP packet.
* The program is completely self-contained in the sense that it does not fork/exec external commands to configure tunnels or manipulate routes.  That is all done via ioctls or writing messages to a routing socket.
* 44ripd does not examine the state of the system or routing tables at startup to bootstrap its internal state, but arguably should.
* Bugs in 44ripd should be reported via Github issues.
* Exporting and/or parsing an encap file would be nice.
* Logging and error checking can always be improved.

Latest revision as of 02:17, 30 June 2021

Introduction

OpenBSD is a mature Unix-like operating system that focuses on security and correctness. It features a flexible, robust, performant TCP/IP stack and a highly configurable firewall. This page describes how to configure a computer running OpenBSD as an AMPRNet router to transfer traffic between AMPRNet subnets and the Internet.

OpenBSD natively supports IPENCAP (IP-IP) tunnels through gif(4) pseudo-devices. Each gif device is a virtual network interface, synthesized by the operating system, that implements a point-to-point tunnel. Unlike Linux, OpenBSD requires a separate gif interface for each tunnel. An essentially arbitrary number of such interfaces can be created and it scales to the number required to route all of AMPRNet.

One can manually configure gif tunnels and routes at the command line, or configure the system to establish tunnels and routes at boot time.

We will describe how to set things up by way of example. Assume a system configuration that looks substantially similar to the following:

  • A dedicated static IP address to use as an endpoint for AMPRNet traffic.
  • An ISP-provided router that is just a router; no NATing, no firewall.
  • An OpenBSD computer with three ethernet interfaces. For example, using a Ubiquiti EdgeRouter 3 Lite:
    1. cnmac0 is the external interface connected to the ISP's network (in this example, we use 23.30.150.141 routing to 23.30.150.142)
    2. cnmac1 connects to an internal network (its configuration is irrelevant)
    3. cnmac2 is the internal interface connected to the subnet (in this example, we use 44.44.107.1 routing for 44.44.107.0/24)

Let us start by configuring a single tunnel and route to the AMPRNet gateway at UCSD:

ifconfig gif1 create
ifconfig gif1 tunnel 23.30.150.141 169.228.34.84
ifconfig gif1 inet 44.44.107.1 netmask 255.255.255.255
route add -host 44.0.0.1 -link -iface gif1 -llinfo

The first command creates the interface, causing the kernel to synthesize it into existence. The second configures the tunnel itself: that is, the the IP addresses that will be put into the IPENCAP datagram that the tunnel creates: the first address is the local address, which will serve as the source address for the IPENCAP packet, while the second is the remote address, to which the packet will be sent. The third sets an IP address for the local endpoint of the interface: this exists solely so that traffic that is generated by the router, such as ICMP error messages (host or port unreachable, for example), have a valid source address. Note that despite the fact that this is a point-to-point interface, we do not specify the IP address of the remote end.

The fourth and final command creates a host route and associates it with the tunnel interface. The -link, -iface and -llinfo flags indicate that this is an interface route, and that traffic for the route should go directly to the given interface (gif1) instead of identifying the gateway via an IP address. We can examine this route from the commandline. E.g.,

$ route -n show -inet | grep '44\.0\.0\.1 '
44.0.0.1           link#7             UHLSh      1        2     -     8 gif1

Consult the manual page for netstat(1) for details on what the UHLSh flags mean.

We can repeat this process for each AMPRNet tunnel, creating interfaces and adding routes for each subnet.
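For example, a second tunnel might be configured as follows. The remote endpoint 198.51.100.10 and the subnet 44.10.0.0/16 are placeholders for illustration, not real AMPRNet assignments:

```
ifconfig gif2 create
ifconfig gif2 tunnel 23.30.150.141 198.51.100.10
ifconfig gif2 inet 44.44.107.1 netmask 255.255.255.255
route add 44.10.0.0/16 -link -iface gif2 -llinfo
```

Note that the route here is a network route covering the remote subnet, rather than a host route as in the UCSD gateway example.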

== Handling Encapsulated Inbound Traffic Without a Reciprocal Tunnel ==

When an inbound IPENCAP datagram arrives on our external interface, the network stack in the OpenBSD kernel recognizes it by examining the protocol number in the IP header: IPENCAP is protocol number 4 (not to be confused with IP version 4). Any such packets are passed to the packet input function in the gif implementation, which searches all configured gif interfaces, trying to match the configured tunnel source and destination addresses with the corresponding addresses in the inbound packet. If such an interface is found, the packet is enqueued to the interface, which strips the IPENCAP header and routes the resulting "de-encapsulated" IP packet. This works for tunnels that are configured bidirectionally between any two sites. That is, if site A has a tunnel to site B, and B has a corresponding tunnel to A, they can send each other traffic.

Now consider the case where site A has a tunnel configured to send traffic to site B, but B has no tunnel configured to A: the datagram arrives as before and is presented to the gif implementation, but the search above fails, since B has no tunnel to A, so nothing matches the source and destination addresses on the incoming packet. In this case, the system might be responsible for routing such packets to another computer or network, so the packet is not decapsulated and processed. However, in an AMPRNet context, we very well may want to process that packet. Accordingly, the gif implementation has a mechanism for describing an interface that accepts encapsulated traffic from any source destined to a local address. If we configure a gif interface with the distant end of the tunnel set to 0.0.0.0, then any incoming datagram whose destination address matches the local address on the interface will be accepted, decapsulated and processed as before. Using this, we can set up an interface specifically for accepting traffic from systems to which we have not defined a tunnel:

ifconfig gif0 create
ifconfig gif0 tunnel 23.30.150.141 0.0.0.0
ifconfig gif0 inet 44.44.107.1 netmask 255.255.255.255

Note the 0.0.0.0 as the remote address in the ifconfig tunnel command. Again, we set an interface address using our local AMPRNet router address purely for locally generated traffic.

Once this interface is configured, IPENCAP traffic from remote systems that have defined tunnels to us will flow, regardless of whether we have created a tunnel back to them.
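One way to confirm that the catch-all interface is working is to watch traffic with tcpdump(8), both on the gif interface (de-encapsulated packets) and on the external interface (raw IPENCAP datagrams, protocol number 4):

```
# Watch de-encapsulated packets arriving on the catch-all interface
tcpdump -n -i gif0
# Watch the encapsulated IPENCAP datagrams on the external interface
tcpdump -n -i cnmac0 ip proto 4
```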

== Policy-based Routing Using Routing Domains ==

The configuration explored so far is sufficient to make connections to the AMPRNet subnets for which we have manually configured tunnels, but it suffers from two deficiencies, which we discuss now.

First, there is a problem exchanging traffic with non-AMPRNet systems on the Internet. Presumably these systems are not aware of AMPRNet tunneling, so traffic from them goes to the gateway at UCSD, where it is encapsulated and sent through a tunnel to the external interface on our router. There, it is decapsulated and delivered into our subnet. Return traffic, however, is sent to our local router; since the destination is generally not a tunnel, it is sent via the default route, but with an AMPRNet source address. Most ISPs will not pass AMPRNet traffic, so it will likely be dropped before reaching its destination. One might try to work around this with a firewall rule that NATs the source address to something provided by our ISP, but even if the resulting datagram made it to the destination, for a protocol like TCP it would no longer match the 5-tuple for the connection, and would thus be lost.

The second problem is reaching AMPRNet systems for which we have not configured a tunnel. Without a tunnel, and thus a route, we cannot send traffic to those systems.

We can solve both of these problems by sending all of our traffic through a tunnel interface to the UCSD gateway by default, e.g., by setting the default route:

route add default 44.0.0.1

However, this creates a new problem: how does the encapsulated traffic from the tunnel interface get sent to our ISP's gateway? We can add a host route for the UCSD gateway in our local routing table, but we would have to do this for every tunnel, which is unwieldy. Further, connecting to our external interface becomes complicated: suppose someone pings our local router's external interface. Assuming we permit this, the response would be routed through the UCSD gateway tunnel; even if the gateway passed arbitrary traffic back onto the Internet, the packet might be lost, as upstream ISPs would refuse to route it, and even if delivered, it would not match the destination address of the original ICMP echo request anyway.

The solution to all of these problems is to use policy-based routing. Specifically, we would like to make routing decisions based on the source IP address of our traffic. We might be able to do this with firewall rules, but the edge cases get complicated very quickly. Fortunately, there is another way: routing domains.

Routing domains in OpenBSD are a mechanism to isolate routing decisions from one another. Network interfaces are configured into exactly one routing domain, which has its own private set of routing tables. Those tables are isolated, but traffic can be passed between routing domains via firewall rules.
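Individual commands can be run in the context of a particular routing domain with route(8)'s exec subcommand, which is useful for testing once the domains described below are in place (the domain numbers here match the example configuration):

```
# Ping the UCSD gateway using routing domain 44's tables
route -T 44 exec ping -c 1 44.0.0.1
# Inspect routing domain 23's table
route -T 23 -n show -inet
```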

In our example, we put our external and AMPRNet-facing interfaces into separate routing domains: all of the gif interfaces and the local AMPRNet gateway interface are assigned to routing domain 44, while the external interface is placed in routing domain 23. Routing domain 0 is the default. Note that the numbers here are arbitrary, and we can choose any value below 256 that we like; these were chosen to match the first octet of our example addresses.

The routing domain on an interface is set using the rdomain parameter to ifconfig:

ifconfig cnmac0 rdomain 23
ifconfig gif0 rdomain 44
ifconfig gif1 rdomain 44

We set the default route in the routing domain that owns our external interface to our ISP's router, while in the routing domain hosting our AMPRNet presence we can set it to the UCSD gateway:

route -T 23 add default 23.30.150.142
route -T 44 add default 44.0.0.1

Now traffic on our AMPRNet subnet will be routed through the UCSD gateway, while traffic on the external interface will be routed through our ISP's router.

Another piece of functionality lets us dispense with much of the complexity of routing between domains: the tunnel assigned to a gif interface can be in a different routing domain than the interface itself. Going back to our example, if we place each of our gif interfaces into routing domain 44, then traffic routed out to or coming in from the tunnel is routed with our AMPRNet-specific routing table. But if we place the tunnel on that interface into routing domain 23, then the encapsulated traffic that we send to and receive from the Internet (e.g., to other tunnel endpoints) is routed in that domain, and thus through our ISP's network. Critically, matching incoming datagrams to gif interfaces as described above happens in the routing domain associated with the tunnel, so inbound traffic coming through our external interface is directed to the correct interface.

We specify the routing domain of a tunnel via the tunneldomain parameter to ifconfig when configuring the tunnel on an interface:

ifconfig gif0 tunnel 23.30.150.141 0.0.0.0 tunneldomain 23
ifconfig gif1 tunnel 23.30.150.141 169.228.34.84 tunneldomain 23

With this in place, routing works as expected for all of the cases mentioned above.

== Persistent Configuration Across Router Restarts ==

We now have enough information to set up tunnels between our router and arbitrary AMPRNet subnets. However, doing so manually is tedious and not particularly robust. We would like the system to automatically configure our tunnels and default routes at boot time. Fortunately, the OpenBSD startup code makes this easy: each network interface $if on the system is configured automatically at boot time from the configuration commands in the file /etc/hostname.$if.

There are four interfaces we have configured: the two ethernet interfaces for our external and AMPRNet networks, and the two gif interfaces with the default incoming tunnel and the tunnel to the UCSD gateway. Thus, there are four files:

/etc/hostname.cnmac0:

rdomain 23
inet 23.30.150.141 0xfffffff8
!ifconfig lo23 inet 127.0.0.1
!route -qn -T 23 add default 23.30.150.142

/etc/hostname.cnmac2:

rdomain 44
inet 44.44.107.1 255.255.255.0
!ifconfig lo44 inet 127.0.0.1

/etc/hostname.gif0:

rdomain 44
tunnel 23.30.150.141 0.0.0.0 tunneldomain 23
inet 44.44.107.1 255.255.255.255

/etc/hostname.gif1:

tunnel 23.30.150.141 169.228.34.84 tunneldomain 23
inet 44.44.107.1 255.255.255.255
!route -qn -T 44 add 44.0.0.1/32 -link -iface gif1 -llinfo
!route -qn -T 44 add default 44.0.0.1

Note that each routing domain also has its own associated loopback interface, hence configuring lo23 and lo44. These interfaces are automatically created when the routing domain is created, but we configure them when we bring up the associated ethernet interfaces.
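After creating or editing hostname.if files, the configuration can be applied without a reboot by re-running the network startup script for the affected interfaces:

```
sh /etc/netstart cnmac0 cnmac2 gif0 gif1
```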

We can examine the interfaces and separate routing tables to ensure that things are set up as expected:

$ ifconfig cnmac0
cnmac0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> rdomain 23 mtu 1500
        lladdr 44:d9:e7:9f:a7:64
        index 1 priority 0 llprio 3
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet 23.30.150.141 netmask 0xfffffff8 broadcast 23.30.150.143
$ ifconfig cnmac2
cnmac2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> rdomain 44 mtu 1500
        lladdr 44:d9:e7:9f:a7:66
        index 3 priority 0 llprio 3
        media: Ethernet autoselect (1000baseT full-duplex,master)
        status: active
        inet 44.44.107.1 netmask 0xffffff00 broadcast 44.44.107.255
$ ifconfig gif0
gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 44 mtu 1280
        index 6 priority 0 llprio 3
        encap: txprio payload rxprio payload
        groups: gif
        tunnel: inet 23.30.150.141 -> 0.0.0.0 ttl 64 nodf ecn rdomain 23
        inet 44.44.107.1 --> 0.0.0.0 netmask 0xffffffff
$ ifconfig gif1
gif1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 44 mtu 1280
        index 7 priority 0 llprio 3
        encap: txprio payload rxprio payload
        groups: gif
        tunnel: inet 23.30.150.141 -> 169.228.34.84 ttl 64 nodf ecn rdomain 23
        inet 44.44.107.1 --> 0.0.0.0 netmask 0xffffffff
$ route -T 23 -n show -inet
Routing tables

Internet:
Destination        Gateway            Flags   Refs      Use   Mtu  Prio Iface
default            23.30.150.142      UGS        0    51126     -     8 cnmac0
23.30.150.136/29   23.30.150.141      UCn        1        7     -     4 cnmac0
23.30.150.141      44:d9:e7:9f:a7:64  UHLl       0    88377     -     1 cnmac0
23.30.150.142      4a:1d:70:de:c3:5a  UHLch      1      386     -     3 cnmac0
23.30.150.143      23.30.150.141      UHb        0        0     -     1 cnmac0
127.0.0.1          127.0.0.1          UHl        0        0 32768     1 lo23
$ route -T 44 -n show -inet
Routing tables

Internet:
Destination        Gateway            Flags   Refs      Use   Mtu  Prio Iface
default            44.0.0.1           UGS        0    54718     -     8 gif1
44.0.0.1           link#7             UHLSh      1        2     -     8 gif1
44.44.107/24       44.44.107.1        UCn       16        4     -     4 cnmac2
44.44.107.1        44:d9:e7:9f:a7:66  UHLl       0     4825     -     1 cnmac2
44.44.107.255      44.44.107.1        UHb        0        0     -     1 cnmac2
127.0.0.1          127.0.0.1          UHl        0        0 32768     1 lo44

== Restricting Traffic with the PF Firewall ==

All of these interfaces will fully integrate with the PF firewall software that comes with OpenBSD, and we can implement nearly arbitrary policies in our configuration. For example, we might restrict traffic on the external interface to only ICMP messages and IPENCAP datagrams by setting the following rules in /etc/pf.conf:

# Constants
extif = "cnmac0"
extamprgate = "23.30.150.141"

# Options
set skip on lo
set block-policy return

block return    # block stateless traffic
pass            # establish keep-state

# Normalize incoming packets.
match in all scrub (no-df random-id max-mss 1440)

# By default, block everything.  We selectively override in subsequent rules.
block in on $extif

# Pass 44net traffic
pass in on $extif inet proto ipencap from any to $extamprgate

# Pass ping and ICMP unreachable messages on external interface
pass in on $extif inet proto icmp icmp-type echoreq code 0 keep state
pass in on $extif inet proto icmp icmp-type unreach keep state

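After editing /etc/pf.conf, pfctl(8) can check the syntax without applying anything; once the rules parse cleanly, they can be loaded:

```
pfctl -nf /etc/pf.conf   # parse only; report errors without loading
pfctl -f /etc/pf.conf    # load the ruleset
```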
We can verify that these rules are in place as expected by querying the rule tables:

# pfctl -sr -vv
@0 block return all
  [ Evaluations: 115977    Packets: 3776      Bytes: 168422      States: 0     ]
  [ Inserted: uid 0 pid 31812 State Creations: 0     ]
@1 pass all flags S/SA
  [ Evaluations: 115977    Packets: 242418    Bytes: 25899015    States: 253   ]
  [ Inserted: uid 0 pid 31812 State Creations: 102105]
@2 match in all scrub (no-df random-id max-mss 1440)
  [ Evaluations: 115977    Packets: 261105    Bytes: 31394491    States: 136   ]
  [ Inserted: uid 0 pid 31812 State Creations: 0     ]
@3 block return in on cnmac0 all
  [ Evaluations: 64726     Packets: 5925      Bytes: 325492      States: 0     ]
  [ Inserted: uid 0 pid 31812 State Creations: 0     ]
@4 pass in on cnmac0 inet proto icmp all icmp-type echoreq code 0
  [ Evaluations: 8149      Packets: 342       Bytes: 27424       States: 0     ]
  [ Inserted: uid 0 pid 31812 State Creations: 171   ]
@5 pass in on cnmac0 inet proto icmp all icmp-type unreach
  [ Evaluations: 308       Packets: 5         Bytes: 368         States: 0     ]
  [ Inserted: uid 0 pid 31812 State Creations: 0     ]
@6 pass in on cnmac0 inet proto ipencap from any to 23.30.150.141
  [ Evaluations: 8149      Packets: 126197    Bytes: 16538717    States: 5     ]
  [ Inserted: uid 0 pid 31812 State Creations: 2048  ]

A site can add more elaborate rules as desired.
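As a sketch of such a rule: to permit inbound SSH to a single host on the subnet (44.44.107.10 here is a hypothetical address chosen for illustration), one might pass decapsulated traffic arriving via the UCSD gateway tunnel:

```
# Hypothetical: permit inbound SSH to one host on the AMPRNet subnet
pass in on gif1 inet proto tcp to 44.44.107.10 port 22 keep state
```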

== Maintaining Mesh Routes with 44ripd ==

The above shows how to set up AMPRNet tunnel interfaces, set routes to them, use routing domains for policy-based routing, and set firewall rules. However, so far all of these steps have been manual. This works for a handful of tunnels and routes, but there are hundreds of subnets in the AMPRNet mesh; maintaining all of these manually is not reasonable.

Fortunately, Dan Cross (KZ2X) has written an OpenBSD-specific daemon called 44ripd that maintains tunnel and route information as distributed via the AMPRNet RIP variant sent from 44.0.0.1. To use it, make sure that multicast is enabled on the host by setting multicast=YES in /etc/rc.conf.local, then retrieve the 44ripd software from GitHub at https://github.com/dancrossnyc/44ripd. The software is built with the make command. For example:

git clone https://github.com/dancrossnyc/44ripd
cd 44ripd
make
doas install -c -o root -g wheel -m 555 44ripd /usr/local/sbin

Once installed, this can be run at boot by adding the following lines to /etc/rc.local:

if [ -x /usr/local/sbin/44ripd ]; then
        echo -n ' 44ripd'
        route -T 44 exec /usr/local/sbin/44ripd -s0 -s1 -D 44 -T 23
fi

Note that the -s options instruct the daemon not to try to allocate the gif0 and gif1 interfaces, as these are manually configured. The -D and -T options set the routing domains for gif interfaces and their associated tunnels, respectively.

== Set the Default AMPRNet Tunnel and Route Manually ==

While route information for the UCSD AMPRNet gateway is distributed in RIP44 packets, and we could in theory configure only the default incoming tunnel and rely on 44ripd to set up a route to UCSD, this leads to a problem: without an interface configured to the UCSD gateway, we cannot set a default route in our AMPRNet routing domain for the gateway. Since we rely on periodic route broadcasts from UCSD to set those routes, we would delay setting up our default route for an indeterminate amount of time (on the order of minutes).

Thus, it is advisable to explicitly configure the tunnel to UCSD at boot time, along with the default route.

== Notes on the 44ripd Implementation ==

  • 44ripd maintains a copy of the AMPRNet routing table in a modified PATRICIA trie (really a compressed radix tree).
  • A similar table of tunnels is maintained.
  • Tunnel interfaces are reference counted and garbage collected. A bitmap indicating which tunnels are in use is maintained.
  • Routes are expired if they are not refreshed by subsequent RIP packets.
  • The program is completely self-contained in the sense that it does not fork/exec external commands to configure tunnels or manipulate routes. That is all done via ioctls or writing messages to a routing socket.
  • 44ripd does not examine the state of the system or routing tables at startup to bootstrap its internal state, but arguably should.
  • Bugs in 44ripd should be reported via Github issues.
  • Exporting and/or parsing an encap file would be nice.
  • Logging and error checking can always be improved.