Setting up a gateway on OpenBSD
Latest revision as of 02:19, 28 April 2024

Introduction

OpenBSD is a mature Unix-like operating system that focuses on security and correctness. It features a flexible, robust, performant TCP/IP stack and a highly configurable firewall. This page describes how to configure a computer running OpenBSD as an AMPRNet router to transfer traffic between AMPRNet subnets and the Internet.

OpenBSD natively supports IPENCAP (IP-IP) tunnels through gif(4) pseudo-devices. Each gif device is a virtual network interface, synthesized by the operating system, that implements a point-to-point tunnel. Unlike Linux, OpenBSD requires a separate gif interface for each tunnel; an essentially arbitrary number of them can be created, scaling to the number required to route all of AMPRNet.

One can manually configure gif tunnels and routes at the command line, or configure the system to establish tunnels and routes at boot time.

We will describe how to set things up by way of example. Assume a system configuration that looks substantially similar to the following:

  • A dedicated static IP address to use as an endpoint for AMPRNet traffic.
  • An ISP-provided router that is just a router; no NATing, no firewall.
  • An OpenBSD computer with three ethernet interfaces. For example, using a Ubiquiti EdgeRouter 3 Lite:
    1. cnmac0 is the external interface connected to the ISP's network (in this example, 23.30.150.141, with the ISP's router at 23.30.150.142)
    2. cnmac1 connects to an internal network (its configuration is irrelevant)
    3. cnmac2 is the internal interface connected to the AMPRNet subnet (in this example, 44.44.107.1, serving 44.44.107.0/24)

Let us start by configuring a single tunnel and route to the AMPRNet gateway at UCSD:

ifconfig gif1 create
ifconfig gif1 tunnel 23.30.150.141 169.228.34.84
ifconfig gif1 inet 44.44.107.1 netmask 255.255.255.255
route add -host 44.0.0.1 -link -iface gif1 -cloning

The first command creates the interface, causing the kernel to synthesize it into existence. The second configures the tunnel itself: that is, the IP addresses that will be put into the IPENCAP datagram that the tunnel creates. The first address is the local address, which will serve as the source address for the IPENCAP packet, while the second is the remote address, to which the packet will be sent. The third sets an IP address for the local endpoint of the interface: this exists solely so that traffic generated by the router, such as ICMP error messages (host or port unreachable, for example), has a valid source address. Note that despite the fact that this is a point-to-point interface, we do not specify the IP address of the remote end.

The fourth and final command creates a host route and associates it with the tunnel interface. The -link, -iface and -cloning flags indicate that this is an interface route, that traffic for the route should go directly to the given interface (gif1) instead of identifying the gateway via an IP address, and that routes should be dynamically cloned when used. We can examine this route from the command line. E.g.,

$ route -n show -inet | grep '44\.0\.0\.1 '
44.0.0.1           link#7             UHCSh      1        2     -     8 gif1

Consult the manual page for netstat(1) for details on what the UHCSh flags mean.

We can repeat this process for each AMPRNet tunnel, creating interfaces and adding routes for each subnet.
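Typing those four commands for every tunnel scales poorly, so one option is to generate them from a list of endpoints. The following is a minimal POSIX sh sketch, not part of the standard configuration: LOCAL is the external address from the running example, the list entry is illustrative, and the script only prints the commands so they can be reviewed before being piped to sh.

```shell
#!/bin/sh
# Sketch: emit per-tunnel configuration commands from "remote destination"
# pairs. LOCAL and the sample entry are taken from the running example;
# review the output, then pipe it to sh to apply it.
LOCAL=23.30.150.141
n=1
while read -r remote dest; do
    echo "ifconfig gif$n create"
    echo "ifconfig gif$n tunnel $LOCAL $remote"
    echo "ifconfig gif$n inet 44.44.107.1 netmask 255.255.255.255"
    echo "route add -host $dest -link -iface gif$n -cloning"
    n=$((n + 1))
done <<EOF
169.228.34.84 44.0.0.1
EOF
```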

Handling Encapsulated Inbound Traffic Without a Reciprocal Tunnel

When an inbound IPENCAP datagram arrives on our external interface, the network stack in the OpenBSD kernel recognizes it by examining the protocol number in the IP header: IPENCAP is protocol number 4 (not to be confused with IP version 4). Any such packets are passed to the packet input function in the gif implementation, which searches all configured gif interfaces trying to match the configured tunnel source and destination addresses with the corresponding addresses in the inbound packet. If such an interface is found, the packet is enqueued to the interface, which will strip the IPENCAP header and route the resulting "de-encapsulated" IP packet. This works for tunnels that are configured bidirectionally between any two sites. That is, if site A has a tunnel to site B, and B has a corresponding tunnel to A, they can send each other traffic.
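To confirm that encapsulated traffic is actually arriving on the external interface, we can watch for protocol 4 directly with tcpdump (assuming it is installed; cnmac0 is the external interface from the running example):

```shell
# Show inbound/outbound IPENCAP (protocol 4) datagrams on the external interface
tcpdump -n -i cnmac0 'ip proto 4'
```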

Now consider the case where site A has a tunnel configured to send traffic to site B, but B has no tunnel configured to A: in this case, the datagram arrives as before and is presented to the gif implementation, but the search above fails since B has no tunnel to A, so nothing matches the source and destination addresses on the incoming packet. In this case, the system might be responsible for routing such packets to another computer or network, so the packet is not decapsulated and processed. However, in an AMPRNet context, we very well may want to process that packet. Accordingly, the gif implementation has a mechanism for describing an interface that accepts encapsulated traffic from any source destined to a local address. If we configure a gif interface where the distant end of the tunnel is set to 0.0.0.0, then any incoming datagram whose destination address is the same as the local address on the interface will be accepted, decapsulated and processed as before. Using this, we can set up an interface specifically for accepting traffic from systems to which we have not defined a tunnel:

ifconfig gif0 create
ifconfig gif0 tunnel 23.30.150.141 0.0.0.0
ifconfig gif0 inet 44.44.107.1 255.255.255.255

Note the 0.0.0.0 as the remote address in the ifconfig tunnel command. Again, we set an interface address using our local AMPRNet router address purely for locally generated traffic.

Once this interface is configured, IPENCAP traffic from remote systems that have defined tunnels to us will flow, regardless of whether we have created a tunnel to them.

Policy-based Routing Using Routing Domains

The configuration explored so far is sufficient to make connections to AMPRNet subnets we have manually configured tunnels for, but suffers from a number of deficiencies. In particular, there are two issues, which we discuss now.

First, there is a problem with exchanging traffic with non-AMPRNet systems on the Internet. Presumably, these systems are not aware of AMPRNet tunneling, so traffic from them goes to the gateway at UCSD, where it will be encapsulated and sent through a tunnel to the external interface on our router. There, it will be decapsulated and delivered into our subnet. However, return traffic will be sent to the local router, but since the destination is generally not a tunnel, it will be sent via the default route, but with an AMPRNet source address. Since most ISPs will not pass AMPRNet traffic, the result will likely be lost before it reaches the destination. We may think it would be possible to work around that using a firewall rule to NAT the source address to something provided by our ISP, but even if the resulting datagram made it to the destination, for a protocol like TCP it would no longer match the 5-tuple for the connection, and would thus be lost.

The second problem is reaching AMPRNet systems for which we have not configured a tunnel. Without a tunnel, and thus a route, we cannot send traffic to those systems.

We can solve both of these problems by sending all of our traffic through a tunnel interface to the UCSD gateway by default, e.g., by setting the default route:

route add default 44.0.0.1

However, this creates a new problem: how does the encapsulated traffic from the tunnel interface get sent to our ISP's gateway? We can add a host route for the UCSD gateway in our local routing table, but we would have to do this for every tunnel, which is unwieldy. Further, connecting to our external interface becomes complicated: suppose someone pings our local router's external interface. Assuming we permit this, the response would be routed through the UCSD gateway tunnel; even if the gateway passed arbitrary traffic back onto the Internet, the packet might be lost as upstream ISPs would refuse to route it, and even if it were delivered, it would not match the destination address of the original ICMP echo request anyway.

The solution to all of these problems is to use policy-based routing. Specifically, we would like to make routing decisions based on the source IP address of our traffic. We might be able to do this with firewall rules, but the edge cases get complicated very quickly. Fortunately, there is another way: routing domains.

Routing domains in OpenBSD are a mechanism to isolate routing decisions from one another. Network interfaces are configured into exactly one routing domain, which has its own private set of routing tables. Those tables are isolated, but traffic can be passed between routing domains via firewall rules.
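As a sketch of what passing traffic between domains looks like, a pf rule can direct matching packets into a different routing table. This hypothetical fragment for /etc/pf.conf (not part of the example ruleset later in this page) hands traffic arriving on the external interface for our subnet over to routing domain 44:

```shell
# /etc/pf.conf fragment (illustrative): route traffic for our AMPRNet
# subnet arriving on the external interface using routing table 44
pass in on cnmac0 inet from any to 44.44.107.0/24 rtable 44
```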

In our example, we put our external and local AMPRNet gateway interfaces into separate routing domains: all of the gif interfaces and the local AMPRNet gateway interface can be assigned to routing domain 44, while the external interface might be in 23. Routing domain 0 is the default. Note that the numbers here are arbitrary, and we can choose any value below 256 that we like; these are chosen to match the first octet of our example addresses.

The routing domain on an interface is set using the rdomain parameter to ifconfig:

ifconfig cnmac0 rdomain 23
ifconfig gif0 rdomain 44
ifconfig gif1 rdomain 44

We set the default route in the routing domain that owns our external interface to our ISP's router, while in the routing domain hosting our AMPRNet presence we can set it to the UCSD gateway:

route -T 23 add default 23.30.150.142
route -T 44 add default 44.0.0.1

Now traffic on our AMPRNet subnet will be routed through the UCSD gateway, while traffic on the external interface will be routed through our ISP's router.

Another piece of functionality lets us dispense with much of the complexity of routing between domains: the tunnel assigned to a gif can be in a different routing domain than the interface itself. Going back to our example, if we place each of our gif interfaces into routing domain 44 then traffic routed out to or coming in from the tunnel will be routed with our AMPRNet-specific routing table. But if we place the tunnel on that interface into routing domain 23, then the encapsulated traffic that we send to and from the Internet (e.g., to other tunnel end points) will be routed in that domain, thus routing through our ISP's network. Critically, matching incoming datagrams to gif interfaces as described above happens in the routing domain associated with the tunnel, so inbound traffic coming through our external interface will be directed to the correct interface.

We specify the routing domain of a tunnel via the tunneldomain parameter to ifconfig when configuring the tunnel on an interface:

ifconfig gif0 tunnel 23.30.150.141 0.0.0.0 tunneldomain 23
ifconfig gif1 tunnel 23.30.150.141 169.228.34.84 tunneldomain 23

With this in place, routing works as expected for all of the cases mentioned above.

Persistent Configuration Across Router Restarts

We now have enough information that we can set up tunnels between our router and arbitrary AMPRNet subnets. However, doing so manually is tedious and not particularly robust. We would like the system to automatically configure our tunnels and default routes at boot time. Fortunately, the OpenBSD startup code can do this easily. For each network interface $if on the system, we can configure it automatically at boot time by putting configuration commands into the file /etc/hostname.$if.

There are four interfaces we have configured: the two ethernet interfaces for our external and AMPRNet networks, and the two gif interfaces with the default incoming tunnel and the tunnel to the UCSD gateway. Thus, there are four files:

/etc/hostname.cnmac0:

rdomain 23
inet 23.30.150.141 0xfffffff8
!ifconfig lo23 inet 127.0.0.1
!route -qn -T 23 add default 23.30.150.142

/etc/hostname.cnmac2:

rdomain 44
inet 44.44.107.1 255.255.255.0
!ifconfig lo44 inet 127.0.0.1

/etc/hostname.gif0:

rdomain 44
tunnel 23.30.150.141 0.0.0.0 tunneldomain 23
inet 44.44.107.1 255.255.255.255

/etc/hostname.gif1:

tunnel 23.30.150.141 169.228.34.84 tunneldomain 23
inet 44.44.107.1 255.255.255.255
!route -qn -T 44 add 44.0.0.1/32 -link -iface gif1 -cloning
!route -qn -T 44 add default 44.0.0.1

Note that each routing domain also has its own associated loopback interface, hence configuring lo23 and lo44. These interfaces are automatically created when the routing domain is created, but we configure them when we bring up the associated ethernet interfaces.
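These files are read by /etc/netstart at boot. After editing one, the corresponding interface can be reconfigured without rebooting by re-running netstart for just that interface, e.g.:

```shell
# Re-apply /etc/hostname.gif1 to the running system
sh /etc/netstart gif1
```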

We can examine the interfaces and separate routing tables to ensure that things are set up as expected:

$ ifconfig cnmac0
cnmac0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> rdomain 23 mtu 1500
        lladdr 44:d9:e7:9f:a7:64
        index 1 priority 0 llprio 3
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet 23.30.150.141 netmask 0xfffffff8 broadcast 23.30.150.143
$ ifconfig cnmac2
cnmac2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> rdomain 44 mtu 1500
        lladdr 44:d9:e7:9f:a7:66
        index 3 priority 0 llprio 3
        media: Ethernet autoselect (1000baseT full-duplex,master)
        status: active
        inet 44.44.107.1 netmask 0xffffff00 broadcast 44.44.107.255
$ ifconfig gif0
gif0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 44 mtu 1280
        index 6 priority 0 llprio 3
        encap: txprio payload rxprio payload
        groups: gif
        tunnel: inet 23.30.150.141 -> 0.0.0.0 ttl 64 nodf ecn rdomain 23
        inet 44.44.107.1 --> 0.0.0.0 netmask 0xffffffff
$ ifconfig gif1
gif1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 44 mtu 1280
        index 7 priority 0 llprio 3
        encap: txprio payload rxprio payload
        groups: gif
        tunnel: inet 23.30.150.141 -> 169.228.34.84 ttl 64 nodf ecn rdomain 23
        inet 44.44.107.1 --> 0.0.0.0 netmask 0xffffffff
$ route -T 23 -n show -inet
Routing tables

Internet:
Destination        Gateway            Flags   Refs      Use   Mtu  Prio Iface
default            23.30.150.142      UGS        0    51126     -     8 cnmac0
23.30.150.136/29   23.30.150.141      UCn        1        7     -     4 cnmac0
23.30.150.141      44:d9:e7:9f:a7:64  UHLl       0    88377     -     1 cnmac0
23.30.150.142      4a:1d:70:de:c3:5a  UHLch      1      386     -     3 cnmac0
23.30.150.143      23.30.150.141      UHb        0        0     -     1 cnmac0
127.0.0.1          127.0.0.1          UHl        0        0 32768     1 lo23
$ route -T 44 -n show -inet
Routing tables

Internet:
Destination        Gateway            Flags   Refs      Use   Mtu  Prio Iface
default            44.0.0.1           UGS        0    54718     -     8 gif1
44.0.0.1           link#7             UHCSh      1        2     -     8 gif1
44.44.107/24       44.44.107.1        UCn       16        4     -     4 cnmac2
44.44.107.1        44:d9:e7:9f:a7:66  UHLl       0     4825     -     1 cnmac2
44.44.107.255      44.44.107.1        UHb        0        0     -     1 cnmac2
127.0.0.1          127.0.0.1          UHl        0        0 32768     1 lo44

Restricting Traffic with the PF Firewall

All of these interfaces will fully integrate with the PF firewall software that comes with OpenBSD, and we can implement nearly arbitrary policies in our configuration. For example, we might restrict traffic on the external interface to only ICMP messages and IPENCAP datagrams by setting the following rules in /etc/pf.conf:

# Constants
extif = "cnmac0"
extamprgate = "23.30.150.141"

# Options
set skip on lo
set block-policy return

block return    # block stateless traffic
pass            # establish keep-state

# Normalize incoming packets.
match in all scrub (no-df random-id max-mss 1440)

# By default, block everything.  We selectively override in subsequent rules.
block in on $extif

# Pass 44net traffic
pass in on $extif inet proto ipencap from any to $extamprgate

# Pass ping and ICMP unreachable messages on external interface
pass in on $extif inet proto icmp icmp-type echoreq code 0 keep state
pass in on $extif inet proto icmp icmp-type unreach keep state
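After editing /etc/pf.conf, the ruleset can be syntax-checked and loaded without a reboot. A sketch, assuming doas(1) is configured for the administrative user:

```shell
# Parse the ruleset and report any errors without loading it.
doas pfctl -nf /etc/pf.conf

# Load the ruleset into the running firewall.
doas pfctl -f /etc/pf.conf
```

Checking with -n first avoids replacing a working ruleset with one that fails to parse.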

We can verify that these rules are in place as expected by querying the rule tables:

# pfctl -sr -vv
@0 block return all
  [ Evaluations: 115977    Packets: 3776      Bytes: 168422      States: 0     ]
  [ Inserted: uid 0 pid 31812 State Creations: 0     ]
@1 pass all flags S/SA
  [ Evaluations: 115977    Packets: 242418    Bytes: 25899015    States: 253   ]
  [ Inserted: uid 0 pid 31812 State Creations: 102105]
@2 match in all scrub (no-df random-id max-mss 1440)
  [ Evaluations: 115977    Packets: 261105    Bytes: 31394491    States: 136   ]
  [ Inserted: uid 0 pid 31812 State Creations: 0     ]
@3 block return in on cnmac0 all
  [ Evaluations: 64726     Packets: 5925      Bytes: 325492      States: 0     ]
  [ Inserted: uid 0 pid 31812 State Creations: 0     ]
@4 pass in on cnmac0 inet proto icmp all icmp-type echoreq code 0
  [ Evaluations: 8149      Packets: 342       Bytes: 27424       States: 0     ]
  [ Inserted: uid 0 pid 31812 State Creations: 171   ]
@5 pass in on cnmac0 inet proto icmp all icmp-type unreach
  [ Evaluations: 308       Packets: 5         Bytes: 368         States: 0     ]
  [ Inserted: uid 0 pid 31812 State Creations: 0     ]
@6 pass in on cnmac0 inet proto ipencap from any to 23.30.150.141
  [ Evaluations: 8149      Packets: 126197    Bytes: 16538717    States: 5     ]
  [ Inserted: uid 0 pid 31812 State Creations: 2048  ]

A site can add more elaborate rules as desired.
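Beyond the per-rule counters, the active state table can be inspected as well; for example, to see the states created for encapsulated traffic (a sketch; the grep pattern assumes pfctl prints the protocol name as "ipencap"):

```shell
# List all active states, filtering for IP-in-IP (IPENCAP) entries.
doas pfctl -ss | grep -i ipencap
```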

Maintaining Mesh Routes with 44ripd

The above shows how to set up AMPRNet tunnel interfaces, set routes to them, use routing domains for policy-based routing, and set firewall rules. However, so far all of these steps have been manual. This works for a handful of tunnels and routes, but there are hundreds of subnets in the AMPRNet mesh; maintaining all of these manually is not reasonable.

Fortunately, Dan Cross (KZ2X) has written an OpenBSD-specific daemon called 44ripd that maintains tunnel and route information as distributed via the AMPRNet RIP variant sent from 44.0.0.1. To use it, make sure that multicasting is enabled on the host by setting multicast=YES in /etc/rc.conf.local, and then retrieve the 44ripd software from GitHub at https://github.com/dancrossnyc/44ripd. The software is built with the make command. For example:

git clone https://github.com/dancrossnyc/44ripd
cd 44ripd
make
doas install -c -o root -g wheel -m 555 44ripd /usr/local/sbin

Once installed, this can be run at boot by adding the following lines to /etc/rc.local:

if [ -x /usr/local/sbin/44ripd ]; then
        echo -n ' 44ripd'
        route -T 44 exec /usr/local/sbin/44ripd -s0 -s1 -D 44 -T 23
fi

Note that the -s options instruct the daemon not to try to allocate the gif0 and gif1 interfaces, as these are manually configured. The -D and -T options set the routing domains for gif interfaces and their associated tunnels, respectively.
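Once the daemon is running, its effect can be checked from the shell; a sketch:

```shell
# Confirm that the daemon is running.
pgrep -lf 44ripd

# Inspect the routes it has installed in the AMPRNet routing domain.
route -T 44 -n show -inet
```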

Set the Default AMPRNet Tunnel and Route Manually

While route information for the UCSD AMPRNet gateway is distributed in RIP44 packets, and in theory we could configure only the default incoming tunnel and rely on 44ripd to set up a route to UCSD, this leads to a problem. Specifically, without an interface configured to the UCSD gateway, we cannot set a default route in our AMPRNet routing domain for the gateway. Since we would then be relying on periodic route broadcasts from UCSD to set those routes, the default route would not come up for an indeterminate amount of time (on the order of minutes).

Thus, it is advised to explicitly configure the tunnel to UCSD at boot time along with the default route.
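On OpenBSD, one way to do this is via /etc/hostname.gif1, which netstart(8) processes at boot. The following is a sketch using the addresses from the examples above; each line of a hostname.if(5) file is passed to ifconfig(8) as arguments, and lines beginning with `!' are run as shell commands after the interface is configured:

```
# /etc/hostname.gif1 -- default AMPRNet tunnel to the UCSD gateway (sketch)
rdomain 44
tunneldomain 23
tunnel 23.30.150.141 169.228.34.84
inet 44.44.107.1 255.255.255.255
!route -T 44 add -host 44.0.0.1 -link -iface gif1 -cloning
!route -T 44 add default 44.0.0.1
```

Note that rdomain must appear before the addresses so that the interface is placed in the AMPRNet routing domain before configuration proceeds.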

Notes on 44ripd Implementation

  • 44ripd maintains a copy of the AMPRNet routing table in a modified PATRICIA trie (really a compressed radix tree).
  • A similar table of tunnels is maintained.
  • Tunnel interfaces are reference counted and garbage collected. A bitmap indicating which tunnels are in use is maintained.
  • Routes are expired if they are not refreshed by subsequent RIP packets.
  • The program is completely self-contained in the sense that it does not fork/exec external commands to configure tunnels or manipulate routes. That is all done via ioctls or writing messages to a routing socket.
  • 44ripd does not examine the state of the system or routing tables at startup to bootstrap its internal state, but arguably should.
  • Bugs in 44ripd should be reported via Github issues.
  • Exporting and/or parsing an encap file would be nice.
  • Logging and error checking can always be improved.