Re-learning to type… again! (From Neo to Bone)

Summary

After using an ergonomic keyboard layout for more than half a decade, I felt that I might try something new. My old layout (Neo) works great, but after learning of a newer variant (Bone), I wanted to try it out – and am fascinated by what the human brain is capable of doing.

Article

[New to Gemini? Have a look at my Gemini FAQ.]

The actual post can be found here (if you are using a native Gemini client):

gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/relearning_to_type.gmi

If you want to use a web proxy to read it in your web browser, follow this link.

Multi-OS PXE-booting from FreeBSD 12: Linux, illumos and more (pt. 4)

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/multi-os_pxe-booting_from_fbsd_pt4.gmi

Post 1 of this mini series is about what led me to do this in the first place, features a little excursion for people new to PXE and details the setup of a FreeBSD router.
Post 2 discusses setting up the required daemons for DHCP, TFTP and HTTP / FTP. Each component that is not in FreeBSD’s base system is briefly discussed and two options to pick from are covered.
Post 3 covers the NBP (pxelinux), boot menu configuration and adding all the major BSD operating systems (with the exception of DragonFly that I could not get to work) as PXE-boot options.

In this post we are going to add some Linux distributions to the list as well as illumos and some other Open Source operating systems. If you are not familiar with pxelinux and configuring its menu, please see the previous post for more information. You will also find an explanation on going with flat menus if you prefer that – in this article I’ll make use of sub-menus.

Menu preparations

First comes a bit of preparation work. I’m going to add three more sub-menus to the main menu – follow along with any or all that are relevant to you. On your PXE boot server, edit the main configuration file for pxelinux:

# vi /usr/local/tftpboot/pxelinux.cfg/default

Append the following blocks:

LABEL linux-distros
        MENU LABEL Linux Distributions
        KERNEL vesamenu.c32
        APPEND pxelinux.cfg/linux

LABEL illumos-distros
        MENU LABEL illumos Distributions
        KERNEL vesamenu.c32
        APPEND pxelinux.cfg/illumos

LABEL other-oses
        MENU LABEL Other Operating Systems
        KERNEL vesamenu.c32
        APPEND pxelinux.cfg/other

Next is creating the configuration files for the sub-menus we just referenced. We’ll start with Linux:

# vi /usr/local/tftpboot/pxelinux.cfg/linux

Let’s put the menu title and an option to go back to the main menu in there:

MENU TITLE PXE Boot Menu (Linux)

LABEL main-menu
        MENU LABEL Main Menu
        KERNEL vesamenu.c32
        APPEND pxelinux.cfg/default

Now the same thing for illumos:

# vi /usr/local/tftpboot/pxelinux.cfg/illumos

Insert this text:

MENU TITLE PXE Boot Menu (illumos)

LABEL main-menu
        MENU LABEL Main Menu
        KERNEL vesamenu.c32
        APPEND pxelinux.cfg/default

And finally for the other systems:

# vi /usr/local/tftpboot/pxelinux.cfg/other

Again put the default stuff in there:

MENU TITLE PXE Boot Menu (Other)

LABEL main-menu
        MENU LABEL Main Menu
        KERNEL vesamenu.c32
        APPEND pxelinux.cfg/default

Alright, everything is prepared. Now we can add some actual boot options to our PXE boot server!

Ubuntu

Making Ubuntu available over PXE is not very hard to do. We first need the (very recently released) ISO. It’s a big file and thus it makes sense to check the downloaded image for transmission errors – for that reason we’re getting the checksum file, too. The ISO is needed for the installation, so we’re going to make it available over HTTP by putting it in the right location:

# mkdir -p /usr/local/www/pxe/linux/ubuntu
# fetch https://releases.ubuntu.com/20.04/ubuntu-20.04.2-live-server-amd64.iso -o /usr/local/www/pxe/linux/ubuntu/ubuntu-20.04.2-live-server-amd64.iso
# fetch https://releases.ubuntu.com/20.04/SHA256SUMS -o /tmp/SHA256SUMS
# grep -c `sha256 /usr/local/www/pxe/linux/ubuntu/ubuntu-20.04.2-live-server-amd64.iso | awk '{ print $NF }'` /tmp/SHA256SUMS
# rm /tmp/SHA256SUMS
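Since the same fetch-and-check dance repeats for the other ISOs below, it can be wrapped into a small shell function – a sketch only, which works with both FreeBSD’s sha256 and GNU sha256sum (the file names are examples):

```shell
# Sketch of the integrity check above as a reusable function. Assumes
# either FreeBSD's sha256(1) or GNU coreutils' sha256sum(1) is installed.
# Arguments: path to the ISO, path to the checksum file.
verify_iso() {
    iso=$1
    sums=$2
    if command -v sha256 >/dev/null 2>&1; then
        hash=$(sha256 -q "$iso")                    # FreeBSD: print hash only
    else
        hash=$(sha256sum "$iso" | awk '{ print $1 }')   # GNU fallback
    fi
    if grep -q "$hash" "$sums"; then
        echo "OK: $iso"
    else
        echo "MISMATCH: $iso" >&2
        return 1
    fi
}
```

Call it as `verify_iso /path/to/file.iso /tmp/SHA256SUMS` and re-download both files if it reports a mismatch.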

Make sure that the grep command line returns 1. If it doesn’t, remove the ISO and the checksum file and download both again. We need the kernel and the init ramdisk available via TFTP, so we first extract the ISO to a temporary location. I found 7zip a good fit for extracting, so install it if you don’t have it on your system. Now copy off what we need and remove the directory again:

# 7z x /usr/local/www/pxe/linux/ubuntu/ubuntu-20.04.2-live-server-amd64.iso -o/tmp/ubuntu
# mkdir -p /usr/local/tftpboot/linux/ubuntu
# cp /tmp/ubuntu/casper/vmlinuz /usr/local/tftpboot/linux/ubuntu
# cp /tmp/ubuntu/casper/initrd /usr/local/tftpboot/linux/ubuntu
# rm -r /tmp/ubuntu

With all the system data in place we only need to add a boot menu entry for it:

# vi /usr/local/tftpboot/pxelinux.cfg/linux

Add the following to the file and save:

LABEL ubuntu-server-pxe-install
        MENU LABEL Install Ubuntu 20.04 (PXE)
        KERNEL linux/ubuntu/vmlinuz
        INITRD linux/ubuntu/initrd
        APPEND root=/dev/ram0 ramdisk_size=1500000 ip=dhcp url=http://10.11.12.1/linux/ubuntu/ubuntu-20.04.2-live-server-amd64.iso

Since pxelinux supports fetching via HTTP, you can also serve both the kernel and initrd via that protocol instead. If you want to do that, simply move the files to the other location:

# mv /usr/local/tftpboot/linux/ubuntu/vmlinuz /usr/local/tftpboot/linux/ubuntu/initrd /usr/local/www/pxe/linux/ubuntu

Then edit the menu definition again:

# vi /usr/local/tftpboot/pxelinux.cfg/linux

And adjust the menu definition like this:

        KERNEL http://10.11.12.1/linux/ubuntu/vmlinuz
        INITRD http://10.11.12.1/linux/ubuntu/initrd

And that’s it. Once you’ve restarted inetd, Ubuntu will be available as an option for PXE clients booting off your server. It takes quite a while to push all the bits of the large ISO across the wire (’cause Ubuntu…) but it works.

AlmaLinux

What’s AlmaLinux, some people may ask? Well, it’s one of the candidates for the succession of CentOS 8 after that was killed off by IBM. There are other candidates, but if you ask me, this one is the most promising right now. And since it’s already available (though currently in beta), we’re not going to waste our time with the dying CentOS but rather aim for the future of an enterprisey Linux distribution with Alma.

Obviously the first thing to do is getting it onto our machine. I suggest getting the DVD, which comes with all the available packages; if you know that you only do very basic installations, you may get the x86_64-minimal.iso instead. Since it’s a pretty big ISO (8 GB!), checking its integrity is something I wouldn’t recommend skipping:

# mkdir -p /usr/local/www/pxe/linux/alma
# fetch https://repo.almalinux.org/almalinux/8.3-beta/isos/x86_64/AlmaLinux-8.3-beta-1-x86_64-dvd1.iso -o /usr/local/www/pxe/linux/alma/AlmaLinux-8.3-beta-1-x86_64-dvd1.iso
# fetch https://repo.almalinux.org/almalinux/8.3-beta/isos/x86_64/CHECKSUM -o /tmp/CHECKSUM
# grep -c `sha256 /usr/local/www/pxe/linux/alma/AlmaLinux-8.3-beta-1-x86_64-dvd1.iso | awk '{ print $NF }'` /tmp/CHECKSUM
# rm /tmp/CHECKSUM

If the grep command line does not return 1, delete both files, re-fetch and run the check again. As soon as we have a good copy, we can extract the contents to make them available over HTTP. The permissions are wrong after extraction: all directories are missing the executable bit (rendering their contents inaccessible to the web server). So we quickly fix this, too:

# 7z x /usr/local/www/pxe/linux/alma/AlmaLinux-8.3-beta-1-x86_64-dvd1.iso -o/usr/local/www/pxe/linux/alma/sysroot
# find /usr/local/www/pxe/linux/alma -type d | xargs chmod 0755

The next step is to make kernel and init ramdisk available via TFTP:

# mkdir /usr/local/tftpboot/linux/alma
# cp /usr/local/www/pxe/linux/alma/sysroot/images/pxeboot/vmlinuz /usr/local/www/pxe/linux/alma/sysroot/images/pxeboot/initrd.img /usr/local/tftpboot/linux/alma

And finally we need to edit the Linux menu:

# vi /usr/local/tftpboot/pxelinux.cfg/linux

The definition to append looks like this:

LABEL alma-pxe-install
        MENU LABEL Install AlmaLinux 8.3 BETA (PXE)
        KERNEL linux/alma/vmlinuz
        INITRD linux/alma/initrd.img
        APPEND ip=dhcp inst.stage2=http://10.11.12.1/linux/alma/sysroot inst.xdriver=vesa

If you think that requiring X11 to install an OS is a reasonable choice and prefer the graphical installer, leave out the inst.xdriver=vesa option in the APPEND line. Also you can serve kernel and initrd via HTTP if you like. To do so, move the boot files:

# mv /usr/local/tftpboot/linux/alma/vmlinuz /usr/local/tftpboot/linux/alma/initrd.img /usr/local/www/pxe/linux/alma

Then edit the menu again:

# vi /usr/local/tftpboot/pxelinux.cfg/linux

And make the following changes:

        KERNEL http://10.11.12.1/linux/alma/vmlinuz
        INITRD http://10.11.12.1/linux/alma/initrd.img

Now your PXE server is ready to serve AlmaLinux.

Devuan Beowulf

Let’s add Devuan to the list next. Why Devuan and not Debian proper you ask? Well, because it has much cooler release names! Ok, of course not (the names are cooler, though!). Because of Init freedom.

While I don’t use Devuan (or Debian) myself, I’ve been supporting the fork since it started, sharing the first ISOs via torrent and such. I very much dislike the “Bwahaha, you’ll never manage to fork XYZ!!!!11” stance that some very loud people in the Linux community display. I felt ashamed for my fellow Open Source supporters when it happened to the MATE desktop, and I did again when people dismissed Devuan like that. Let me emphasize: this ridiculing is not the Debian project’s fault. But the choice of opening their doors wide to Systemd while discriminating against everything else is. Hence: Devuan.

If you prefer Debian instead, there’s not too much to change: substitute the directory name, grab Debian’s netboot archive instead and you’re good to go. The first thing to do is to get the netboot image, extract it and copy kernel and initrd over:

# fetch https://pkgmaster.devuan.org/devuan/dists/beowulf/main/installer-amd64/current/images/netboot/netboot.tar.gz -o /tmp/netboot.tar.gz
# mkdir /tmp/devuan && tar -C /tmp/devuan -xvf /tmp/netboot.tar.gz
# mkdir /usr/local/tftpboot/linux/devuan
# cp /tmp/devuan/debian-installer/amd64/linux /tmp/devuan/debian-installer/amd64/initrd.gz /usr/local/tftpboot/linux/devuan
# rm -r /tmp/devuan /tmp/netboot.tar.gz

Then we have to edit the Linux boot menu configuration:

# vi /usr/local/tftpboot/pxelinux.cfg/linux

Simply add the following block to the file and save:

LABEL devuan-pxe-install
        MENU LABEL Install Devuan Beowulf (PXE)
        KERNEL linux/devuan/linux
        INITRD linux/devuan/initrd.gz
        APPEND vga=788

And yes, that’s already it. Debian does a pretty good job when it comes to PXE installing! If you prefer to serve the boot files via HTTP, just move the files:

# mkdir -p /usr/local/www/pxe/linux/devuan
# mv /usr/local/tftpboot/linux/devuan/linux /usr/local/tftpboot/linux/devuan/initrd.gz /usr/local/www/pxe/linux/devuan

Then edit the menu config again:

# vi /usr/local/tftpboot/pxelinux.cfg/linux

Now substitute the respective lines with these:

        KERNEL http://10.11.12.1/linux/devuan/linux
        INITRD http://10.11.12.1/linux/devuan/initrd.gz

Done. You can boot into the Devuan installer via PXE.

Alpine Linux 3.13

One last Linux example here is for a long-time favorite distribution of mine: Alpine Linux. It’s a very light-weight, security-oriented distribution based on Musl Libc. Alpine is well fit for embedded use and can run diskless and entirely from RAM – of course traditional installations are possible, too. Most people probably know it because it’s used frequently with Docker.

Adding Alpine to the mix is very simple. Let’s get and extract the netboot archive first:

# fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/releases/x86_64/alpine-netboot-3.13.1-x86_64.tar.gz -o /tmp/alpine-netboot-3.13.1-x86_64.tar.gz
# mkdir /tmp/alpine && tar -C /tmp/alpine -xvf /tmp/alpine-netboot-3.13.1-x86_64.tar.gz

Now we copy off kernel and initrd as well as the modloop:

# mkdir -p /usr/local/tftpboot/linux/alpine
# cp /tmp/alpine/boot/vmlinuz-lts /tmp/alpine/boot/initramfs-lts /usr/local/tftpboot/linux/alpine
# mkdir -p /usr/local/www/pxe/linux/alpine
# cp /tmp/alpine/boot/modloop-lts /usr/local/www/pxe/linux/alpine

Finally edit the Linux menu config:

# vi /usr/local/tftpboot/pxelinux.cfg/linux

Put the following in there and save:

LABEL alpine-pxe-install
        MENU DEFAULT
        MENU LABEL Alpine Linux 3.13 (PXE)
        KERNEL linux/alpine/vmlinuz-lts
        INITRD linux/alpine/initramfs-lts
        APPEND ip=dhcp modloop=http://10.11.12.1/linux/alpine/modloop-lts alpine_repo=http://dl-cdn.alpinelinux.org/alpine/edge/main

And that’s really it! If you wish to use HTTP only, move two files:

# mv /usr/local/tftpboot/linux/alpine/* /usr/local/www/pxe/linux/alpine

Then edit the menu again:

# vi /usr/local/tftpboot/pxelinux.cfg/linux

And change the following lines:

        KERNEL http://10.11.12.1/linux/alpine/vmlinuz-lts
        INITRD http://10.11.12.1/linux/alpine/initramfs-lts

That’s all, you’re good to go.

SmartOS

SmartOS is a gem unknown to many. It’s an illumos distribution (OpenSolaris-derivative) meant to operate fully in live-mode. That is: It uses the machine’s hard disk(s) only for storage – the OS is always booted via PXE. This means updating is as easy as booting a newer image! Due to its nature it’s really no wonder that it has the most advanced, stunningly simple PXE process of all the systems that I’ve covered so far. You just get the image file:

# mkdir -p /usr/local/www/pxe/illumos/smartos
# fetch https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/smartos-latest-USB.img.gz -o /usr/local/www/pxe/illumos/smartos/smartos-latest-USB.img.gz

Then you edit the illumos menu config:

# vi /usr/local/tftpboot/pxelinux.cfg/illumos

Paste in the following block and save:

LABEL smartos-pxe
        MENU LABEL SmartOS (PXE)
        KERNEL memdisk
        INITRD http://10.11.12.1/illumos/smartos/smartos-latest-USB.img.gz
        APPEND harddisk raw

And that’s really it! There is no next step (except for enjoying your new system(s))!

Tribblix

Tribblix is a retro-style illumos distribution that you might enjoy if you are a Solaris veteran. I did not succeed in getting it to work with pxelinux, which seems to have trouble loading the illumos kernel. Fortunately, PXE-booting Tribblix using iPXE is officially supported. Yes, in a previous article I wrote that I had decided to use pxelinux instead, but since there’s no other way (that I know of… please comment if you know how to get it to work with just pxelinux!), we’re going down that route here.

The first thing to do is to make sure we’re able to chain-load iPXE. FreeBSD provides a package for it, which we’re going to install. Then we make the required program available via HTTP:

# pkg install -y ipxe
# mkdir -p /usr/local/www/pxe/other/ipxe
# cp /usr/local/share/ipxe/ipxe.pxe /usr/local/www/pxe/other/ipxe

Next is adding a menu item to select iPXE. So we need to edit the respective config file:

# vi /usr/local/tftpboot/pxelinux.cfg/illumos

Let’s add the following block and save the file:

LABEL tribblix-over-ipxe
        MENU LABEL Tribblix 0m24 (iPXE)
        KERNEL pxechn.c32
        APPEND http://10.11.12.1/other/ipxe/ipxe.pxe

This will be enough to chain-load iPXE. It’s not of much use yet, though: if you select it, it will request an IP address and use the PXE-related info it gets from the DHCP server to – load pxelinux again. Press CTRL-B while it loads to interrupt that automatic behavior and drop to a console prompt. There isn’t much we can do with it yet, however, since the OS data is still missing.

The location to use is a bit odd here; I assume this is due to hard-coded paths in the image. At least I could not get it working with the scheme I use for the other examples. Anyway, time to fetch the kernel and the ramdisk image:

# mkdir -p /usr/local/www/pxe/m24/platform/i86pc/kernel/amd64
# fetch http://pkgs.tribblix.org/m24/platform/i86pc/kernel/amd64/unix -o /usr/local/www/pxe/m24/platform/i86pc/kernel/amd64/unix
# fetch http://pkgs.tribblix.org/m24/platform/i86pc/boot_archive -o /usr/local/www/pxe/m24/platform/i86pc/boot_archive

Now that the data is in place, we could boot Tribblix manually. But there’s an easier way: create a script that we can invoke to do the rest for us. Let’s create it in the TFTP directory (it’s small, and serving it via TFTP means less to type, because we can omit the http://10.11.12.1/ part when chainloading it):

# vi /usr/local/tftpboot/tribblix.ipxe

Paste the following into the file and save:

#!ipxe
kernel http://10.11.12.1/m24/platform/i86pc/kernel/amd64/unix
initrd http://10.11.12.1/m24/platform/i86pc/boot_archive
boot

That’s it. Drop to iPXE’s prompt as described above and request an IP address; then simply chainload the script. Tribblix will boot to the installer:

iPXE> dhcp
Waiting for link-up on net0... ok
Configuring (net0 10:bf:48:df:e9:65)...... ok
iPXE> chain tribblix.ipxe

Yes, this is somewhat less convenient than the other options shown here. And in fact it can be avoided; the problem is that a more convenient solution is also a bit more involved and simply beyond the scope of this article. If you’re interested in doing it, do some reading on your own – what you want to research is embedding the script into iPXE.
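To give a rough idea of where that reading leads: iPXE can be built with a script baked into the binary via its `EMBED=` make variable, so the dhcp/chain steps run automatically. This is only a sketch under the assumption that you can build iPXE from source; the paths are examples matching the Tribblix setup above:

```shell
# Create the script that the iPXE binary should run automatically on boot
# (example matching the tribblix.ipxe setup above):
cat > /tmp/embed.ipxe <<'EOF'
#!ipxe
dhcp
chain tftp://10.11.12.1/tribblix.ipxe
EOF
# Then, inside a checkout of the iPXE sources (the src/ directory),
# build a binary with the script embedded and serve that instead:
#   make bin/ipxe.pxe EMBED=/tmp/embed.ipxe
#   cp bin/ipxe.pxe /usr/local/www/pxe/other/ipxe/
```

With such a binary, selecting the menu entry boots straight into Tribblix with no manual prompt interaction.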

OmniOSce r151036

After struggling for days to get OmniOSce to PXE-boot, to no avail, I gave up and moved on to Tribblix. I wish I had done it the other way round: Peter Tribble has posted an article on his blog on how to do it with iPXE. And since I gave in and used iPXE in my example above, I figured that I might as well return and give OmniOSce another try. And really: using iPXE instead of pure pxelinux, I got it working.

Caution, though: What is described here is a fully automated (unattended) installation. It will destroy any data on the target machine without asking. If that’s not what you want, stay away. I wish I knew a way to just invoke the installer instead, but I don’t.

Of course we need the operating system kernel and miniroot image as well as the ZFS stream to install:

# mkdir -p /usr/local/www/pxe/illumos/omniosce/platform/i86pc/amd64 /usr/local/www/pxe/illumos/omniosce/platform/i86pc/kernel/amd64
# fetch https://downloads.omnios.org/media/r151036/omniosce-r151036m.unix -o /usr/local/www/pxe/illumos/omniosce/platform/i86pc/kernel/amd64/unix
# fetch https://downloads.omnios.org/media/r151036/omniosce-r151036m.miniroot.gz -o /usr/local/www/pxe/illumos/omniosce/platform/i86pc/amd64/omniosce-r151036m.miniroot.gz
# gunzip /usr/local/www/pxe/illumos/omniosce/platform/i86pc/amd64/omniosce-r151036m.miniroot.gz
# fetch https://downloads.omnios.org/media/r151036/omniosce-r151036.zfs.xz -o /usr/local/www/pxe/illumos/omniosce/platform/i86pc/amd64/omniosce-r151036.zfs.xz

Next comes the installer configuration. Kayak will look for a configuration file at the location it is told (see the install_config parameter in the script below) – the file name it tries first is the MAC address of the interface used for DHCP. In my case the MAC is 10:bf:48:df:e9:65, so the file matching only this single machine would be called 10BF48DFE965 (use uppercase for the hex digits A to F).

If it cannot find that file it will look for a file with the name of the first 11 digits, i.e. 10BF48DFE96. That would also match MAC addresses like 10BF48DFE961, 10BF48DFE96A, 10BF48DFE96E and the like. If there’s no such file, it will try to get one named after the first 10 digits and so on until it eventually tries “1” in my case (if your MAC starts with a different digit, you will have to use that).
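The fallback scheme is easy to see with a tiny sketch (an illustration only, not Kayak code – Kayak performs this lookup itself):

```shell
# Print every config file name Kayak will try for a given MAC address,
# most specific first: 10BF48DFE965, 10BF48DFE96, ... down to 1.
mac="10:bf:48:df:e9:65"
name=$(printf '%s' "$mac" | tr -d ':' | tr 'abcdef' 'ABCDEF')
while [ -n "$name" ]; do
    printf '%s\n' "$name"
    name=${name%?}    # drop the last character and fall back further
done
```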

I’m lazy here and use a single-digit name (and thus choose to play with fire, since this configuration will potentially nuke the existing installation on any machine whose network card’s MAC address starts with 1 and which PXE-boots OmniOS by accident!):

# vi /usr/local/www/pxe/kayak/1

I put the following in there (your machines are likely different from mine; refer to the OmniOSce documentation for more information about the configuration script):

BuildRpool c3t0d0
RootPW '$5$JQkyMDvv$pPzEUsvP/rLwURyrpwz5i1SfVqx2QiEoIdDA9ZrG271'
SetRootPW
SetHostname omnios
SetTimezone UTC
EnableDNS example.com
SetDNS 1.1.1.1 80.80.80.80
Postboot '/sbin/ipadm create-if rge0'
Postboot '/sbin/ipadm create-addr -T dhcp rge0/v4'
SendInstallLog

Now I create a menu entry for pxelinux by editing the configuration file:

# vi /usr/local/tftpboot/pxelinux.cfg/illumos

I add the following block and save:

LABEL omniosce-auto-over-ipxe
        MENU LABEL OmniOSce r151036 AUTOMATED INSTALLATION [dangerous!] (iPXE)
        KERNEL pxechn.c32
        APPEND http://10.11.12.1/other/ipxe/ipxe.pxe

Finally I create the script to “chainload” from iPXE:

# vi /usr/local/tftpboot/omniosce.ipxe

Here’s the content for that script:

#!ipxe
kernel http://10.11.12.1/illumos/omniosce/platform/i86pc/kernel/amd64/unix -B install_media=http://10.11.12.1/illumos/omniosce/platform/i86pc/amd64/omniosce-r151036.zfs.xz,install_config=http://10.11.12.1/illumos/omniosce
initrd http://10.11.12.1/illumos/omniosce/platform/i86pc/amd64/omniosce-r151036m.miniroot
boot

And that’s it. Like with Tribblix, use CTRL-B to get to an iPXE prompt, obtain an IP address and chainload the iPXE script – in this case the one for OmniOSce. If you did everything correctly for your machine (depending on its MAC), Kayak should erase your data and install OmniOSce instead.

FreeDOS 1.2

Why add FreeDOS? Well, why not? I’ve played with it quite a bit in my youth and happen to like it. It’s also simple to do and can in fact be useful (e.g. when you want to flash some firmware). As always, the first thing is to get and extract the archive. The zip file also contains a virtual machine image, but we don’t need it and thus skip unzipping it:

# fetch http://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/distributions/1.2/FD12FULL.zip -o /tmp/FD12FULL.zip
# unzip /tmp/FD12FULL.zip -x '*.vmdk' -d /tmp

The next step is putting the extracted image into the right place, compressing it to save a fair bit of space and correcting the permissions:

# mkdir -p /usr/local/www/pxe/other/fdos
# mv /tmp/FD12FULL.img /usr/local/www/pxe/other/fdos
# gzip -9 /usr/local/www/pxe/other/fdos/FD12FULL.img
# chmod 0644 /usr/local/www/pxe/other/fdos/FD12FULL.img.gz

Finally edit the “other” boot menu config file:

# vi /usr/local/tftpboot/pxelinux.cfg/other

And add the following block:

LABEL fdos-pxe-install
       MENU LABEL Install FreeDOS 1.2 (PXE)
       KERNEL memdisk
       INITRD http://10.11.12.1/other/fdos/FD12FULL.img.gz
       APPEND harddisk raw

Done. You should be able to PXE-boot into FreeDOS now.

Plan 9

Back to the Unixy stuff – and (disregarding Inferno) the one OS that is more Unixy than Unix itself: Plan 9! I’m going with the 9front distribution here; it’s basically an example, anyway. There are not that many people out there who will want to actually provision a lot of servers with Plan 9, right? (If you are one of those, get in touch with me! I’d be interested in what you’re doing with it.)

As always we first need to get the image and put it in place:

# mkdir -p /usr/local/www/pxe/other/9front
# fetch http://9front.org/iso/9front-8013.d9e940a768d1.amd64.iso.gz -o /usr/local/www/pxe/other/9front/9front-8013.d9e940a768d1.amd64.iso.gz

Then the menu configuration needs to be edited:

# vi /usr/local/tftpboot/pxelinux.cfg/other

Paste in the following block:

LABEL 9front-pxe-install
       MENU LABEL Install 9front (PXE)
       KERNEL memdisk
       INITRD http://10.11.12.1/other/9front/9front-8013.d9e940a768d1.amd64.iso.gz
       APPEND iso raw

And that’s all there is to it.

Redox 0.6

Redox is an experimental operating system written in Rust that implements a POSIX-like environment. It uses a modern microkernel architecture and has made amazingly quick progress since the project started. I’m including it here because I think it’s a very interesting project to follow.

To add Redox to the mix, you have to download the ISO first:

# mkdir -p /usr/local/www/pxe/other/redox
# fetch https://gitlab.redox-os.org/redox-os/redox/-/jobs/31100/artifacts/raw/build/img/redox_0.6.0_livedisk.iso.gz -o /usr/local/www/pxe/other/redox/redox_0.6.0_livedisk.iso.gz

The next and final step is editing the menu configuration:

# vi /usr/local/tftpboot/pxelinux.cfg/other

and adding a block for Redox’ menu item:

LABEL redox-pxe-install
       MENU LABEL Install Redox OS (PXE)
       KERNEL memdisk
       INITRD http://10.11.12.1/other/redox/redox_0.6.0_livedisk.iso.gz
       APPEND iso raw

The current version seems to have introduced a regression regarding my test hardware: it boots, but after setting the resolution the computer simply restarts. Version 0.4.0 could boot up to the graphical login, though (but I couldn’t log in – due to missing input drivers, I assume).

What’s missing?

I would have liked to add OpenIndiana, Haiku, Minix and ReactOS as well.

I failed to even get Haiku and ReactOS booting. It might be possible, but I quickly ran out of ideas to try and, for lack of time, didn’t try harder.

For OpenIndiana I didn’t even find proper documentation. Since I’m not very knowledgeable with illumos anyway, I eventually settled on having three other distributions covered and didn’t really try it out.

Minix is a special case. According to the documentation, PXE-booting should be possible with a ramdisk image. The project doesn’t provide one for download (or I didn’t find it), but building one from source is documented. So I checked out the source and ran the build script – and it failed. I tried again on a 64-bit machine and it failed again. It looks like cross-building Minix on FreeBSD once worked (the documentation mentions FreeBSD 10), but this no longer seems to be the case. Not having any Linux machines around to try it on, I decided to skip it.

Conclusion

The four articles of this mini series detail how you can set up a PXE server on FreeBSD that serves multiple operating systems, depending on the menu item chosen by the user. With all the examples of Linux, illumos and various other Open Source operating systems in this article, you have plenty of choice for experimenting in your home lab!

That does not mean there aren’t many, many more options freely available. If you know how to add more – why not write about it yourself? Be sure to let me know; I’ll happily place a link to your work.

And as always: Feedback, suggestions for improvements and such are very welcome. Have fun!

Multi-OS PXE-booting from FreeBSD 12: PXE menu and *BSD (pt. 3)

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/multi-os_pxe-booting_from_fbsd_pt3.gmi

Post 1 of this mini series is about what led me to do this in the first place, features a little excursion for people new to PXE and details the setup of a FreeBSD router.
Post 2 discusses setting up the required daemons for DHCP, TFTP and HTTP / FTP. Each component that is not in FreeBSD’s base system is briefly discussed and two options to pick from are covered.

At the end of part 2, the situation is as follows: a client in the 10.11.12.0/24 subnet attempting to PXE boot will get an IP address via DHCP and be told where to get the NBP (Network Bootstrap Program). It will then attempt to fetch it via TFTP. There’s just one problem: that program isn’t there yet! We’ll fix that in a minute. When we’re done here, you’ll have a fully working PXE server that offers multiple BSD operating systems to boot into (Linux and more are covered in part 4). This article focuses on BIOS (“legacy”) booting; if you want to boot EFI-only machines, you’ll have to adapt the configuration examples given here. I also assume HTTP here – if you opted for FTP, you will have to adapt the commands used in the examples.

Network Bootstrap Program

There are a lot of NBPs available. Usually each operating system has its own which is tuned towards its specific duty: FreeBSD has one, OpenBSD has another and Linux has several. These are not ordinary programs; they need to cope with a very resource-constrained environment and cannot depend on any external libraries. While writing boot code is challenging enough, adding network booting capabilities doesn’t make things any easier. This is why most NBPs are as simple as possible.

As a result of that, the NBPs usually know how to boot exactly one operating system. Since we want to set up a multi-OS PXE server this is quite unfortunate for our use case. There are two ways to work around this problem:

  1. Provide various NBPs and use DHCP to distinguish between clients
  2. Use an NBP that supports a menu to select which one to boot next

As usual there are pros and cons to both. Letting DHCP do the magic requires a much more complex DHCP configuration and is also much less flexible. The boot menu approach is simple and flexible, but more complicated if you are also interested in automation. I do like automation, but I decided in favor of a boot menu for this article because it’s easier to follow. It is also a good foundation to build upon once you’re comfortable with DHCP and feel ready for advanced setups.

It is possible to use one NBP to fetch and execute another one. This process is known as chainloading. For some operating systems that is the best choice to add them to a menu system.

There are three popular options for an NBP that fits our needs:

1. GRUB
2. PXELINUX (from Syslinux) and
3. iPXE

GRUB and I never made friends. I used it for a while after switching from LILO only to ditch it for Syslinux when I learned of that. I have to use it on many systems, but when I have a choice, I choose something else. The iPXE project is very interesting. It’s the most advanced (but also most involved) of the three options. If you’re curious about just how far you can take PXE booting, at least have a look at it. For this article, we’ll go with PXELINUX.

Pxelinux

Pxelinux is available as a package on FreeBSD. However, it pulls in some dependencies that I don’t want on my system. For that reason we’re going to fetch the package instead of installing it and then extract it manually:

# pkg fetch -y syslinux
# mkdir /tmp/syslinux
# tar -C /tmp/syslinux -xvf /var/cache/pkg/syslinux-6.03.txz

Now we can cherry-pick the required files:

# cp /tmp/syslinux/usr/local/share/syslinux/bios/core/lpxelinux.0 /usr/local/tftpboot/pxelinux.0
# cp /tmp/syslinux/usr/local/share/syslinux/bios/com32/elflink/ldlinux/ldlinux.c32  /usr/local/tftpboot/
# cp /tmp/syslinux/usr/local/share/syslinux/bios/com32/menu/vesamenu.c32 /usr/local/tftpboot/
# cp /tmp/syslinux/usr/local/share/syslinux/bios/com32/lib/libcom32.c32 /usr/local/tftpboot/
# cp /tmp/syslinux/usr/local/share/syslinux/bios/com32/libutil/libutil.c32 /usr/local/tftpboot/
# cp /tmp/syslinux/usr/local/share/syslinux/bios/com32/modules/pxechn.c32 /usr/local/tftpboot/
# cp /tmp/syslinux/usr/local/share/syslinux/bios/memdisk/memdisk /usr/local/tftpboot/
# rm -r /tmp/syslinux

The first file is the modular NBP itself. It requires some modules – the .c32 files. The pxechn and memdisk modules are optional: they are needed for some of the operating system examples here, but not all, so you can leave them out if you don’t need them. Restart the inetd service now and you will be able to PXE boot to the menu:

# service inetd restart

Keep in mind: Restart the inetd service whenever you add a file to the tftpboot directory or edit one in there!

Tip 1: You can make use of gzipped files, as Pxelinux supports that. This way you can decrease loading times by serving smaller images over the net.

Tip 2: I’m using gzip in my examples but if you really want to fight for the last byte, use zopfli instead. It’s a compression program that produces gzip-compatible output but takes much more processor time to create optimized archives. As decompression time is unaffected it’s a price you have to pay only once. Consider using it if you want the best results.
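Either way, the output stays a plain .gz file that the standard tools can read. Here is a quick self-contained sanity check of that (the throwaway file name and content are just examples):

```shell
# Create a small test file and compress it at the highest gzip level;
# a zopfli-compressed file would be handled identically by gunzip.
printf 'PXE boot demo data\n' > /tmp/pxe-demo.txt
gzip -9 -f /tmp/pxe-demo.txt        # replaces it with /tmp/pxe-demo.txt.gz
gunzip -t /tmp/pxe-demo.txt.gz && echo "archive integrity OK"
```
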

Submenus

Pxelinux is hard-coded to load pxelinux.cfg/default via TFTP and use that as its configuration. If you plan to use only a few of the OS examples shown here, that config is sufficient, as you can put everything in there. Once you feel that your menu is becoming overly crowded, you can turn it into a main menu and make use of submenus to group things, as I do here. This works by putting the various OS entries into different config files.

If you don’t want submenus, skip the next step and put all the menu entries that my examples place in files other than pxelinux.cfg/default right into that file instead – and leave out the reference back to the main menu, since it doesn’t make any sense in a flat menu anyway.
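To illustrate the flat variant: taking the OpenBSD entry from this article as an example, pxelinux.cfg/default would then simply look something like this (all entries living directly in the one menu):

```
DEFAULT vesamenu.c32
PROMPT 0

MENU TITLE PXE Boot Menu

LABEL obsd-pxe-install
        MENU LABEL Install OpenBSD 6.8 (PXE)
        KERNEL pxechn.c32
        APPEND bsd/obsd/pxeboot
```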

In the previous post we already created the file /usr/local/tftpboot/pxelinux.cfg/default. Append the following to it to create a submenu for the BSDs we’re covering today:

MENU TITLE PXE Boot Menu (Main)

LABEL bsd-oses
        MENU LABEL BSD Operating Systems
        KERNEL vesamenu.c32
        APPEND pxelinux.cfg/bsd

Now create the file referenced there:

# vi /usr/local/tftpboot/pxelinux.cfg/bsd

and put the following text in there:

MENU TITLE PXE Boot Menu (BSD)

LABEL main-menu
        MENU LABEL Main Menu
        KERNEL vesamenu.c32
        APPEND pxelinux.cfg/default

Alright, preparation work is done, let’s finally add some operating system data!

FreeBSD 12.2

PXE booting FreeBSD is actually not such an easy thing to do if you want to avoid using NFS shares. Fortunately there is mfsBSD, a project that provides tools as well as releases of special FreeBSD versions that can be booted over the net easily. We’re going to use that.

There are multiple variants for download: The standard one, the “special edition” and a “mini edition”. The special edition comes with the distribution tarballs on the ISO – you may want to use that one for installation purposes. If you just want a FreeBSD live system (e.g. for maintenance and repairs) or use your own mirror (see below), the standard edition is for you since it is much smaller and thus boots way faster.

Let’s make the image available via HTTP:

# mkdir -p /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE
# fetch https://mfsbsd.vx.sk/files/iso/12/amd64/mfsbsd-12.2-RELEASE-amd64.iso -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/mfsbsd.iso
# gzip -9 /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/mfsbsd.iso

Now edit the pxelinux.cfg/bsd file and append:

LABEL fbsd-pxe-install
        MENU LABEL Install FreeBSD 12.2 (PXE)
        MENU DEFAULT
        KERNEL memdisk
        INITRD http://10.11.12.1/bsd/fbsd/amd64/12.2-RELEASE/mfsbsd.iso.gz
        APPEND iso raw

That’s it. You can now PXE-boot into FreeBSD.

Installation using mfsBSD

Log in with user root and password mfsroot. mfsBSD includes a “zfsinstall” script that you may want to take a look at. There’s a lot more to mfsBSD, though: its tools allow you to easily roll your own customized images. If including packages or files, using a custom-built kernel and things like that sounds useful to you, take a closer look. I cannot go into more detail here – it’s a topic of its own that would deserve a dedicated article. In case you just want to use the familiar FreeBSD installer bsdinstall, read on.

Mirroring distfiles and fixing bsdinstall

If you want to install FreeBSD over PXE more than once, it makes sense to provide a local distfile mirror. Since we have a fileserver running anyway, there’s really nothing more to it than getting the distfiles and putting them into the right place. At the very minimum get the following three files:

# fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE/MANIFEST -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/MANIFEST
# fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE/base.txz -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/base.txz
# fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE/kernel.txz -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/kernel.txz

Depending on which distfiles you usually install, also get any or all of the following files:

# fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE/base-dbg.txz -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/base-dbg.txz
# fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE/kernel-dbg.txz -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/kernel-dbg.txz
# fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE/lib32-dbg.txz -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/lib32-dbg.txz
# fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE/ports.txz -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/ports.txz
# fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE/src.txz -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/src.txz
# fetch http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE/tests.txz -o /usr/local/www/pxe/bsd/fbsd/amd64/12.2-RELEASE/tests.txz

At this point in time, bsdinstall is broken on mfsBSD. The reason is that the distfile manifest is missing. I’m thinking about getting this fixed upstream, so in the future check whether the following part has become obsolete before using it. For now, let’s create a simple shell script in the webroot directory for convenience:

# vi /usr/local/www/pxe/fbsd.sh

Put the following into the script:

#!/bin/sh
ARCH=`uname -m`
RELEASE=`uname -r | cut -d"-" -f1`
mkdir -p /usr/freebsd-dist
fetch http://10.11.12.1/bsd/fbsd/${ARCH}/${RELEASE}-RELEASE/MANIFEST -o /usr/freebsd-dist/MANIFEST
bsdinstall
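The uname/cut pipeline in that script simply strips everything from the first “-” onward (e.g. “12.2-RELEASE” becomes “12.2”). If you adapt the script, this can be checked in isolation with a hard-coded sample instead of the live uname output:

```shell
# Emulate the RELEASE parsing from the script with a sample string
sample="12.2-RELEASE"
release=$(printf '%s' "$sample" | cut -d"-" -f1)
echo "$release"    # prints: 12.2
```
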

Now if you PXE-booted mfsBSD and logged in as root, you just need to execute the following command line and will then be able to use the installer as you are used to:

# fetch http://10.11.12.1/fbsd.sh && sh fbsd.sh

When you are asked to select the installation source, there is an “Other” button at the bottom of the window. Choose that and point to your distfile mirror – in my example http://10.11.12.1/bsd/fbsd/amd64/12.2-RELEASE. Happy installing!

One more hint: You may want to look into the environment variables that bsdinstall(8) provides. I briefly attempted to automatically set the URL to the distfile mirror but couldn’t get it working. As I was already running out of time with this article, I haven’t looked deeper into it. If anybody figures it out, I’d appreciate you sharing your solution here.
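For reference, the direction I was experimenting in: bsdinstall(8) documents a BSDINSTALL_DISTSITE variable that the distfetch phase uses as its download URL. An untested sketch of the script exporting it might look as follows – treat this as a starting point rather than a working solution:

```
#!/bin/sh
# Untested sketch: try to pre-set the distfile mirror for bsdinstall
# via the BSDINSTALL_DISTSITE variable from bsdinstall(8)
ARCH=`uname -m`
RELEASE=`uname -r | cut -d"-" -f1`
BSDINSTALL_DISTSITE="http://10.11.12.1/bsd/fbsd/${ARCH}/${RELEASE}-RELEASE"
export BSDINSTALL_DISTSITE
mkdir -p /usr/freebsd-dist
fetch ${BSDINSTALL_DISTSITE}/MANIFEST -o /usr/freebsd-dist/MANIFEST
bsdinstall
```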

OpenBSD 6.8

Adding OpenBSD as an option is trivial. The project provides a ramdisk kernel used for installing the system and an NBP capable of loading it. Let’s get those two files in place – and while the ramdisk kernel is fairly small already, we can chop a couple of bytes off by compressing it:

# mkdir -p /usr/local/tftpboot/bsd/obsd
# fetch https://cdn.openbsd.org/pub/OpenBSD/6.8/amd64/pxeboot -o /usr/local/tftpboot/bsd/obsd/pxeboot
# fetch https://cdn.openbsd.org/pub/OpenBSD/6.8/amd64/bsd.rd -o /usr/local/tftpboot/bsd/obsd/6.8-amd64.rd
# gzip -9 /usr/local/tftpboot/bsd/obsd/6.8-amd64.rd

Now we only need to add the required lines to pxelinux.cfg/bsd:

LABEL obsd-pxe-install
        MENU LABEL Install OpenBSD 6.8 (PXE)
        KERNEL pxechn.c32
        APPEND bsd/obsd/pxeboot

That’s it, the OpenBSD loader can be booted! Since we don’t have the kernel in the assumed default location (“/bsd”), we’d need to tell the loader to “boot bsd/obsd/6.8-amd64.rd.gz” on every boot. The loader supports a configuration file, though, so for a little extra convenience we can make it pick up the kernel automatically like this:

# mkdir -p /usr/local/tftpboot/etc
# ln -s /usr/local/tftpboot/etc/boot.conf /usr/local/tftpboot/bsd/obsd/boot.conf
# echo "# OpenBSD boot configuration" > /usr/local/tftpboot/bsd/obsd/boot.conf
# echo "boot bsd/obsd/6.8-amd64.rd.gz" >> /usr/local/tftpboot/bsd/obsd/boot.conf

The pxeboot program comes with the configuration file name etc/boot.conf hard-coded. To keep things a little cleaner in the hierarchy that I use, I chose to set a symlink in the obsd directory for reference purposes. And that’s all.
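For reference, the resulting etc/boot.conf is nothing but a list of loader commands, one per line. A slightly extended sketch (the “set timeout” line is an optional addition of mine, not part of the setup above):

```
# OpenBSD boot configuration (fetched by pxeboot as etc/boot.conf)
set timeout 5
boot bsd/obsd/6.8-amd64.rd.gz
```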

NetBSD 9.1

Let’s add NetBSD! It’s somewhat similar to OpenBSD – but unfortunately a bit more involved. The reason is that the NBP does not support a configuration file by default. It has the ability to use one, but that needs to be activated first. Which is fair enough, since doing so is only a single command – on NetBSD, that is! But let’s worry about this in a minute and first get the NBP as well as the install kernel:

# mkdir -p /usr/local/tftpboot/bsd/nbsd
# fetch https://cdn.netbsd.org/pub/NetBSD/NetBSD-9.1/amd64/installation/misc/pxeboot_ia32.bin -o /usr/local/tftpboot/bsd/nbsd/pxeboot_ia32.bin
# fetch https://cdn.netbsd.org/pub/NetBSD/NetBSD-9.1/amd64/binary/kernel/netbsd-INSTALL.gz -o /usr/local/tftpboot/bsd/nbsd/netbsd-INSTALL.gz

Now we need to add the boot menu entry by adding the following lines to pxelinux.cfg/bsd:

LABEL nbsd-pxe-install
        MENU LABEL Install NetBSD 9.1 (PXE)
        KERNEL pxechn.c32
        APPEND bsd/nbsd/pxeboot_ia32.bin

This is sufficient to load and execute the NetBSD loader. It will then complain that it cannot find the kernel and that no hints about NFS were given. Now we have three options:

  1. Manually point the loader to the correct kernel each time
  2. Give the required hint via DHCP
  3. Try to enable the loader configuration

Typing in “tftp:bsd/nbsd/netbsd-INSTALL.gz” is probably fair enough if you are doing NetBSD installs very rarely but it gets old quickly. So let’s try out option two!

Modifying DHCP config for NetBSD

The DHCP server needs to be configured to pass a different Boot File name option when answering the NetBSD loader than otherwise. This is done by matching class information. This topic is beyond the scope of this article, so if you are interested, do some reading on your own. I won’t leave you hanging, though, if you just need to get things working.

Here’s what you have to add to the configuration if you’re using Kea – for example right before the “loggers” section:

    "client-classes": [
        {
            "name": "NetBSDloader",
            "test": "option[vendor-class-identifier].text == 'NetBSD:i386:libsa'",
            "boot-file-name": "tftp:bsd/nbsd/netbsd-INSTALL.gz"
        }
    ],

And here the same thing if you are using DHCPd:

if substring (option vendor-class-identifier, 0, 17) = "NetBSD:i386:libsa" {
    if filename = "netbsd" {
        filename "tftp:bsd/nbsd/netbsd-INSTALL.gz";
    }
}

Restart your DHCP server and you should be good to go.
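If the loader still gets the generic boot file, it helps to check what actually goes over the wire. A packet capture on the internal NIC should show the “NetBSD:i386:libsa” vendor class in the loader’s request (the tcpdump invocation is just a generic example):

```
# tcpdump -i em5 -vvv -n port 67 or port 68
```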

After taking the easy way via DHCP, I also went on to explore the boot.cfg road but ultimately failed. I’m documenting it here anyway in case somebody wants to pick up where I decided to leave it be.

Enabling boot.cfg in the loader

To mitigate the risk of polluting my main system by doing something stupid, I chose to do all of this as my unprivileged user. The first thing I did was fetch and extract the basic NetBSD 9.1 sources:

% mkdir -p netbsd-9.1 && cd netbsd-9.1
% fetch ftp://ftp.netbsd.org/pub/NetBSD/NetBSD-9.1/source/sets/src.tgz
% tar xvzf src.tgz

The sources for the installboot program we’re looking for are in usr/src/usr.sbin/installboot. I tried to get that thing to build by pointing the compiler at additional include directories and editing quite a few header files, hoping to resolve the conflicts with FreeBSD’s system headers and problems like that. It can probably be done, but that would take a C programmer – which I am not.

Fortunately NetBSD is the portability star among the BSDs and should be buildable on many other systems. I’ve never done this before but here was the chance. So I installed CVS and checked out the rest of the NetBSD src module:

% doas pkg install -y cvs
% cd netbsd-9.1/usr
% cvs -d anoncvs@anoncvs.NetBSD.org:/cvsroot checkout -r netbsd-9-1-RELEASE -P src

When exploring the source tree, I found a build script that obviously does all the magic required here. Be warned however that this builds a full cross-toolchain including the complete GCC compiler! Then it builds the “tools” subset of the NetBSD code (which includes the installboot that we’re looking for). On my old and slow Atom-based system this process took 6 hours:

% cd src
% ./build.sh -U -m amd64 -T ~/nbsd tools
===> build.sh command:    ./build.sh -U -m amd64 -T /home/kraileth/nbsd tools
===> build.sh started:    Thu Feb  3 00:13:41 CET 2021
===> NetBSD version:      9.1
===> MACHINE:             amd64
===> MACHINE_ARCH:        amd64
===> Build platform:      FreeBSD 12.2-RELEASE-p1 amd64
===> HOST_SH:             /bin/sh
===> No $TOOLDIR/bin/nbmake, needs building.
===> Bootstrapping nbmake
checking for sh... /bin/sh
checking for gcc... cc

[...]

install ===> config
#   install  /home/kraileth/nbsd/bin/nbconfig
mkdir -p /home/kraileth/nbsd/bin
/home/kraileth/nbsd/bin/amd64--netbsdelf-install -c  -r -m 555 config /home/kraileth/nbsd/bin/nbconfig
===> Tools built to /home/kraileth/nbsd
===> build.sh ended:      Thu Feb  3 06:26:25 CET 2021
===> Summary of results:
         build.sh command:    ./build.sh -U -m amd64 -T /home/kraileth/nbsd tools
         build.sh started:    Thu Feb  3 00:13:41 CET 2021
         NetBSD version:      9.1
         MACHINE:             amd64
         MACHINE_ARCH:        amd64
         Build platform:      FreeBSD 12.2-RELEASE-p1 amd64
         HOST_SH:             /bin/sh
         No $TOOLDIR/bin/nbmake, needs building.
         Bootstrapping nbmake
         MAKECONF file:       /etc/mk.conf (File not found)
         TOOLDIR path:        /home/kraileth/nbsd
         DESTDIR path:        /usr/home/kraileth/netbsd-9.1/usr/src/obj/destdir.amd64
         RELEASEDIR path:     /usr/home/kraileth/netbsd-9.1/usr/src/obj/releasedir
         Created /home/kraileth/nbsd/bin/nbmake
         Updated makewrapper: /home/kraileth/nbsd/bin/nbmake-amd64
         Tools built to /home/kraileth/nbsd
         build.sh ended:      Thu Feb  3 06:26:25 CET 2021
===> .

The “-U” flag enables some trickery to build as an unprivileged user. With “-m” you specify the target architecture (I actually used i386 but changed the lines above to amd64, as that is what most people will want to use instead). Finally, the “-T” switch lets you specify the installation target directory, and “tools” is the make target to use.

When it was done, I did the following (as root):

# cp /usr/local/tftpboot/bsd/nbsd/pxeboot_ia32.bin /usr/local/tftpboot/bsd/nbsd/pxeboot_ia32.bin.bak
# /usr/home/kraileth/netbsd-9.1/usr/src/tools/installboot/obj/installboot -eo bootconf /usr/local/tftpboot/bsd/nbsd/pxeboot_ia32.bin

This should enable the boot config file on the pxeboot loader. It does change the file and probably even makes the right change. I tried to enable module support via installboot, too and that obviously worked (the NFS module was loaded the next time I chainloaded the NetBSD loader). But for some reason I could not get boot.cfg to do what I wanted. Probably I don’t understand the file properly…
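For whoever picks this up: the kind of boot.cfg I was experimenting with looked roughly like the following – the syntax follows NetBSD’s boot.cfg(5), but keep in mind that I never got the PXE loader to honor it:

```
banner=NetBSD 9.1 PXE installation
menu=Install NetBSD:boot tftp:bsd/nbsd/netbsd-INSTALL.gz
menu=Drop to boot prompt:prompt
timeout=5
```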

While it’s a bit disappointing to stop so close to the goal, messing with NetBSD so much already took much more time away from the other BSDs than I had imagined. And since I could at least offer a working way this was when I decided to move on.

DragonFly BSD

I attempted to get DragonFly BSD to work but failed. I briefly tried out a setup that includes NFS shares, but that didn’t fully work either: the kernel booted but then failed to execute /sbin/init for some reason or another. Besides, I don’t really want to cover NFS in this series – there’s enough material in here already. And without NFS… well, DragonFly BSD has the same problem as FreeBSD: it will boot the kernel but then be unable to mount the root filesystem.

While I guess that the mfsBSD approach could likely be made to work for DragonFly, too, that is much more involved than is reasonable for our topic here. I would really like to cover DragonFly as well, but it’s simply a bit too much. If anybody knows how to get it working – please share your knowledge!

HardenedBSD 12-STABLE

HardenedBSD, being a FreeBSD fork, inherits the same characteristics as vanilla FreeBSD – which means that PXE booting the standard images is not an easy thing to do. However, HardenedBSD uses the same installer, bsdinstall, and for that reason it’s possible to install it using mfsBSD as prepared above in the FreeBSD section. We only need to point the installer to a different distfile mirror. Let’s create that one now:

# mkdir -p /usr/local/www/pxe/bsd/hbsd/amd64/12-STABLE
# fetch https://ci-01.nyi.hardenedbsd.org/pub/hardenedbsd/12-stable/amd64/amd64/BUILD-LATEST/MANIFEST -o /usr/local/www/pxe/bsd/hbsd/amd64/12-STABLE/MANIFEST
# fetch https://ci-01.nyi.hardenedbsd.org/pub/hardenedbsd/12-stable/amd64/amd64/BUILD-LATEST/base.txz -o /usr/local/www/pxe/bsd/hbsd/amd64/12-STABLE/base.txz
# fetch https://ci-01.nyi.hardenedbsd.org/pub/hardenedbsd/12-stable/amd64/amd64/BUILD-LATEST/kernel.txz -o /usr/local/www/pxe/bsd/hbsd/amd64/12-STABLE/kernel.txz

As with FreeBSD, there are some optional distfiles you may or may not want to mirror, too. Provide what you need for your installations:

# fetch https://ci-01.nyi.hardenedbsd.org/pub/hardenedbsd/12-stable/amd64/amd64/BUILD-LATEST/base-dbg.txz -o /usr/local/www/pxe/bsd/hbsd/amd64/12-STABLE/base-dbg.txz
# fetch https://ci-01.nyi.hardenedbsd.org/pub/hardenedbsd/12-stable/amd64/amd64/BUILD-LATEST/kernel-dbg.txz -o /usr/local/www/pxe/bsd/hbsd/amd64/12-STABLE/kernel-dbg.txz
# fetch https://ci-01.nyi.hardenedbsd.org/pub/hardenedbsd/12-stable/amd64/amd64/BUILD-LATEST/src.txz -o /usr/local/www/pxe/bsd/hbsd/amd64/12-STABLE/src.txz
# fetch https://ci-01.nyi.hardenedbsd.org/pub/hardenedbsd/12-stable/amd64/amd64/BUILD-LATEST/tests.txz -o /usr/local/www/pxe/bsd/hbsd/amd64/12-STABLE/tests.txz

Now we’re creating a convenience script for HardenedBSD:

# vi /usr/local/www/pxe/hbsd.sh

Put the following into the script:

#!/bin/sh
ARCH=`uname -m`
MAJOR=`uname -r | cut -d"." -f1`
mkdir -p /usr/freebsd-dist
fetch http://10.11.12.1/bsd/hbsd/${ARCH}/${MAJOR}-STABLE/MANIFEST -o /usr/freebsd-dist/MANIFEST
bsdinstall

Now fire up mfsBSD, login as root and simply run the following command line to start the installer:

# fetch http://10.11.12.1/hbsd.sh && sh hbsd.sh

Select the “Other” button when asked for the installation source and point to your distfile mirror – in my example http://10.11.12.1/bsd/hbsd/amd64/12-STABLE. And that’s it.

MidnightBSD 2.0

Since MidnightBSD is a FreeBSD fork as well, it also suffers from the well-known problems related to PXE booting. Again mfsBSD comes to the rescue. Let’s create the distfile mirror first:

# mkdir -p /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE
# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/MANIFEST -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/MANIFEST
# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/base.txz -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/base.txz
# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/kernel.txz -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/kernel.txz

Pick any or all of the remaining optional distfiles to be mirrored, too, if you need them:

# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/lib32.txz -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/lib32.txz
# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/doc.txz -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/doc.txz
# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/base-dbg.txz -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/base-dbg.txz
# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/kernel-dbg.txz -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/kernel-dbg.txz
# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/lib32-dbg.txz -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/lib32-dbg.txz
# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/src.txz -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/src.txz
# fetch https://discovery.midnightbsd.org/releases/amd64/2.0.3/mports.txz -o /usr/local/www/pxe/bsd/mbsd/amd64/2.0-RELEASE/mports.txz

And here’s the convenience script:

# vi /usr/local/www/pxe/mbsd.sh

Put the following into it:

#!/bin/sh
ARCH=`uname -m`
RELEASE="2.0"
mkdir -p /usr/freebsd-dist
fetch http://10.11.12.1/bsd/mbsd/${ARCH}/${RELEASE}-RELEASE/MANIFEST -o /usr/freebsd-dist/MANIFEST
bsdinstall

Now you can PXE-boot mfsBSD as prepared in the FreeBSD section. After logging in as root execute the following command line that will take you to the installer:

# fetch http://10.11.12.1/mbsd.sh && sh mbsd.sh

When given the choice to select the installation source make sure to select “Other” and point to the right URL – in my example it would be http://10.11.12.1/bsd/mbsd/amd64/2.0-RELEASE, then install the OS as you’re used to.

Watch out for some options in the installer, though! MidnightBSD is mostly on par with FreeBSD 11.4. If you enable e.g. newer hardening options that the installer knows but the target OS doesn’t, you might run into trouble (not sure, though, I didn’t think of this until after my test installation).

What’s next?

I didn’t plan for this post to get this long – NetBSD in particular surprised me. Eventually I decided to cover HardenedBSD and MidnightBSD as well and accept that this is a BSD-only article. So the next one will add various Linuxen and other operating systems to the mix.

Multi-OS PXE-booting from FreeBSD 12: Required services (pt. 2)

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/multi-os_pxe-booting_from_fbsd_pt2.gmi

The previous post was about what led me to do this in the first place, featured a little excursion for people new to PXE and, most importantly, detailed the setup of a FreeBSD router that will be turned into a PXE server in this article.

While I originally intended to show how to boot Ubuntu in this part, things have changed a little. I realized that the software choices I made might not be what a lot of people would have chosen. Therefore I did a little extra work and present my readers with multiple options. This made the article long enough even without the Ubuntu bits, which I wanted to cover in part 3 instead (they were moved to part 4, however).

Current state and scope

Today we will make the machine offer all the services needed for network installations of many Open Source operating systems. My examples are all IPv4, but it should be possible to adapt this to IPv6 fairly easily. As this is *nix, there are always multiple ways to do things. For software that does not come included in FreeBSD’s base installation, I’ll be giving two options. One will be less commonly deployed but has some advantages in my book. The other will achieve the same goal with a more popular solution.

The machine that I use is an old piece of metal that has 6 NICs – which is why I like to still use it for tinkering with network-related stuff. After part 1, we left off with a gateway that has an Internet-facing adapter em0 which gets its IP via DHCP from my actual router. We’re also using em5 which is statically configured to have the IP 10.11.12.1 and is connected to a separate switch. There’s an unbound nameserver running and serving that interface and a pf firewall is active doing NAT.

This means that everything is in place to serve any host connected to said switch, if it’s manually configured to use a static address in the 10.11.12.0/24 IP range and with default router and nameserver set to 10.11.12.1. Let’s start by getting rid of that “manually configured” requirement!

Excursion: DHCP basics

We do so by configuring the machine as a DHCP server handing out IP addresses for the 10.11.12.0/24 subnet on the 6th NIC. DHCP servers work by listening for DHCP requests, which are broadcast on the network (as the client does not have its own IP yet). When receiving one, the server will have a look at its configuration: Is there anything special for the host asking? Is the MAC address included with the request perhaps blacklisted? Is there a reserved IP to be handed to this specific machine? Or any particular option to send to the “class” of device asking?

In our simple case there aren’t really any bells and whistles involved. So it will have a look at the IP pool that it manages and if it can find an unused one, it will answer with a DHCP offer. The latter is a proposal for an IP to be leased by the client and also includes various network-related information: Usually at least the netmask, router and nameserver. A lot of additional information can be provided in form of options; you can point the client joining the network to a time server if you have one, inform about the domain being used and much, much more (even custom options are possible if you need them).

For PXE booting to work we need to make use of two particular options: We need the PXE code in the firmware to know which server to turn to if it wants to load the NBP (Network Bootstrap Program) from the net. It also needs to know what the file to ask for is called.

DHCP servers

There are multiple DHCP servers out there. If I were doing this on Linux, I’d probably just pick Dnsmasq and be done: As the name implies, it does DNS. But it also does DHCP and TFTP (which we are in need of as well) and supports PXE. But FreeBSD comes with its own TFTP daemon that I’m going to use and I actually prefer the Unix way over all-rounder software: Do one thing and do it well!

The first thing that comes to mind in terms of DHCP servers is ISC’s DHCPd. It’s small, simple to use (at least for our use case), battle-tested and in use pretty much everywhere. It’s old, though, not extremely fast and certainly not as flexible as you might wish for today. This (among other things) led the ISC to start a new project meant as a replacement: Kea.

The latter is a modern DHCP server with a lot of nice new features: it is a high-performance solution that’s extensible, supports databases as backends, has a web GUI (Stork) available and more. But since DHCPd works well enough, adoption of Kea has been slow. There are a couple of downsides, too. First and foremost: its configuration is written in JSON. Yes, JSON! While there are legitimate use cases for that format, configuration is not one of them if you ask me. That was a terrible choice. Kea also pulls in big dependencies like the Boost C++ libraries, which not everybody is fond of.

IMO the benefits of Kea outweigh the drawbacks (if it weren’t for the JSON configuration, I’d even say: clearly). But it’s your choice of course.

DHCP server option 1: Modern-day Kea

Alright, let’s give Kea a try, shall we? First we need to install it and then edit the configuration file:

# pkg install -y kea
# vi /usr/local/etc/kea/kea-dhcp4.conf

The easiest thing to do is to delete the contents and paste the following. Then adapt it to your network and save:

{
"Dhcp4": {
    "interfaces-config": {
        "interfaces": [ "em5/10.11.12.1" ]
    },
    "control-socket": {
        "socket-type": "unix",
        "socket-name": "/tmp/kea4-ctrl-socket"
    },
    "lease-database": {
        "type": "memfile",
        "lfc-interval": 3600
    },
    "expired-leases-processing": {
        "reclaim-timer-wait-time": 10,
        "flush-reclaimed-timer-wait-time": 25,
        "hold-reclaimed-time": 3600,
        "max-reclaim-leases": 100,
        "max-reclaim-time": 250,
        "unwarned-reclaim-cycles": 5
    },

    "renew-timer": 900,
    "rebind-timer": 1800,
    "valid-lifetime": 3600,
    "next-server": "10.11.12.1",
    "boot-file-name": "pxelinux.0",

    "option-data": [
        {
            "name": "subnet-mask",
            "data": "255.255.255.0"
        },
        {
            "name": "domain-name-servers",
            "data": "10.11.12.1"
        },
        {
            "name": "domain-name",
            "data": "example.org"
        },
        {
            "name": "domain-search",
            "data": "example.org"
        }
    ],

    "subnet4": [
        {
            "subnet": "10.11.12.0/24",
            "pools": [ { "pool": "10.11.12.20 - 10.11.12.30" } ],
            "option-data": [
                {
                    "name": "routers",
                    "data": "10.11.12.1"
                }
            ]
        }
    ],

    "loggers": [
    {
        "name": "kea-dhcp4",
        "output_options": [
            {
                "output": "/var/log/kea-dhcp4.log"

            }
        ],
        "severity": "INFO",
        "debuglevel": 0
    }
  ]
}
}

Yes, it looks pretty bad, I know. But that’s only the representation; if something better had been used (say YAML), it’d be about 50 lines instead of 75, be much more readable and above all less error-prone to edit. Oh well. If you can ignore the terrible representation, the actual data is not so bad and pretty much self-explanatory.

I’d like to point you at the “next-server” and “boot-file-name” global options that I set here. These are required for PXE booting by pointing to the server hosting the NBP and telling its file name. Leave them out and you will still have a working DHCP server if you don’t need to do PXE. While this configuration works, you will likely want to extend it for production use.
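Since hand-edited JSON breaks easily, it’s worth letting Kea parse the file before (re)starting the service – the daemon binary has a check-only mode for that:

```
# kea-dhcp4 -t /usr/local/etc/kea/kea-dhcp4.conf
```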

With the config in place, let’s enable and start the daemon:

# sysrc kea_enable="YES"
# service kea start

A quick check that a daemon successfully bound to port 67 and is listening doesn’t hurt:

# sockstat -4l | grep 67
root     kea-dhcp4  1480  14 udp4   10.11.12.1:67         *:*

Ok, there we are. We now have a DHCP service on our internal network!

DHCP server option 2: Venerable ISC DHCPd

So you’d like to keep things simple and safe? No problem, going with DHCPd is not a bad choice. First we need to install it and then edit the configuration file:

# pkg install -y isc-dhcp44-server
# vi /usr/local/etc/dhcpd.conf

Delete everything. Then add the following (adjust to your network structure, of course!) and save:

option domain-name "example.org";
option domain-name-servers 10.11.12.1;
option subnet-mask 255.255.255.0;
 
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
log-facility local7;

next-server 10.11.12.1;
filename "pxelinux.0";
 
subnet 10.11.12.0 netmask 255.255.255.0 {
  range 10.11.12.10 10.11.12.20;
  option routers 10.11.12.1;
}

Mind the “next-server” and “filename” options, which define the server to get the NBP from as well as that file’s name. You can leave those out and will still have a working DHCP server – but it won’t allow for PXE booting in that case. I’d also advise you to do a bit of reading and probably write a more comprehensive DHCPd configuration.
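DHCPd can likewise test-parse a configuration before you (re)start the service, which is handy after edits:

```
# dhcpd -t -cf /usr/local/etc/dhcpd.conf
```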

Next thing to do is to enable DHCPd, confine it to serve requests coming in from one particular NIC only and start the service:

# sysrc dhcpd_enable="YES"
# sysrc dhcpd_ifaces="em5"
# service isc-dhcpd start

Quick check to see if the service is running and binding on port 67:

# sockstat -4l | grep 67
dhcpd    dhcpd      1396  8  udp4   *:67                  *:*

Looking good so far, DHCP should be available on our internal net.

Optional: Checking DHCP

If you want to make sure that your DHCP server is not only running but can also be reached and actually does what it’s meant to, you can either power up a host in the 10.11.12.0/24 network and configure it to get its IP via DHCP, or you can for example use the versatile nmap tool to test DHCP discovery from any host on that network:

# pkg install -y nmap
# nmap --script broadcast-dhcp-discover
Starting Nmap 7.91 ( https://nmap.org ) at 2021-01-24 17:56 CET
Pre-scan script results:
| broadcast-dhcp-discover: 
|   Response 1 of 1: 
|     Interface: em0
|     IP Offered: 10.11.12.21
|     DHCP Message Type: DHCPOFFER
|     Subnet Mask: 255.255.255.0
|     Router: 10.11.12.1
|     Domain Name Server: 10.11.12.1
|     Domain Name: example.org
|     IP Address Lease Time: 1h00m00s
|     Server Identifier: 10.11.12.1
|     Renewal Time Value: 15m00s
|_    Rebinding Time Value: 30m00s
WARNING: No targets were specified, so 0 hosts scanned.
Nmap done: 0 IP addresses (0 hosts up) scanned in 10.56 seconds

# pkg delete -y nmap

All right! DHCP server is working and happily handing out leases.

TFTP

The Trivial File Transfer Protocol daemon is up next. FreeBSD ships with a TFTP daemon in the base system, so we’re going to use that. It does not run standalone but is instead invoked by the inetd super-server. To enable TFTP, we just need to put one line into the inetd configuration file:

# echo "tftp    dgram   udp     wait    root    /usr/libexec/tftpd      tftpd -l -s /usr/local/tftpboot" >> /etc/inetd.conf

Now we need to create the directory that we just referenced, as well as a subdirectory which we’re going to use and create a file there:

# mkdir -p /usr/local/tftpboot/pxelinux.cfg
# vi /usr/local/tftpboot/pxelinux.cfg/default

Put the following in there (for now) and save:

DEFAULT vesamenu.c32
PROMPT 0

All is set, let’s enable and start the service now:

# sysrc inetd_enable="YES"
# service inetd start

Again we can check real quick if the service is running:

# sockstat -4l | grep 69
root     inetd      1709  6  udp4   *:69                  *:*

Good, so now we can test fetching the configuration file, either from the server itself or from any FreeBSD machine in the 10.11.12.0/24 network:

# tftp 10.11.12.1
tftp> get pxelinux.cfg/default
Received 292 bytes during 0.0 seconds in 1 blocks
tftp> quit
# rm default

Everything’s fine just as expected.

File Server

The remaining piece to set up is a means of efficiently transferring larger files over the wire – i.e. not TFTP! You can do it via FTP and use FreeBSD’s built-in FTP daemon. While this works well, it is not the option I’d recommend. Why? Because FTP is an old protocol that does not play nicely with firewalls. Sure, it’s possible to do FTP properly, but that’s more complex than using something else, like a webserver that speaks HTTP.

If you want to go down that path, there are a lot of options. There’s the very popular and feature-rich Apache HTTPd as well as various more light-weight solutions like Lighttpd and many more. I generally prefer OpenBSD’s HTTPd because it is so easy to work with – and when it comes to security or resisting feature creep, its developers really mean it. If I need to do something that it cannot do, I usually resort to the far more advanced (and much more popular) Nginx.

Pick either of the two described here, go with FTPd instead, or just skip the following three sections and set up whatever webserver you prefer to deploy.

If you didn’t opt for FTP, the first step is to create a directory for the webserver and change its ownership:

# mkdir -p /usr/local/www/pxe
# chown -R www:www /usr/local/www/pxe

File Server option 1: OpenBSD’s HTTPd

Next is installing the program and providing the desired configuration. Edit the file:

# pkg install -y obhttpd
# vi /usr/local/etc/obhttpd.conf

Delete the contents and replace it with the following, then save:

chroot "/usr/local/www"
logdir "/var/log/obhttpd"
 
server "pixie.example.org" {
        listen on 10.11.12.1 port 80
        root "/pxe"
        log style combined
}

Super simple, eh? That’s part of the beauty of obhttpd: OpenBSD follows the “sane defaults” paradigm, so you only have to configure what is specific to your task plus the places where you want to deviate from the defaults. Surprisingly, this minimal configuration does the job – and there’s really not much I’d change for a production setup if this is the only site on the server.

It’s always a good idea to check if the configuration is valid, so let’s do that:

# obhttpd -nf /usr/local/etc/obhttpd.conf
configuration OK

If you ever need to debug something, you can start the daemon in foreground and more verbosely by running obhttpd -dvv. Right now the server would not start because the configured log directory does not exist. So this would be a chance to give debugging a try.
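If you want to try that debugging exercise while the log directory is still missing, run the daemon in the foreground (sketch; the exact error wording may differ between versions):

```shell
# Run obhttpd in the foreground with extra verbosity; it should refuse
# to start while /var/log/obhttpd does not exist yet
obhttpd -dvv
```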

Let’s create the missing directory and then enable and start the service:

# mkdir /var/log/obhttpd
# sysrc obhttpd_enable="YES"
# service obhttpd start

As always I prefer to take a quick look if the daemon did bind the way I wanted it to:

# sockstat -4l | grep httpd
www      obhttpd    1933  7  tcp4   10.11.12.1:80         *:*
www      obhttpd    1932  7  tcp4   10.11.12.1:80         *:*
www      obhttpd    1931  7  tcp4   10.11.12.1:80         *:*

Looks good.

File Server option 2: Nginx

Next thing is installing Nginx and providing a working configuration:

# pkg install -y nginx
# vi /usr/local/etc/nginx/nginx.conf

Erase the example content and paste in the following:

user  www;
error_log  /var/log/nginx/error.log;
worker_processes  auto;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    keepalive_timeout  65;

    server {
        listen       80;
        location / {
            root   /usr/local/www/pxe;
        }
    }
}

This is by no means a perfect configuration but only an example. If you want to deploy Nginx in production, you’ll have to further tune it towards what you want to achieve. But now let’s enable and start the daemon:

# sysrc nginx_enable="YES"
# service nginx start

Time for the usual quick check:

# sockstat -4l | grep nginx
www      nginx      1733  6  tcp4   *:80                  *:*
www      nginx      1732  6  tcp4   *:80                  *:*
root     nginx      1731  6  tcp4   *:80                  *:*

Nginx is running and listening on port 80 as it should be.

File Server option 3: FTPd

FTP for you, eh? Here we go. FreeBSD comes with an ftp group but no matching user by default. Let’s create one:

# pw useradd -n ftp -u 14 -c "FTP user" -d /var/ftp -g ftp -s /usr/sbin/nologin

It’s a convention that public data offered via anonymous FTP is placed in a “pub” directory. We’re going to honor that tradition and create the directory now:

# mkdir -p /var/ftp/pub
# chown ftp:ftp /var/ftp/pub

If you intend to use logging, create an empty initial log file:

# touch /var/log/ftpd

Now we need to enable the FTP service for inetd (the “l” flag enables a transfer log, “r” is for operation in read-only mode, “A” allows for anonymous access and “S” is for enabling the download log):

# echo "ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l -r -A -S" >> /etc/inetd.conf

As we intend to run a service that allows local users to log in via FTP, we need to consider the security implications of this. In my case I have created the “kraileth” user and it has the power to become root via doas. While OpenSSH is configured to only accept key-based logins, FTP is not. The user also has a password set – which means that an attacker who suspects that the user might exist can try brute-forcing my password.

If you’re the type of person who re-uses passwords for multiple things, take this scenario into consideration. Sure, this is an internal server and all. But I recommend getting into a “security first” mindset and blocking the user from FTP access anyway. To do so, we just need to add it to the /etc/ftpusers file:

# echo "kraileth" >> /etc/ftpusers

Now let’s restart the inetd service (as it’s already running for TFTP) and check it:

# service inetd restart
# sockstat -4l | grep 21
root     inetd      1464  7  tcp4   *:21                  *:*

Ready and serving!

Optional: Checking File Server service

Time for a final test. If you’re using a webserver, do this:

# echo "TestTest" > /usr/local/www/pxe/test
# fetch http://10.11.12.1/test
test                                             9  B 8450  Bps    00s
# cat test
TestTest
# rm test /usr/local/www/pxe/test

If you’re using FTP instead:

# echo "TestTest" > /var/ftp/pub/test
# fetch ftp://10.11.12.1/pub/test
test                                                     9  B 9792  Bps    00s
# cat test
TestTest
# rm test /var/ftp/pub/test

Everything’s fine here, so we can move on.

What’s next?

In part 3, we’re finally going to add data and configuration to boot multiple operating systems via PXE.

Multi-OS PXE-booting from FreeBSD 12: Introduction (pt. 1)

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/multi-os_pxe-booting_from_fbsd_pt1.gmi

This is an introductory article; if you’re familiar with PXE you will want to skip the excursion but may be interested in the “Why”. The article ends with the post-installation setup of my test machine, turning it into a simple router so that the next article can start with the actual PXE-related setup.

Also see part 2 here and part 3 here as well as part 4 here.

Situation

These days I decided to reactivate an old laptop of mine. It has a processor of an older generation (Ivy Bridge), but it’s an i7 – and it has 24 GB of RAM, making it quite a nice machine for certain things. The problem with it: USB does not work (I think the controller was fried)!

So how do I get a current OS installed on there, as I cannot use the CD emulator I’d normally use? Sure, I could always burn a CD, but I don’t feel like it (unnecessary waste). I could open it up, remove the drive, place it in another machine, install the OS and then put it back. Nah, not much fun either. How about doing a network install, then?

When I thought of that I realized that I had some old notes from about a year ago somewhere on my workstation. I originally meant to blog about that topic but never actually wrote the post. So here’s how to do what the post’s title says – updated to FreeBSD 12.2 and Ubuntu 20.04!

While I have a clear favorite OS and very much appreciate what FreeBSD has to offer for me as the administrator of a fleet of servers, there really is no reason to turn a blind eye to other Unix-like systems. For one thing it totally makes sense to always pick the best candidate for a job (which might not in all cases be one single OS). That, and the fact that you cannot judge which one is best suited if you don’t have at least some familiarity with the various options. In addition, my employer runs a heterogeneous environment anyway – and while I’m mostly occupied with the BSD side of things, there are simply way, way too many Linux machines around to even think about avoiding them altogether.

Why PXE-boot Linux from FreeBSD?

After an update, the old means of installing Linux servers as we do it at work had stopped working reliably. I looked at it briefly but soon found that too many things weren’t set up like you’d do it today. Therefore I decided that it might make more sense to start fresh. And while at it – wouldn’t it make sense to try and combine the Linux and FreeBSD PXE servers on one machine? That would mean one less server to run after all.

The next installation due was for a customer who requested an Ubuntu server. As a matter of fact, we are transitioning to using that distribution more often for our internal infrastructure, too (a management decision, certainly not my choice!). For that reason, one weekend I ended up doing something I hadn’t done in a while: installing a fresh Ubuntu 16.04 system on a test machine. After doing this I could write a whole post about how bad the installer actually is (static IP addressing, anyone??), but I don’t want this to turn into a rant.

So let’s just mention one single complaint: Installing takes quite a long time (I never understood why Debian and derivatives install stuff that they remove again later during the “cleaning up” phase…). Before the installation was even finished, I admittedly already had mixed feelings about this new system. But then this was what happened on the first boot:

[ TIME ] Timed out waiting for device dev-disk-by\x2duuid-0ef05387\x2d50d9\x2d4cac\x2db96\x2d9808331328af.device.
[DEPEND] Dependency failed for /dev/disk/by-uuid/0ef05387-50d9-4cac-b796-9808331328af.
[DEPEND] Dependency failed for Swap.
[  *** ] A start job is running for udev Coldplug all Devices (3min 34s / no limit)

You’ve got to be kidding! A freshly installed system going sideways like that? Sorry, Ubuntu, that’s not the kind of stuff I like wasting my time on! I don’t even care whether it’s systemd’s fault or something else. The boss “preferably” wanted an Ubuntu system – I gave it a try and it failed me. Ah, screw it, let’s deploy something I know actually works (same hardware, BTW, before anybody wants to blame that for a problem that’s definitely Ubuntu’s fault)!

I had a freshly installed FreeBSD with static IP configuration (which you probably want for a DHCP / PXE boot server anyway) in a fraction of the time the Ubuntu installation took. And it goes without saying: the system starts as one would expect.

Excursion: PXE – An Overview

While there have been earlier methods for making a machine boot over the network, PXE (Preboot eXecution Environment) is what you want to rely on if you need to do that today. It is a widely implemented specification (in fact just about everywhere), and chances are very low that you will run into problems with it. Have a look around in your computer’s EFI or BIOS; PXE booting is often disabled (and if you only want to use it once to install an OS on the machine, I recommend disabling it again afterwards). How to make a machine boot over the net depends on its EFI / BIOS. Figure out how to do it for your metal and give it a try.

In your average home environment not too much should happen: the PXE environment will probe the network for the information it needs, receive none and eventually give up. But what information does it need and which components are involved?

Well, it needs to load an operating system “from the net” – which means from another machine. To be able to talk to other hosts on the network, it needs to become part of said net. For this it needs to know the network parameters and requires a usable, unique IP address for itself. It can get all of that from a DHCP (Dynamic Host Configuration Protocol) server. If configured correctly, the daemon will accept DHCP requests and provide the client with a suggested IP as well as information about the network (like the netmask, default router, nameservers on the net, etc). It can also tell the client from where it can load the NBP (Network Bootstrap Program).

The latter is a piece of software, usually a special bootloader, which is downloaded via TFTP (Trivial File Transfer Protocol). Think of it as very similar to FTP but much, much more primitive. The NBP is executed once the transfer has completed. It will then download its configuration file and act accordingly, most likely fetching an operating system kernel and additional files, then booting the kernel.

TFTP is what the PXE bootcode speaks and uses. But due to its very simplistic nature, TFTP is not well fit for larger file transfers. Therefore such files are usually loaded via other means once the kernel has booted and more options are available. FreeBSD likes to use NFS, while on Linux HTTP is often used to receive a filesystem or installation medium image.

So the following services are involved when setting up a PXE boot server:

  • DHCP server
  • TFTP server
  • Webserver / NFS service stack

Preparing a FreeBSD router

Today I’m redoing my previous work in my home lab with FreeBSD 12.2. The machine that I’m using is pretty old (32-bit only Atom CPU!) but it has 6 Ethernet ports and still works well for network tinkering. I got it for free some years ago, so there’s really no reason to complain.

When it comes to the installation, in this case I’m going with MBR + UFS, which I normally wouldn’t do for an amd64 machine. Otherwise it’s a standard 12.2 installation with my “kraileth” user (a member of the wheel group) added during installation.

First thing to do is copying my public SSH key onto the server and then SSHing into the machine:

% ssh-copy-id -i ~/.ssh/id_ed25519 kraileth@hel.local
% ssh kraileth@hel.local

Now I switch to the root user, disallow SSH password-based login and restart the SSH daemon:

% su -l
# sed -i "" 's/#ChallengeResponseAuthentication yes/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config
# service sshd restart

I don’t need sendmail on the machine, so off it goes:

# sysrc sendmail_enable="NONE"

Next is bootstrapping the package manager, installing the unbound DNS server and – for convenience – doas (a simpler sudo replacement). Writing a one-line config for doas is fine for my use case:

# env ASSUME_ALWAYS_YES=TRUE pkg bootstrap
# pkg install -y doas unbound
# echo "permit :wheel" > /usr/local/etc/doas.conf
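With that one-liner in place, any member of the wheel group can run commands as root. A quick (hypothetical) example:

```shell
# Run a single command with root privileges via doas
doas service sshd status
```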

Then unbound needs to be configured. To do so, edit the config file like this:

# vi /usr/local/etc/unbound/unbound.conf

Delete all the content, put the following in there (adjust to your network, of course) and save:

server:
        verbosity: 1
        interface: 10.11.12.1
        access-control: 10.11.12.0/24 allow

Keep in mind that this is not a production configuration! You will have to do some research on the program if you want to put it into production. Do at the very least read the more than a thousand lines of comments that you just deleted (you can find them in /usr/local/etc/unbound/unbound.conf.sample). The above config makes the daemon bind to the IP configured via the “interface” statement and allows DNS recursion for any host in the subnet defined via “access-control”. Let’s enable the daemon now (so it will start after rebooting the machine):
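Before enabling it, you can let unbound validate the file – and, once the daemon is up, test resolution from the server itself. A sketch (drill ships in FreeBSD’s base system; the queried name is just an example):

```shell
# Check the configuration for syntax errors
unbound-checkconf /usr/local/etc/unbound/unbound.conf

# After the service has been started: query the new resolver directly
drill @10.11.12.1 www.freebsd.org
```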

# sysrc unbound_enable="YES"

During installation I chose the first NIC (em0) to be configured via DHCP as offered by my home router. This machine will act as a DHCP server for another network, so I assign a static address to the respective interface (em5) that I have connected to a spare switch I had around. It will also act as a gateway between the two networks it is part of:

# sysrc ifconfig_em5="inet 10.11.12.1 netmask 255.255.255.0"
# sysrc gateway_enable="YES"
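Note that sysrc only makes the setting persistent across reboots; if you want packet forwarding active right away, you can flip the corresponding sysctl by hand:

```shell
# Enable IPv4 forwarding immediately (gateway_enable covers future boots)
sysctl net.inet.ip.forwarding=1

# Verify: 1 means the kernel forwards packets
sysctl -n net.inet.ip.forwarding
```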

To actually provide Internet connectivity to hosts in the 10.11.12.0/24 subnet, we have to enable NAT (Network Address Translation). Otherwise any host that is contacted via an IP in that range will have no idea how to answer, even though our gateway forwarded the message to it. NATing is done via the firewall, and FreeBSD offers three of them: Pf is my absolute favorite, but ipfw is OK, too. I wouldn’t recommend ipf – it’s old, and there’s really no reason to use it unless you’re already doing so and don’t want to switch.

First we’ll create a configuration file:

# vi /etc/pf.conf

Now put the following in there (again: Adjust to your network) and save:

ext_if="em0"
int_if="em5"
localnet = $int_if:network

scrub in all
nat on $ext_if from $localnet to any -> ($ext_if)

This defines three commonly used macros: one for the external NIC, one for the internal NIC and one for the local network. It then turns scrubbing on (you almost always want this; I suggest you look up what it does). The final line does the magic of the actual NAT.
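As with the other daemons, it’s a good idea to let pf parse the ruleset before actually loading it (-n parses without loading, -f names the rules file):

```shell
# Syntax-check the ruleset without applying it
pfctl -nf /etc/pf.conf

# Optionally show how the rules and macros expand (verbose parse-only mode)
pfctl -nvf /etc/pf.conf
```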

You will want to define a more complex firewall configuration if you plan to use this gateway in production. But this is enough for my (demonstration) purposes. I suggest that you follow along and, once you’ve made sure that PXE booting works, tighten up security – then test if everything still works. Firewalling is a topic of its own, and in fact not a small one. If you’re new to it, it makes perfect sense to do some reading. There’s lots of free material on the net, as well as the great “Book of PF” by Peter Hansteen if you prefer a book.

So let’s enable Pf on startup:

# sysrc pf_enable="YES"
# sysrc pf_rules="/etc/pf.conf"

Reminder: If you want to do things right, you will probably want to enable pflog, too, for example. This post is meant to point you in the right direction and to get stuff to work quickly. Familiarizing yourself with the various components used is your own homework.

It’s always a good idea to update the system to the latest patchlevel:

# freebsd-update fetch install
# freebsd-version -kr
12.2-RELEASE-p1
12.2-RELEASE

Ok, looks like kernel-related updates were installed, so it’s time to reboot:

# shutdown -r now

That’s it for the preparation work. Once the server comes back up again it should function as a router. If you configure a host with a static address in the 10.11.12.0/24 network, it should be able to have DNS queries answered as well as reach the Internet.
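To verify the router from a client in the new subnet, a quick manual test could look like this (sketch; the interface name re0 and the hosts queried are assumptions):

```shell
# On a FreeBSD test client in 10.11.12.0/24:
ifconfig re0 inet 10.11.12.50 netmask 255.255.255.0
route add default 10.11.12.1
echo "nameserver 10.11.12.1" > /etc/resolv.conf

drill www.freebsd.org      # DNS answered by unbound on the router
ping -c 3 1.1.1.1          # Internet reachability through the NAT gateway
```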

What’s next?

In part 2 we’re going to complete the setup of the PXE boot server so that a client can boot into Ubuntu Linux. The rest will be in part 3.

Dystopian Open Source

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/dystopian_open_source.gmi

Happy New Year dear reader! The other day I watched a video on YouTube that had only 6 views since last October. It is about a very important topic, though, and I wish it had a larger impact, getting more people alarmed and thinking about the current trends in Open Source. This is not an “OMG we’re all doomed!!1” post, but I want to talk about what I feel are grave dangers that deserve some really serious consideration.

“Pay to Play”

For the readers who would like to watch the video (about 7 minutes), it’s here. Some background info: it’s by Lucas Holt, the lead developer of MidnightBSD, a project that began as a fork of FreeBSD 6.1 and aimed for better usability on the desktop. There were a couple of people who contributed to the project over time, but it never really took off. Therefore it has continued as a project almost entirely done by one man.

It’s not hard to imagine just how much work it is to keep an entire operating system going; much larger teams have failed to deliver something useful, after all. So while it’s no wonder that MidnightBSD is not in a state where anybody would recommend it for everyday use, I cannot deny that I admire all the work that has been done.

Holt has merged changes back from FreeBSD several times, eventually updating the system to basically 11.4 plus the MidnightBSD additions and changes. He maintains almost 5,000 ports for his platform (of course not all of them are in perfect shape). And he has kept the project going since about 2006 – despite all the taunting and acid-tongued comments about “the most useless OS ever” and the like. Even though I never found a somewhat serious use for MidnightBSD (and I tried a couple of times!), considering all of that, he has earned my deepest respect.

To sum up the video: he talks about a trend in Open Source where some very important projects have started to raise the bar for contributing to them. Sometimes you’re required to employ two full-time (!) developers to even be considered worth hearing. Others require you to provide them with e.g. a paid Amazon EC2 instance to run their CI on. And even where that’s not the case, some decision makers will just turn you down if you dare to hand in patches for a platform that isn’t a huge player itself.

Quite a few people do not even try to hide that they only ever care about Linux, and Holt has made the observation that some of the worst-behaving, most arrogant of them are – Redhat employees. There are people on various developer teams who choose to deliberately ruin things for smaller projects, which is certainly not good and shouldn’t be what Open Source is about.

What does Open Source mean to us?

At a bare minimum, Open Source only means that the source for some application, collection of software or even entire operating system is available to look at. I could write some program, put the code under an extremely restrictive license and still call this thing “Open Source” as long as I make the code available by some means. One could argue that in the truest sense of the two words that make up the term, that would be a valid way to do things. But that’s not what Open Source is or ever was about!

There are various licenses out there that are closely related to Open Source. Taking a closer look at them is one great way to find the very essence of what Open Source actually is. There are two important families of such licenses: The so-called Copyleft licenses and the permissive licenses. One could say that downright religious wars have been waged about which side holds the one real truth…

People who have been reading my blog for a while know that I do have a preference and made quite clear which camp I belong to, even though I reject the insane hostility that some zealots preach. But while the long-standing… err… let’s say: controversy, is an important part of Open Source culture, the details are less relevant to our topic here. They basically disagree on the question of what requirements to put in the license. Should there be any at all? Is it sufficient to ask for giving credit to the original authors? Or should users be forced to keep the source open for example?

Both license families however do not dispute the fundamental rights given to users: They want you to be able to study the code, to build it yourself, to make changes and to put the resulting programs to good use. While it’s usually not explicit, the very idea behind all of Open Source is to allow for collaboration.

Forkability of Open Source projects

Over the years we’ve seen a lot of uproar in the community when the leaders of some project made decisions that go against these core values of Open Source. While some even committed the ultimate sin of closing down formerly open code, most of the time it’s been slightly less harsh. Still, we have seen XFree86 basically fall into oblivion after Xorg was forked from it. The reason was a license change: one individual felt that it was time for a little bit of extra fame – and he ended up blowing his work to pieces. Other examples are pfSense and OPNsense, Owncloud and Nextcloud, or Bacula and Bareos. When greed strikes, some previously sane people begin to think that it’s a good idea to implement restrictions, rip off the community and go “premium”.

One of the great virtues of Open Source is that a continuation of the software in the old way of the project is possible. With OPNsense we still have a great, permissively licensed firewall OS based on FreeBSD and Pf despite NetGate’s efforts to mess with pfSense. Bareos still has the features that Bacula cut out (!) of the Open Source version and moved to the commercial one. And so on. The very nature of Open Source also allows for people to pick up and continue some software when the original project shuts down for whatever reason.

There are a lot of benefits to Open Source over Closed Source models. But is it really immune to each and every attack you can aim at it?

Three dangers to Open Source!

There is always the pretty obvious danger of closing down source code if the license does not prohibit that – though I claim that this is in fact mostly a non-issue. There are a lot of voices out there going hysteric about it. But despite what they try to make things look like, it is impossible to close down source code that is under an Open Source license! A project can stop releasing the source for newer versions, effectively ceasing to distribute current code. But then the Open Source community can always stop using that stuff and continue on with a fork that stays open.

But we haven’t talked about three other inherent dangers: narrow-mindedness, non-portability and leadership driven by monetary interest.

Narrow-mindedness

One could say that today Open Source is a victim of its own overwhelming success. A lot of companies and individual developers jumped on the bandwagon because it’s very much beneficial for them. “Let’s put the source on GitHub and people might report issues or even open pull-requests, actively improving our code – all for free!” While this is a pretty smart thing to do from a commercial point of view, in such cases the code was not opened up because somebody really believes in the ideas of Open Source. It was merely done to benefit from some of the most obvious advantages.

Depending on how far-sighted such an actor is, he might understand the indirect advantages to the project when keeping things as open as possible – or maybe not. For example a developer might decide that he’ll only ever use Ubuntu. Somebody reports a problem with Arch Linux: Close (“not supported!”). Another person opens a PR adding NetBSD support: Close (“Get lost, freak!”).

Such behavior is about as stupid – and, when it comes to the values, as anti-Open-Source – as it gets. Witnessing something like this makes people who actually care about Open Source cringe. How can anybody be too blind to see that they are hurting themselves in the long run? But it happens time and time again. By turning down the Arch guy, the project has probably lost a future contributor – and maybe the issue reported was due to incompatibilities with the newer GCC in Arch that will eventually land in Ubuntu, too, and could have been fixed ahead of time…

Open Source is about being open-minded. Just publishing the source and fishing for free contributions while living the ways of a closed-source spirit is in fact a real threat to Open Source. I wish more people would just say no to projects that regularly say “no” to others (without a good reason). It’s perfectly fine that some project cannot guarantee their software to even compile on illumos all the time. But the illumos people will take care of that and probably submit patches if needed. But refusing to even talk about possible support for that platform is very bad style and does not fit well with the ideals of Open Source.

If I witness an arrogant developer insulting, say, a Haiku person, I’ll go looking for more welcoming alternatives (and am perfectly willing to accept something that is technically less ideal for now). Not because I’ve ever used Haiku or plan to do so, but simply because I believe in Open Source and in fact have a heart for the cool smaller projects that are doing interesting things aside from the often somewhat boring mainstream.

Non-portability

Somewhat related to the point above is (deliberate) non-portability. A great example of this is Systemd. Yes, there have been many, many hateful comments about it, and there are people who have stated that they really hope the main developer will keep his promise never to make it portable, “so that *BSD is never going to be infected”.

But whatever your stance on this particular case is – there is an important fact: as soon as any such non-portable Open Source project gains a certain popularity, it will begin to poison other projects, too. Some developers will add dependencies on such non-portable software and thus make their own software unusable on other platforms, even though that very software alone would work perfectly fine! Sometimes this happens because developers make the false assumption that “everybody uses Systemd today, anyway”, sometimes because they use it themselves and don’t realize the implications of making it a mandatory requirement.

If this happens to a project that basically has three users world-wide, it’s a pity but does not have a major impact. If it’s software that is a critical component in various downstream projects, however, it can potentially affect millions of users. The right thing here is not to break solidarity with other platforms. Even if the primary platform for your project is Linux, never ever go as far as adding a hard dependency on Systemd or other such software! If you can, it’s much better to make support optional so that people who want to use it benefit from existing support. But don’t ruin the day for everybody else!

And think again about the NetBSD pull request example mentioned above: Assume that the developer had shown less hostility and accepted the PR (with no promises to ever test whether things actually work properly or at all). The software would have landed in Pkgsrc and somebody else would soon have hit a problem due to a corner case on NetBSD/SPARC64. A closer inspection of that would have revealed a serious bug that had remained undetected and unfixed. After a new feature was added not much later, the bug became exploitable. Eventually the project gained a “nice” new CVE of severity 9.2 – which could well have been avoided in an alternate reality where the project leader had had a more friendly and open-minded personality…

Taking portability very seriously is exceptionally hard work. But remember: Nobody is asking you to support all the hardware you probably don’t even have or all the operating systems that you don’t know your way around on. But just be open to enthusiasts who care for such platforms and let them at least contribute.

Leadership with commercial interests

This one is a no-brainer – but unfortunately one that we can see happening more and more often. Over the last few years people have started to complain about e.g. Linux being “hijacked by corporations”. And there is some truth to it: There is a lot of paid work being done on various Open Source projects. Some of the companies that pay developers do so because they have an interest in improving Open Source software they use. A couple even fund such projects because they feel that giving something back after receiving for free is the right thing to do. But then there’s the other type, too: Corporations that have their very own agenda and leverage the fact that decision makers on some projects are their employees to influence development.

Be it the person responsible for a certain kernel subsystem turning down good patches that would benefit a lot of people for seemingly no good reason – but in fact because they were handed in by a competitor and his employer is secretly working on something similar, with an interest in getting its own version in instead. Be it a developer whose employer thinks he is not paid to do anything for platforms that are not part of its own commercial plan and who is expected to simply turn such contributions down to “save time” for “important work”. Things like that actually happen and have been happening for a while now.

Limiting the influence of commercial companies is a topic on its own. IMO more projects should think about governance models much more deeply and consider the possible impacts of what can happen if a malicious actor buys in.

Towards a more far-sighted, “vrij” Open Source?

As noted above, I feel that some actors in Open Source are focused too much on their own use case and are completely ignorant of what other people might be interested in. But as this post’s topic was a very negative one, I’d like to end it more positively. Despite the relatively rare but very unfortunate misbehavior of some representatives of important projects, the overwhelming majority of people in Open Source are happy to allow contributions from more “exotic” projects.

But what’s that funny looking word doing there in the heading? Let me explain. We already have FOSS, an acronym for “Free and Open Source Software”. There’s a group of people arguing that we should rather focus on what they call FLOSS, “Free and Libre Open Source Software”. The “libre” in there is meant to put focus on some copyleft ideas of freedom – “free” was already taken and has the problem that the English word doesn’t distinguish between free “as in freedom” and free of charge. I feel that a term that emphasizes the community aspect of Open Source, the invitation to just about anybody to collaborate and Open Source solidarity with systems other than what I use, could be helpful. How about VOSS? I think it’s better than fitting in another letter there.

Vrij is the Dutch word for free. Why Dutch? Partly to honor the work that has been done at the Vrije Universiteit of Amsterdam (for readers who noticed the additional “e”: That’s due to inflection). Just think of the nowadays often overlooked work of Professor Tanenbaum, e.g. with Minix (which inspired Linux among other things). The other reason is that it’s relatively easy to pronounce for people who speak English. It’s not completely identical but relatively close to the English “fray”. And if you’re looking for the noun, there’s both vrijheid and vrijdom. I think the latter is less common, but again: It’s much closer to the English “freedom” and thus probably much more practical.

So… I really care for vrij(e) Open Source Software! Do you?

CentOS killed by IBM – a chance to go new ways?

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2020/centos_killed.gmi

On December 8, 2020, a Red Hat employee on the CentOS Governing Board announced that CentOS would continue only as CentOS Stream. For classical CentOS 8 the curtain will fall as early as the end of 2021 – instead of 2029 as communicated before! But thanks to “Stream” the brand will not simply go away; it will remain and add to the confusion. CentOS Stream is a rolling-release distribution occupying a middle ground between the cutting-edge development happening in Fedora and the stability of RHEL.

Many users feel betrayed by this action and companies who have deployed CentOS 8 in production trusting in the 10 years of support are facing hard times. Even worse for those companies who have a product based on CentOS (which is a pretty popular choice e.g. for Business Telephone Systems and many other things) or who offer a product targeted only at RHEL or CentOS. For those the new situation is nothing but a nightmare come true.

What do we do now?

IBM-controlled Red Hat obviously hopes that many users will now go and buy RHEL. While this is certainly technically an option I would not suggest going down that road. Throwing money at a company that just killed a community-driven distribution 8 years early? Sorry Red Hat but nope. And Oracle is never an option, either!

The reaction from the community has been overwhelmingly negative. Many people announced that they will migrate to Debian or Ubuntu LTS, and some will consider SLES. A few said that they will now consider FreeBSD – being a happy FreeBSD user and advocate, this is of course something I like to read. And I’d like to help people who want to go in that direction. A couple of weeks ago I announced that I’d write a free ebook called The Penguin’s Guide to Daemonland: An introduction to FreeBSD for Linux users. It is meant as a resource for somewhat experienced Linux users who are new to FreeBSD (get the very early draft here – if anybody is interested in this, feedback is welcome).

But this post is not about BSD, because there simply are cases where you want (or need) a Linux system. And when it comes to stability, CentOS was simply a very good choice that’s very hard to replace with something else. Fortunately Rocky Linux was announced – an effort by the original founder of CentOS who wants to basically repeat what he once did. I wish the project good luck. However I’d also like to take the chance as an admin and hobby distro tinkerer to discuss what CentOS actually stood for and if we could even accomplish something better! Heresy? Not so much. There’s always room for improvement. Let’s at least talk about it.

Enterprise software

The name CentOS (which stood for Community Enterprise Operating System) basically states that it’s possible to provide an enterprise OS that is community-built. We all have an idea of what a community is, and while a lot could be written about how communities work, I’d rather focus on the other term for now. What exactly is enterprise? It helps to distinguish between the various grades of software. Here’s my take on several levels of how software can be graded:

  • Hobbyist
  • Semi-professional
  • Professional
  • Enterprise
  • Mission critical and above

Hobbyist software is something that is written by one person or a couple of people basically for fun. It may or may not work, collaboration may or may not be desired and it can vanish from the net any day when the mood strikes the decision maker. While this sounds pretty bad that is not necessarily the case. Using a nifty new window manager on your desktop is probably ok. If the project is cancelled tomorrow it just means that you won’t get any more bug fixes or new features and you can easily return to your previous WM. But you certainly don’t want to use such software in your product(s).

Semi-professional software is developed by one person or a team that is rather serious about the project and aiming for professionalism (but commonly falling short due to limitations of time and resources). Usually the software will at least have releases that follow a versioning scheme, and source tarballs will not be re-rolled (re-using the same version number). There will be at least some tests and documentation. Patches as well as feedback and issue reports are almost certainly welcome. The software will be properly licensed and come with things like change logs and release notes. If the project ends, you won’t find out because the repo on GitHub was deleted, but because at least an announcement was made.

Professionally developed software does what the former paragraph talked about and more. It has a more complex structure with multiple people having commit rights, so that a single person being on holiday when a severe bug is found doesn’t mean nobody can fix things for days. The software has good test coverage and uses CI. There are several branches, with at least the previous one still receiving bug fixes while the main development takes place in a newer branch. Useful documentation is available and there is a table of dates showing when support ends for which version of the software. There’s some form of support available. Such software is most often developed or at least sponsored by companies.

Enterprise products take the whole thing to another level. Enterprise software means that high-quality support options (most likely in various languages) are available. It means that the software is tested extensively and has a long life cycle (long-term support, LTS). There is probably a list of hardware on which the software is guaranteed to work well. And it’s usually not exactly cheap.

“Mission critical” software has very special requirements. For example, it may have to be written in SPARK (a very strict Ada dialect), which means that formal verification is possible. Most of us don’t work in the medical or aerospace industries where lives and very expensive equipment may be at stake, and fortunately we don’t have to give such hard guarantees. But without going deeper into the topic I wanted to at least mention that there’s something beyond enterprise.

Community enterprise?

Having a community-run project that falls into the professional category is quite an accomplishment even with some corporate backing. Enterprise-grade projects seem to be very much out of reach for what a community is able to do. Under the right circumstances, however, it is doable. CentOS was such a project: They didn’t need to pay highly skilled professionals to patch the kernel or to backport fixes into applications and such. Thanks to the GPL, Red Hat is forced to keep the source of the tools they ship open. The project could build upon the paid work of others and create a community-built distribution.

Whenever this is not the case, the only option is probably to start as professionally as possible and create something that becomes so useful to companies that they decide to fund the effort. Then you go enterprise. Other ways a project can go are aiming to become “freemium” with a free core and a paid premium product, or asking for donations from the community. Neither way is easy, and even the most thoughtful planning cannot guarantee success, as quite a bit of luck is required, too. Still, good planning is essential as it can certainly shift the odds in favor of success.

Another interesting problem is: How to create a community of both skilled and dedicated supporters of your project? Or really: Any community at all? How do you let people who might be interested know about your project in the first place? It requires a passionate person to start something, a person that can convince others that not only the idea is worthwhile but also that the goal is in fact achievable and thus worth the time and effort that needs to be invested.

A stable Linux OS base

As mentioned above, I’m a FreeBSD user. One of the things that I’ve really come to appreciate on *BSD and miss on Linux is the concept of a base system (Gentoo being FreeBSD-inspired kind of has “system” and “world”, though). It’s the components of the actual operating system (kernel and basic userland) developed together in one repository and shaped in a way to be a perfect match. Better yet, it allows for a clean separation of the OS and software from third party packages whereas on Linux the OS components are simply packages, too. On FreeBSD third party software has its own location in the filesystem: The operating system configuration is in /etc, the binaries are in /{s}bin and /usr/{s}bin, the libraries in /lib and /usr/lib, but anything installed from a package lives in /usr/local. There’s /usr/local/etc, /usr/local/bin, /usr/local/lib and so on.
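This separation is purely a path convention, so it can be illustrated with a few lines of portable shell. The following sketch encodes the rule described above (the classify helper is invented for illustration; it is not a FreeBSD tool):

```shell
#!/bin/sh
# Classify a path as FreeBSD base system or third-party package territory
# purely by the location convention described above. This helper is an
# invented illustration, not part of FreeBSD.
classify() {
    case "$1" in
        /usr/local/*) echo "package" ;;
        /etc/*|/bin/*|/sbin/*|/lib/*|/usr/bin/*|/usr/sbin/*|/usr/lib/*) echo "base" ;;
        *) echo "other" ;;
    esac
}

classify /usr/bin/sed          # -> base
classify /usr/local/bin/bash   # -> package
```

The important point: anything below /usr/local belongs to third-party packages, so an OS upgrade can never silently overwrite package files (and vice versa).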

Coming from a Linux background this is a bit strange initially but soon feels completely natural. And this separation brings benefits with it, like a much, much more solid upgrade process! I maintain servers that were set up as FreeBSD 5.x in the mid-2000s and still happily serve their purpose today with version 12.x. The OS (and hardware) has been upgraded multiple times, but the systems were never reinstalled. This is an important aspect of an enterprise OS if you ask me. I wonder if we could achieve it on Linux, too?

Doing things the BSD way would mean to download the source packages for the kernel, glibc and all the other libraries and tools that would form a rather minimal OS, extract them and put the code in a common repository. Then a Makefile would be written and added that can build all the OS components in order with a single command line (e.g. “make buildworld buildkernel”). Things like ALFS (Automated Linux From Scratch) already exist in the Linux world, so building a complete Linux system isn’t something completely new or revolutionary.
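As a sketch, the top-level driver for such a tree could start out as simple as a loop over the components in dependency order. All component names here are invented, and the actual build step is only echoed so the control flow is visible:

```shell
#!/bin/sh
# Hypothetical single-command build driver for the imagined "BaSyL" tree.
# Each component would live in src/<name>; the real build step is replaced
# by an echo for illustration.
set -e

COMPONENTS="kernel glibc binutils coreutils util-linux"
built=""

for comp in $COMPONENTS; do
    echo "==> building $comp"
    # Real version: (cd "src/$comp" && make && make DESTDIR=/tmp/stage install)
    built="$built $comp"
done

echo "buildworld finished:$built"
```

A real tree would of course need per-component Makefiles and a proper dependency description, but the single-entry-point idea is exactly what FreeBSD’s “make buildworld” provides.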

As vulnerabilities are found, fixes are committed to the source repo. It can then be cloned and re-built. Ideally we’d have our own tool “os-update” which would do differential updates to bring the OS to the newest patch level. My suggestion would be to combine the components of a RHEL release – e.g. kernel 4.18, glibc 2.28, etc. following RHEL 8. So more or less like CentOS, but more minimal and focused on the base system. That system would NOT include any package manager! This is because it is intended as a building block for a complete distribution, which adds a package manager (be it rpm/dnf, dpkg/apt, pacman or something else) on top and consumes a stable OS with a known set of components and versions. That allows the distributors to focus on a good selection of packages (and gives them the freedom to select the means of package management).
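A sketch of the decision logic such an “os-update” tool might use, assuming a year-plus-patchlevel version scheme like 2019-p3. Everything here, including the file locations mentioned in the comments, is invented:

```shell
#!/bin/sh
# Hypothetical core of an "os-update" helper: compare the installed patch
# level with the newest one on the update server and decide whether a
# differential update is needed.

installed="2019-p3"    # a real tool might read this from e.g. /etc/basyl-release
available="2019-p5"    # ...and fetch this from the update mirror

inst_p=${installed#*-p}     # strip everything up to "-p" -> "3"
avail_p=${available#*-p}    # -> "5"

if [ "$avail_p" -gt "$inst_p" ]; then
    action="update needed: $installed -> $available"
else
    action="up to date ($installed)"
fi
echo "$action"
```

The actual differential update (fetching and applying only the changed files) is the hard part; FreeBSD’s freebsd-update shows that this approach works in practice.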

Just to give it a working title, I’m going to call this BaSyL (Base System Linux). For the OS version I would prefer something like the year of the corresponding RHEL release, e.g. BaSyL 2019-p0 for RHEL 8. The patch level increases whenever security updates need to be applied.

Enterprise packages

The other part of creating an enterprise-like distribution is the packages. Let’s call it the SUP-R (Stable Unix-like Package Repo) for now. So you’ve installed BaSyL 2019 and need packages. There’s the SUP-R 2019 repo that you can use: It contains the software that RHEL ships. Ideally a team forms that will create and maintain a SUP-R 2024 repo after five years, allowing users to optionally switch to newer package versions. The special thing here would be that this SUP-R 2024 package set would be available for both BaSyL 2019 and BaSyL 2025 (provided that’s when the next version is released). That way, BaSyL 2019 users that already made the switch to SUP-R 2024 can stay on that package set when updating the OS and don’t have to do both at the same time! A package repo change requires consulting the update documentation for your programs, e.g. Apache HTTPd, the PostgreSQL database, etc. Most likely there are migration steps like changing configuration and such.

Maintaining LTS package sets is not an easy task, though. However, there are great tools available to us today which make this somewhat easier. Still, it would require either a really enthusiastic community or some corporate backing. Especially the task of selecting software versions that work well together is a pretty big thing. It would probably make sense to keep the package count low to start with (quality over quantity) and to think of test cases we can build to ensure common workloads can be simulated beforehand.

In addition to experienced admins for software evaluation, package maintainers and programmers for some patch backporting, people writing docs (both in general and for version migration) would also be needed. Yes, this thing is potentially huge – much, much more involved than doing BaSyL alone.

Conclusion

I’d love to see a continuation of a community-built enterprise Linux distro worth the title. But at the same time, getting to know BSD in addition to Linux made me think a little differently about certain things. De-coupling the OS and the packaging efforts could open up interesting possibilities. At the same time it could even help both projects succeed: Other operating systems might also like (optional) enterprise package sets. And since they’d be installed into a different prefix (e.g. /usr/supr) they would not even conflict with native packages in e.g. Debian or any other glibc-based distribution.
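To make the prefix idea concrete, here is a tiny sketch (package name, paths and contents all invented) of staging a hypothetical SUP-R package entirely below /usr/supr, so that none of its files can collide with the native package manager:

```shell
#!/bin/sh
# Stage a hypothetical SUP-R package below its own prefix. Because every
# file lands under /usr/supr, it cannot conflict with files owned by the
# distribution's native packages.
set -e
PREFIX=/usr/supr
STAGE=$(mktemp -d)

mkdir -p "$STAGE$PREFIX/bin" "$STAGE$PREFIX/etc" "$STAGE$PREFIX/lib"
printf '#!/bin/sh\necho hello\n' > "$STAGE$PREFIX/bin/hello"
chmod +x "$STAGE$PREFIX/bin/hello"

# The staged tree mirrors the final on-disk layout: everything below the prefix.
LAYOUT=$(cd "$STAGE" && find . -type d | sort)
echo "$LAYOUT"

rm -rf "$STAGE"
```

This is exactly the pattern FreeBSD uses with /usr/local: the dedicated prefix is what makes OS files and package files cleanly separable.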

If a portable means of packaging was chosen it would potentially also be interesting to the BSDs and Open-Solaris derivatives. And the more potential consumers the larger the group of people who could have the motivation to make something like this come true.

This article is meant to be giving food for thought. Interested in talking about going new ways? Please share your thoughts!

My first FreeBSD port: Castor

Summary

My first port has been committed to the FreeBSD port’s collection. It’s Castor, a graphical browser for text-based Internet protocols (currently: Gemini, Gopher, Finger). I write about the porting and about the actual application.

Article

[New to Gemini? Have a look at my Gemini FAQ.]

The actual post can be found here (if you are using a native Gemini client):

gemini://gemini.circumlunar.space/users/kraileth/neunix/2020/new_port_castor.gmi

If you want to use a webproxy to read it from your web browser, follow this Link.

Neunix: Eerie transitioning to Geminispace

Regarding my previous post some witty people may have figured that this was a joke: The last post before that one was the long delayed February one and I wrote that I simply couldn’t post the next one (as I cannot stand the new WP editor). That could make said previous post the April one. Maybe it was meant for April 1st?

No, it wasn’t. I’m serious about my experiment of continuing to write BSD and general IT material in Geminispace. The plan is to continue to announce new posts on this platform but to provide the actual content in the new place.

Making a new home

So what’s the status of this? Well, it’s ready! I’ve contacted one of the people behind Gemini and was given access to a server that would host my new Gemlog for now (thanks solderpunk!). A first version of my capsule (in Web terms this would be a “homepage”) is up and available. It currently looks like this:

Capsule of kraileth – my new home in Geminispace

I got the nice color scheme for my browser from Samsai and generally recommend his introduction article to people who are interested in Gemini.

It took me a day or so to get used to the reduced markup that is called gemtext. But now it’s working out well for the most part. There are some downsides that I’ll probably write about sometime in the future, but in general it works and its simplicity makes for quite a refreshing experience.
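For readers who have never seen it: gemtext knows only a handful of line types, each line stands on its own, and links get a line of their own. A short made-up sample:

```
# A heading
## A sub-heading
An ordinary text line; clients wrap it however they see fit.
=> gemini://example.org/post.gmi A link line: URL first, then a label
* an unordered list item
> a quoted line
```

There are also preformatted blocks (toggled by lines starting with three backticks) – and that is essentially the entire format.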

Eerielinux blog -> Neunix Gemlog

I’ve picked a new name for the reboot of my previous blogging project. What was the eerielinux blog before is now the Neunix Gemlog. A few things changed or will change, but in general it’s a continuation of what I did with Eerie: I’ll publish articles that I hope are useful or at least interesting to some people.

Focus will remain *BSD-related material but like before I’ll write about other things as they arise. Of course I have some ideas for article series as well. I’ve also picked some old posts from Eerie and converted them (work to transfer more is ongoing).

This is what it looks like right now:

The Neunix index page – continuation of the eerielinux blog in Geminispace

How to access it?

All nice and pretty, but how do you access this thing? Well, Gemini is a protocol wildly different from HTTP and is not currently supported by any mainstream web browser. It is a subset of what HTTP does, though, and gemtext is a subset of what HTML does, so it’s very much possible to convert one to the other – at least in the direction from simpler to more complex! Proxies that make Gemini content accessible via the Web already exist.

So you have two options:

1) Get a real Gemini browser. There are multiple projects out there. I might write about this topic at a later point in time. For now I can only point you to doing a little research and finding one that suits you. I use Castor and find it pretty nice.

Then point your Gemini client to: gemini://gemini.circumlunar.space/users/kraileth/neunix

2) Use a web proxy, e.g. this one.

What’s next?

I’ve already posted one new article on Neunix. In about a week or so, when people have had time to read this one, I’ll link to it directly with a new post on this platform.

And then? Well, we’ll see. The journey has begun and there’s vast, untamed land before me. The possibilities are next to endless. I’ll just pick one thing as the mood strikes me.

Retreat… and reboot! Is there life outside of the Web?

Dear readers,

the time has come for some fundamental re-organization of my EerieLinux blog. The impact of this year’s pandemic as well as family matters (a new family member that demands quite a bit of care!) have forced me to postpone many of the posts that I usually published at least monthly. Right when I was getting back on track and stated that I intended to publish the missed articles (I’ve written them, after all, but didn’t find the time for final editing), something else happened.

No, to be honest, it has been happening for quite a while. I didn’t like it but had to cope with it as there was nothing I could do about that, anyway. What am I talking about? Well, the sad fact that I would call the “degradation of the Web”.

EerieLinux has always been about some experiments that I did and wanted to share in case anybody was interested. Now I’m in for a new experiment, a somewhat radical one (and what could be considered radical after going Linux-only and later FreeBSD-only for all of my machines?). I would appreciate feedback on the path that I choose to give a try. What do you think about it? Are you interested in that, too? Would you continue to read some of my material or does this mean that our ways will part? But let’s detail first what has made the time ripe for what is less an elitist move than an act of mere desperation.

What’s wrong with the Web?

So, what has happened? And what’s the gripe that I have with the Web? Well… It’s actually not just the Web (read: WWW), it’s more things coming together. There seems to be a general direction that the IT world is heading which I don’t approve of. This blog is powered by WordPress for the simple reason that when I started in 2012 I had some material that I wanted to share with the world and going with a free plan on a blog hosting platform seemed like a nice way to get started quickly. And it was. Mind the past tense here.

WordPress evolved – which is a good thing in general. It evolved in a way that I loathe, however. There have been all these “new” and “easier” modes for XYZ which were admittedly more shiny and more “modern” than before. But they broke my workflow time and time again – and unfortunately not for the better. I actually like to re-evaluate my workflows, like to challenge my habits. When I was still using Ubuntu Linux I didn’t only give Canonical’s Unity DE a chance, I actually tried to use it before putting it aside because in the end it hindered me more than it helped. I forced myself to use the (then) new graphical TrueOS for half a year and suffered through incompleteness and breakage. I have a fairly high tolerance for things that are… sub-optimal. I’m also rather patient, I think, and I’m surely not a rage-quit type of person. But inevitably there’s this point where I have to admit: “Ok, that’s it, that’s simply too much”.

The same thing happened with WordPress, when they came up with various iterations of “new editors” – which is why I opted to go back to the “old editor” for writing and editing my blog posts. As I returned to finish the next old post, however, I found out that the old editor was gone. They had “finally” removed it. Alright, so I had to try and get accustomed to the new one again. Perhaps it had matured since I tried it last? It had. But it proved to be even more annoying for me than before! I don’t know what the designers are thinking – certainly not what I do. For me it is a pain to work with the new tool: Things that worked perfectly fine before are gone and the new options are so much inferior to what we had before… And this is not even the worst thing that made me give up when I was more than halfway through polishing the next post.

Technically the new editors work well for small posts. I seem to have a bit more to say than the average blogger, though; and here’s a very frustrating issue: When editing longer posts, the editor gets unresponsive and laggy. Yuck! Trading a working one for a “new” one that does less but eats up so much more resources? Are you freaking kidding me? Oh, right, that’s the way things go, I forgot for a second. There’s something within my very self that refuses to accept the lack of sanity around me, sorry. I should know better by now.

And even though WordPress makes up a quite large portion of today’s Web, that’s just the tip of the iceberg. All this JS nonsense, megs and megs of useless graphics, animations, tracking and spying cookies, etc. pp. make the Web such an obnoxious place! And don’t even get me started on the situation with web browsers…

What’s wrong with hardware?

After my little rant I can hear people say: “See, pal… If the WordPress editor makes your system struggle enough that you complain on the net, you reaaally want to get a new PC! There’s machines with more than 4 gigs of memory today, you know.”

The irony here is: I did get a newer laptop – and went back to primarily using the old one! Why? Because the new one bugs me just too much, whereas the older model works well for me. The newer one follows that stupid trend of being extremely thin. “Thanks” to that design no DVD drive fits in there. While this is incredibly dumb, I can live with it. Carrying an external drive with me is not so much of a problem, but still the fact that the machine is missing the drive annoys me. The bigger problem: The battery of the old model can easily be removed and I can carry a second one with me e.g. on longer train rides. With the new one I’d need to open the case to access it! WTF, HP? And finally: Some idiot decided that the “cool” flat design was so important that the case is thinner than the jack for an Ethernet connector… Yes, really: There’s some kind of hatch that I need to pull down to make the opening big enough to insert the plug! If I remove the plug, the hatch snaps back up. It’s not hard to see that this brain-dead construction is prone to defects. And really, it didn’t take too long until it stopped working reliably and I’m now randomly losing connectivity every other day… Sorry, this is garbage. Plain as that.

I have another older laptop – that one has 24 GB of RAM and two hard drives. Nice, eh? Not so much, because it also has some of those gross EFI quirks. When one of the drives has a GPT partition scheme applied to it, that drive won’t even be detected by the POST anymore! So on that machine I’m stuck with MBR, which is not so much fun if you’re using ZFS and want to encrypt your pool with GELI – it can be done, but the workarounds have the side-effect of not working with Boot Environments! True garbage again.

“Come on, why don’t you simply get a different model then?” Yeah, right. As I said, I type a lot, so I have some demands concerning the keyboard. Unfortunately some manufacturers seem to take pride in coming up with new ways of f*cking up the keyboard layout. There are models that simply dropped the CAPS LOCK key, because nobody uses that, right? Wrong! There are people who remap their keys, and being a NEO² typist (a keymap that allows for 6 layers) I need that key as an additional modifier! Take it away and that keyboard is utterly useless to me. But don’t current models like the X1 Carbon still have CAPS LOCK? They do, but there’s another oddity: They swap the CTRL and FN keys! Having CTRL as the leftmost key on the keyboard I use as my daily driver and then switching the two around in my head at 3 AM when there’s an alarm during my on-call duty is something I fail to do properly. I used a Thinkpad for about a month for on-call duty and it drove me completely mad. Sorry, Lenovo. Your laptops may be excellent otherwise, but they are not for me.

Oh, and it’s not so much of a secret that the situation with the x86 and x86_64 architectures is abysmal (need I say more than Meltdown / Spectre?). Modern hardware design is fundamentally broken and I cannot say that I completely trust the fixes and what unknown side-effects they might bring with them either. But that’s not even the point. We’re stuck with market-leading technology that has been criticized as crappy right from the start. It has come a long way since then, but it’s not a great technology in any regard. I’ve been playing with my Pinebook and it might have the potential to become something different. Maybe the Open Source PowerPC notebook initiative will succeed. Probably we’ll have something RISC-V-based in the long run. But right now we have to make use of what is available to us.

Can’t the Web be saved?

I’m a somewhat optimistic person, but in this regard not so much. While I have thought about just self-hosting my blog and using a nice static-site generator or something, that would only solve my issues regarding the blog. A huge portion of the Web will still be a terrible place. Browsers and software will still suck hard. Especially in terms of browsers I’ve found even the suckless offering far from being a good one (I like their work and agree with most of the suckless principles, but in my conclusion it’s not enough to really get me out of this whole mess).

Sure, I can install e.g. the Dillo browser. But its development has been extremely slow for years and the subset of HTML that it supports cannot render most of today’s websites properly. And going with just the pages of people who deliberately keep their sites simple? I like the idea, but there’s the problem that no real community (that I know of) exists which agrees on a single standard of which HTML features are ok and which ones should be avoided.

I’m old enough to remember the browser wars well. I also confess to having used “marquee” on my homepage for a while back in the day. Maybe all of the gibberish we’re facing today is a late punishment for sins committed when we were younger? I don’t know.

So – are we doomed? Should we just pull the plug, go to real libraries again, buy books when we need information and spend our free time in the garden? That thought is tempting only for a moment. I love tech. I love the net and the possibilities it brings with it. Plus I’m kind of a liberal today who believes that mankind will eventually make something good from it all. We’re on this planet to play, to learn and to grow. Some insights can only be gained the hard way. It’s ok that commercial interests ruined the Web for people like me. There are other people who have (monetary or idealistic) interests in the Web being like it is. That’s fine. And in fact: Today’s Web “works” for a lot of people who either have no clue why it’s problematic in the first place or who simply don’t care. Which is a legitimate stance. The tragedy is simply that the rest of us have been without a home for a while now and haven’t been able to find a new one, yet.

The “Big Internet” vs. the “Small Internet”

While the Web makes up a very large and definitely the most visible part of the Internet, there are niches, lesser known parts of it, which are certainly no less interesting. Some people speak of the Web as the “Big Internet” and call the various special parts the “Small Internet”.

One such niche is the so-called Gopherspace. Gopher is a simple communication protocol for documents which predates the Web. It’s primitive, and not just by today’s standards – but it works. And while the major browsers either never supported it or removed support long ago (e.g. Firefox dropped Gopher support with release 4.0), it’s still kept alive by enthusiasts. Some sources even suggest that it has seen moderate growth over the last few years, with more people fed up with the Web trying out something different. If you’re interested, start here with the web proxy or do some research on your own and install a Gopher client.
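To give you an idea of just how simple Gopher is: a client opens a plain TCP connection to port 70, sends a selector string terminated by CRLF, and for a menu gets back one tab-separated line per item. Here is a small sketch in Python that parses such a menu line offline (the sample line mimics what a real server would send; the exact host and selector are just illustrative):

```python
# Sketch of Gopher's menu format (RFC 1436).
# Each menu line looks like: <type><display>\t<selector>\t<host>\t<port>\r\n
# Item type '0' is a text file, '1' a submenu, 'i' an informational line.

def parse_menu_line(line: str) -> dict:
    """Split one Gopher menu line into its tab-separated fields."""
    item_type, rest = line[0], line[1:].rstrip("\r\n")
    display, selector, host, port = rest.split("\t")
    return {
        "type": item_type,
        "display": display,
        "selector": selector,
        "host": host,
        "port": int(port),
    }

# A sample menu line as a server might send it:
sample = "1Floodgap Home\t/home\tgopher.floodgap.com\t70\r\n"
item = parse_menu_line(sample)
print(item["type"], item["display"], item["host"])
```

That single flat format is the whole “page model” – no nesting, no styling, which is exactly why clients can stay tiny.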

I’ve dug into various Gopher holes and my experience with it is a mixed bag. On the one hand it’s really cool to see people putting in the effort of creating a place that’s worthwhile to visit. I also liked the experience in general: It’s not bells and whistles everywhere (because that’s in fact impossible due to the protocol’s limitations) but rather a focus on the actual content. People have found ways to access git repositories over Gopher, and there’s a community called Bitreich (reachable via Gopher) that declared “suckless failed” and proposes an even more radical approach. While some of it is probably parody, it’s an interesting one.

On the other hand some of the restrictions of the protocol make no sense at all today, like having a hard-coded limit of 80 characters per line. And there are more shortcomings. When Gopher was conceived, nobody thought of internationalization or of indicating the charset that a document uses. Also, Gopher is all plain text and provides no means to help protect your privacy – which does not exactly make it overly appealing to me.

But there’s something else, an even smaller – but newer – niche, called Project Gemini. It is a newly designed protocol (2019!) that wants to provide kind of a middle ground between Gopher and the Web, while clearly leaning towards the former. It remedies the privacy issues by making TLS mandatory and by deliberately not supporting “user agents” or other ways to collect data about the user that are not required to deliver the content. And it wants to simply be an additional option for people to use, not to replace either Gopher or the Web.
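Gemini really is almost as small as Gopher: the client opens a TLS connection (port 1965), sends the full URL followed by CRLF, and receives a single header line – a two-digit status code, a space, and a “meta” field (for status 20, the MIME type) – followed by the body. A sketch of the response handling, working on a canned response instead of a live connection (the body text is just an example):

```python
# Sketch of a Gemini response, per the Gemini specification:
# "<STATUS><SPACE><META>\r\n" followed by the body.
# Status 20 means success; META then carries the MIME type of the body.
# A request is simply the URL plus CRLF, e.g. b"gemini://example.org/\r\n".

def parse_response(raw: bytes) -> tuple:
    """Split a raw Gemini response into (status, meta, body)."""
    header, _, body = raw.partition(b"\r\n")
    status_str, _, meta = header.decode("utf-8").partition(" ")
    return int(status_str), meta, body

# A canned success response carrying a text/gemini document:
raw = b"20 text/gemini\r\n# Hello from Geminispace\n"
status, meta, body = parse_response(raw)
print(status, meta)
```

That one header line is the entire negotiation – no cookies, no user agent, no content sniffing – which is precisely the design decision the project is built around.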

While I kind of like Gopher, I really admire Gemini. Having the balls to even try to actually do something like this in our time is simply (no pun intended) astonishing. In my opinion it’s high time to re-invent the wheel. Gemini is certainly not a panacea for all the problems that we have today. But it’s a most welcome (for me at least) chance to start fresh – without that huge, huge bag of problems that we’re carrying along with the Web, but with the experience that we’ve gained from using it for several decades now.

What’s next?

My experiment is to move Eerielinux to Geminispace and post new content there as I find the time. The plan is to keep this WordPress blog and to create new posts here that simply reference the real content, providing a convenient link using a Web proxy. That should help keep things accessible for people who prefer to stick with their known browsers instead of getting another one for such a tiny little world as Geminispace is right now.

And who knows? Perhaps something nice comes from Gemini in the long run. I’m looking forward to finding out. And if you care — see you in Geminispace for some more articles on *BSD, computers and tech in general!