Illumos (v9os) on SPARC64 SunFire v100

Over the last month or so I’ve written a couple of articles about an old SunFire v100 machine that I have owned for a while now. First I took a look at the hardware of the machine and the LOM (Lights Out Management). Then I installed OpenBSD 6.0 from CD and updated all the way to 6.5. Finally I played with OpenBSD a bit to see what it can do and how well it supports SPARC64. This post will be the last SPARC64 one before I move on to other topics again.

v9os?

While I was pretty happy with OpenBSD on the SunFire, there’s one reason that I wanted to try out something else, too. That reason has three letters: Z-F-S. The first thing that I tried when I got the hardware was FreeBSD – but I ran into problems. I managed to circumvent them (might be worth another story in the future), only to find that FreeBSD does not support ZFS on SPARC64!

One option that suggests itself is just putting Solaris on there. I have a copy of Solaris 10 for SPARC, but I prefer to keep things open source. Also there’s the problem that my machine is old enough to not have a DVD drive, and it doesn’t support booting from USB and the like.

So it’s illumos. Since I’m really just getting started with the broader Solaris universe, I had to do a little research first. And I was a little surprised that most illumos distros seem to not support SPARC at all! Of the four that do:

  • OpenSXCE seems dead (last release in 2014)
  • DilOS uses Debian packaging (which is not my cup of tea at all)
  • Tribblix sounds really interesting to me, but does not fit on a CD
  • v9os is a minimal SPARC distro that is small enough

As you can see, there wasn’t so much choice after all! While v9os is an experimental one-man project that you should probably stay away from for production use, it might be just right for my purposes of tinkering with an old machine.

Installing the OS – first try

There are not many preparations necessary: I downloaded the ISO image and burned it to a CD. Then I connected to my SunFire via serial, powered it on and put the CD into the drive. It takes quite some time, but after a while I can see that v9os is in fact starting.

Booting up v9os from the CD

After the system booted, it gives the user the option to select a keymap.

Keymap selection

Then it shows the installation menu. There you can choose if you want to install, load additional drivers, drop to a shell, change the terminal type or reboot. I go with the first option.

v9os installation menu

After a moment the installer has started and a welcome screen is printed. Unfortunately in my case there’s a problem with the CD, so four lines of debug info overwrite an important piece of information: how to actually proceed with the installation! But this is an OpenSolaris derivative, and so it’s not that hard to figure out that F2 is the key to go on.

v9os installer: Welcome screen

Next up is selecting the disk to install to. I thought that it all looked good – and didn’t pay much attention to the message “A VTOC label was not found.” VTOC is the Volume Table Of Contents, the SPARC partition scheme (think MBR/GPT on amd64). We’ll come back to that a little later. 😉

v9os installer: Disk selection

I think that the installer is quite nice. It even offers help pages that give newcomers like me an idea of what they should do for the current step. Great work on that!

v9os installer: Disks help page

Then you can choose to either dedicate the whole disk to v9os or just use a slice. I decide to go the easy route and select the former.

v9os installer: Disk layout selection

Now the installer wants to know the hostname for the new system. The suggested default of v9os is fine for me since I don’t plan to add another machine with that OS to my network anytime soon.

v9os installer: Hostname selection

Finally you can select the time zone – or rather: the zone region.

v9os installer: Time zone selection

Unfortunately things went sideways after that choice and I had to reset the machine…

Ok, after going through the previous steps again, I decided to give the advanced setup a try and selected slicing up the drive.

v9os installer: Slice selection

Unfortunately the result was the same as before: The installer just died. I tried again a few times, playing with different slice setups, but didn’t have any luck.

The installer died… Time to reboot.

At this point I was out of ideas on what else I could try, so I removed the CD and powered down the system.

Writing the label manually

When I powered the system on again, I had forgotten that I removed the CD and to my surprise OpenBSD (the system that I had previously installed on the machine) booted up! This meant that the installer had not even changed anything on the disk, yet!

My next guess was (and still is) that the v9os installer might have problems with BSD disklabels being present on the drive. I took a look at the disklabel from OpenBSD, just to find out some information about the drive.

OpenBSD’s disklabel information of the system hard drive

Then I booted the v9os install medium again but this time selected the shell option. After a little research I found out how to get some drive information on Solaris with iostat.

v9os shell session: Collecting drive hardware info
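
What ended up working was iostat with the -E and -n flags, which print extended per-device information (vendor, product, size and so on) using descriptive device names:

# iostat -En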

Next I decided to give the format utility a try. I don’t know whether v9os stripped out some hardware information or whether the disk is simply too old, but it wasn’t properly auto-detected. So I had to do something that I hadn’t done in years (and never missed): typing in the geometry information by hand!

Typing in disk geometry information (Ah, the (bad!) memories…)

Once the drive has been described to the utility, it shows a menu of what it can do. I haven’t used that program before and judging from the name alone was a bit surprised at how powerful it seems to be. Things like being able to define profiles must have been pretty useful in the past.

Solaris’ format utility

Since I want to partition the drive, I select that. I’m presented with a sub-menu, giving me some more choices.

Partitioning menu of format

I have no clue what a Solaris partitioning scheme should look like (I need to explore some older versions of that OS at some point!).

Partitioning the drive for Solaris

So I look around a little but eventually accept the proposed default and just hope that this works.
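
A rough sketch of that interaction – the exact prompts vary between format versions, so take this as an outline only:

partition> print
partition> label
Ready to label disk, continue? y
partition> quit
format> quit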

Installing the OS – second try

After restarting the machine again and choosing the installer, it looks like there is no missing disklabel this time. At last! But will it make a difference?

Returning to the installer: Partitioning was detected

And yes! Now the installer continues and gets the data written to disk!

Finally installing the OS!

The process takes quite a while – but that’s due to the slow machine that I’m using. Eventually the installation is finished.

v9os installer: All done!

First steps with v9os

Another reboot and after removing the CD-ROM from the drive, the freshly installed system boots up. A moment later it displays the prompt where I can log in using the user root and the password solaris.

First start of v9os

The first thing that I want to do is to get rid of the serial console. So I set up networking and enable SSH.

Setting up networking and enabling SSH

Then I disable the automounter to make the home directory writable and create a user for remote SSH login. Finally I enable the machine to do name resolution and give the new user a password.

Adding a user and name resolution capabilities

That should suffice to SSH into the box from another machine.
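
For reference, the whole sequence boils down to a handful of commands. Interface name, addresses and the username below are placeholders, and the exact steps on v9os may differ slightly:

# ipadm create-if dmfe0
# ipadm create-addr -T static -a 192.168.0.50/24 dmfe0/v4
# route -p add default 192.168.0.1
# svcadm enable ssh
# svcadm disable autofs
# useradd -m -d /export/home/jdoe jdoe
# passwd jdoe
# echo "nameserver 192.168.0.1" > /etc/resolv.conf
# cp /etc/nsswitch.dns /etc/nsswitch.conf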

Package management with IPS

Logging in remotely works just fine. As v9os does not have an online package repository, I have to download a compressed copy of the repository from SourceForge.

SSHing into the v9os box and downloading the package repository

I don’t know much about the IPS package system and thus really struggle to make it all work. There is no guide on the v9os site and so I try to put the downloaded file in various locations, decompress it and try everything again. Since that also doesn’t work, I unpack the contents of the archive but still cannot get it right…

Struggling to get the repo working…

After more than an hour of struggling with pkg, reading manpages, doing online research and trying to fit everything together, I finally manage to remove the default publisher that comes with the system and add a new one that eventually works!

Finally figured out how to deal with IPS publishers
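
What finally did the trick was something along these lines – the publisher name and repository path are placeholders here; the actual publisher name is whatever pkg publisher lists:

# pkg publisher
# pkg unset-publisher v9os
# pkg set-publisher -g file:///root/repo v9os
# pkg refresh --full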

The v9os operating system is one of the strangest Unices that I’ve ever touched in that it does not even provide the vi editor with the system! But now that I have the repository available, I can simply install vim and find out that using packages does work after all.

Installing packages (vim) works!

This is about how far I wanted to take this quick post on v9os. If I had a faster machine, I might have been tempted to try and build the system from source. But with my old SunFire… No.

While v9os might not be fit for production use, I accomplished one goal over OpenBSD: I have an operating system on the machine that is installed on ZFS!

ZFS on SPARC64 with v9os

Conclusion

The v9os operating system is an exotic one for sure. But it’s nice to see that somebody values SPARC64 machines and illumos enough to put the required time into building something like this. And actually I think it’s not half bad! I didn’t do too much with it, but it seemed stable, and except for the installer problem (it would probably just have worked on an empty drive) everything worked fine.

Well, maybe some hints on how to get the package repo in place would have saved me some time… On the other hand, Solaris veterans are likely to get it working with just a few commands. And while it was kind of frustrating for a while, it has also led to at least a basic understanding of what IPS is and how it works. I’m sure that I’d have missed at least some of that if I had just copied some lines from a guide.

I might not end up making v9os my primary operating system (for various obvious reasons). But it’s another nice little part in the mosaic of the illumos world that I’ve started exploring. Also I noticed that I’ve become a little bit more comfortable with using an OpenSolaris-derivative. Compared to my first encounter with OmniOS, it didn’t take me as long to figure out the very basics again. Which is always a good sign.

Running OpenBSD on SPARC64 (HTTPd, packages, patching, X11, …)

In my previous post I described the process of installing OpenBSD 6.0 on a SPARC64 machine and updating it all the way to 6.5. Now it’s time to actually do something with it to get an idea of how well OpenBSD works on this architecture!

OpenBSD’s base system

The OpenBSD team takes pride in providing an ultra-secure operating system. It’s a well-known fact that the project’s extremely high standards only apply to the base system. Every now and then critics pop up and claim that this basically defeats the whole idea and even accuse the project of “keeping their base system so small that it’s useless by itself” to keep up their defined goals.

There’s some truth to it: The base system is kept (relatively) small if you compare it to some of the fatter operating systems out there. But that’s about it, because actually this allegation could not be further from the truth. The base system includes doas, a simpler sudo replacement. It comes with tmux. OpenBSD even maintains its own fork of X.org, called Xenocara (not even FreeBSD comes with an X11 server by default), and there’s in fact a lot that you can achieve with the base system alone! Let’s look at one such possibility.

HTTPd

Since the OpenBSD developers are convinced that a webserver is something to keep around all the time, there’s one in base. Originally they used the Apache HTTPd for this. The problem was that at some point the Apache Foundation decided to retire their old license and replace it with version 2.0 (they had been criticized a lot for being incompatible with the GPL and the new version solved that problem). The newer version also made the license less simple and permissive than it had been before, and OpenBSD did not like the new license. For that reason they basically stayed with the old Apache 1.3 webserver for a long time. They maintained and patched it all that time, but the software really began to show its age.

So for version 5.6, OpenBSD finally removed the old Apache webserver from base and replaced it with Nginx. One release later, they did away with that, too, because they felt that it was starting to become too bloated for their needs. Instead they imported OpenBSD HTTPd into base: a home-grown, very simple webserver. It has evolved over time, and even though it has gained more features and become a fine little webserver, it strives to stay simple.

The developers resist the temptation to add new features just because they could, and have even made a list of things that some people might want which will never be implemented because they would raise complexity to an unacceptable level. OpenBSD HTTPd does not want to be a webserver for everyone. It wants to be an ultra-secure webserver that does enough to be useful to many people. If you have any needs above what it offers – get another one.

Simple static website configuration of OpenHTTPd

The simplicity of HTTPd adds a lot to its beauty. I’ve written some HTML for a test page (see screenshot). All of the configuration that I need to do for HTTPd is as follows:

server spaffy.local {
    listen on egress port 80
}

Yes, that’s all that is required: I basically define a vHost (“Server” in HTTPd lingo) and have the application listen on the HTTP default port 80 on egress (a keyword which means whatever interface has the default route). Let’s check if that configuration really is valid by issuing

httpd -n

And it is! Impossible? No. Remember that OpenBSD comes with sane defaults. For that reason there’s usually very little that you need to configure. You could, of course. And we’ll be doing that a little later.

Now let’s force-start httpd (we need -f since the service is not enabled, yet, and we want to manually start it once):

rcctl -f start httpd

I’ve edited the /etc/hosts file on my laptop to be able to use the spaffy.local name. So now I can just type that into the address bar of my browser and reach the test page that the SPARC64 machine hosts. OK, a static page probably doesn’t impress you much. Fortunately that’s not all we can do while relying on just what base offers!
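
The hosts entry on the laptop is a single line (the address being whatever the SunFire got via DHCP):

192.168.0.40    spaffy.local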

Static test page displayed in browser

CGI

OpenBSD also comes with Perl as part of the default install. I got the Llama book several years ago, read through about 2/3 of it and then decided that I didn’t like Perl too much. For that reason I never really did anything with it, but here I want to do something with what OpenBSD provides me with, so Perl is a logical choice and I might finally do something with it. Here’s what I came up with:

#!/usr/bin/perl
use strict;
use warnings;

my $osname = `uname -s`;
my $osver = `uname -r`;
my $osarch = `uname -m`;
chomp($osname, $osver, $osarch);

my @months = qw( Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec );
my @days = qw( Sun Mon Tue Wed Thu Fri Sat );
my ( $sec, $min, $hour, $mday, $mon, $year, $wday, $yday ) = gmtime();

print "Content-type: text/html\n\n";
print "<html><head><title>Greetings!</title></head>";
print "<body>Hello from <strong>$osname $osver</strong> on <strong>$osarch</strong>!";
print "<br><br>This page was created by Perl $^V on $days[$wday], $months[$mon] $mday";

# Append the English ordinal suffix (st/nd/rd/th) for the day of the month
if (length($mday) < 2) {
  if (substr($mday, -1) == "1") {
    print "st"; }
  elsif (substr($mday, -1) == "2") {
    print "nd"; }
  elsif (substr($mday, -1) == "3") {
    print "rd"; }
  else {
    print "th"; }
} else {
  if ((substr($mday, 0, 1) ne "1") and (substr($mday, -1) == "1")) {
    print "st"; }
  elsif ((substr($mday, 0, 1) ne "1") and (substr($mday, -1) == "2")) {
    print "nd"; }
  elsif ((substr($mday, 0, 1) ne "1") and (substr($mday, -1) == "3")) {
    print "rd"; }
  else {
    print "th"; }
}

print ", $hour:";
if (length($min) == 1) {
  print "0";
}
print "$min (UTC)</body></html>";

Nothing too fancy, but for a first attempt at writing Perl it’s probably OK. After making the script executable, I can run it on the system and get the expected output. Things get a little more complex, though. HTTPd runs in a chroot for security reasons. And just copying the script into the chroot and trying to execute it in the chrooted environment fails with “no such file or directory”.

Huh? I just copied it there, didn’t I? I sure did. The reason for this is that the Perl interpreter is not available in the chroot. So let’s copy that one over as well and try again. Abort trap! How does the saying go? Getting a different error can be considered progress…

Perl CGI script failing due to chroot

Ok, now Perl is there, but it’s not functional. It requires some system libraries not present in the chroot. Using ldd on the Perl executable, I learn which libraries it needs. And after providing them, I can run the script in the chroot! There is a new problem, though: Perl is complaining about missing modules. The simplest solution in our case is to just remove them from the demo script as they are not strictly (haha!) necessary.

Providing Perl dependencies in the web chroot
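
Roughly, the procedure was: copy the interpreter into the chroot, check what it links against with ldd, and copy those libraries (plus the dynamic linker) to the matching paths under /var/www. The exact library names and versions depend on the ldd output:

# mkdir -p /var/www/usr/bin /var/www/usr/lib /var/www/usr/libexec
# cp /usr/bin/perl /var/www/usr/bin/
# ldd /usr/bin/perl
# cp /usr/lib/libperl.so.* /usr/lib/libm.so.* /usr/lib/libc.so.* /var/www/usr/lib/
# cp /usr/libexec/ld.so /var/www/usr/libexec/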

On to the next step. Here’s a little addition to the HTTPd configuration:

    location "/cgi-bin/*" {
        root "/"
        fastcgi
    }

It basically adds different rules for the case that anything below /cgi-bin is being requested. It changes the document root for this and enables fastcgi. Now I only need to start the slowcgi service (OpenBSD’s shrewdly named fastcgi implementation) and restart HTTPd. My Perl program makes use of the system’s uname command, so that should be made accessible in the chroot, too, of course.

Finishing the dynamic webpage setup
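
Putting the remaining pieces together looks roughly like this:

# cp /usr/bin/uname /var/www/usr/bin/
# rcctl -f start slowcgi
# rcctl stop httpd
# rcctl -f start httpd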

And that’s it. The script is executed by the webserver and the expected page is generated, which is then served properly:

Dynamically created webpage displayed in browser

I think this is pretty cool. Try to do that with just the default install of other operating systems! BTW: Want to make HTTPd and slowcgi start automatically after boot? No problem, just put the following into /etc/rc.conf.local:

httpd_flags=""
slowcgi_flags=""

This makes the init system start both daemons by default (and you can of course drop the “-f” flag to rcctl if you need to interact with them).

Binary packages

For OpenBSD 6.5, pre-built packages are offered for 8 of the 13 supported architectures – among them SPARC64. There are just a couple short of 9,500 packages available (on amd64 it’s 10,600 – so in fact most packages are there)!

Things like GCC 8.3 and even GNAT 4.9 (the Ada part of GCC which is interesting because it’s written in Ada and thus needs to be bootstrapped to every new architecture by means of cross-compiling) are among the packages, as is LLVM 7.0. When it comes to desktop environments, you can choose e.g. between recent versions of Xfce, MATE and Gnome.

Actually, SPARC64 is one of only 4 architectures (the others being the popular ones amd64, i386 and arm64) that are receiving updates to the packages via the packages-stable repository. In there you’ll find newer versions of e.g. PHP, Exim (which had some pretty bad remote exploits fixed), etc.

Basic OpenBSD package management

I choose to install the sysclean package. Remember when I said that I skipped deleting the obsolete files when updating the OS in my last post? This program helps in finding files that should be deleted. However it’s not too intelligent – it just compares a file list for a fresh system to the actual system on disk. For that reason it also lists a lot of files that I wouldn’t want to delete. Still it’s helpful for finding obsolete files that you might have forgotten to remove.

Sysclean shows a lot of possible remove candidates
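
Using it is as simple as it gets; the output is just a list of paths, so piping it into a pager (or redirecting it to a file for review) makes sense:

# pkg_add sysclean
# sysclean | less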

Errata patches

While OpenBSD tries its very best to provide a safe-to-use operating system, there really is nothing in IT that is both useful and free from errors. If problems with some component of the system are found later, an erratum is published for it. If you are using OpenBSD in production, you are supposed to keep an eye on errata as they are released. Usually they consist of a patch or set of patches for the system source code as well as instructions on how to apply them and recompile the needed parts.

Since version 6.1, OpenBSD comes with a handy utility called syspatch(8), which can e.g. be used to fetch binary patches for all known errata that have not been applied to the OS on the respective machine. This is nice – but it’s only available for amd64, i386 and arm64. So on SPARC64 we still have to deal with the old manual way of keeping the system secure. However, errata patches are also applied to the -STABLE branch, and we can use that to get all the fixes.

No syspatch on SPARC64 – tracking -STABLE manually as it used to be

To upgrade our installation to 6.5-STABLE, the first step is to get the operating system source of the current release (the sys tarball contains the kernel and src the rest of the base system). After extracting those, CVS is used to update the code to the latest 6.5-STABLE.

Done getting the stable changes from CVS
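
In outline, the steps look like this – the download mirror is an example and the anoncvs server is a placeholder to be replaced with one from the project’s mirror list:

# cd /tmp
# ftp https://cdn.openbsd.org/pub/OpenBSD/6.5/src.tar.gz
# ftp https://cdn.openbsd.org/pub/OpenBSD/6.5/sys.tar.gz
# tar xzf src.tar.gz -C /usr/src
# tar xzf sys.tar.gz -C /usr/src
# cd /usr/src
# cvs -qd anoncvs@anoncvs.example.org:/cvs up -Pd -rOPENBSD_6_5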

Once that’s done, it’s time to build the new (non-SMP) kernel:

# cd /sys/arch/$(machine)/compile/GENERIC
# make obj
# make config
# make && make install
# reboot

Building a 6.5-STABLE kernel

On my SunFire v100 the kernel build took 1h 20m. I was curious enough to build the userland as well, just to see how long it would take… The answer is: 85h 17m! I think that LLVM alone took about three days. The rest of the system wasn’t much of a problem for this old machine, but LLVM certainly was.

BTW, I had problems with “permission denied” when trying to “make obj”. After reading the manpage for release(8), I found out that /usr/obj should be owned by build:wobj with 770 permissions which had not been the case on my system.
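
Fixing that is quick:

# chown build:wobj /usr/obj
# chmod 770 /usr/obj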

Kernel build complete

Having done that, I thought that I might build Xenocara as well, to compare how long it takes to build. So I got the sources for that, too, updated them via CVS and built it. It took 9h 26m to build and install.

Xenocara built from (-STABLE) source
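
The Xenocara build follows the steps from release(8); roughly (assuming the xenocara source was extracted to /usr/xenocara and updated via CVS the same way as src and sys):

# cd /usr/xenocara
# make bootstrap
# make obj
# make build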

X11 on SPARC64

I had left out all X11-related distribution sets when installing OpenBSD. But after having installed Xenocara from source, I had it all available. So I decided to just do something with it. Since the server does not have a graphics card, I cannot run any X program on it directly, because the xserver won’t run. I decided to get a graphical application that is not part of Xenocara installed first. After browsing through the list, I settled on Midori, a WebKitGTK-based webbrowser.

Installing the Midori browser via packages

It took a moment to install all the dependencies, but everything worked. As the next step I enabled SSH X11 forwarding and restarted SSH.

Midori is installed, allowing X11 forwarding for SSH

After connecting to the SPARC64 machine via SSH and checking that the DISPLAY environment variable was set, I could just launch Midori and have it sent over to my laptop that I used to SSH into the other box. So the browser is being executed on the SPARC64 server but displayed on my other machine.

SSHing into the SPARC64 machine and forwarding Midori to my amd64 laptop
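
In terms of commands, the server-side and client-side steps boil down to something like this (username and hostname are examples):

# pkg_add midori
# echo "X11Forwarding yes" >> /etc/ssh/sshd_config
# rcctl restart sshd

$ ssh -X jdoe@spaffy.local
$ midori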

Everything worked well, I could even visit the OpenBSD homepage and it was rendered correctly.

The webkit-based browser works well on SPARC64!

Conclusion

OpenBSD is a fine operating system for people who value quality. The SPARC64 port of it seems to be in pretty good shape: Most packages and even stable-package updates are available. What is missing is syspatch support – but only three architectures have that right now. Also, the system compiler is still the ancient GCC version 4.2, which was the last one before the project switched the license to GPLv3.

OpenBSD 6.6 was released one day after I finished compiling 6.5-STABLE. On amd64 I could now use sysupgrade(8) to upgrade to the new release even more easily than before. This is also not supported on SPARC64. But these two little shortcomings just mean a little extra work that all OpenBSD users on any platform had to do anyway until not that long ago.

For 6.6 there are even more packages available for SPARC64. E.g. the Rust compiler has been bootstrapped on this architecture which definitely is great news. Maybe the system compiler will change to LLVM/Clang one day, too. Right now the SPARC64 backend for Clang is incomplete upstream at the LLVM project, if I understood things right. But we’ll see. Maybe it’ll become available in the future. I guess I’ll really have to get a newer SPARC64-based machine with a faster processor. Luckily OpenBSD supports quite a few of them.

OpenBSD on SPARC64 (6.0 to 6.5)

Earlier this year I came by an old SunFire v100 that I wrote about in my previous article. After taking a look at the hardware and the LOM, it’s time to actually do something with it! And that of course means to install an operating system first.

OpenBSD

OpenBSD, huh? Yes, I usually write about FreeBSD and that’s in fact what I tried installing on the machine first. But I ran into problems with it very early on (never even reached single user mode) and put it aside for later. Since I powered up the SunFire again last month, I needed an OS now and chose OpenBSD for the simple reason that I have it available.

First I wanted to call this article simply “OpenBSD on SPARC” – but that would have been misleading since OpenBSD used to support 32-bit SPARC processors, too. The platform was just put to rest after the 5.9 release.

OpenBSD 6.0 CD set

Version 6.0 was the last release of OpenBSD that came on CD-ROM. When I bought it, I thought that I’d never use the SPARC CD. But here was the chance! While it is an obsolete release, it comes with the cryptographic signatures to verify the next release. So the plan is to start at 6.0 as I can trust the original CDs and then update to the latest release. This will also be an opportunity to recap on some of the things that changed over the various versions.

Preparations

I had already prepared the machine for installation previously, so I only had to make a serial connection and everything was good to go. If you’re in need of doing this and don’t feel like reading the whole previous article, here’s the important steps:

  1. Attach power to go to the lom prompt
  2. Issue boot forth and then poweron to go to the loader
  3. At the ok prompt use setenv boot-device cdrom disk to set the boot order
  4. Set an alias for the CD-ROM device with nvalias cdrom /pci@1f,0/ide@d/cdrom@3,0:f
  5. Reset the machine with reset-all or powerdown and then poweron again

Booting up the OpenBSD 6.0 sparc64 CD

Insert the OpenBSD installation CD for SPARC64 and after just a moment you should be in the installation program.

Installing 6.0

OpenBSD’s installation program is very simple. It’s basically an installation script that asks the user several questions and then goes ahead and does the things required for the desired options. In the Linux world e.g. Alpine Linux does the same, and I’ve always liked that approach.

OpenBSD 6.0 installer started

On a casual installation, the script would ask for the keyboard layout. But since we’re installing over serial here, that doesn’t matter. It asks for the kind of terminal instead. Since our CPU architecture is SPARC64, OpenBSD assumes we’re using a Sun Terminal. Well, I don’t, so I choose Xterm.

Of course we need a hostname for the new system. Since it’s Puffy (the OpenBSD mascot) on SPARC here, I settled on spaffy. 😉

Choosing the root password

Next is network configuration. DHCP is fine for this test machine. Then the root password is being set.

Of course I want to access the box over SSH later, so that I don’t need the serial connection anymore and can put the machine in a different room. It’s not as loud as many x86 servers, but still quite a bit louder than you would want a machine sitting directly next to you to be. Allowing root logins over SSH is very bad practice, so I create a user next and disallow remote root logins.

Selecting the partitioning

Then I choose my timezone. Next is deciding on the partitioning. There I noticed a difference compared to i386/amd64 installations. I have a habit of creating partition B first (to put the swap space on the beginning of the drive). When I tried to do this, the installer told me that this architecture didn’t allow doing that. I assume that limitation is due to Sun’s partitioning scheme VTOC that is being used on the SPARC machines. So I created them in order.

What you can see on the screenshot is OpenBSD’s default partitioning. It’s more complex than many people may be used to, but for a good reason. Remember that you can mount filesystems with different options? That way you can e.g. have /tmp mounted noexec. OpenBSD makes good use of this, e.g. enabling or disabling W^X protection on a filesystem-wide base. This is not a production machine, though, and the drive is fairly small for today’s needs. So in the end I went with a much simpler way of dividing the drive.

Selecting the distribution sets to install

Finally I need to choose what to install. OpenBSD offers so-called “sets” for various parts of the full operating system. Since I’m only installing 6.0 as a starting point, I go with the minimum required options: The kernel (bsd) and the base system.

I have no use for the install (ramdisk) kernel (bsd.rd) or the SMP-enabled multi processor kernel (bsd.mp). Also I don’t need the system compiler (comp), manpages (man) or small games (game). Of course I also don’t need the X11-related sets.

Installation finished!

Then the installer goes off and prepares everything. When it has finished, the only thing that is left is rebooting the system (and removing the CD). Now we can also change the boot order in the ok prompt, to set it to booting from disk only, speeding up the boot time minimally:

ok> setenv boot-device disk

And that’s it! Now I have an old but known good version of OpenBSD on my SunFire box.

Freshly installed OpenBSD 6.0 booted up

Updating to 6.1

Alright. What’s next? Running a three-year-old version of OpenBSD is probably not that good an idea if newer versions are available for this architecture – and they are.

So the first thing to do is fetching the ramdisk kernel of version 6.1 and the signature for it. Then I check the integrity of the kernel with signify(1). Everything is fine, so I go on and replace the standard kernel with the install kernel for the newer version. There’s probably a better way to do this, but the SPARC bootcode seems to have “bsd” as the kernel file name hard-coded and I admittedly didn’t dig very deep to figure out a different way of booting alternate kernels.

Getting 6.1 ramdisk kernel and verifying signature
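
In outline, that looks like this (using the main mirror as an example; the old kernel is backed up as /obsd here just in case):

# cd /tmp
# ftp https://ftp.openbsd.org/pub/OpenBSD/6.1/sparc64/bsd.rd
# ftp https://ftp.openbsd.org/pub/OpenBSD/6.1/sparc64/SHA256.sig
# signify -C -p /etc/signify/openbsd-61-base.pub -x SHA256.sig bsd.rd
# cp /bsd /obsd && cp bsd.rd /bsd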

After restarting, the system boots into the install kernel. This time I select upgrade instead of install, of course. The installer then checks the existing operating system (or at least the root partition).

I then select http for the location of the sets and point the installer to a mirror that still holds the old releases.

Installer started in upgrade mode

Next is selecting the distribution sets to be installed. Again I choose only the bare minimum, since the upgrade is just an intermediary step to upgrading all the way to a current release.

In earlier versions of OpenBSD, etc was a separate set. Since the files required to check newer releases are in /etc, I’d have chosen a different installation strategy if they were still available separately. However the etc set has been included in the big base set for a while now.

Necessary sets updated

After the sets have been downloaded and extracted the upgrade is mostly complete. The remaining things are done in the live system. So it’s time to complete this step and reboot.

Configuration files get updated on first boot after the OS upgrade

OpenBSD automatically updates various configuration files for the new release. If you pay attention, you’ll see that there is one case where the changes could not be merged automatically. So I will need to see to that myself.

The system also checked whether newer firmware was available. However this was not the case (which really is no wonder on this old machine).

Merging OpenSSH config and adding installurl

After doing the manual merge of the OpenSSH configuration, it’s time to do the final tasks to complete the upgrade. OpenBSD keeps a detailed upgrade guide for each version that lists the required manual steps. In fact you should read it before doing the upgrade, since it can involve steps that need to be done prior to booting the install kernel and updating the base system! I skipped them, because they didn’t apply in my case – e.g. I hadn’t installed the manpages anyway.

I chose to only set the installurl since that one is really convenient. Actually I should remove some obsolete files from the filesystem, too. But I decided to leave this for later as there is another method to do so.
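
The manual merge is typically handled with sysmerge(8), and installurl(5) is just a one-line file containing the base URL of a mirror (any HTTP mirror works):

# sysmerge
# echo "https://cdn.openbsd.org/pub/OpenBSD" > /etc/installurl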

Updating to 6.2

Getting the system updated to 6.2 means repeating what I did for the 6.1 update: Get the ramdisk kernel for the new release as well as the signature and verify it. Once that’s done, another reboot is in order.

Downloading and preparing OpenBSD 6.2 install kernel

One thing that’s different is that the installer now defaults to fetching from the web and not from CD. And thanks to setting the installurl before I rebooted, it also knows the default mirror to get the sets from. Which makes the process of upgrading even more straight-forward and convenient.

OpenBSD 6.2 installer: Now knows the URL to fetch from

Finishing the upgrade after the actual unpacking of the new files takes a bit longer for this version. After making all known device nodes, the installer re-links the kernel! This is due to a new feature called KARL (Kernel Address Randomized Link). The idea here is that the objects that make up the kernel are linked in random order for each reboot, essentially creating a new and unique kernel every time. This makes it much harder or even impossible to use parts of the kernel otherwise known to be in certain memory regions for sophisticated attacks.

OpenBSD 6.2 introduced Kernel re-linking (“KARL”)

Oh, and did you notice that the bsd.mp set is gone? This machine only has a single-core CPU and therefore the SMP kernel doesn’t make much sense. The installer detected the CPU and did not offer to install the SMP kernel (even though it of course is still available for machines with multiple cores).

As always, the system needs to be rebooted after the upgrade is complete. Just a moment later I’m greeted by my new OpenBSD 6.2! Again I’m skipping the manual steps to be taken afterwards.

OpenBSD 6.2 booted up

Updating to 6.3

Preparing and doing the upgrade for 6.3 is just like you’ve seen twice now, so I’m not going to repeat it. There’s one new feature in the installer that could be mentioned, though: After the upgrade is complete, the reboot option is now the default thing that the installer offers instead of just dropping you to a shell. This means you can save another 6 keystrokes when updating! Yay! 😉

OpenBSD 6.3 install kernel: Rebooting after completion is now the default choice

Updating to 6.5

The upgrade to 6.4 is simply more of the same. Of course I did that step, but I’m cutting it out here. 6.5 is the most recent release as I’m writing this (though 6.6 is already around the corner). This means I’m going to do one more upgrade, following the process that we know pretty well by now: Get and verify bsd.rd, boot it and select “Upgrade”.

Choosing all the sets except for X11-related ones for 6.5

This time I decide to install all the sets except for anything X11-related. The SunFire v100 is a server-class machine which does not even have a graphics card! For that reason there’s no VGA port to connect a monitor to, either. And while X11 could still be of some use, it’s simply not needed at all.

Upgrade to OpenBSD 6.5 complete

Again the upgrade process takes a bit longer, but that’s only because of the additional sets (as well as the base distribution getting a little bigger with each release). After just a little while everything is done and there’s one more reboot to make.

OpenBSD 6.5 booted up and ready

All done! I now have a fine OpenBSD 6.5 system up and running on my old SPARC64 box. And even better: Everything has been cryptographically verified to be the data that I want, and no bad person has tampered with it. Sure, the system has not been cleaned up yet – and it’s just 6.5-RELEASE with no errata fixes applied. Still I’d say: We’re off to a good start! Aren’t we?

What’s next?

In the next post I intend to explore the system a little and find out where there are differences from a common amd64 installation of OpenBSD.

A SPARC in the night – SunFire v100 exploration

While we see a total dominance of x86_64 CPUs today, there are at least some alternatives like ARM and in the long run hopefully RISC-V. But there are other interesting architectures as well – one of them is SPARC (the Scalable Processor ARChitecture).

This article is purely historic; I’m not reviewing new hardware here. It’s more of a “20 years ago” thing (the v100 is almost that old) written for people interested in the old Sun platform. The intended audience is people who are new to the Sun world – who are either too young, like me (while I had a strong interest in computers back in the day, I hadn’t even finished school yet, and heck… I was still using Windows!), or never had the chance to work with that kind of hardware in their professional career. Readers who know machines like that quite well and don’t feel like reading this article for nostalgic reasons might just want to skip it.

The SPARC platform

SPARC is a Reduced Instruction Set Computing (RISC) Instruction Set Architecture (ISA) developed by Sun Microsystems and Fujitsu in 1986. Up to the Sun-3 series of computers, Sun had used the m68k processors but with Sun-4 started to use 32-bit SPARC processors instead. The first implementation is known as SPARCv7. In 1992 Sun introduced machines with v8, also known as SuperSPARC and in 1995 the first processors of SPARCv9 became available. Version 9, known as UltraSPARC, is a 64-bit architecture that is still in use today.

SunFire v100: Top and front view

SPARC is a fully open ISA, taken care of by SPARC International. Architecture licenses are available for free (only an administration fee of $99 has to be paid) and thus any interested corporation could start designing, manufacturing and marketing components conforming to the SPARC architecture. And Sun really did mean it with OpenSPARC: They released the Verilog code for their T1 and T2 processors under the GPLv2, making them the first ever 64-bit processors to be open-sourced. And not only that – they also released a lot of tools along with it, like a verification suite, a simulator, hypervisor code and such!

After Sun was acquired by Oracle in 2010, the future of the platform became unclear. Initially, Oracle continued development of SPARC processors, but in 2017 completely terminated any further efforts and laid off employees from the SPARC team.

Fujitsu has made official statements that they are continuing to develop SPARC-based servers, even speaking of a “100 percent commitment”. In the beginning of this year, they even wrote about a resurgence of SPARC/Solaris on the company’s blog, and since they are the last ones providing SPARC servers (which are still highly valued by some customers), chances are that they will continue improving SPARC. According to their roadmap, a new generation is even due for 2020.

So while SPARC is not getting a lot of attention these days, it’s not a dead platform either. But will it survive in the long run? Time will tell.

SunFire v100

I’m working for a company that offers various hosting services. We run our own data center where we also provide colocation for customers who want that. Years ago a customer ran a root server on a (now old) SunFire v100 machine. I don’t remember when it was decommissioned and removed from the rack, but that must have been quite a while ago.

SunFire v100: Back view

That customer was meant to come over to collect the old hardware, so we put the machine in the storage room. For whatever reason, he never came to get it. Since it had been sitting there for years now, I decided to mail the customer and ask if he still wanted the machine. He didn’t and would in fact prefer to have us dispose of it. So I asked if he’d be OK with us shredding the hard drives and me taking the actual machine home. He didn’t have any objections, and thus I got another interesting machine to play with.

The SunFire v100 is a 1U server that was introduced in 2001 and went EOL in 2006. According to the official documentation, the machine came with 64 bit Solaris 8 pre-installed. It was available with an UltraSPARC IIe or IIi processor and had a 40 GB, 7200 RPM IDE HDD built-in. My v100 has 1GB of RAM and a 550 MHz UltraSPARC IIe. I also put a 60 GB IBM HDD into it.

It has a single PSU, two Ethernet ports as well as two USB ports. It also features two serial ports – and these are a little special. Not only are they RJ-45, but they have two different use cases. One is for the LOM (we’ll come to that a little later), the other one is a regular serial port that can be used e.g. to upload data uninterrupted (i.e. not going to be processed by the LOM). The serial connection uses 9600 baud, no parity, one stop bit and full duplex mode.

RJ-45 to DB9 cable and DB9 to USB cable

The other interesting thing is the system configuration card. It stores the host ID and MAC address of the server as well as the NVRAM settings. What is NVRAM? It’s an acronym for Non-Volatile Random-Access Memory, a means of storing information that must not be lost when the power goes off like the contents of regular RAM would be. If you’re thinking “CMOS” in PC terms, you’re right – except that Sun seems to have used proper NVRAM and not an in fact volatile store made “non-volatile” by keeping the data alive with the help of a battery. The data is stored on a dedicated chip, or in this case on a card. The advantage of the latter is that it can easily be transferred to another system, taking all the important configuration with it! Pretty neat.

Inside the v100

When I opened up the box, I was actually astonished by how much space there was inside. I know some old 1U x86 servers from around that time (or probably a little later) that really are a pain to work with. Fitting two drives into them? It’s sure possible, but certainly not fun at all. At least I hated doing anything with them. And those at least used SATA drives – I haven’t seen any IDE machines in our data center, not even with the oldest replacement stuff (it was all thrown out way before I got my job). But this old Sun machine? I must say that I immediately liked it.

SunFire v100: Inside view

Taking out the HDD and replacing it with another drive was a real joy compared to what I had feared that I’d be in for. The drive bays are fixed using a metal clamp that snaps into a small plastic part (the lavender ones in the picture). I’ve removed the empty bay and leaned it against the case so that it’s easier to see what they look like. It belongs where the ribbon cable lies – rotated 90 degrees of course.

Old x86 server for comparison – getting two drives in there is very unpleasant to do…

All the other parts are easily accessible as well: The PSU in the upper left corner of the picture, the CDROM drive in the lower right, as well as the RAM modules in the lower left. It’s all nicely laid out and well assembled. Hats off to Sun, they really knew what they were doing!

Lights out!

I briefly mentioned the LOM before. It’s short for Lights-Out Management. You might want to think IPMI here. While this LOM is specific to Sun, its basic idea is the same as that of the widespread x86 management system: It allows you to do things to the machine even when it’s powered off. You can turn it on, for example. Or change values stored in the NVRAM.

LOM starting up

How do we access it? Well, the machine has an RJ-45 socket for serial connections, appropriately labeled “LOM”. The server came with two cables to use with it: one RJ-45 to DB-25, used with e.g. a Sun workstation, and one RJ-45 to DB-9 (“serial port” a.k.a. “COM port”). Then you can use any of the various tools usually used for serial connections, like cu, tip or even screen.

Just plug your cable into, say, your laptop and the other end into the A/LOM port; you can then access the serial console. If you plug in the power cable of the SunFire machine now, you will see the LOM starting up. Notice that the actual server is still off. It’s in standby mode now, but the LOM is independent of that.
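
With a USB serial adapter on e.g. an OpenBSD laptop, that connection is a single command (the device node depends on your adapter):

# cu -l /dev/ttyU0 -s 9600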

LOM help text

By default, the LOM port operates in mixed mode, allowing to access both the LOM and the serial console. These two things can be separated if desired; then the A port is dedicated to the LOM only and the console can be accessed via the B port.

In case you have no idea how to work with the LOM, there’s a help command available to at least give you an idea what commands are supported. Most of these commands have names that make it pretty easy to guess what they do. Let’s try out some!

LOM monitoring overview (powered off)

Viewing the environment gives some important information about the system. Here it reveals that ALARM 3 is set. Alarm 1, 2 and 3 are software flags that don’t do anything by themselves. They can be set and used by software installed on the Solaris operating system that came with the machine.

I really have no idea why the alarm is set. It was that way when I got the server. Even though it’s harmless, let’s just clear it.

Disabling alarm, showing users and booting to the ok prompt

The LOM is pretty advanced in even supporting users and privileges. Up to four LOM users can be created, each with an individual password. There are four privileges that these can have: A for general LOM administration like setting variables, U for managing LOM users, C to allow console access as well as R for power-related commands (e.g. resetting the machine). When no users are configured, the LOM prompt is not protected and has full privileges.

OpenBoot prompt

It is also possible to set the boot mode in the LOM. By doing this, the boot process can e.g. be interrupted at the OpenBoot prompt which (for obvious reasons) is also called the ok prompt. In case you wonder why the command is “boot forth” – this is because of the programming language Forth which the loader is written in (and can be programmed in).

ok prompt help

In the ok prompt you can also get help if you are lost. As you can see, it is also somewhat complex and you can get more help on the respective areas that interest you.

Resetting defaults and probing devices

OpenBoot has various variables to control the boot sequence. Since I got a used machine, it’s probably a good idea to reset everything to the defaults.

From the ok prompt it’s also possible to probe for devices built into the server. In this case, an HDD and a CDROM drive were found which is correct.

Setting NVRAM variables, escaping to LOM, returning to the ok prompt and resetting the machine

The ok prompt allows for setting variables, too, of course. Here I create an alias for the CDROM drive to get rid of working with the long and complex device path. Don’t ask me about the details of the latter however. I found this alias on the net and it worked. I don’t know enough about Solaris’ device naming to explain it.

Next I set the boot order to CDROM first and then HDD. Just to show it off here, I switch back to the LOM – using #. (hash sign and dot character). That is the default LOM escape sequence, however it can be reconfigured if desired. In the LOM I use the date command to display how long the LOM has been running and then switch back to the ok prompt using break.
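
For reference, the two ok prompt commands used here were:

ok> nvalias cdrom /pci@1f,0/ide@d/cdrom@3,0:f
ok> setenv boot-device cdrom disk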

LOM monitoring overview while the machine is running

Finally I reset the machine, so that the normal startup process is initiated and an attempt at booting from the CDROM is made. I threw in a FreeBSD CD and escaped to the FreeBSD bootloader (which was also written in Forth until it was replaced with a Lua-based one recently).

Showing the monitoring overview while the machine is actually running is much more interesting of course. Here we can see that all the devices still work fine which is great.

LOM log and date, returning to console and powering off

Finally I wanted to show the LOM log and returning to the console. The latter shows the OK prompt now. Mind the case here! It’s OK and not ok. Why? Because this is not the OpenBoot prompt from the SunFire but the prompt from the FreeBSD loader which is the second-stage loader in my case!

That’s it for exploring this old machine’s capabilities and special features. I just go back to the LOM again and power down the server.

Conclusion

The SunFire v100 is a very old machine now and probably not that useful anymore (can you say: IDE drive?). Still it was an interesting adventure for me to figure out what the old Sun platform would have been like.

While I’m not entirely sure if this is useful knowledge (SPARC servers in the wild are more exotic than ever – and who knows what the platform has evolved into in almost 20 years!), I enjoy digging into Unix history. And Sun’s SPARC servers are most definitely an important piece of the big picture!

What’s next?

Reviewing this old box without installing something on there would feel very incomplete. For that reason I plan to do another article about installing a BSD and something Solaris-like on it.

Using FreeBSD with Ports (2/2): Tool-assisted updating

In the previous post I explained why sometimes building your software from ports may make sense on FreeBSD. I also introduced the reader to the old-fashioned way of using tools to make working with ports a bit more convenient.

In this follow-up post we’re going to take a closer look at portmaster and see how it makes updating from ports in particular much, much easier. For people coming here without having read the previous article: What I describe here is not what every FreeBSD admin should consider good practice today (any more)! It can still be useful in special cases, but my main intention is to discuss this as a foundation for what you actually should do today.

Building a desktop

Last time we stopped after installing the xorg-minimal meta-port. Let’s say we want a very simple desktop installed on this machine. I chose the most frugal *nix desktop, EDE (the Equinox Desktop Environment, looking kind of like Win95), because that’s drawing in things that I need for demonstrating a few interesting things here – and not that much more.

Unfortunately in the ports tree that we’re using, exactly that port is broken (the newer compiler in FreeBSD 11.2 is more picky than the older ones and not quite happy with EDE code). So to go on with it, we have to fix it first. I’ve uploaded an additional patch file from a later version of the port and also prepared a patch for the port’s Makefile. If you want to follow along, you can just copy the three lines below to your terminal:

# fetch http://www.elderlinux.org/files/patch-evoke_Xsm.cpp -o /usr/ports/x11-wm/ede/files/patch-evoke_Xsm.cpp
# fetch http://www.elderlinux.org/files/patch_ede_port -o /usr/ports/x11-wm/ede/patch_ede_port
# patch -d /usr/ports/x11-wm/ede -i patch_ede_port

Using Portmaster to build and install EDE

Thanks to build-time dependencies and default options in FreeBSD it’s still another 110 ports to build, but that’s fine. We could remove some unneeded options and cut it down quite a bit. Just to give you an idea: By configuring only one package (doxygen) to not pull in all the dependencies that it usually does, it would be just 55 (!) ports.

But let’s say we’re lazy. Do we have to face all of those configure dialogs (72 in case you are curious)? No, we don’t. That’s why portmaster has the -G flag, which skips the config make target and just uses the standard port options:

# portmaster -DG x11-wm/ede

EDE was successfully installed

Using this option can be a huge time-saver if you’re building something where you know that you don’t need to change the options for the application and its dependencies.

System update

Now we have a simple test system with 265 installed but outdated packages. Let’s update it! Remember that unlike e.g. Linux, FreeBSD keeps third-party software installed from packages or ports separate from the actual operating system. We’ll update the latter first:

# freebsd-update upgrade -r 11.3-RELEASE

With this command, we make the updater download the required files for the upgrade from 11.2-RELEASE to 11.3-RELEASE.

Upgrading FreeBSD to 11.3-RELEASE

When it’s done, and we’ve closed the three lists with the removed, updated and new files, we can install the new kernel:

# freebsd-update install

Once that’s done, it’s time to reboot the system so the new kernel boots up. Then the userland part of the operating system is installed by using the same command again:

# shutdown -r now
# freebsd-update install

Kernel upgrade complete

Preparations

Now in our fresh 11.3 system we should first get rid of the old ports tree to replace it with a newer one, right? Wait, hold that rm command for a second and let me show you something really useful!

If you take a look at the /usr/ports directory, you’ll find a file appropriately named UPDATING. And since that’s right what we were about to do, why not take a look at it?

So what is this? It’s an important file for people updating their systems using ports. Here is where ports maintainers document breaking changes. You are free to ignore it and the advice that it gives and there’s actually a chance that you’ll get away with it and everything will be fine. But sometimes you don’t – and fixing stuff that you screwed up might take you quite a bit longer than at least skimming over UPDATING.

# less /usr/ports/UPDATING

But right now it’s completely sufficient to look at the metadata of the first notification which reads:

20180330:
  AFFECTS: users of lang/perl5*
  AUTHOR: mat@FreeBSD.org

The main takeaway here is the date. The last heads-up notice for our old ports tree was on 2018-03-30.

Checking out a newer ports tree

Now let’s throw it all away and then get the new ports tree. Usually I’d use portsnap for this, but in this case I want a special ports tree (the one that would have come with the OS if I got ports from a fresh 11.3 installation), so I’m checking it out from SVN:

# rm -rf /usr/ports/.* /usr/ports/*
# svnlite co svn://svn.freebsd.org/ports/tags/RELEASE_11_3_0 /usr/ports

If you’re serious about updating a production server that you care about, now is the time to read through UPDATING again. Search for the date string that you previously took a note of and then read the messages all the way up to the beginning of the file. It’s enough to read the AFFECTS lines until you hit one message that describes a port which you are using. You can ignore all the rest but should really read those heads-up messages that affect your system.

What software can be updated?

BTW… You know which packages you have installed, don’t you? A huge update like what we’re facing here takes some planning up-front if you want to do it in a professional manner. In general you should update much more often, of course! This makes things much, much easier.

Updating from ports

Ok, we’re all set. But which software can be updated? You can ask pkg(8) to compare installed packages to the respective distinfo from the corresponding port:

# pkg version -l "<"

If you pipe that into wc -l you will see that 165 of the 265 installed packages can (and probably should) be updated.
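
That is:

# pkg version -l "<" | wc -l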

Updating software from ports

We’ll start with something really simple. Let’s say, we just want to update pkgconf for now. How do we do that with portmaster? Easy enough: Just like we would have portmaster install it in the first place. If something is already installed, the tool will figure out if an update is available and then offer to update it.

# portmaster -G devel/pkgconf

And what will happen if the port is already up to date? Well, then portmaster will simply re-build and re-install it.

Partial update finished

While partial updates are possible, it’s a much better idea to keep the whole system updated (if at all possible). To do that, you can use the -a flag for portmaster to update all installed software.

Another nice flag is -i, which is for interactive mode. In this mode, portmaster will ask for every port if it should be updated or not. If you’re leaving out critical ports (dependencies) this can lead to an impossible update plan and portmaster will not start updating. But it can be useful for cherry-picking.
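
As a quick sketch, cherry-picking interactively across all installed ports could look like this (be prepared to answer a lot of questions):

# portmaster -aiG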

Interactive update mode

Now let’s attempt to upgrade all the ports, shall we?

# portmaster -aG

As always, portmaster will show you its plan and ask for your confirmation. Two things in that plan probably deserve a word or two. Usually the plan is to update an application, but sometimes portmaster wants to re-install or even downgrade one (see picture)! What’s happening here?

Re-installs mostly happen when a dependency changed and portmaster figured out that it might be a good idea to rebuild the port against the newer version of the dependency, even though there is no newer version available for the actual application.

Upgrading, “downgrading”, re-installing

Downgrades are a different thing. They can happen when you installed something from a newer ports tree and then go back to an older one (something you usually shouldn’t do). But in this case it’s actually a false claim. Portmaster will not downgrade the package – it was merely confused by the fact that the versioning scheme changed (because of the 0 in 2018.4 it thinks that this version is older than the previous 2.1.3…).

Moved ports

If you’re paying close attention to all the information that portmaster gives you, you’ll have seen lines like the following one:

===>>> The x11/bigreqsproto port moved to x11/xorgproto

There’s another interesting file in the ports tree called MOVED. It keeps track of moved or removed ports. Sometimes ports are renamed or moved to another category if the maintainer decides it fits better. Portmaster for example started as sysutils/portmaster and was later moved when the ports-mgmt category was introduced. However you won’t find this information in the MOVED file – because it happened before the time that the current MOVED keeps records for (i.e. early 2007 in this case).

The example above is due to the fact that last year the upstream project (Xorg) decided to combine the protocol headers into one distribution package. Before that there were more than 20 separate packages for them (and before that, once upon a time, all of Xorg had been one giant monolithic release – but I digress…)

Problem with merged ports

The good news here is that portmaster is smart enough to parse the MOVED file and figure out how to deal with this kind of change in the ports tree! The bad news is that this does not work for more complicated things like the merges that we just talked about…

So what now? Good thing you read the relevant UPDATING notification, eh?

20180731:
  AFFECTS: users of x11/xorg and all ports with USE_XORG=*proto
  AUTHOR: zeising@FreeBSD.org

Bulk-deleting obsolete ports and trying again

So let’s first get rid of the old *proto packages with the command that developer Niclas Zeising proposes and then try again:

# pkg version -l \? | cut -f 1 -w | grep -v compat | xargs pkg delete -fy
# portmaster -aG

Required options

Alright, we have one more problem to overcome. There are ports that will fail to build if we run portmaster with the -G flag. Why? Because they have mandatory port options where you need to choose something like a backend or a certain mechanism.

The “mandatory options” case

One such case is freetype2. Since this one fails with -G, build it separately and do not skip the config dialog this time:

# portmaster print/freetype2

Once that’s done, we can continue with updating all the remaining ports. After quite a while (because LLVM is a beast) all should be done!

Updating complete!

Default version changes

Did you read the following notice in UPDATING?

20181213:
  AFFECTS: users of lang/perl5*
  AUTHOR: mat@FreeBSD.org

For the big update run we ignored this. And in fact, portmaster did update Perl, but only to the latest version in the 5.26 branch of the language:

# pkg info -x perl
perl5-5.26.3

Why? Well, because that was the version of Perl that was already installed. Actually this is OK if you’re not using Perl yourself and can thus live without the latest features. However if you want to (or have to) upgrade to a later Perl major version, we have a little more work to do.

First edit /etc/make.conf and put the following line in there:

DEFAULT_VERSIONS+=perl5=5.28

This is a hint to the ports framework that whenever you’re building a Perl port, you want to build it against this version. If you don’t set it, you will receive a warning when building the other Perl: in that case you’d be installing an additional Perl version while all the ports keep using the primary one – which is more likely than not exactly what you don’t want.

Upgrading Perl

Next we need to build the newer Perl. The special thing here is that we need to tell portmaster of the changed origin so that the new version actually replaces the old one. We can do this by using the -o flag. Mind the syntax here, it’s new origin first and then old origin and not vice versa (took me a while to get used to it…)!

But let’s check the origin real quick, before we go on. The pkg command above showed that the package is called perl5. This outputs what we wanted to know:

# pkg info perl5
perl5-5.26.3
Name           : perl5
Version        : 5.26.3
Installed on   : Tue Sep 10 21:15:36 2019 CEST
Origin         : lang/perl5.26
[...]

There we have it. Now portmaster can begin doing its thing:

# portmaster -oG lang/perl5.28 lang/perl5.26

Rebuilding ports that depend on Perl

Ok, the default Perl version has been updated… But this broke all the Perl ports! So it’s time to fix them by rebuilding against the newer Perl 5.28. Luckily the UPDATING notice points us to a simple way to do this:

# portmaster -f `pkg shlib -qR libperl.so.5.26`

And that’s it! Congratulations on updating an old test system via ports.

At last: All done!

What if something goes wrong?

You know your system and applications, are proficient with your shell and you’ve read UPDATING. What could possibly go wrong? Well, we’re dealing with computers here. Something really strange can go wrong anytime. It’s not very likely, but sometimes things happen.

Portmaster can help you if you ask for it before attempting upgrades. Before deinstalling an old package, it creates a backup. However after installing the new version it throws it away. But you can make it keep the backup by supplying the -b flag. Sometimes the old package can come in handy if something goes completely sideways. If you need backup packages, have a look in /usr/ports/packages/portmaster-backup. You can simply pkg add those if you need the old version back (of course you need to be sure that you didn’t update the package’s dependencies – or you’d need to downgrade them again, too!).
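
A minimal sketch of how that might look – the package file name is just an example, the real one depends on whatever version you had installed before:

# portmaster -bG devel/pkgconf
# pkg add /usr/ports/packages/portmaster-backup/pkgconf-1.3.27.txz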

If you want to be extra cautious when updating that very special old box (that nobody dared to touch for nearly a decade because the boss threatened to call down terrible curses (not the library!) upon the one who breaks it), portmaster will also support you. Use the -w flag and have it preserve old shared libs before deinstalling an old package. I wouldn’t normally do it (and think my example made it clear that it’s really special). In fact I’ve never used it. But it might be good to know about it should you ever need it.

That said, on the more complicated boxes I usually create a temporary directory and issue a pkg create -a, completely backing up all the packages before I begin the update process. Usually I can throw away everything a while later, but having the backups saved me some pain a couple of times already.
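
If you want to do the same, it boils down to something like this (the directory is just an example; pkg create -a writes the packages into the current working directory):

# mkdir -p /var/tmp/pkg-backup
# cd /var/tmp/pkg-backup && pkg create -a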

In the end it boils down to: Letting your colleagues call you a coward or being the tough guy but maybe ruining your evening / weekend. Your choice.

And if you need to know even more about the tool that we’ve been using over and over now, just man portmaster.

What’s next?

I haven’t decided on the next topic that I’m going to write about. However I’ve planned for two more articles that will cover building from ports the modern way(tm) – and I hope that it will not take me another two years before I come to it…

Using FreeBSD with Ports (1/2): Classic way with tools

Installing software on Unix-like systems originally meant compiling everything from source and then copying the resulting program to the desired location. This is much more complicated than people unfamiliar with the subject may think. I’ve written an article about the history of *nix package management for anyone interested to get an idea of what it once was like and how we got what we have today.

FreeBSD pioneered the Ports framework which quickly spread across the other BSD operating systems which adopted it to their specific needs. Today we have FreeBSD Ports, OpenBSD Ports and NetBSD’s portable Pkgsrc. DragonflyBSD currently still uses DPorts (which is meant to be replaced by Ravenports in the future) and there are a few others like e.g. mports for MidnightBSD. Linux distributions have developed a whole lot of packaging systems, some closer to ports (like Gentoo’s Portage where even the name gives you an idea where it comes from), most less close. Ports however are regarded the classical *BSD way of bringing software to your system.

While the ports frameworks have diverged quite a bit over time, they all share a basic purpose: they are used to build binary packages that can then be installed, deinstalled and so on using a package manager. If you’re new to package management with FreeBSD’s pkg(8), I’ve written two introduction articles about pkg basics and pkg common usage that might be helpful for you.

Why using Ports?

OpenBSD’s ports, for example, are maintained only to build packages from, and regular users are advised to stay away from them and just install pre-built packages. On FreeBSD this is not the case. Using Ports to install software on your machines is nothing uncommon. You will read or hear pretty often that in general people recommend against using ports. They point to the fact that it’s more convenient to install packages, that it’s so much quicker to do so and that especially updating is much easier. These are all valid points and there are more. You’re now waiting for me to claim that they are wrong after all, aren’t you? No, they are right. Use packages whenever possible.

But what if it’s not possible to use the packages that are provided by the FreeBSD project? Well, then it’s best to… still use packages! That statement doesn’t seem to make sense? Trust me, it does. We’ll come back to it. But to give you more of an idea of the advantages and disadvantages of managing a FreeBSD system with ports, let’s just go ahead and do it anyway, shall we?

Just don’t do this to a production system but get a freshly installed system to play with (probably in a VM if you don’t have the right hardware lying around). It still is very much possible to manage your systems with ports. In fact at work I have several dozen legacy systems, some of which began their career with something like FreeBSD 5.x and have been updated ever since. Of course ports were used to install software on them. Unfortunately machines that old have a tendency to grow very special quirks that make it not feasible to really convert them to something easier to maintain. Anyways… Be glad if you don’t have to mess with it. But it’s still a good thing to know how you’d do it.

So – why would you use Ports rather than packages? In short: When you need something that packages cannot offer you. Probably you need a feature compiled in that is not activated by default and thus not available in FreeBSD’s packages. Perhaps you really need the latest version in ports and cannot wait for the next package run to complete (which can take a couple of days). Maybe you require software licensed in a way that binary redistribution is prohibited. Or you’ve got that 9-STABLE box over there which you know quite well you shouldn’t run anymore – but you rely on special hardware that offers drivers only for FreeBSD 9.x… Yeah, there are always things like that and sometimes they are hard to argue away. And when it comes to packages: since FreeBSD 9 has been EOL for quite some time, there are no packages being built for it anymore. You’ll come up with any number of other reasons why you can’t use packages, I have no doubts about that.

Using Ports

Alright, so let’s get to it. You should have a basic understanding of the ports system already. If you haven’t, then I’d like to point you to another two articles written as an introduction to ports: Ports basics and building from ports.

If you just read the older articles, let me point out two things that have happened since then. In FreeBSD 11.0 a handy new option was added to portsnap. If you do

# portsnap auto

it will automatically figure out if it needs to fetch and extract the ports tree (if it’s not there) or if fetching the latest changesets and updating is the right thing to do.
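
In other words, it replaces the two classic invocations that you would otherwise have to pick between yourself – fetch and extract for a machine without a ports tree, fetch and update for one that already has it:

# portsnap fetch extract
# portsnap fetch update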

The other thing is a pretty big change that happened to the ports tree in December 2017 not too long after I wrote the articles. The ports framework gained support for flavors and flavored ports. This is used heavily for Python ports which can often build for Python 2.7, 3.5, 3.6, … Now there’s generally only one Port for both Python2 and Python3 which can build the py27-foo, py36-foo, py37-foo and so on packages. I originally intended to cover tool-assisted ports management shortly after the last article on Ports, but after that major change it wasn’t clear if the old tools would be updated to cope with flavors at all. They were eventually, but since then I never found the time to revisit this topic.
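
Just to illustrate what working with flavors looks like before we move on – devel/py-pip is merely an arbitrary example of a flavored port here:

# make -C /usr/ports/devel/py-pip FLAVOR=py36 install clean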

Scenario

I’ve set up a fresh FreeBSD 11.2 system on my test machine and selected to install the ports that came with this old version. Yes, I deliberately chose that version, because in a second step we’re going to update the system to FreeBSD 11.3.

Using two releases for this has the advantage that they are two fixed points in time that are easy to follow along if you want to. The ports tree changes all the time thanks to the heroic efforts that the porters put into it. Therefore it wouldn’t make sense to just use the latest version available because I cannot know what will happen in the future when you, the reader, might want to try it out. Also I wanted to make sure that some specific changes happened in between the two versions of the ports tree so that I can show you something important to watch out for.

Mastering ports – with Portmaster

There are two utilities that are commonly used to manage ports: portupgrade and portmaster. They pretty much do the same thing and you can choose either. I’m going to use portmaster here, which I prefer for the simple reason of being more light-weight (portupgrade brings in Ruby, which I try to avoid if I don’t need it for something else). But there’s nothing wrong with portupgrade otherwise.

Building portmaster

You can of course install portmaster as a package – or you can build it from ports. Since this post is about working with ports, we’ll do the latter. Remember how to locate ports if you don’t know their origin? Also, you can have make operate on a port without changing your shell’s working directory:

# whereis portmaster
portmaster: /usr/ports/ports-mgmt/portmaster
# make -C /usr/ports/ports-mgmt/portmaster install clean

Portmaster build options

As soon as pkg and dialog4ports are in place, the build options menu is displayed. Portmaster can be built with completions for bash and zsh; both are installed by default. Once portmaster is installed and available, we can use it to build ports instead of doing so manually.

Building / installing X.org with portmaster

Let’s say we want to install a minimal X11 environment on this machine. How can we do it? Actually as simple as telling portmaster the origin of the port to install:

# portmaster x11/xorg-minimal

Port options…

After running that command, you’ll find yourself in a familiar situation: Lots and lots of configuration menus for port options will be presented, one after another. Portmaster will recursively let you configure the port and all of its dependencies. In our case you’ll have 42 ports to configure before you can go on.
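
If you’d rather get all of those dialogs out of the way in one go before anything is built, the ports framework itself can do that for you (this works independently of portmaster):

# make -C /usr/ports/x11/xorg-minimal config-recursive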

Portmaster in action

In the background portmaster is already pretty busy. It’s gathering information about dependencies and dependencies of dependencies. It starts fetching all the missing distfiles so that the compilation can start right away when it’s the particular port’s turn. All the nice little things that make building from ports a bit faster and a little less of a hassle.

Portmaster: Summary for building xorg-minimal (1/2)

Once it has all the required information gathered and sorted out, portmaster will report to you what it is about to do. In our case it wants to build and install 152 ports to fulfill the task we gave it.

Portmaster: Summary for building xorg-minimal (2/2)

Of course it’s a good idea to take a look at the list before you acknowledge portmaster’s plan.

Portmaster: distfile deletion?

Portmaster will ask whether to keep the distfiles after building. This is ok if you’re building something small, but when you give it a bigger task, you probably don’t want to sit and wait to decide on distfiles before the next port gets built… So let’s cancel the job right there by hitting Ctrl-C.

Portmaster: Ctrl-C report (1/2)

When you cancel portmaster, it will try to exit cleanly and will also give a report. From that report you can see what has already been done and what still needs to be done (in form of a command that would basically resume building the leftovers).

Portmaster: Ctrl-C report (2/2)

It’s also good to know that there’s the /tmp/portmasterfail.txt file that contains the command should you need it later (e.g. after you closed your terminal). Let’s start the building process again, but this time specify the -D argument. This tells portmaster to keep the distfiles. You could also specify -d to delete them if you need the space. Telling portmaster about your decision up front keeps it from interrupting the build all the time.

# portmaster -D x11/xorg-minimal

Portmaster: Build error

All right, there we have an error! Things like this happen when working with older ports trees. This one is a somewhat special case. Texinfo depends on one file being fetched that changes over time but doesn’t have any version info or such. So portmaster would fetch the current file, but that has silently changed since the distfile information was put into our old ports tree. Fortunately it’s nothing too important and we can simply “fix” this by overwriting the file’s checksum with the one for the newer file:

# make makesum -C /usr/ports/print/texinfo
# portmaster -D x11/xorg-minimal

Manually fetching missing distfile

Next problem: This old version of mesa is no longer available in the places that it used to be. This can be fixed rather easily if you can find the distfile somewhere else on the net! Just put it into its place manually. Usually this is /usr/ports/distfiles, but there are ports that use subdirectories like /usr/ports/distfiles/bash or even deeper structures like e.g. /usr/ports/distfiles/xorg/xserver!

# fetch https://mesa.freedesktop.org/archive/older-versions/17.x/mesa-17.3.9.tar.xz -o /usr/ports/distfiles/mesa-17.3.9.tar.xz
# portmaster -D x11/xorg-minimal

Portmaster: Success!

Eventually the process should end successfully. Portmaster gives a final report – and our requested program has been installed!

What’s next?

Now you know how to build ports using portmaster. There’s more to that tool, though. But we’ll come to that in the next post which will also show how to use it to update ports.

Summer Sun and microsystems

July is coming to an end, so there should be a new article on the Eerie Linux blog! Here it is. It’s not about a single technical topic, though. More of a “meta” article: I’m writing about what I’d like to be writing about soon! No, I haven’t been on vacation or anything. Sometimes things just don’t work out as planned. More on that in a second.

Illumos!

Have you noticed that I changed the header graphic? In 2016 I added beastie and puffy. And now there’s also the Illumos phoenix. Being a Unix lover, I’ve long felt that I’m really missing something in knowing next to nothing about the former Sun platform. Doing just a little bit of experimentation, I’ve become somewhat fascinated with the Solaris / Illumos world.

It’s Unix, so I know my way around for the most basic things. But it’s quite different from *BSD and Linux that I work with on a daily basis – and from what I can say so far, definitely not for the worse. The system left an impression on me of being well engineered and offering interesting or even beautiful solutions for common problems! I can’t remember the last time when something in Linux-space struck me as being beautifully crafted – which is no wonder given the fact that all the tools are developed separately and are only bundled together to form a complete operating system by the distributions. While it usually works, the aspect of a consistent OS with a closely coordinated userland was one of the strong points that drew me towards *BSD. Obviously the same is true for the various Illumos distributions.

Since I’m really just starting my journey, I don’t have that much to say, yet. I plan on doing an interview with an OmniOS user who has switched over from FreeBSD not too long ago and hope that I’ll be able to arrange something. Other than that I’ll have to do some reading and will probably visit the official Solaris, too, even though I’m all for Open Source. When I feel capable of writing something remotely useful, I plan on presenting the various Illumos distributions and their strong points. Might take me some time, though, but from now on several Solaris/Illumos topics are on my todo list.

Illumos people reading this: If you’d like me to write about your OS, please help me by making some suggestions of topics for beginners! That would be very much appreciated.

BSD router

The article series about the custom-built BSD router is still the most popular one on my blog. Given that so many people are interested in this topic, I’ve wanted to write a follow-up for quite some time now. Since the newest version of OPNsense (19.7) was released this month, I decided to finally pick that topic. It would have been nice to do a fresh reinstall and cover what has changed.

Also there have been many new firmware releases for the APU2 since I covered it. I was rather excited when I learned (in January or so) that they had finally enabled the ECC feature for the RAM! Other stuff happened, too, so I’d definitely have something to play with and to write about.

There’s only one problem… As of exactly this month, I don’t have a working APU2 anymore. If you have children, keep your stuff out of their reach! It doesn’t make that great a toy, anyway.

So this topic has to be postponed due to lack of hardware. But I will get a new one sometime and then write about it. I already have some ideas on what to do there for a second mini series.

Ravenports

Another thing that I’d have liked to write about is my favorite packaging system, Ravenports. There’s a pretty big thing coming soon (I hope), but it’s not there, yet. So no new issue of the “raven report” this month!

Other than that big change, I still have the other issue on my todo list: Re-bootstrap on FreeBSD i386. I’ve done that once (before the toolchain update to gcc 8) and will do it again as I find the time. Of course right now it makes sense to wait for the anticipated new component to land first!

I’ve also been thinking about writing a bootstrap script. Probably I’ll give it a shot – and if it can do i386, I’ll have that covered, too.

HardenedBSD

This has been on my list for over a year now, too. I took a quick look at that project and like what they do very much. It’s basically FreeBSD with lots and lots of security enhancements, built and maintained by a small team of people dedicated to making FreeBSD more secure. Why a separate project, then? For the simple reason that FreeBSD is a huge project where several parties have various interests in the OS and where getting very invasive changes in is sometimes not that easy. So HardenedBSD is developed in parallel to give the developers all the freedom to make changes as they please.

Apart from all the security-related stuff there are a few things where the system differs from upstream FreeBSD, too. I’ll need to explore this a little more and start actually doing stuff with HardenedBSD. Then I’ll definitely write about it here on the blog.

Ports and packages

In 2017 I wrote about package management and Ports on FreeBSD in a mini series of articles. It ended with building software from ports. Actually it wasn’t meant to end right there, as I had at least two more articles in the pipe. I’ll be concentrating on this one and hope to have a blog post out in early August.

Other stuff

My interest in the ARM platform has not vanished and I want to do more with it (and then probably blog about it, too). Also I want to revisit FreeBSD jails and still have to get my series on email done… And I’m not done with the “power user” series, either.

So there’s more than enough topics and far too little time. Let’s see what I can get done this year and what will have to wait!

Testing OmniOSce on real hardware

[edit 07-02]Added images to make the article a little prettier[/edit]

Last year I decided to take a first look at OmniOSce. It’s a community-driven continuation of the open source operating system that originated at OmniTI. It is also an Illumos distribution and as such a continuation of the former OpenSolaris project by Sun, which was shut down after Oracle acquired the company.

In my first post I installed the OS on a VM and showed what the installation procedure was like. Post two took a look at how the man pages are organized, system services with SMF, as well as user management. And eventually post three was about doing network configuration by hand.

One reader mentioned on Reddit that he’d be more interested in an article about an installation on real hardware. I wanted to write that article, too – and here it is.

Scope of this article

While it certainly could be amusing to accompany somebody without any clue of what he’s doing as he tries to find his way around in a completely unfamiliar operating system, there are no guarantees for that. And actually there’s a pretty big chance of it coming out rather boring. This is why I decided to come up with a goal instead of taking a look at random parts of the operating system.

Beautiful loader with r151030

Last time I wanted to bring up the net, make SSH start and add an unprivileged user to the system so I could connect to the OmniOS box from my FreeBSD workstation. All of that can be done directly from the installer, however, and while I went the hard way before, I’m going to use those options this time. My new goal is to take a look at a UEFI installation, the Solaris way of dealing with privileged actions as well as a little package management and system updates. That should be enough for another article!

OmniOSce follows a simple release schedule with two stable releases per year where every fourth release is an LTS one. When I wrote my previous articles, r151022 was the current LTS release and r151026 the newest stable release (which I installed). In the meantime, another stable release has been released (r151028) and the most recent version, r151030, is the new LTS release. Since I want to do an upgrade, I’m going to install r151028 rather than r151030.

To boot or not to boot (UEFI)

My test machine is configured for UEFI with Legacy Modules enabled. Going to the boot menu I see the OmniOSce CD only under “legacy boot”. I choose it, the loader comes up, boots the system and just a little later I’m in the installer. It detects the hard drive and will let me install to it. The default option is to install using the UEFI scheme, so I accept that. After the installation is complete, I reboot – and the system cannot find any bootable drives…

Ok, let’s try the latest version. Perhaps they did some more work on UEFI? They did: This time the CD is listed in the UEFI boot sources, too and a beautiful loader greets me after I selected it. The text and color looks a bit nicer in the EFI mode console, too. I repeat the installation, reboot… And again, the hard drive is not bootable!

This machine does not support “pure” UEFI mode. I switch to legacy mode and try the older CD image again. Installing to GPT has the same effect as before: The system is not bootable. I do not really want to use MBR anymore, but fortunately the OmniOS installer has two more options to work around the quirks of some EFI implementations. Let’s try the first one, GPT+Active. Once again: The system is not bootable… But then there’s GPT+Slot1 – and that finally did the trick! The system boots from hard disk.

At this point I decided to sacrifice another machine for my tests. It doesn’t even recognize the newer r151030 ISO as a UEFI boot source – neither in mixed mode nor in pure UEFI mode. But things are getting even weirder: I install OmniOS for the UEFI scheme again and the system does recognize the drive and is able to boot it – however only in legacy mode!

UEFI is a topic for itself – and actually a pretty strange one. It has meant headaches for me before, so I wouldn’t overrate OmniOS not working with the UEFI on those particular two machines. My primary laptop will try to PXE-boot if a LAN cable is attached – even though PXE-booting is disabled in the EFI. Another machine is completely unable to even detect (!) hard drives partitioned with GPT when running in legacy mode… To me it looks that most EFI implementations have their very own quirks and troubles. Again: Don’t overrate this. The OmniOS community is rather small and it’s completely impossible for them to make things work on all kinds of crappy hardware.

Chances are that it just works on your machine. While I’d like to test more machines, I don’t have the time for this right now. So let’s move on with GPT and Legacy/BIOS boot.

Installation

I like Kayak, the installer used by OmniOS. It’s simple and efficient – and it does its thing without being over-engineered. Being an alternative OS enthusiast, I’ve seen quite a bunch of installation methods. Some more to my liking, some less. When it comes to this one, I didn’t really have any problems with it and am pretty much satisfied. If I had to give any recommendation, I’d suggest adding a function to generate the “whatis” database (man -w) for the user (and probably making that the default option). I’ve come to expect that the apropos command just works when I try out a new system. And other newcomers might benefit from that, too.

Creating a test user with the installer

When I installed OmniOS last year, I ran into a problem with the text installer. I reported it and it was fixed really, really quickly. Of course I could not resist the temptation to try out the text installer for this newer release. With r151028 it works well. However it doesn’t offer any options over the new dialog-based one (on the contrary), so I’d recommend to use the new one.

Shell selection for the new user

As mentioned above, this time I decided to let the installer do the network setup and create a user for me (which made /home/kraileth my home directory and not /export/home/kraileth!). When creating a user, I’m given the choice to use either ksh93, bash or csh. The latter is just plain old csh, and while I prefer tcsh over bash anytime, this is not such a tempting choice. But the default (ksh) is actually fine for me.

Selecting privileges for the new user

More interesting however is the installer’s ability to enable privileged access for the new user: I can choose to give it the “Primary Administrator” profile and / or to enable sudo (optionally without password).

Several new configuration options in the r151030 installer

Also the installer for r151030 features a few new options like enabling the Extra repository or the Serial Console. Certainly nice to see how this OS evolves!

Profiles

The installer allowed me to give my user the privilege to use sudo. Just like with *BSD and Linux this gives me the ability to run commands as root or even become root using sudo -i. But this is not the only way to handle privileged actions. In fact there is a much better one on Solaris systems! It works by using profiles.

What are profiles? They are one of the tools present in Solaris (and Solaris-derived systems) that allow for really fine-grained access control. While traditional Unix access control is primitive to say the least, the same is not true for Solaris.

Taking a peek at user’s profiles

Let’s see what profiles my user has:

$ profiles
Primary Administrator
Basic Solaris User
All

My guess here is that every user has the profiles “All” and “Basic Solaris User” – while the “Primary Administrator” was added by the installer. The root user has some more (see screenshot).

The profiles of a user are assigned in the file /etc/user_attr and the actual profiles are defined in /etc/security/prof_attr. While all of this is probably not rocket science, it definitely looks complex and pretty powerful. Take a look at the screenshot to get a first impression and do some reading on your own if you’re interested.
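
If you’d rather not dig through those files right away, you can also simply ask the system which commands and privileges your profiles actually grant you:

$ profiles -l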

Some profile definitions

As a newbie I didn’t know much about it, yet. The profiles mention help files however, and so I thought it might be worth the effort to go looking for them. Eventually I located them in /usr/lib/help/profiles. There is an HTML help for people like me who are new to the various profiles.

Help file for the “Primary Administrator” profile

Privileged Actions

Alright! But how do you make use of the cumulative privileges of your profiles? There are two ways: Running a single privileged command (much like with sudo) or executing a shell with elevated privileges. The first is accomplished using pfexec like this:

$ pfexec cat /root/.bashrc

Playing with privileged access

The system provides wrappers for some popular shells. However not all of the associated shells are installed by default! So on a fresh installation you should only count on the system shells.

Wrappers for the various profile shells
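
Just as a sketch (with the SSH service as an assumed example here): you can either run a single privileged command or start one of the profile shells and work from there:

$ pfexec svcadm restart ssh
$ pfsh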

Basic package management

In the Solaris world there are two means for package management, known as SVR4 and IPS. The former is the old one used up to Solaris 10. I did a little reading on this and it looks like they are quite similar to the traditional *BSD pkg_tools. Basically it’s a set of programs like pkginfo, pkgadd, pkgrm and so on (where it’s really not so hard to tell from their names what they are used for).

The newer one uses the pkg(5) client command of the Image Packaging System. While the name of the binary suggests a relation with e.g. FreeBSD’s pkg(8), this is absolutely not the case.

Package management is a topic that deserves its own article (that I consider writing at some point). But basic operation is as simple as this:

$ pfexec pkg install tmux

It’s not too hard to anticipate that after a short while, Tmux should be available on your system (provided the command doesn’t error out).
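
A few more everyday commands, just to give you a feel for it:

$ pkg search tmux
$ pkg info tmux
$ pfexec pkg uninstall tmux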

System updates

Updating the operating system to a new release is a pretty straight-forward process. It’s recommended (but optional) to create a Boot Environment. The first required step is setting the pkg publisher to a new URI (pointing to a repository containing the data for the new release). Then you update the system (preferably to a new Boot Environment that is made the standard one) and reboot. Yes, that’s all there is to it.
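
Roughly sketched it looks like the following – treat the repository URL as a placeholder and check the OmniOSce release notes for the real one:

$ pfexec pkg set-publisher -G '*' -g https://pkg.omniosce.org/r151030/core omnios
$ pfexec pkg update --be-name=r151030
$ pfexec init 6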

System upgrade to version r151030

After the reboot you’ll see the new boot menu and better looking system fonts – even without using UEFI. Obviously r151030 implements a new frame buffer. I’ve also noticed that boot and especially shutdown times have decreased notably. Very nice!

Upgrade finished

Conclusion

If you’ve never worked with a Solaris-like system this article might have provided you with some new insights. However we’ve barely scratched the surface. The profiles are looking like a great means of access control to me, but usually you’d want to use OmniOSce for different reasons. Why? Because it has some really cool features that make it the OS of choice for some people who are doing impressive things.

What are those features? So far we didn’t talk about ZFS and the miracles of this great filesystem/volume manager (I’ve mentioned Boot Environments, but if you don’t know what BEs are you of course also don’t know why you totally want them – but trust me, you do). We didn’t talk about zones (think FreeBSD’s jails, but not quite – or Linux containers, but totally on steroids). Also I didn’t mention the great networking capabilities of the OS, the debugability and things like that.

As you can see, I probably wouldn’t run out of topics to write about, even if I decided to switch gears on my blog entirely to Illumos instead of just touching on them among more BSD and Linux articles. If you don’t want to wait for me to write about it again – why don’t you give it a try yourself?

Rusted ravens: Ravenports march 2019 status update

It’s been a couple of months since I last wrote about Ravenports, the universal *nix application building framework. Since it’s not exactly a slowly moving project, a lot has happened since then.

Platform support

Raven currently supports DragonFly BSD, FreeBSD, Linux and Solaris/Illumos, the latter being only in the form of binary packages (except for when you have access to an installation of Solaris 10u8 – which can be used to build packages, too).

People following the project will notice the lack of macOS/Darwin support mentioned here. This is not a mistake as support for that platform has been put on hold for now. While Raven has successfully been bootstrapped on macOS before, the developers have lost access to any macOS machines and thus cannot continue support for it.

This does not mean that platform is gone forever. It might be resurrected at a later point in time if given access to a Mac again. The adventurous can even try to bootstrap Raven on different platforms now as the process has been documented (with macOS as the example).

I intended to do some work on bootstrapping Raven on FreeBSD/ARM64 – only to find that FreeBSD unfortunately still has a long way before making that platform tier 1. At work I had access to server-class ARM64 hardware, but current versions of FreeBSD have trouble booting up and I could not get the network running at all (if you’re interested in the details see my previous post). I’m still hoping for reactions on the mailing list but until upstream FreeBSD is fixed on ThunderX trying to bootstrap does not make much sense.

Toolchain and package updates

The toolchain used by Ravenports has been updated to GCC 8.3 and Binutils 2.32 on all four supported platforms (access to Mac was lost before the toolchain update).

As usual, Solaris needed a bit of extra treatment but up to date compiler and tools are available for it now, too. Even better: The linker from the LLVM project (lld) is available and usable on Solaris/Illumos now as well. Since it takes several hours (!) to link on Solaris, a mostly static lld executable was added to the sysroot package for that platform. This way this long-building package does not have to be rebuilt as often.

Packages have been rebuilt with this bleeding-edge toolchain (plus the usual fallout has been collected and fixed). So if you are using Raven, you are making use of the latest compiler technology with the best in optimization. Of course a lot of effort went into providing the most current versions of the packaged software, too (at least where that is feasible).

On the desktop side of things I’ve added the awesome window manager to the list of available software. It’s actually my WM of choice, but not too many people are into tiling, so I postponed this one until after making Xfce available. Work on bringing in more Lua-related ports for advanced configuration is ongoing, but the WM is already usable as it is now.

I’ve also done a bit of less visible work, going back to many ports that I created previously and adding missing license info. This work is not completed yet either, but the situation is improving, of course.

Rust!

One of the big drawbacks of Ravenports as stated last time was the lack of the Rust compiler. This effectively meant a showstopper for things like current versions of Firefox, Thunderbird, librsvg, etc. The great news is that this blocker has been mostly removed: Rust is available via Raven for Dragonfly, FreeBSD and Linux! Solaris/Illumos support is pending; I think any helping hand would be greatly appreciated.

Bringing in Rust was a big project on its own. Adding an initial bootstrap package for Dragonfly alone took months (thank you, Mr. Neumann!). The first working version of the port made Rust 1.31 available. It has since been updated to version 1.32 and 1.33 and John has added functionality to the Raven framework to work with Rust’s crates as well as scripts to assist with future updates. Taking all of that into consideration, Rust support in Raven is already pretty good for the short time that we have it.

Eventually even a port for Firefox landed – as of now it’s marked broken, though. The reason is that while it does compile just fine, the program crashes when actually started. The exact cause for this is yet unknown. If anybody with some debugging abilities has a little time on their hands, nailing down what happens would be a task that a lot of people will benefit from for sure!

Updated ravenadm

Ravenadm, the Ravenports administration tool, has seen several updates with new features. Some have brought internal changes or new features necessary for new or updated packages. One example is a project-wide fix for ports that build with Meson: Before the change many programs needed special treatment to make Meson honor the rpath for the resulting binaries. Now Raven can automatically take care of this, saving us a whole bunch of sed commands in the specification file. Another new feature is the “Solaris functions” mechanism which can automatically fix certain functions that required generating patches before. Certainly also very nice to have!

Probably my favorite new feature is that Ravenadm now supports concurrent processes in several cases: While you cannot start a second set of package builds at the same time for obvious reasons, it is now possible to ask Ravenadm in which bucket a certain port lives, sort manifests, and such while building packages! I cannot say how much the previous behavior got in my way while doing porting work… This makes porting much, much more pleasant.

A last improvement that I want to mention here is a rather simple one – however one that has a huge impact. Newer versions of Ravenadm put all license-related texts into the logs! This means you can simply look at the log and see if e.g. the terms got extracted correctly. Before you had to use the ENTERAFTER option to enter an interactive build session and look at the extracted file. This is a huge improvement for porters.

SSL

Another big and most likely unique feature added to Raven recently is SSL autoselection. Raven has had autoselection facilities for Python, Ruby and Perl for about a year now. These allow for multiple versions of the interpreters to be installed in parallel and take care of calling the actual binary with the same parameters, preferring the newest version over older ones (unless configured differently).

Raven supports LibreSSL, OpenSSL as well as LibreSSL-devel and OpenSSL-devel. Before the change, you could select the SSL library to use in the profile and it would be used to link all packages against it. Now we have even more flexibility: You can e.g. build all the packages against LibreSSL by default and just fall back to OpenSSL for the few packages that really require it!

And in fact Raven takes it all one step further: You can have OpenSSL 1.0.2 and OpenSSL 1.1.1 (which introduced breaking changes) installed in parallel and use packages on the same system where some require the new version and some that cannot use it, yet! Pretty nice, huh?

Future work

Of course there are still enough rough edges that require work. Probably the most pressing issue is to get Firefox working so Raven’s users can have access to a convenient and modern browser. There are also quite some programs which need ports created for them. The goal here is to provide the most critical programs to allow Dragonfly to make the switch from Dports to Ravenports for the official packages.

On FreeBSD Filezilla does not currently work: It cannot be built with GCC due to a compiler bug in GCC 7.x and 8.x. Therefore it is a special port that gets built with Clang instead. The problem is that libfilezilla needs to be built with the same toolchain – and that cannot currently be built with Clang using Raven…

Raven on Linux has some packages not available due to additional dependencies on that platform. I began adding some Linux-specific ports but lost motivation to do so pretty fast (there are enough other things to do, after all). Also the package manager is still causing pain, randomly crashing.

Solaris is also missing quite some packages. This is due to additional patches being required for a lot of software to build properly. Ravenports tries to support this platform as well as possible; however, this could surely be improved if anybody using Solaris or an Illumos distribution as his or her OS of choice would start using Raven, give feedback or even contribute.

Get in touch!

Interested in Raven? Get in touch with us! There is an official IRC channel now (#ravenports on Freenode) which is probably the best place to talk to other Raven users or the porters and developers. You can of course also send an email.

If you want to contribute, there is now a new “Customsource” repository on GitHub that you can create pull requests against. Feel free to contribute anything from finished ports that might only need polish to WIP ports that are important for you but you got stuck with.

There are many other ways of helping with the project than doing porting work, though. Open issues if you find problems with packages or have an idea. Also tell us if you read the wiki and found something hard to understand. Or if you could use a tutorial for something – just ping me. Asking doesn’t hurt and chances are that I can write something up.

Got something else that I didn’t talk about here? Tell us anyway. We’re pretty approachable and less elitist than you might think people who work on new package systems would be! 😉

ARM’d and dangerous: FreeBSD on Cavium ThunderX (aarch64)

While I don’t remember for how many years I’ve had an interest in CPU architectures that could be an alternative to AMD64, I know pretty well when I started proposing to test 64-bit ARM at work. It was shortly after the disaster named Spectre / Meltdown that I first dug out server-class ARM hardware and asked whether we should get one such server and run some tests with it.

While the answer wasn’t a clear “no” it also wasn’t exactly “yes”. I tried again a few times over the course of 2018 and each time I presented some more points why I thought it might be a good thing to test this. But still I wasn’t able to get a positive answer. Finally in January 2019 I got a definitive answer – and it was “yes, go ahead”! The fact that Amazon had just presented their Graviton ARM processor may have helped the decision.

Getting my hands on an ARM64 server

Eventually a Gigabyte R120-T32 was ordered and arrived at our data center in early February. It wasn’t perfect timing at all since there are quite a few important projects running at the moment which draw just about all the resources that we have. Still we put together a draft of an evaluation sheet with quite a few things to test. There are a lot of “yes / no” things and some that require performance measurements.

We set aside a few hours to put some drives into the machine, put it into the rack, configure the EFI and get BMC/IPMI working. We quickly found that there was no firmware update or anything available, so we should be good to go. However… The IPMI console (Java…) is not working at all. A few colleagues that have different Linux distros running on their workstations tried to access it – but no luck. It looks like a problem with too new Java versions (*sigh*). I didn’t even try it with my BSD workstation: I could never get Supermicro IPMIs to work – even on AMD64. So for me it’s Linux in Bhyve for that kind of task (if anybody has any idea on how to get IPMI console access working on FreeBSD please let me know!!).

Our common installation procedure for special servers involves a little device that stores CD/DVD images and presents a virtual optical drive to the machine. Since ARM64 machines usually don’t have optical drives anymore, there are no CD images available for that architecture. Fortunately it’s not that complicated to prepare a USB thumb drive instead.

Once that was done I was able to confirm that remote administration using SoL (“Serial over LAN”) worked, so I could start playing around with the new server in my free time.

Hello Beastie!

Being our company’s “BSD guy”, it was pretty obvious that testing FreeBSD on the machine (even if it’s only for comparison reasons with Linux) would be my natural task. FreeBSD on ARM64 is currently a tier 2 platform. However it has been the plan for several years now to eventually make it tier 1 – i.e. a perfectly well supported platform. Even better: The Cavium ThunderX was selected as the reference platform for ARM64 and a partnership established between Cavium and the FreeBSD Foundation! So doing a few tests there should be a piece of cake, right?

I booted the machine with the USB stick attached. Just as expected, there’s no working VGA console support (this is pretty much the default when working with ARM). The SoL connection worked, though and showed the nice beastie menu! “Off to a good start”, I thought. I was wrong.

The loader did its job of loading the kernel and… that’s where everything stopped. I rebooted again and again, trying to set loader tunables to get the system to boot (need I mention that the machine literally takes minutes to execute EFI stuff before it gets to the loader?). No luck whatsoever. The line “Entered …. EfiSelReportStatusCode Value: 3101019” was always the last thing printed to the screen. Even after waiting for minutes the machine didn’t do anything else.

“Maybe that status code shows that something is wrong?” I thought. A quick search on the net revealed a console log for NetBSD booting on the same hardware. It printed the same line but the kernel obviously did its thing. Weird – and quite discouraging. At this point I had no more ideas that I could try and was about to accept that it “just didn’t work”. Then I read a post to the FreeBSD-arm mailing list: Somebody else had the same problem! However it looked like nobody had a solution – there was not a single answer to that mail… But the post included one important piece of information: FreeBSD 11.2 did work!

FreeBSD 11.2

Ok, that would be a lot better than nothing. So I downloaded the 11.2 image, dd’d it to the stick and booted. This time there was no beastie (before 12.0 FreeBSD used a different loader on ARM) – but the kernel booted! I even got to the installer, Yay!

I did a fairly simple install like I often do. The big difference was that I couldn’t configure the network. But let’s leave that for later, shall we? The installation completed and I rebooted. The loader appeared, the kernel booted and… Bam! The system is unable to import the pool! Too bad…

So I booted the stick again, did a second install choosing UFS instead of ZFS. This time FreeBSD would boot successfully into the new system and allow me to log in. Unfortunately no NICs were detected – and a server without network connectivity is rather… boring. What now?

After a bit of research on the net I learned about LIQUIDIO(4), the “Cavium 10Gb/25Gb Ethernet driver”. Sounded great. However the manpage says that it “first appeared in FreeBSD 12.0”. *sigh* Looks like I have to upgrade to 12.0 somehow.

FreeBSD 12?

I downloaded the source for the latest 12.0 from the releng branch, put it on a stick and attached it to the ARM server. That alone – I didn’t even get to import the pool – was enough to make the system panic! Oh great…

A UFS formatted stick then? Yes, that worked. I transferred the source, built the system, installed the kernel and saw that familiar line: “Entered …. EfiSelReportStatusCode Value: 3101019”. Obviously it’s not the images that are broken but actually FreeBSD 12.0 is…

What’s next? Let’s try 12-STABLE. Maybe the problem is fixed there. Except… the installed kernel is broken and trying to boot kernel.old does not work! Oh excellent, it looks like “make installkernel” did not create any backup! This probably makes a lot of sense on small ARM devices, but it doesn’t make any in this case. Now I’m in for another reinstall.

After reinstalling 11.2 I did “cp /boot/kernel /boot/kernel.11” – just in case. Then I transferred the code for 12-STABLE onto the machine and built it. You would probably have guessed it by now: It’s broken.

HEADaches

The next day at work I organized a USB to ethernet adapter and configured ue0 so that I could finally ssh into the OS. That felt a lot better and I could also use svnlite directly to checkout the source code. Quite some time later I had built and installed -CURRENT. But would it work? Nope!

Now this was all very unfortunate. But I happen to like FreeBSD and really want it to work on ARM64, too. Just waiting for somebody to fix it didn’t sound like a good strategy, however. The hardware is still pretty exotic and if just one person reported breakage in the final days of RCs for 12.0, it’s pretty clear that not too many developers have access to it to test things regularly… So I figured that I should try to find out exactly when HEAD broke. That would most likely be helpful information.

Alright… r302409 was the commit that turned HEAD into 12-CURRENT. Let’s make sure that this version still works – otherwise it would mean that 11.0 on ThunderX was fixed after branching off HEAD. Seems very unlikely, but making sure wouldn’t hurt, right?

usr/local/aarch64-freebsd/bin/ranlib -D libc_pic.a
--- libc.so.7.full ---
/usr/local/aarch64-freebsd/bin/ld: getutxent.So(.debug_info+0x3c): R_AARCH64_ABS64 used with TLS symbol udb                                                                                   
/usr/local/aarch64-freebsd/bin/ld: getutxent.So(.debug_info+0x59): R_AARCH64_ABS64 used with TLS symbol uf                                                                                    
/usr/local/aarch64-freebsd/bin/ld: utxdb.So(.debug_info+0x5c): R_AARCH64_ABS64 used with TLS symbol futx_to_utx.ut                                                                            
/usr/local/aarch64-freebsd/bin/ld: jemalloc_tsd.So(.debug_info+0x3d): R_AARCH64_ABS64 used with TLS symbol __je_tsd_tls                                                                       
/usr/local/aarch64-freebsd/bin/ld: jemalloc_tsd.So(.debug_info+0x1434): R_AARCH64_ABS64 used with TLS symbol __je_tsd_initialized                                                             
/usr/local/aarch64-freebsd/bin/ld: xlocale.So(.debug_info+0x404): R_AARCH64_ABS64 used with TLS symbol __thread_locale                                                                        
/usr/local/aarch64-freebsd/bin/ld: setrunelocale.So(.debug_info+0x3d): R_AARCH64_ABS64 used with TLS symbol _ThreadRuneLocale                                                                 
cc: error: linker command failed with exit code 1 (use -v to see invocation)
*** [libc.so.7.full] Error code 1

Perhaps building an older version of FreeBSD on a newer one is not such a good idea… So I chose to install 11.0 first. That worked without problems. However… 11.0 was cut before the switch to LLVM 4.0 – which brought in a usable lld! For that reason 11.0 needed an external toolchain. Of course that version is not supported anymore so there are no packages for it around. To really build anything on 11.0 would mean that I’d first have to cross-compile a binutils package for it! I admit that this wasn’t terribly attractive to me – especially since I already had a way different goal.

Ok… So HEAD likely broke somewhere between r302409 and r343815 – that’s roughly 40,000 commits to check! Let’s just hope that it broke after FreeBSD on ARM64 became self-hosting…

Checking r330000 – it works! r340000? Broken. r335000: Still works. r337500: Nope. After a lot of deleting previous builds and source, checking out other old revisions and rebuilding kernel-toolchain and kernel (and hitting some versions that didn’t build on ARM in the first place), I found out that it was commit r336520 that broke FreeBSD on ThunderX: The vt_efifb device was added to the GENERIC kernel on ARM64. According to the commit, it was tested on PINE64 but obviously it does not work with ThunderX, yet.
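
For anyone wondering what that looked like in practice: each round was roughly the following (a sketch of the manual bisection, not an exact transcript – r335000 just stands in for whatever revision was being tested):

# clean out the previous build and check out the revision to test
rm -rf /usr/obj/*
cd /usr/src
svnlite update -r 335000
# build just enough to get a bootable kernel
make -j 49 kernel-toolchain
make -j 49 buildkernel
make installkernel
shutdown -r now

Then it’s a matter of watching the console to see whether the new kernel comes up or hangs, picking the next revision accordingly and repeating.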

Problem solved?

So if it’s just one line in the GENERIC configuration, it should not be a problem to fix this, right? To test this, I checked out HEAD again, removed the offending line and rebuilt the kernel. After installing it and rebooting – the system came up just fine!

Then I restored the GENERIC config and wrote a custom kernel configuration instead:

/usr/src/sys/arm64/conf/THUNDERX:

include GENERIC
ident THUNDERX

nodevice        vt_efifb
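
Building and installing a kernel with that configuration works via the usual targets (a quick sketch – the -j value is just what I use on this machine):

make -j 49 buildkernel KERNCONF=THUNDERX
make installkernel KERNCONF=THUNDERX
shutdown -r now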

I wrote to the mailing list that I had identified a problem with ThunderX and made proposals on how to unbreak HEAD.

An answer pointed out that a custom kernel was not even required: It suffices to set the loader tunable hw.syscons.disable to make HEAD work again! No need to remove the device from GENERIC – that’s even better! But what I actually wanted was 12.0 or 12-STABLE, not HEAD. Installing that, however, I found that it’s… well. Broken!
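
For reference, that tunable goes into the loader configuration, e.g. like this (a sketch – I’m assuming the usual value of “1” to switch the knob on):

/boot/loader.conf:

hw.syscons.disable="1"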

It broke twice!

What’s going on here? Could it be that HEAD broke twice and the other problem was fixed in a later revision? I had no better explanation. So let’s get compiling for another few nights again and see…

r339436 is when HEAD was renamed to 13-CURRENT. Would it work just by setting the loader tunable? Nope. I played the same game as before to eventually figure out that r338537 was the offending commit here. Its message says that it increases two values because of ThunderX – but unfortunately it actually broke the platform.

On to more compiling… Finally here’s the other commit: r343764 changed those values again in preparation for ThunderX2 – and accidentally fixed ThunderX, too! Unfortunately that wasn’t until after 12.0 was branched off.

Again it’s a rather simple patch to make 12-STABLE work:

--- sys/arm/arm/physmem.c.orig  2019-02-17 08:47:05.675448000 +0100
+++ sys/arm/arm/physmem.c       2019-02-17 08:48:53.209050000 +0100
@@ -29,6 +29,7 @@
 #include <sys/cdefs.h>
 __FBSDID("$FreeBSD: stable/12/sys/arm/arm/physmem.c 341760 2018-12-09 06:46:53Z mmel $");
 
+#include "opt_acpi.h"
 #include "opt_ddb.h"
 
 /*
@@ -48,8 +49,13 @@
  * that can be allocated, or both, depending on the exclusion flags associated
  * with the region.
  */
+#ifdef DEV_ACPI
+#define MAX_HWCNT       32      /* ACPI needs more regions */
+#define MAX_EXCNT       32
+#else
 #define        MAX_HWCNT       16
 #define        MAX_EXCNT       16
+#endif
 
 #if defined(__arm__)
 #define        MAX_PHYS_ADDR   0xFFFFFFFFull

Fixing 12?

After I had a working 12 system, I wrote to the mailing list again, asking if the change could be MFC’d from r343764 into 12-STABLE.

Rod Grimes was nice enough to ping the developer who had made the commit. So far he hasn’t replied, but I really hope that it’ll be possible to MFC the change – because otherwise all of FreeBSD 12.x would remain broken, and that would definitely not be cool… If it is possible, we’re likely to have a working 12.1 image when that version is out. 12.0 will remain broken, I guess.

Until we have a working version again, you can either install 11.2 and build from (currently patched) source, or you can try a fixed image that I built (if you trust me). What I did to build it was this:

  1. svnlite co svn://svn.freebsd.org/base/stable/12 /usr/src
  2. cd /usr/src
  3. patch -i patchfile
  4. make -j 49 buildworld
  5. make -j 49 buildkernel
  6. edit stand/defaults/loader.conf to set hw.syscons.disable
  7. cd release
  8. make memstick NOPKG=1 NOPORTS=1 NOSRC=1 NODOC=1 WITH_COMPRESSED_IMAGES=1
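
Once the image is built, it can be written to a USB stick the usual way – roughly like this (a sketch; I’m assuming the image file is called memstick.img and that the stick attaches as da0, so double-check both before overwriting anything):

dd if=memstick.img of=/dev/da0 bs=1m conv=sync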

Conclusion

Tier 1? Obviously we’re not quite there, yet. Booting off of ZFS does not work. There are no binary updates. Even with 12-STABLE I’ve been unable to get the NICs working – and you wouldn’t want to run a server that has four 10 Gbit NICs and one 40 Gbit NIC using just a 1 Gbit USB to ethernet adapter.

On the plus side, FreeBSD generally works and seems stable with UFS. Building from source using 49 threads works, and so does building software from ports. I even installed Synth and let it run a “build-everything” job for over a day to put some serious load on the machine – without any problems.

I’d like to at least get the network working, too, before that machine turns into a penguin for production. Maybe I can set aside a few more hours at night. If I end up with anything noteworthy, I’ll follow up on this post. Oh, and if anybody has any information on making it work, please let me know!