Using FreeBSD with Ports (2/2): Tool-assisted updating

In the previous post I explained why sometimes building your software from ports may make sense on FreeBSD. I also introduced the reader to the old-fashioned way of using tools to make working with ports a bit more convenient.

In this follow-up post we’re going to take a closer look at portmaster and see especially how it makes updating from ports much, much easier. For people coming here without having read the previous article: What I describe here is not what every FreeBSD admin should consider good practice today (any more)! It can still be useful in special cases, but my main intention is to discuss it to build the foundation for what you actually should do today.

Building a desktop

Last time we stopped after installing the xorg-minimal meta-port. Let’s say we want a very simple desktop installed on this machine. I chose the most frugal *nix desktop, EDE (the Equinox Desktop Environment, looking kind of like Win95), because it draws in the things that I need to demonstrate a few interesting points here – and not much more.

Unfortunately in the ports tree that we’re using, exactly that port is broken (the newer compiler in FreeBSD 11.2 is more picky than the older ones and not quite happy with EDE code). So to go on with it, we have to fix it first. I’ve uploaded an additional patch file from a later version of the port and also prepared a patch for the port’s Makefile. If you want to follow along, you can just copy the three lines below to your terminal:

# fetch http://www.elderlinux.org/files/patch-evoke_Xsm.cpp -o /usr/ports/x11-wm/ede/files/patch-evoke_Xsm.cpp
# fetch http://www.elderlinux.org/files/patch_ede_port -o /usr/ports/x11-wm/ede/patch_ede_port
# patch -d /usr/ports/x11-wm/ede -i patch_ede_port

Using Portmaster to build and install EDE

Thanks to build-time dependencies and default options in FreeBSD it’s still another 110 ports to build, but that’s fine. We could remove some unneeded options and cut it down quite a bit. Just to give you an idea: By configuring only one package (doxygen) to not pull in all the dependencies that it usually does, it would be just 55 (!) ports.

But let’s say we’re lazy. Do we have to face all of those configure dialogs (72, in case you are curious)? No, we don’t. That’s why portmaster has the -G flag, which skips the config make target and just uses the standard port options:

# portmaster -DG x11-wm/ede

EDE was successfully installed

Using this option can be a huge time-saver if you’re building something where you know that you don’t need to change the options for the application and its dependencies.

System update

We now have a simple test system with 265 installed but outdated packages. Let’s update it! Remember that unlike e.g. Linux, FreeBSD keeps third-party software installed from packages or ports separate from the actual operating system. We’ll update the latter first:

# freebsd-update upgrade -r 11.3-RELEASE

With this command, we make the updater download the required files for the upgrade from 11.2-RELEASE to 11.3-RELEASE.
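If you want to double-check where you stand before and after each step, freebsd-version(1) can report the kernel and the userland separately. This is just a sanity-check sketch; the tool has been part of the base system since FreeBSD 10:

```shell
# Print the versions of the installed kernel and the userland;
# in the middle of an upgrade the two may temporarily differ.
freebsd-version -k   # kernel
freebsd-version -u   # userland
```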

Upgrading FreeBSD to 11.3-RELEASE

When it’s done, and we’ve closed the three lists with the removed, updated and new files, we can install the new kernel:

# freebsd-update install

Once that’s done, it’s time to reboot the system so the new kernel boots up. Then the userland part of the operating system is installed by using the same command again:

# shutdown -r now
# freebsd-update install

Kernel upgrade complete

Preparations

Now in our fresh 11.3 system we should first get rid of the old ports tree to replace it with a newer one, right? Wait, hold that rm command for a second and let me show you something really useful!

If you take a look at the /usr/ports directory, you’ll find a file appropriately named UPDATING. And since updating is exactly what we were about to do, why not take a look at it?

So what is this? It’s an important file for people updating their systems using ports. Here is where ports maintainers document breaking changes. You are free to ignore it and the advice that it gives and there’s actually a chance that you’ll get away with it and everything will be fine. But sometimes you don’t – and fixing stuff that you screwed up might take you quite a bit longer than at least skimming over UPDATING.

# less /usr/ports/UPDATING

But right now it’s completely sufficient to look at the metadata of the first notification which reads:

20180330:
  AFFECTS: users of lang/perl5*
  AUTHOR: mat@FreeBSD.org

The main takeaway here is the date. The last heads-up notice for our old ports tree was on 2018-03-30.
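If you only want that date without paging through the whole file, a little grep does the trick (a sketch; each entry header is an eight-digit date followed by a colon, newest entries first):

```shell
# Print the header of the newest entry in UPDATING
grep -m 1 -E '^[0-9]{8}:' /usr/ports/UPDATING
```

On our old tree this prints 20180330:.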

Checking out a newer ports tree

Now let’s throw it all away and then get the new ports tree. Usually I’d use portsnap for this, but in this case I want a special ports tree (the one that would have come with the OS if I got ports from a fresh 11.3 installation), so I’m checking it out from SVN:

# rm -rf /usr/ports/.* /usr/ports/*
# svnlite co svn://svn.freebsd.org/ports/tags/RELEASE_11_3_0 /usr/ports

If you’re serious about updating a production server that you care about, now is the time to read through UPDATING again. Search for the date string that you previously took a note of and then read the messages all the way up to the beginning of the file. It’s enough to read the AFFECTS lines until you hit one message that describes a port which you are using. You can ignore all the rest but should really read those heads-up messages that affect your system.

What software can be updated?

BTW… You know which packages you have installed, don’t you? A huge update like what we’re facing here takes some planning up-front if you want to do it in a professional manner. In general you should update much more often, of course! This makes things much, much easier.

Updating from ports

Ok, we’re all set. But which software can be updated? You can ask pkg(8) to compare the versions of the installed packages against those of the corresponding ports:

# pkg version -l "<"

If you pipe that into wc -l you will see that 165 of the 265 installed packages can (and probably should) be updated.
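The counting is a plain pipeline, in case you want to copy it:

```shell
# Count the installed packages for which the ports tree has a newer version
pkg version -l "<" | wc -l
```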

Updating software from ports

We’ll start with something really simple. Let’s say, we just want to update pkgconf for now. How do we do that with portmaster? Easy enough: Just like we would have portmaster install it in the first place. If something is already installed, the tool will figure out if an update is available and then offer to update it.

# portmaster -G devel/pkgconf

And what will happen if the port is already up to date? Well, then portmaster will simply re-build and re-install it.

Partial update finished

While partial updates are possible, it’s a much better idea to keep the whole system updated (if at all possible). To do that, you can use the -a flag for portmaster to update all installed software.

Another nice flag is -i, which is for interactive mode. In this mode, portmaster will ask for every port if it should be updated or not. If you’re leaving out critical ports (dependencies) this can lead to an impossible update plan and portmaster will not start updating. But it can be useful for cherry-picking.

Interactive update mode

Now let’s attempt to upgrade all the ports, shall we?

# portmaster -aG

As always, portmaster will show you its plan and ask for your confirmation. Two things in that plan probably deserve a word or two. Usually the plan is to update an application, but sometimes portmaster wants to re-install or even downgrade one (see picture)! What’s happening here?

Re-installs mostly happen when a dependency changed and portmaster figured out that it might be a good idea to rebuild the port against the newer version of the dependency, even though there is no newer version available for the actual application.

Upgrading, “downgrading”, re-installing

Downgrades are a different thing. They can happen when you installed something from a newer ports tree and then go back to an older one (something you usually shouldn’t do). But in this case it’s actually a false claim. Portmaster will not downgrade the package – it was merely confused by the fact that the versioning scheme changed (because of the 0 in 2018.4 it thinks that this version is older than the previous 2.1.3…).

Moved ports

If you’re paying close attention to all the information that portmaster gives you, you’ll have seen lines like the following one:

===>>> The x11/bigreqsproto port moved to x11/xorgproto

There’s another interesting file in the ports tree called MOVED. It keeps track of moved or removed ports. Sometimes ports are renamed or moved to another category if the maintainer decides it fits better. Portmaster for example started as sysutils/portmaster and was later moved when the ports-mgmt category was introduced. However you won’t find this information in the MOVED file – because it happened before the time that the current MOVED keeps records for (i.e. early 2007 in this case).
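You can query MOVED yourself. Its records are four fields separated by pipes: old origin, new origin (empty if the port was removed), the date, and the reason. A small sketch:

```shell
# Where did x11/bigreqsproto go? Field 2 holds the new origin.
grep '^x11/bigreqsproto|' /usr/ports/MOVED | cut -d '|' -f 2
```

For our example this prints x11/xorgproto.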

The example above is due to the fact that last year the upstream project (Xorg) decided to combine the protocol headers into one distribution package. Before that there were more than 20 separate packages for them (and before that, once upon a time, all of Xorg had been one giant monolithic release – but I digress…)

Problem with merged ports

The good news here is that portmaster is smart enough to parse the MOVED file and figure out how to deal with this kind of change in the ports tree! The bad news is that this does not work for more complicated things like the merges that we just talked about…

So what now? Good thing you read the relevant UPDATING notification, eh?

20180731:
  AFFECTS: users of x11/xorg and all ports with USE_XORG=*proto
  AUTHOR: zeising@FreeBSD.org

Bulk-deleting obsolete ports and trying again

So let’s first get rid of the old *proto packages with the command that developer Niclas Zeising proposes and then try again:

# pkg version -l \? | cut -f 1 -w | grep -v compat | xargs pkg delete -fy
# portmaster -aG

Required options

Alright, we have one more problem to overcome. Some ports will fail to build if we run portmaster with the -G flag. Why? Because they have mandatory port options where you need to choose something like a backend or a certain mechanism.

The “mandatory options” case

One such case is freetype2. Since this one fails with -G, we build it separately and do not skip the config dialog for it:

# portmaster print/freetype2

Once that’s done, we can continue with updating all the remaining ports. After quite a while (because LLVM is a beast) all should be done!

Updating complete!

Default version changes

Did you read the following notice in UPDATING?

20181213:
  AFFECTS: users of lang/perl5*
  AUTHOR: mat@FreeBSD.org

For the big update run we ignored this. And in fact, portmaster did update Perl, but only to the latest version in the 5.26 branch of the language:

# pkg info -x perl
perl5-5.26.3

Why? Well, because that was the version of Perl that was already installed. Actually this is ok, if you’re not using Perl yourself and thus can live without the latest features added. However if you want to (or have to) upgrade to a later Perl major version we have a little bit more work to do.

First edit /etc/make.conf and put the following line in there:

DEFAULT_VERSIONS+=perl5=5.28

This is a hint to the ports framework that whenever you’re building a Perl port, you want to build it against this version. If you don’t set it, you will receive a warning when building the newer Perl, because in that case you’d be installing an additional Perl version while all the ports keep using the primary one – and more likely than not, that’s not what you want.
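If you prefer the shell over an editor for this, something along these lines works (a sketch; make sure the line isn’t already in there):

```shell
# Append the default-version hint for Perl and double-check it
echo 'DEFAULT_VERSIONS+=perl5=5.28' >> /etc/make.conf
grep perl5 /etc/make.conf
```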

Upgrading Perl

Next we need to build the newer Perl. The special thing here is that we need to tell portmaster of the changed origin so that the new version actually replaces the old one. We can do this by using the -o flag. Mind the syntax here, it’s new origin first and then old origin and not vice versa (took me a while to get used to it…)!

But let’s check the origin real quick, before we go on. The pkg command above showed that the package is called perl5. This outputs what we wanted to know:

# pkg info perl5
perl5-5.26.3
Name           : perl5
Version        : 5.26.3
Installed on   : Tue Sep 10 21:15:36 2019 CEST
Origin         : lang/perl5.26
[...]

There we have it. Now portmaster can begin doing its thing:

# portmaster -oG lang/perl5.28 lang/perl5.26

Rebuilding ports that depend on Perl

Ok, the default Perl version has been updated… But this broke all the Perl ports! So it’s time to fix them by rebuilding against the newer Perl 5.28. Luckily the UPDATING notice points us to a simple way to do this:

# portmaster -f `pkg shlib -qR libperl.so.5.26`

And that’s it! Congratulations on updating an old test system via ports.

At last: All done!

What if something goes wrong?

You know your system and applications, are proficient with your shell and you’ve read UPDATING. What could possibly go wrong? Well, we’re dealing with computers here. Something really strange can go wrong anytime. It’s not very likely, but sometimes things happen.

Portmaster can help you here if you ask for it before attempting upgrades. Before deinstalling an old package, it creates a backup – however, after installing the new version it throws it away. You can make it keep the backup by supplying the -b flag. Sometimes the old package can come in handy if something goes completely sideways. If you need the backup packages, have a look in /usr/ports/packages/portmaster-backup. You can simply pkg add those if you need the old version back (of course you need to be sure that you didn’t also update the package’s dependencies – or you’ll need to downgrade them again, too!).

If you want to be extra cautious when updating that very special old box (that nobody dared to touch for nearly a decade because the boss threatened to call down terrible curses (not the library!) upon the one who breaks it), portmaster will also support you. Use the -w flag and have it preserve old shared libs before deinstalling an old package. I wouldn’t normally do it (and think my example made it clear that it’s really special). In fact I’ve never used it. But it might be good to know about it should you ever need it.

That said, on the more complicated boxes I usually create a temporary directory and issue a pkg create -a, completely backing up all the packages before I begin the update process. Usually I can throw away everything a while later, but having the backups saved me some pain a couple of times already.
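That routine looks roughly like this (the directory is just my habit – any scratch location with enough free space will do):

```shell
# Back up all installed packages before a risky update; pkg create -a
# writes one package archive per installed package into the current
# working directory.
mkdir -p /root/pkg-backup
cd /root/pkg-backup && pkg create -a
```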

In the end it boils down to: Letting your colleagues call you a coward or being the tough guy but maybe ruining your evening / weekend. Your choice.

And if you need to know even more about the tool that we’ve been using over and over now, just man portmaster.

What’s next?

I haven’t decided on the next topic that I’m going to write about. However I’ve planned for two more articles that will cover building from ports the modern way(tm) – and I hope that it will not take me another two years before I come to it…

Using FreeBSD with Ports (1/2): Classic way with tools

Installing software on Unix-like systems originally meant compiling everything from source and then copying the resulting program to the desired location. This is much more complicated than people unfamiliar with the subject may think. I’ve written an article about the history of *nix package management for anyone interested to get an idea of what it once was like and how we got what we have today.

FreeBSD pioneered the Ports framework, which quickly spread across the other BSD operating systems, each adapting it to its specific needs. Today we have FreeBSD Ports, OpenBSD Ports and NetBSD’s portable Pkgsrc. DragonflyBSD currently still uses DPorts (which is meant to be replaced by Ravenports in the future) and there are a few others like e.g. mports for MidnightBSD. Linux distributions have developed a whole lot of packaging systems, some closer to ports (like Gentoo’s Portage, where even the name gives you an idea where it comes from), most less so. Ports, however, are regarded as the classic *BSD way of bringing software to your system.

While the ports frameworks have diverged quite a bit over time, they all share a basic purpose: They are used to build binary packages that can then be installed, deinstalled and so on using a package manager. If you’re new to package management with FreeBSD’s pkg(8), I’ve written two introduction articles about pkg basics and pkg common usage that might be helpful for you.

Why using Ports?

OpenBSD’s ports, for example, are maintained only to build packages from, and regular users are advised to stay away from them and just install pre-built packages. On FreeBSD this is not the case: Using ports to install software on your machines is nothing uncommon. You will read or hear pretty often that people in general recommend against using ports. They point to the fact that it’s more convenient to install packages, that it’s so much quicker to do so and that especially updating is much easier. These are all valid points and there are more. You’re now waiting for me to claim that they are wrong after all, aren’t you? No, they are right. Use packages whenever possible.

But what if it’s not possible to use the packages that are provided by the FreeBSD project? Well, then it’s best to… still use packages! That statement doesn’t make sense? Trust me, it does. We’ll come back to it. But to give you more of an idea of the advantages and disadvantages of managing a FreeBSD system with ports, let’s just go ahead and do it anyway, shall we?

Just don’t do this to a production system; get a freshly installed system to play with instead (probably in a VM if you don’t have the right hardware lying around). It still is very much possible to manage your systems with ports. In fact, at work I have several dozen legacy systems, some of which began their career with something like FreeBSD 5.x and have been updated ever since. Of course ports were used to install software on them. Unfortunately, machines that old have a tendency to grow very special quirks that make it infeasible to really convert them to something easier to maintain. Anyways… Be glad if you don’t have to mess with it. But it’s still a good thing to know how you’d do it.

So – why would you use ports rather than packages? In short: When you need something that packages cannot offer you. Perhaps you need a feature compiled in that is not activated by default and thus not available in FreeBSD’s packages. Perhaps you absolutely need the latest version in ports and cannot wait for the next package run to complete (which can take a couple of days). Maybe you require software licensed in a way that prohibits binary redistribution. Or you’ve got that 9-STABLE box over there which you know quite well you shouldn’t run anymore – but you rely on special hardware that offers drivers only for FreeBSD 9.x… Yeah, there are always things like that and sometimes they are hard to argue away. But when it comes to packages: since FreeBSD 9 has been EOL for quite some time, there are no packages being built for it anymore. You’ll come up with any number of other reasons why you can’t use packages, I have no doubts about that.

Using Ports

Alright, so let’s get to it. You should have a basic understanding of the ports system already. If you haven’t, then I’d like to point you to another two articles written as an introduction to ports: Ports basics and building from ports.

If you just read the older articles, let me point out two things that have happened since then. In FreeBSD 11.0 a handy new option was added to portsnap. If you do

# portsnap auto

it will automatically figure out whether it needs to fetch and extract the ports tree (if it’s not there) or whether fetching the latest changesets and updating is the right thing to do.

The other thing is a pretty big change that happened to the ports tree in December 2017 not too long after I wrote the articles. The ports framework gained support for flavors and flavored ports. This is used heavily for Python ports which can often build for Python 2.7, 3.5, 3.6, … Now there’s generally only one Port for both Python2 and Python3 which can build the py27-foo, py36-foo, py37-foo and so on packages. I originally intended to cover tool-assisted ports management shortly after the last article on Ports, but after that major change it wasn’t clear if the old tools would be updated to cope with flavors at all. They were eventually, but since then I never found the time to revisit this topic.

Scenario

I’ve setup a fresh FreeBSD 11.2 system on my test machine and selected to install the ports that came with this old version. Yes, I deliberately chose that version, because in a second step we’re going to update the system to FreeBSD 11.3.

Using two releases for this has the advantage that they are two fixed points in time that are easy to follow along with if you want to. The ports tree changes all the time thanks to the heroic efforts that the porters put into it. Therefore it wouldn’t make sense to just use the latest version available, because I cannot know what will happen in the future when you, the reader, might want to try this out. Also, I wanted to make sure that some specific changes happened between the two versions of the ports tree so that I can show you something important to watch out for.

Mastering ports – with Portmaster

There are two utilities that are commonly used to manage ports: portupgrade and portmaster. They pretty much do the same thing and you can choose either. I’m going to use portmaster here, which I prefer for the simple reason that it is more light-weight (portupgrade brings in Ruby, which I try to avoid if I don’t need it for something else). But there’s nothing wrong with portupgrade otherwise.

Building portmaster

You can of course install portmaster as a package – or you can build it from ports. Since this post is about working with ports, we’ll do the latter. Remember how to locate ports if you don’t know their origin? Also you can have make mess with a port without changing your shell’s pwd to the right directory:

# whereis portmaster
portmaster: /usr/ports/ports-mgmt/portmaster
# make -C /usr/ports/ports-mgmt/portmaster install clean

Portmaster build options

As soon as pkg and dialog4ports are in place, the build options menu is displayed. Portmaster can be built with completions for bash and zsh, both of which are installed by default. Once portmaster is installed and available, we can use it to build ports instead of doing so manually.

Building / installing X.org with portmaster

Let’s say we want to install a minimal X11 environment on this machine. How can we do it? Actually as simple as telling portmaster the origin of the port to install:

# portmaster x11/xorg-minimal

Port options…

After running that command, you’ll find yourself in a familiar situation: Lots and lots of configuration menus for port options will be presented, one after another. Portmaster will recursively let you configure the port and all of its dependencies. In our case you’ll have 42 ports to configure before you can go on.

Portmaster in action

In the background portmaster is already pretty busy. It’s gathering information about dependencies and dependencies of dependencies. It starts fetching all the missing distfiles so that the compilation can start right away when it’s the particular port’s turn. All the nice little things that make building from ports a bit faster and a little less of a hassle.

Portmaster: Summary for building xorg-minimal (1/2)

Once it has all the required information gathered and sorted out, portmaster will report to you what it is about to do. In our case it wants to build and install 152 ports to fulfill the task we gave it.

Portmaster: Summary for building xorg-minimal (2/2)

Of course it’s a good idea to take a look at the list before you acknowledge portmaster’s plan.

Portmaster: distfile deletion?

Portmaster will ask whether to keep the distfiles after building. This is ok if you’re building something small, but when you give it a bigger task, you probably don’t want it to sit and wait for your decision on distfiles before the next port gets built… So let’s cancel the job right there by hitting Ctrl-C.

Portmaster: Ctrl-C report (1/2)

When you cancel portmaster, it will try to exit cleanly and will also give a report. From that report you can see what has already been done and what still needs to be done (in form of a command that would basically resume building the leftovers).

Portmaster: Ctrl-C report (2/2)

It’s also good to know that there’s the /tmp/portmasterfail.txt file that contains the command should you need it later (e.g. after you closed your terminal). Let’s start the building process again, but this time specify the -D argument. This tells portmaster to keep the distfiles. You could also specify -d to delete them if you need the space. Having told portmaster about your decision helps to not interrupt the building all the time.

# portmaster -D x11/xorg-minimal

Portmaster: Build error

All right, there we have an error! Things like this happen when working with older ports trees. This one is a somewhat special case. Texinfo depends on one file being fetched that changes over time but doesn’t have any version info or such. So portmaster would fetch the current file, but that has silently changed since the distfile information was put into our old ports tree. Fortunately it’s nothing too important and we can simply “fix” this by overwriting the file’s checksum with the one for the newer file:

# make makesum -C /usr/ports/print/texinfo
# portmaster -D x11/xorg-minimal

Manually fetching missing distfile

Next problem: This old version of mesa is no longer available in the places that it used to be. This can be fixed rather easily if you can find the distfile somewhere else on the net! Just put it into its place manually. Usually this is /usr/ports/distfiles, but there are ports that use subdirectories like /usr/ports/distfiles/bash or even deeper structures like e.g. /usr/ports/distfiles/xorg/xserver!
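If you’re unsure whether a port wants its distfile in a subdirectory, you can ask the ports framework itself. DIST_SUBDIR is empty for ports that use /usr/ports/distfiles directly (the mesa origin below is an assumption on my part – substitute whatever port you’re dealing with):

```shell
# Print the distfile subdirectory (if any) that a port uses
make -C /usr/ports/graphics/mesa-dri -V DIST_SUBDIR
```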

# fetch https://mesa.freedesktop.org/archive/older-versions/17.x/mesa-17.3.9.tar.xz -o /usr/ports/distfiles/mesa-17.3.9.tar.xz
# portmaster -D x11/xorg-minimal

Portmaster: Success!

Eventually the process should end successfully. Portmaster gives a final report – and our requested program has been installed!

What’s next?

Now you know how to build ports using portmaster. There’s more to that tool, though. But we’ll come to that in the next post which will also show how to use it to update ports.

Summer Sun and microsystems

July is coming to an end, so there should be a new article on the Eerie Linux blog! Here it is. It’s not about a single technical topic, though – more of a “meta” article: I’m writing about what I’d like to be writing about soon! No, I haven’t been on vacation or anything. Sometimes things just don’t work out as planned. More on that in a second.

Illumos!

Have you noticed that I changed the header graphic? In 2016 I added beastie and puffy. And now there’s also the Illumos phoenix. Being a Unix lover, I’ve long felt that I’m really missing something in knowing next to nothing about the former Sun platform. Doing just a little bit of experimentation, I’ve become somewhat fascinated with the Solaris / Illumos world.

It’s Unix, so I know my way around for the most basic things. But it’s quite different from the *BSD and Linux systems that I work with on a daily basis – and from what I can tell so far, definitely not for the worse. The system left an impression on me of being well engineered and offering interesting or even beautiful solutions for common problems! I can’t remember the last time something in Linux-space struck me as being beautifully crafted – which is no wonder, given the fact that all the tools are developed separately and are only bundled together to form a complete operating system by the distributions. While it usually works, the aspect of a consistent OS with a closely coordinated userland was one of the strong points that drew me towards *BSD. Obviously the same is true for the various Illumos distributions.

Since I’m really just starting my journey, I don’t have that much to say yet. I plan on doing an interview with an OmniOS user who switched over from FreeBSD not too long ago and hope that I’ll be able to arrange something. Other than that I’ll have to do some reading and will probably give the official Solaris a look, too, even though I’m all for open source. When I feel capable of writing something remotely useful, I plan on presenting the various Illumos distributions and their strong points. It might take me some time, though, but from now on several Solaris/Illumos topics are on my todo list.

Illumos people reading this: If you’d like me to write about your OS, please help me by making some suggestions of topics for beginners! That would be very much appreciated.

BSD router

The article series about the custom-built BSD router is still the most popular one on my blog. Given that so many people are interested in this topic, I’ve wanted to write a follow-up for quite some time now. Since the newest version of OPNsense (19.7) was released this month, I decided to finally pick that topic. It would have been nice to do a fresh reinstall and cover what has changed.

Also there have been many new firmware releases for the APU2 since I covered it. I was rather excited when I learned (in January or so) that they had finally enabled the ECC feature for the RAM! Other stuff happened, too, so I’d definitely have something to play with and to write about.

There’s only one problem… As of exactly this month, I don’t have a working APU2 anymore. If you have children, keep your stuff out of their reach! It doesn’t make that great a toy, anyway.

So this topic has to be postponed due to lack of hardware. But I will get a new one sometime and then write about it. I already have some ideas on what to do there for a second mini series.

Ravenports

Another thing that I’d have liked to write about is my favorite packaging system, Ravenports. There’s a pretty big thing coming soon (I hope), but it’s not there, yet. So no new issue of the “raven report” this month!

Other than that big change, I still have the other issue on my todo list: Re-bootstrap on FreeBSD i386. I’ve done that once (before the toolchain update to gcc 8) and will do it again as I find the time. Of course right now it makes sense to wait for the anticipated new component to land first!

I’ve also been thinking about writing a bootstrap script. Probably I’ll give it a shot – and if it can do i386, I’ll have that covered, too.

HardenedBSD

This has been on my list for over a year now, too. I took a quick look at that project and like what they do very much. It’s basically FreeBSD with lots and lots of security enhancements, built and maintained by a small team of people dedicated to making FreeBSD more secure. Why a separate project, then? For the simple reason that FreeBSD is a huge project where several parties have various interests in the OS and where getting very invasive changes in is sometimes not that easy. So HardenedBSD is developed in parallel to give the developers all the freedom to make changes as they please.

Except for all the security related stuff there are a few things where the system is different from upstream FreeBSD, too. I’ll need to explore this a little more and start actually doing stuff with HardenedBSD. Then I’ll definitely write about it here on the blog.

Ports and packages

In 2017 I wrote about package management and Ports on FreeBSD in a mini series of articles. It ended with building software from ports. Actually it wasn’t meant to end right there, as I had at least two more articles in the pipe. I’ll be concentrating on this one and hope to have a blog post out in early August.

Other stuff

My interest in the ARM platform has not vanished and I want to do more with it (and then probably blog about it, too). Also I want to revisit FreeBSD jails and still have to get my series on email done… And I’m not done with the “power user” series, either.

So there’s more than enough topics and far too little time. Let’s see what I can get done this year and what will have to wait!

Testing OmniOSce on real hardware

[edit 07-02]Added images to make the article a little prettier[/edit]

Last year I decided to take a first look at OmniOSce. It’s a community-driven continuation of the open source operating system that originated at OmniTI. It is also an Illumos distribution and as such a continuation of the former OpenSolaris project by Sun, which was shut down after Oracle acquired the company.

In my first post I installed the OS in a VM and showed what the installation procedure was like. Post two took a look at how the man pages are organized, system services with SMF, as well as user management. And eventually post three was about doing network configuration by hand.

One reader mentioned on Reddit that he’d be more interested in an article about an installation on real hardware. I wanted to write that article, too – and here it is.

Scope of this article

While it certainly could be amusing to accompany somebody without any clue about what he’s doing as he tries to find his way around in a completely unfamiliar operating system, there are no guarantees for that. And actually there’s a pretty big chance of it coming out rather boring. This is why I decided to come up with a goal instead of taking a look at random parts of the operating system.

Beautiful loader with r151030

Last time I wanted to bring up the net, make SSH start and add an unprivileged user to the system so I could connect to the OmniOS box from my FreeBSD workstation. All of that can be done directly from the installer, however, and while I went the hard way before, I’m going to use those options this time. My new goal is to take a look at a UEFI installation, the Solaris way of dealing with privileged actions, as well as a little package management and system updates. That should be enough for another article!

OmniOSce follows a simple release schedule with two stable releases per year where every fourth release is an LTS one. When I wrote my previous articles, r151022 was the current LTS release and r151026 the newest stable release (which I installed). In the meantime, another stable release has been released (r151028) and the most recent version, r151030, is the new LTS release. Since I want to do an upgrade, I’m going to install r151028 rather than r151030.

To boot or not to boot (UEFI)

My test machine is configured for UEFI with Legacy Modules enabled. Going to the boot menu I see the OmniOSce CD only under “legacy boot”. I choose it, the loader comes up, boots the system and just a little later I’m in the installer. It detects the hard drive and will let me install to it. The default option is to install using the UEFI scheme, so I accept that. After the installation is complete, I reboot – and the system cannot find any bootable drives…

Ok, let’s try the latest version. Perhaps they did some more work on UEFI? They did: This time the CD is listed in the UEFI boot sources, too, and a beautiful loader greets me after I select it. The text and colors look a bit nicer in the EFI mode console, too. I repeat the installation, reboot… And again, the hard drive is not bootable!

This machine does not support “pure” UEFI mode. I switch to legacy mode and try the older CD image again. Installing to GPT has the same effect as before: The system is not bootable. I do not really want to use MBR anymore, but fortunately the OmniOS installer has two more options to work around the quirks of some EFI implementations. Let’s try the first one, GPT+Active. Once again: The system is not bootable… But then there’s GPT+Slot1 – and that finally did the trick! The system boots from hard disk.

At this point I decided to sacrifice another machine for my tests. It doesn’t even recognize the newer r151030 ISO as a UEFI boot source – neither in mixed mode nor in pure UEFI mode. But things get even weirder: I install OmniOS with the UEFI scheme again and the system does recognize the drive and is able to boot it – however only in legacy mode!

UEFI is a topic of its own – and actually a pretty strange one. It has meant headaches for me before, so I wouldn’t overrate OmniOS not working with the UEFI on those two particular machines. My primary laptop will try to PXE-boot if a LAN cable is attached – even though PXE-booting is disabled in the EFI. Another machine is completely unable to even detect (!) hard drives partitioned with GPT when running in legacy mode… To me it looks like most EFI implementations have their very own quirks and troubles. Again: Don’t overrate this. The OmniOS community is rather small and it’s completely impossible for them to make things work on all kinds of crappy hardware.

Chances are that it just works on your machine. While I’d like to test more machines, I don’t have the time for this right now. So let’s move on with GPT and Legacy/BIOS boot.

Installation

I like Kayak, the installer used by OmniOS. It’s simple and efficient – and it does its thing without being over-engineered. Being an alternative OS enthusiast, I’ve seen quite a bunch of installation methods. Some more to my liking, some less. When it comes to this one, I didn’t really have any problems with it and am pretty much satisfied. If I had to give any recommendation, I’d suggest adding a function to generate the “whatis” database (man -w) for the user (and probably making that the default option). I’ve come to expect that the apropos command just works when I try out a new system. And other newcomers might benefit from that, too.

Creating a test user with the installer

When I installed OmniOS last year, I ran into a problem with the text installer. I reported it and it was fixed really, really quickly. Of course I could not resist the temptation to try out the text installer for this newer release. With r151028 it works well. However it doesn’t offer any advantages over the new dialog-based one (on the contrary), so I’d recommend using the new one.

Shell selection for the new user

As mentioned above, this time I decided to let the installer do the network setup and create a user for me (which made /home/kraileth my home directory and not /exports/home/kraileth!). When creating a user, I’m given the choice to use either ksh93, bash or csh. The latter is just plain old csh, and while I prefer tcsh over bash anytime, this is not such a tempting choice. But the default (ksh) is actually fine for me.

Selecting privileges for the new user

More interesting however is the installer’s ability to enable privileged access for the new user: I can choose to give it the “Primary Administrator” profile and/or to enable sudo (optionally without password).

Several new configuration options in the r151030 installer

The installer for r151030 also features a few new options like enabling the Extra repository or the Serial Console. Certainly nice to see how this OS evolves!

Profiles

The installer allowed me to give my user the privilege to use sudo. Just like with *BSD and Linux this gives me the ability to run commands as root or even become root using sudo -i. But this is not the only way to handle privileged actions. In fact there is a much better one on Solaris systems! It works by using profiles.

What are profiles? They are one of the tools present in Solaris (and Solaris-derived systems) that allow for really fine-grained access control. While traditional Unix access control is primitive to say the least, the same is not true for Solaris.

Taking a peek at user’s profiles

Let’s see what profiles my user has:

$ profiles
Primary Administrator
Basic Solaris User
All

My guess here is that every user has the profiles “All” and “Basic Solaris User” – while the “Primary Administrator” was added by the installer. The root user has some more (see screenshot).

The profiles of a user are assigned in the file /etc/user_attr and the actual profiles are defined in /etc/security/prof_attr. While all of this is probably not rocket science, it definitely looks complex and pretty powerful. Take a look at the screenshot to get a first impression and do some reading on your own if you’re interested.
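To give a rough idea of what those files look like, here’s a sketch of the relevant entries (the user name is of course just my example user, and the exact fields may differ on your system):

```
# /etc/user_attr – assigns profiles to users:
kraileth::::profiles=Primary Administrator

# /etc/security/prof_attr – defines the profiles themselves:
Primary Administrator:::Can perform all administrative tasks:auths=solaris.*;help=RtPriAdmin.html
```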

Some profile definitions

As a newbie I didn’t know much about it yet. The profiles mention help files, however, and so I thought it might be worth the effort to go looking for them. Eventually I located them in /usr/lib/help/profiles. There is HTML help for people like me who are new to the various profiles.

Help file for the “Primary Administrator” profile

Privileged Actions

Alright! But how do you make use of the cumulative privileges of your profiles? There are two ways: Running a single privileged command (much like with sudo) or executing a shell with elevated privileges. The first is accomplished using pfexec like this:

$ pfexec cat /root/.bashrc

Playing with privileged access

The system provides wrappers for some popular shells. However not all of the associated shells are installed by default! So on a fresh installation you should only count on the system shells.
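Using one of those wrappers looks roughly like this (a sketch – pf-prefixed versions of the system shells are what you can count on after a fresh install):

```
$ pfsh                # start a profile-aware Bourne shell
$ cat /root/.bashrc   # commands in it run with the profiles' privileges
$ exit
```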

Wrappers for the various profile shells

Basic package management

In the Solaris world there are two means of package management, known as SVR4 and IPS. The former is the old one, used up to Solaris 10. I did a little reading on it and the tools look quite similar to the traditional *BSD pkg_tools. Basically it’s a set of programs like pkginfo, pkgadd, pkgrm and so on (where it’s really not so hard to tell from their names what they are used for).
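A quick sketch of what working with the SVR4 tools looks like (the package name is made up for illustration):

```
$ pkginfo                    # list installed SVR4 packages
# pkgadd -d SUNWexample.pkg  # install a package from a file
# pkgrm SUNWexample          # remove it again
```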

The newer one uses the pkg(5) client command of the Image Packaging System. While the name of the binary suggests a relation with e.g. FreeBSD’s pkg(8), this is absolutely not the case.

Package management is a topic that deserves its own article (that I consider writing at some point). But basic operation is as simple as this:

$ pfexec pkg install tmux

It’s not too hard to anticipate that after a short while, Tmux should be available on your system (provided the command doesn’t error out).
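A few more everyday IPS operations, just to give an impression (a sketch, not a complete reference):

```
$ pkg search tmux            # look for the package in the configured repos
$ pkg info tmux              # show information about the installed package
$ pfexec pkg uninstall tmux  # remove it again
```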

System updates

Updating the operating system to a new release is a pretty straightforward process. It’s recommended (but optional) to create a Boot Environment. The first required step is setting the pkg publisher to a new URI (pointing to a repository containing the data for the new release). Then you update the system (preferably into a new Boot Environment that is made the default one) and reboot. Yes, that’s all there is to it.
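Sketched as commands, the whole process looks something like the following. Note that the exact publisher URI and flags here are meant for illustration only – always check the release notes of the target release before upgrading:

```
$ pfexec pkg set-publisher -G '*' -g https://pkg.omniosce.org/r151030/core omnios
$ pfexec pkg update -r --be-name=r151030
$ pfexec reboot
```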

System upgrade to version r151030

After the reboot you’ll see the new boot menu and better looking system fonts – even without using UEFI. Obviously r151030 implements a new frame buffer. I’ve also noticed that boot and especially shutdown times have decreased notably. Very nice!

Upgrade finished

Conclusion

If you’ve never worked with a Solaris-like system this article might have provided you with some new insights. However we’ve barely scratched the surface. The profiles are looking like a great means of access control to me, but usually you’d want to use OmniOSce for different reasons. Why? Because it has some really cool features that make it the OS of choice for some people who are doing impressive things.

What are those features? So far we didn’t talk about ZFS and the miracles of this great filesystem/volume manager (I’ve mentioned Boot Environments, but if you don’t know what BEs are, you of course also don’t know why you totally want them – but trust me, you do). We didn’t talk about zones (think FreeBSD’s jails, but not quite – or Linux containers, but totally on steroids). Also I didn’t mention the great networking capabilities of the OS, the debuggability and things like that.
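Just to hint at why Boot Environments are so nice: creating and managing them is a matter of single commands (a sketch – the BE name is arbitrary):

```
$ beadm list                         # show all boot environments
$ pfexec beadm create before-test    # snapshot the current system state
$ pfexec beadm activate before-test  # boot into that BE on next reboot
```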

As you can see, I probably wouldn’t run out of topics to write about, even if I decided to switch gears on my blog entirely to Illumos instead of just touching on it among more BSD and Linux articles. If you don’t want to wait for me to write about it again – why don’t you give it a try yourself?

Rusted ravens: Ravenports march 2019 status update

It’s been a couple of months since I last wrote about Ravenports, the universal *nix application building framework. Since it is not exactly a slowly moving project, a lot has happened since then.

Platform support

Raven currently supports DragonFly BSD, FreeBSD, Linux and Solaris/Illumos, the latter only in the form of binary packages (unless you have access to an installation of Solaris 10u8, which can be used to build packages, too).

People following the project will notice that macOS/Darwin support is missing from this list. This is not a mistake: Support for that platform has been put on hold for now. While Raven has successfully been bootstrapped on macOS before, the developers have lost access to any macOS machines and thus cannot continue supporting it.

This does not mean that platform is gone forever. It might be resurrected at a later point in time if the developers are given access to a Mac again. The adventurous can even try to bootstrap Raven on different platforms now, as the process has been documented (with macOS as the example).

I intended to do some work on bootstrapping Raven on FreeBSD/ARM64 – only to find that FreeBSD unfortunately still has a long way to go before making that platform tier 1. At work I had access to server-class ARM64 hardware, but current versions of FreeBSD have trouble booting up and I could not get the network running at all (if you’re interested in the details, see my previous post). I’m still hoping for reactions on the mailing list, but until upstream FreeBSD is fixed on ThunderX, trying to bootstrap does not make much sense.

Toolchain and package updates

The toolchain used by Ravenports has been updated to GCC 8.3 and Binutils 2.32 on all four supported platforms (access to Mac was lost before the toolchain update).

As usual, Solaris needed a bit of extra treatment, but an up-to-date compiler and tools are available for it now, too. Even better: The linker from the LLVM project (lld) is available and usable on Solaris/Illumos now as well. Since linking it takes several hours (!) on Solaris, a mostly static lld executable was added to the sysroot package for that platform. This way this long-building package does not have to be rebuilt as often.

Packages have been rebuilt with this bleeding-edge toolchain (plus the usual fallout has been collected and fixed). So if you are using Raven, you are making use of the latest compiler technology with the best in optimization. Of course a lot of effort went into providing the most current versions of the packaged software, too (at least where that is feasible).

On the desktop side of things I’ve added the awesome window manager to the list of available software. It’s actually my WM of choice, but not too many people are into tiling, so I postponed this one until after making Xfce available. Work on bringing in more Lua-related ports for advanced configuration is ongoing, but the WM is already usable as it is now.

I’ve also done a bit of less visible work, going back to many ports that I created previously and adding in missing license info. This work is not completed yet either, but the situation is improving, of course.

Rust!

One of the big drawbacks of Ravenports, as stated last time, was the lack of the Rust compiler. This effectively meant a showstopper for things like current versions of Firefox, Thunderbird, librsvg, etc. The great news is that this blocker has been mostly removed: Rust is available via Raven for DragonFly, FreeBSD and Linux! Solaris/Illumos support is pending; I think that any helping hand would be greatly appreciated.

Bringing in Rust was a big project on its own. Adding an initial bootstrap package for DragonFly alone took months (thank you, Mr. Neumann!). The first working version of the port made Rust 1.31 available. It has since been updated to versions 1.32 and 1.33, and John has added functionality to the Raven framework to work with Rust’s crates as well as scripts to assist with future updates. Taking all of that into consideration, Rust support in Raven is already pretty good for the short time that we have had it.

Eventually even a port for Firefox landed – as of now it’s marked broken, though. The reason is that while it does compile just fine, the program crashes when actually started. The exact cause for this is yet unknown. If anybody with some debugging abilities has a little time on his hands, nailing down what happens would be a task that a lot of people will benefit from for sure!

Updated ravenadm

Ravenadm, the Ravenports administration tool, has seen several updates with new features. Some have brought internal changes or new features necessary for new or updated packages. One example is a project-wide fix for ports that build with Meson: Before the change many programs needed special treatment to make Meson honor the rpath for the resulting binaries. Now Raven can automatically take care of this, saving us a whole bunch of sed commands in the specification file. Another new feature is the “Solaris functions” mechanism which can automatically fix certain functions that required generating patches before. Certainly also very nice to have!

Probably my favorite new feature is that Ravenadm now supports concurrent processes in several cases: While you cannot start a second set of package builds at the same time for obvious reasons, it is now possible to ask Ravenadm in which bucket a certain port lives, sort manifests, and such while building packages! I cannot say how much the previous behavior got in my way while doing porting work… This makes porting much, much more pleasant.

A last improvement that I want to mention here is a rather simple one – however one that has a huge impact. Newer versions of Ravenadm put all license-related texts into the logs! This means you can simply look at the log and see if e.g. the license terms got extracted correctly. Before, you had to use the ENTERAFTER option to enter an interactive build session and look at the extracted file. This is a huge improvement for porters.

SSL

Another big and most likely unique feature added to Raven recently is SSL autoselection. Raven has had autoselection facilities for Python, Ruby and Perl for about a year now. These allow for multiple versions of the interpreters to be installed in parallel and take care of calling the actual binary with the same parameters, preferring the newest version over older ones (unless configured differently).
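The general idea behind such an autoselection wrapper can be sketched in a few lines of shell. This is my own simplified illustration of the concept, not Raven’s actual implementation (the paths are made up):

```shell
#!/bin/sh
# Pick the first (i.e. preferred/newest) existing executable from a list of
# candidates; a real wrapper would then exec it with the original arguments.
pick_newest() {
    for candidate in "$@"; do
        if [ -x "$candidate" ]; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    return 1
}

# A hypothetical python3 wrapper might then do something like:
#   exec "$(pick_newest /raven/bin/python3.7 /raven/bin/python3.6)" "$@"
```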

Raven supports LibreSSL, OpenSSL as well as LibreSSL-devel and OpenSSL-devel. Before the change, you could select the SSL library to use in the profile and all packages would be linked against it. Now we have even more flexibility: You can e.g. build all the packages against LibreSSL by default and just fall back to OpenSSL for the few packages that really require it!

And in fact Raven takes it all one step further: You can have OpenSSL 1.0.2 and OpenSSL 1.1.1 (which introduced breaking changes) installed in parallel and use packages on the same system where some require the new version and some cannot use it, yet! Pretty nice, huh?

Future work

Of course there are still enough rough edges that require work. Probably the most pressing issue is to get Firefox working so Raven’s users can have access to a convenient and modern browser. There are also quite a few programs which need ports created for them. The goal here is to provide the most critical programs to allow DragonFly to make the switch from DPorts to Ravenports for the official packages.

On FreeBSD, Filezilla does not currently work: It cannot be built with GCC due to a compiler bug in GCC 7.x and 8.x. Therefore it is a special port that gets built with Clang instead. The problem is that libfilezilla needs to be built with the same toolchain – and that cannot currently be built with Clang using Raven…

Raven on Linux has some packages unavailable due to additional dependencies on that platform. I began adding some Linux-specific ports but lost motivation pretty fast (there are enough other things to do, after all). Also the package manager is still causing pain, randomly crashing.

Solaris is also missing quite some packages. This is due to additional patches being required for a lot of software to build properly. Ravenports tries to support this platform as well as possible; however, this could surely be improved if anybody using Solaris or an Illumos distribution as his or her OS of choice would start using Raven and give feedback or even contribute.

Get in touch!

Interested in Raven? Get in touch with us! There is an official IRC channel now (#ravenports on Freenode) which is probably the best place to talk to other Raven users or the porters and developers. You can of course also send an email.

If you want to contribute, there is now a new “Customsource” repository on GitHub that you can create pull requests against. Feel free to contribute anything from finished ports that might only need polish to WIP ports that are important for you but you got stuck with.

There are many other ways of helping with the project than doing porting work, though. Open issues if you find problems with packages or have an idea. Also tell us if you read the wiki and found something hard to understand. Or if you could use a tutorial for something – just ping me. Asking doesn’t hurt and chances are that I can write something up.

Got something else that I didn’t talk about here? Tell us anyway. We’re pretty approachable and less elitist than you might think people who work on new package systems would be! 😉

ARM’d and dangerous: FreeBSD on Cavium ThunderX (aarch64)

While I don’t remember for how many years I’ve had an interest in CPU architectures that could be an alternative to AMD64, I know pretty well when I started proposing to test 64-bit ARM at work. It was shortly after the disaster named Spectre / Meltdown that I first dug out server-class ARM hardware and asked whether we should get one such server and run some tests with it.

While the answer wasn’t a clear “no”, it also wasn’t exactly “yes”. I tried again a few times over the course of 2018 and each time I presented some more points why I thought it might be a good thing to test this. But still I wasn’t able to get a positive answer. Finally in January 2019 I got a definitive answer – and it was “yes, go ahead”! The fact that Amazon had just presented their Graviton ARM processor may have helped the decision.

Getting my hands on an ARM64 server

Eventually a Gigabyte R120-T32 was ordered and arrived at our data center in early February. It wasn’t perfect timing at all, since there are quite a few important projects running currently which draw just about all the resources that we have. Still we put together a draft of an evaluation sheet with quite a few things to test. There are a lot of “yes / no” things and some that require performance measurements.

We set aside a few hours to put some drives into the machine, put it into the rack, configure the EFI and get BMC/IPMI working. We quickly found that there was no firmware update or anything available, so we should be good to go. However… The IPMI console (Java…) is not working at all. A few colleagues that have different Linux distros running on their workstations tried to access it – but no luck. It looks like a problem with too new Java versions (*sigh*). I didn’t even try it with my BSD workstation: I could never get Supermicro IPMIs to work – even on AMD64. So for me it’s Linux in Bhyve for that kind of task (if anybody has any idea on how to get IPMI console access working on FreeBSD please let me know!!).

Our common installation procedure for special servers involves a little device that stores CD/DVD images and presents a virtual optical drive to the machine. Since ARM64 machines usually don’t have optical drives anymore, there are no CD images available for that architecture. Fortunately it’s not that complicated to prepare a USB thumb drive instead.
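Writing the memstick image to a thumb drive can be done with dd, e.g. like this (a sketch – the device name da0 depends on your system, so double-check it before overwriting anything!):

```
# dd if=FreeBSD-11.2-RELEASE-arm64-aarch64-memstick.img of=/dev/da0 bs=1m conv=sync
```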

Once that was done I was able to confirm that remote administration using SoL (“Serial over LAN”) worked, so I could start playing around with the new server in my free time.

Hello Beastie!

Being our company’s “BSD guy”, it was pretty obvious that testing FreeBSD on the machine (even if it’s only for comparison with Linux) would be my natural task. FreeBSD on ARM64 is currently a tier 2 platform. However it has been the plan for several years now to eventually make it tier 1 – i.e. a perfectly well supported platform. Even better: The Cavium ThunderX was selected as the reference platform for ARM64 and a partnership established between Cavium and the FreeBSD Foundation! So doing a few tests there should be a piece of cake, right?

I booted the machine with the USB stick attached. Just as expected, there’s no working VGA console support (this is pretty much the default when working with ARM). The SoL connection worked, though, and showed the nice beastie menu! “Off to a good start”, I thought. I was wrong.

The loader did its job of loading the kernel and… that’s where everything stopped. I rebooted again and again, trying to set loader tunables to get the system to boot (need I mention that the machine literally takes minutes to execute EFI stuff before it gets to the loader?). No luck whatsoever. The line “Entered …. EfiSelReportStatusCode Value: 3101019” was always the last thing printed to the screen. And even after waiting for minutes, the machine didn’t do anything afterwards.

“Maybe that status code shows that something is wrong?” I thought. A quick search on the net revealed a console log for NetBSD booting on the same hardware. It printed the same line but the kernel obviously did its thing. Weird – and quite discouraging. At this point I had no more ideas that I could try and was about to accept that it “just didn’t work”. Then I read a post to the FreeBSD-arm mailing list: Somebody else had the same problem! However it looked like nobody had a solution – there was not a single answer to that mail… But the post included one important piece of information: FreeBSD 11.2 did work!

FreeBSD 11.2

Ok, that would be a lot better than nothing. So I downloaded the 11.2 image, dd’d it to the stick and booted. This time there was no beastie (before 12.0, FreeBSD used a different loader on ARM) – but the kernel booted! I even got to the installer. Yay!

I did a fairly simple install like I often do it. The big difference was that I couldn’t configure the network. But let’s leave that for later, shall we? The installation completed and I rebooted. The loader appeared, the kernel booted and… Bam! The system is unable to import the pool! Too bad…

So I booted the stick again, did a second install choosing UFS instead of ZFS. This time FreeBSD would boot successfully into the new system and allow me to log in. Unfortunately no NICs were detected – and a server without network connectivity is rather… boring. What now?

After a bit of research on the net I learned about LIQUIDIO(4), the “Cavium 10Gb/25Gb Ethernet driver”. Sounded great. However the manpage says that it “first appeared in FreeBSD 12.0”. *sigh* Looks like I have to upgrade to 12.0 somehow.

FreeBSD 12?

I downloaded the source for the latest 12.0 from the releng branch, put it on a stick and attached it to the ARM server. That alone – I didn’t even get to import the pool – was enough to make the system panic! Oh great…

A UFS formatted stick then? Yes, that worked. I transferred the source, built the system, installed the kernel and saw that familiar line: “Entered …. EfiSelReportStatusCode Value: 3101019”. Obviously it’s not the images that are broken but actually FreeBSD 12.0 is…

What’s next? Let’s try 12-STABLE. Maybe the problem is fixed there. Unless… The installed kernel is broken and trying to boot kernel.old does not work! Oh excellent, it looks like “make installkernel” did not create any backup! This probably makes a lot of sense on small ARM devices, but it doesn’t make any in this case. Now I’m in for another reinstall.

After reinstalling 11.2 I did “cp -R /boot/kernel /boot/kernel.11” – just in case. Then I transferred the code for 12-STABLE onto the machine and built it. You would probably have guessed it by now: It’s broken.

HEADaches

The next day at work I organized a USB to ethernet adapter and configured ue0 so that I could finally ssh into the OS. That felt a lot better and I could also use svnlite directly to checkout the source code. Quite some time later I had built and installed -CURRENT. But would it work? Nope!

Now this was all very unfortunate. But I happen to like FreeBSD and really want it to work on ARM64, too. Just waiting for somebody to fix it didn’t sound like a good strategy, however. The hardware is still pretty exotic and if just one person reported breakage in the final days of RCs for 12.0, it’s pretty clear that not too many developers have access to it to test things regularly… So I figured that I should try to find out exactly when HEAD broke. That would most likely be helpful information.

Alright… r302409 was the commit that turned HEAD into 12-CURRENT. Let’s make sure that this version still works – otherwise it would mean that ThunderX support in 11 was fixed after HEAD was branched off. Seems very unlikely, but making sure wouldn’t hurt, right?

usr/local/aarch64-freebsd/bin/ranlib -D libc_pic.a
--- libc.so.7.full ---
/usr/local/aarch64-freebsd/bin/ld: getutxent.So(.debug_info+0x3c): R_AARCH64_ABS64 used with TLS symbol udb                                                                                   
/usr/local/aarch64-freebsd/bin/ld: getutxent.So(.debug_info+0x59): R_AARCH64_ABS64 used with TLS symbol uf                                                                                    
/usr/local/aarch64-freebsd/bin/ld: utxdb.So(.debug_info+0x5c): R_AARCH64_ABS64 used with TLS symbol futx_to_utx.ut                                                                            
/usr/local/aarch64-freebsd/bin/ld: jemalloc_tsd.So(.debug_info+0x3d): R_AARCH64_ABS64 used with TLS symbol __je_tsd_tls                                                                       
/usr/local/aarch64-freebsd/bin/ld: jemalloc_tsd.So(.debug_info+0x1434): R_AARCH64_ABS64 used with TLS symbol __je_tsd_initialized                                                             
/usr/local/aarch64-freebsd/bin/ld: xlocale.So(.debug_info+0x404): R_AARCH64_ABS64 used with TLS symbol __thread_locale                                                                        
/usr/local/aarch64-freebsd/bin/ld: setrunelocale.So(.debug_info+0x3d): R_AARCH64_ABS64 used with TLS symbol _ThreadRuneLocale                                                                 
cc: error: linker command failed with exit code 1 (use -v to see invocation)
*** [libc.so.7.full] Error code 1

Perhaps building an older version of FreeBSD on a newer one is not such a good idea… So I chose to install 11.0 first. That worked without problems. However… 11.0 was cut before the switch to LLVM 4.0 – which brought in a usable lld! For that reason 11.0 needed an external toolchain. Of course that version is not supported anymore, so there are no packages for it around. To really build anything on 11.0 would mean that I’d first have to cross-compile a binutils package for it! I admit that this wasn’t terribly attractive to me – especially since I already had a way different goal.

Ok… So HEAD likely broke somewhere between r302409 and r343815 – that’s roughly 40,000 commits to check! Let’s just hope that it broke after FreeBSD on ARM64 became self-hosting…

Checking r330000 – it works! r340000? Broken. r335000: Still works. r337500: Nope. After a lot of deleting previous builds and source, checking out old revisions and rebuilding kernel-toolchain and kernel (and hitting some versions that didn’t build on ARM in the first place), I found that commit r336520 broke FreeBSD on ThunderX: it added the vt_efifb device to the GENERIC kernel on ARM64. According to the commit message it was tested on a PINE64, but obviously it does not work with ThunderX yet.
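The search itself is just binary search over revision numbers. Here is a minimal sketch of the loop – note that `boot_test` is a hypothetical stand-in for the real procedure (updating /usr/src to the revision, rebuilding kernel-toolchain and kernel, and boot-testing the result); in this sketch it merely simulates the outcome that r336520 was the first bad revision:

```shell
lo=302409   # last revision known to work
hi=343815   # first revision known to be broken

boot_test() {
    # Stand-in for: svnlite up -r "$1" /usr/src && make kernel-toolchain kernel && <boot it>
    # Simulates the known result: everything before r336520 boots.
    [ "$1" -lt 336520 ]
}

while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    if boot_test "$mid"; then
        lo=$mid   # still boots: the breakage came later
    else
        hi=$mid   # broken: the offending commit is at or before $mid
    fi
done
echo "first bad revision: r$hi"   # prints: first bad revision: r336520
```

With roughly 40,000 candidate commits, this converges in about 16 build-and-boot cycles – still a lot of compiling, but far better than checking revisions at random.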

Problem solved?

So if it’s just one line in the GENERIC configuration, it should be no problem to fix this, right? To test that, I checked out HEAD again, removed the line, and rebuilt and installed the kernel once more – it booted just fine!

Then I restored the GENERIC config and wrote a custom kernel configuration instead:

/usr/src/sys/arm64/conf/THUNDERX:

include GENERIC
ident THUNDERX

nodevice        vt_efifb

I wrote to the mailing list that I had identified a problem with ThunderX and made proposals on how to unbreak HEAD.

An answer pointed out that a custom kernel was not even required: It suffices to set the loader tunable hw.syscons.disable to make HEAD work again! No need to remove the device from GENERIC – that’s even better! But what I actually wanted was 12.0 or 12-STABLE, not HEAD. Installing that, however, I found that it’s… well. Broken!
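For reference, the workaround amounts to a single line in the loader configuration (the tunable name is from the mailing list reply; setting it to 1 is my assumption of the intended value):

```
# /boot/loader.conf
hw.syscons.disable=1
```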

It broke twice!

What’s going on here? Could it be that HEAD broke twice and one of the problems was fixed in a later revision? I had no better explanation. So let’s keep compiling for another few nights and see…

r339436 is when HEAD was renamed to 13-CURRENT. Would it work just by setting the loader tunable? Nope. I played the same game as before and eventually figured out that r338537 was the offending commit here. Its message says that it increases two values because of ThunderX – but unfortunately it actually broke the platform.

On to more compiling… Finally, here’s the other commit: r343764 changed those values again in preparation for ThunderX2 – and accidentally fixed ThunderX, too! Unfortunately that wasn’t until after 12.0 had been branched off.

Again it’s a rather simple patch to make 12-STABLE work:

--- sys/arm/arm/physmem.c.orig  2019-02-17 08:47:05.675448000 +0100
+++ sys/arm/arm/physmem.c       2019-02-17 08:48:53.209050000 +0100
@@ -29,6 +29,7 @@
 #include <sys/cdefs.h>
 __FBSDID("$FreeBSD: stable/12/sys/arm/arm/physmem.c 341760 2018-12-09 06:46:53Z mmel $");
 
+#include "opt_acpi.h"
 #include "opt_ddb.h"
 
 /*
@@ -48,8 +49,13 @@
  * that can be allocated, or both, depending on the exclusion flags associated
  * with the region.
  */
+#ifdef DEV_ACPI
+#define MAX_HWCNT       32      /* ACPI needs more regions */
+#define MAX_EXCNT       32
+#else
 #define        MAX_HWCNT       16
 #define        MAX_EXCNT       16
+#endif
 
 #if defined(__arm__)
 #define        MAX_PHYS_ADDR   0xFFFFFFFFull

Fixing 12?

After I had a working 12 system, I wrote to the mailing list again, asking if the change could be MFC’d from r343764 into 12-STABLE.

Rod Grimes was nice enough to ping the developer who had made the commit. So far he hasn’t replied but I really hope that it’ll be possible to MFC the change. Because otherwise that would mean all of FreeBSD 12.x would remain broken – and that would definitely not be cool… If it is possible we’re likely to have a working 12.1 image when that version is out. 12.0 will remain broken, I guess.

Until we have a working version again, you can either install 11.2 and build from (currently patched) source, or you can try a fixed image that I built (if you trust me). What I did to build it was this:

  1. svnlite co svn://svn.freebsd.org/base/stable/12 /usr/src
  2. cd /usr/src
  3. patch -i patchfile
  4. make -j 49 buildworld
  5. make -j 49 buildkernel
  6. edit stand/defaults/loader.conf to set hw.syscons.disable
  7. cd release
  8. make memstick NOPKG=1 NOPORTS=1 NOSRC=1 NODOC=1 WITH_COMPRESSED_IMAGES=1

Conclusion

Tier 1? Obviously we’re not quite there yet. Booting off of ZFS does not work. There are no binary updates. Even with 12-STABLE I’ve been unable to get the NICs working – and you wouldn’t want to run a server that has four 10 Gbit NICs and one 40 Gbit NIC on just a 1 Gbit USB-to-Ethernet adapter.

On the plus side, FreeBSD generally works and seems stable with UFS. Building from source using 49 threads works and so does building software from ports. I even installed Synth and ran the “build-everything” for over a day to put some serious load on the machine without any problems.

I’d like to at least get the network working, too, before that machine turns into a penguin for production. Maybe I can set aside a few more hours at night. If I end up with anything noteworthy, I’ll follow up on this post. Oh, and if anybody has any information on making it work, please let me know!

ZFS and GPL terror: How much freedom is there in Linux?

There has been a long debate about whether man is able to learn from history. I’d argue that we can – at least to some degree. One of the lessons we could have learned by now is how revolutions work. They begin with the noblest of ideas that many followers wholeheartedly support and may even risk their lives for. They promise to rid the poor, suppressed people of the dreaded current authorities. When they are over (and haven’t obviously failed), they will have replaced the old authorities with – new authorities. These might be “better” (or rather: somewhat less bad) than the old ones if we’re lucky, but they are sure to be miles away from what the revolution promised to establish.

Death to the monopoly!

Do you remember Microsoft? No, not the modern “cloud first” company that runs Azure and bought Github. I mean good old Microsoft that used to dominate the PC market with their Windows operating system. The company that used their market position with Windows 3.x to FUD Digital Research and their superior DR-DOS out of the market by displaying a harmless line of text with a warning about possible compatibility issues. The company that spent time and resources on strategies to extinguish Open Source.

Yes, due to vendor lock-in (e.g. people needing software that only runs on Windows) and to laziness (just using whatever comes installed on a PC), they have maintained their dominance on the desktop. However, the importance of the desktop has long been in decline: Even Microsoft has acknowledged this by demoting their former flagship product and even considering making it available for free. They didn’t quite take that extreme step, but it’s hard to argue that Windows still has the importance it had from the 1990s up to the early 2010s.

They’ve totally lost the mobile market – Windows Phone is dead – and are not doing too well in the server market, either.

A software Golden Age with Linux?

In both areas Linux has won: It’s basically everywhere today! Got a web-facing server? It’s quite likely running some Linux distro. Most smartphones on the planet run Android, which is driven by a modified Linux kernel. And even in space – on the ISS – Linux is in use.

All of us who have fought against the evil monopoly could now be proud of what was accomplished, right? Right? Not so much. There’s a new monopolist out there, and while it adheres to Open Source principles to the letter, it has long since started violating their spirit by turning them against us.

For those who do not deliberately look the other way, Linux has mostly destroyed POSIX. How’s that? By more or less absorbing it! If software is written “with POSIX in mind” today, that usually means it’s written to work on Linux. However, POSIX was meant to establish a common ground, ensuring that software runs across all of the *nix platforms! Reducing it to basically one target shattered the whole vision. Just ask a developer working on a Unix-like OS that is not Linux about POSIX and the penguin OS… You’re not in for stories about respect and consideration for other systems. One could even say that they have repeatedly acted quite rude and ignorant.

But that’s only one of the problems with Linux. There are definitely others – like people acting all high and mighty and bullying others. The reason for this post is one such case.

ZFS – the undesirable guest

ZFS is today’s most advanced filesystem. It originated on the Solaris operating system, and thanks to Sun’s decision to open it up, we have it available on quite a number of Unix-like operating systems. That’s just great! Great for everyone.

For everyone? Nope. There are people out there who don’t like ZFS. Which is totally fine – they don’t need to use it, after all. But worse: There are people who actively hate ZFS and think that others should not use it. Ok, it’s nothing new that some random guys on the net act like assholes, trying to tell you what you must not do, right? Whoever has been online for more than a couple of days has probably gotten used to it. Unfortunately it’s still worse: One such spoilsport is Greg Kroah-Hartman, Linux guru and informal second-in-command after Linus Torvalds.

There have been some attempts to defend this kernel developer’s stance. One was to point at the fact that the “ZFS on Linux” (ZoL) port uses two kernel functions, __kernel_fpu_begin() and __kernel_fpu_end(), which have been deprecated for a very long time, and that it makes sense to finally get rid of them since nothing in-kernel uses them anymore. Nobody is going to argue against that. The problem becomes clear by looking at the bigger picture, though:

The need for functions doing just what the old ones did has of course not vanished. They have simply been replaced with new ones – and those are deliberately marked GPL-only. Yes, that’s right: There’s no technical reason whatsoever! It’s purely ideology – and a terrible one.

License matters

I’ve written about licenses in the past, making my position quite clear: It’s the author’s right to choose whatever license he or she thinks is right for the project, but personally I would not recommend pessimistic (copyleft) licenses, since they do more harm than good.

While I didn’t have any plans to re-visit this topic anytime soon, I feel like I have to. Discussing the matter on a German tech forum, I encountered all the usual arguments and claims – most of which are either inappropriate or even outright wrong:

  • It’s about Open Source!
  • No it’s absolutely not. ZFS is Open Source.

  • Only copyleft will make sure that code remains free!
  • Sorry, ZFS is licensed under the CDDL – which is a copyleft license.

  • Sun deliberately made the CDDL incompatible with the GPL!
  • This is a claim supported primarily by one former employee of Sun. Others disagree. And even if it was verifiably true: What about Open Source values? Since when is the GPL the only acceptable Open Source license? (If you want to read more, user s4b dug out some old articles about Sun actually supporting GPLv3 and thinking about re-licensing OpenSolaris! The forum post is in German, but the interesting thing there is the links.)

  • Linux owes its success to the GPL! Every Open Source project needs to adopt it!
  • This is a pretty popular myth. Like every myth there’s some truth to it: Linux benefited from the GPL. If it had been licensed differently, it might have benefited from that other license. Nobody can prove that it benefited more from the GPL or would have from another license.

  • The GPL is needed, because otherwise greedy companies will suck your project dry and close down your code!
  • This has undoubtedly happened. Still it’s not as much of a problem as some people claim: They like to suggest that formerly free code somehow vanishes when used in proprietary projects. Of course that’s not true. What those people actually dislike is that a corporation is using free code for commercial products. This can be criticized, but it makes sense to do that in an honest way.

  • Linux and BSD had the same preconditions. Linux prospers while BSD is dying and has fallen into insignificance! You see the pattern?
  • *sigh* Looks like you don’t know the history of Unix…

  • You’re an idiot. Whenever there’s a GPL’d project and a similar one that’s permissively licensed, the former succeeds!
  • I bet you use Mir (GPL) or DirectFB (LGPL) and not X.org or Wayland (both MIT), right?

What we can witness here is the spirit of what I’d describe as GPL supremacism. The above (and similar) attacks aren’t much of a problem. They are usually pretty weak, and the GPL zealots get enraged quite easily. It’s the whole idea of trading the values of Open Source for religious GPL worship (Thou shalt not have any licenses before me!) that’s highly problematic.

And no, I’m not calling everybody who supports the idea of the GPL a zealot. There are people who use the license because it fits their plans for a piece of software and who can make very sensible points for why they are using it. I think that in general the GPL is far from being the best license out there, but that’s my personal preference. It’s perfectly legitimate to use the GPL and to promote it – it is an Open Source license after all! And it’s also fine to argue about which license is right for some project.

My point here is that those overzealous people who try to actually force others to turn towards the GPL are threatening license freedom and that it’s time to just say “no” to them.

Are there any alternatives?

Of course there are alternatives. If you are concerned about things like this (whether you depend on modules developed out-of-kernel or not), you might want to make 2019 the year to evaluate *BSD. Despite repeated claims, BSD is not “dying” – it’s well alive and innovative. Yes, there are areas where it’s lagging behind, which is no wonder considering that there’s a much smaller community behind it and far fewer companies pumping money into it. There are companies interested in seeing BSD prosper, though – in fact even some big ones like Netflix and Intel.

Linux developer Christoph Hellwig actually advises switching to FreeBSD in a reply to a person who has been a Linux advocate for a long time but depends on ZFS for work. And that recommendation is not a bad one. A monopoly is never a good thing – not even for Linux. It makes sense to support the alternatives out there, especially since there are some viable options!

Speaking about heterogeneous environments: Have you heard of Verisign? They run the registry for .com and .net, among other things. They’ve built their infrastructure one third on Linux, one third on FreeBSD and one third on Solaris for ultra-high resiliency. While that might be an excellent choice for critical services, it might be hard for smaller companies to find employees specialized in all of those operating systems. But bringing a little BSD into your Linux-only infrastructure might be a good idea anyway – and might in fact even lead to a future competitive advantage.

FreeBSD is an excellent OS for your server and also well suited if you are doing embedded development. It’s free, ruled by a core team elected by the developers, and available under the very permissive 2-clause BSD license. While it’s perfectly possible to run it as a desktop, too (I do that on all of my machines, both private and at work, and it has been my daily driver for a couple of years now), it makes sense to look at a desktop-focused project like GhostBSD or Project Trident for an easy start.

So – how important is ZFS to you, and how much do you value freedom? The initial difficulty that the ZoL project had has been overcome – however, they are just working around it. The potential problem that non-GPL code has when working closely with Linux remains. Are you willing to look left and right? You might find that there are some good things out there that actually make life easier.