Cross-platform package building: Pkgsrc vs. Ravenports (2/2)

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/cross-platform_package_building_pt2.gmi

The previous article (part 1) on cross-platform package management / package building covered the basics by taking a look at what some of the many difficulties are. It briefly described some common strategies, too.

Part 2 discusses two package building frameworks that were developed to allow for a cross-platform solution. It contains a short introduction of both, a somewhat detailed elaboration of how they compare as well as information on the test scenario and of course the results. It ends with a conclusion.

Contender 1: What is Pkgsrc?

Pkgsrc, pronounced package source, is NetBSD’s ports system. Since one of NetBSD’s primary goals is portability, it’s not much of a surprise that Pkgsrc is also portable across several platforms and architectures. It was originally derived from FreeBSD’s ports system but was later redesigned in a complete rewrite. In a nutshell, you can think of a “port” as a buildsheet for some kind of software. You tell the system that you want it to build PostgreSQL, Firefox or whatever, and if there is a port for that, the system knows what to do to accomplish the task.

The individual ports can be used directly by building the software on the host machine or to build binary packages that can then be installed using a package manager. To build a package like Firefox, a lot of dependencies are required. If they are not already present on the system, they will automatically be built first (there are ports for those programs and libraries as well).

If you want to know more about this, here are two articles that I wrote a couple of years ago: FreeBSD – building software from ports (part 1) and (part 2). They provide an introduction to the FreeBSD ports system. If you are unfamiliar with the concept, skim the first one – and if you find it interesting, read both it and the second one completely.

Once you understand what FreeBSD’s ports tree is, you’ve got an idea of what Pkgsrc is, too: From the user’s perspective it has the same basic purpose.

Contender 2: What is Ravenports?

Ravenports is a much more recent take on the topic of package building. It was conceived by John Marino, who had previously contributed to the maintenance of thousands of ports in Pkgsrc and FreeBSD ports and who also invented dports (DragonFly BSD’s adaptation of FreeBSD’s ports). After his attempts to modernize the old ports frameworks had failed, he decided to design a new one based on the lessons learned.

RP is meant to overcome limitations due to design decisions that looked reasonable more than two decades ago but have since proven to be obstacles. Modern-day tooling, high concurrency and a very high level of automation as well as convenient helpers for port maintenance are what it offers over others. It’s these features that allow it to reduce the army of porters usually required to maintain several thousand packages to a couple of capable people at most.

It differs from a classical ports system in that it is not capable of building and installing software directly on the host system. It always uses a pristine “build jail” for each package, in which only the relevant dependencies are installed before the actual build process begins. When the build is completed and the software packaged up, the build jail is torn down. FreeBSD ports and Pkgsrc allow for doing the same by using additional tools. With Ravenports this is the only supported (and built-in) mode of operation: Building packages properly, not installing software directly.

How do they compare?

I’ve used both Pkgsrc and Ravenports for private projects as well as for customers and thus benefited from them. While I like both, I won’t hide that I was so impressed by Ravenports that after trying it out I soon became involved in the project. So one could say that I’m biased and you cannot expect a fair comparison. I tried hard to come up with sensible criteria to put both to the test and I’ve asked other people (like the other two Advance!BSD members who are neutral) for their judgement.

I also deliberately treated Ravenports harshly during the comparison; there are no bonus points from me. On the contrary: I proposed to evaluate Pkgsrc first, because I was under the impression that it would probably fit our project better (being already available for so many platforms and being very mature). It was only when we hit unexpected problems that I decided to go for a somewhat comprehensive evaluation and comparison.

A direct comparison of both candidates is actually pretty difficult to do. You could compare the platforms that both support. Pkgsrc wins this easily. However, when it comes to support for bulk builds (building packages from all ports or a considerable subset), that’s broken on almost all of the platforms that Pkgsrc supports! So when counting only such “fully supported” platforms, Ravenports takes it home.

One of the most obvious things you could compare is package count. Again this is not that simple because you’re comparing apples to oranges. Pkgsrc offers so many more packages than Ravenports; but among those are e.g. over 1,400 packages related to TeX – which Ravenports packages differently, ending up with only 15 packages for basically the same thing! Pkgsrc keeps various older compilers and versions of popular interpreters (Python, Ruby) in the tree even when they are no longer supported. Ravenports has a much stricter policy of trying hard to purge ports of software that has officially reached the status of EOL (End Of Life). Neither approach is inherently better than the other – it’s a question of what your preferences or requirements are.

On the other hand, the number of Pkgsrc packages buildable for the various platforms (as listed below) is a minimum count (hence >=). The reason is that when not bulk-building, all dependencies for packages must be installed on the live system (and are not automatically deinstalled). Some dependencies will block others from being installed, and thus a considerable number of packages will fail to build as a result. So while I included the figures here to give an impression of how well each platform is supported, they have no absolute meaning and thus score neutral (the total count is among the criteria in the “general” section, though).

So keep in mind that there are multiple problems that do not allow for simply comparing numbers. Think about what all the results mean and what is better for your use case. For example in our case a platform that does not support bulk builds is close to being worthless. However even partial support means that fixing bulk builds would be much less effort than having to bring it up on an entirely new platform.

Test conditions and environment

To make the already difficult tests as comparable as possible, a fresh VM was created for each system under test. Resources for those VMs were of course identical: 48 GB of RAM, 16 CPU cores (of a 3.0 GHz E5-2623 server) and 1 TB of disk space.

The tests started immediately after pkgsrc-2021Q3 was published in late September, and the Ravenports side was compared against the package set most recently released at that time. So in this comparison newer additions (namely NetBSD support in Ravenports) are mentioned but treated as absent for the scoring.

We are mostly interested in the *BSD operating systems and thus our focus is on those. As it might be of interest to a lot of people, we’ve included results for Linux and illumos, too, but gave the respective results a neutral score. The operating systems used in the tests were:

  • DragonFly BSD 6.0
  • FreeBSD 13.0
  • NetBSD 9.2
  • OpenBSD 7.0
  • Rocky Linux 8.4 (Community effort to replace what was CentOS before IBM… “repurposed” it)
  • OmniOSce r151038

For all criteria tested, a score is awarded. It can be very positive (++), positive (+), neutral (0), negative (-) or very negative (--). All of those added up lead to the final score.

Testing procedure

For all tests freshly installed operating systems were used. The installations were done as close to a standard / minimal installation as possible. This is true for DragonFly BSD and FreeBSD.

On NetBSD and OpenBSD all of the package sets were installed, since various X11 ports in Pkgsrc depend on those being present and would fail otherwise. Compared to many Linux distributions this is still a fairly slim installation, but it’s not minimal in the true sense of the word and thus should be noted here. On OpenBSD the standard partitioning scheme was also replaced with a simplistic one.

Since Linux does not know the concept of a base system, there’s also no base compiler. As one is needed to bootstrap Pkgsrc (Ravenports is self-contained in this case), ‘yum groupinstall “Development Tools”‘ was used to make the system capable of building software. On OmniOSce there is also no base compiler; ‘pkg install gcc10’ took care of making the system fit to continue on.

With all systems the first step after preparing the environment – i.e. making a compiler available if it’s missing as well as configuring SSH – was to fetch pkgsrc-2021Q3.tar.xz and decompress it to /usr. Then the typical bootstrap was run (except on NetBSD where this is not required) and the PATH environment variable modified to include /usr/pkg/sbin and /usr/pkg/bin.
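
In shell terms the procedure boils down to something like this (a sketch only – the mirror URL and the fetch tool differ per platform, and the bootstrap step is skipped on NetBSD):

# fetch https://cdn.netbsd.org/pub/pkgsrc/pkgsrc-2021Q3/pkgsrc-2021Q3.tar.xz
# tar -C /usr -xf pkgsrc-2021Q3.tar.xz
# cd /usr/pkgsrc/bootstrap && ./bootstrap
# export PATH=/usr/pkg/sbin:/usr/pkg/bin:$PATH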

The next step was always to build pkgtools/mksandbox as well as pkgtools/pbulk and to attempt a bulk build. Since this failed for one reason or another in almost all cases, plan B was to at least run (b)make in /usr/pkgsrc, letting the system build as many packages as possible. A list of packages was created and the build started over after declaring all the licenses acceptable (otherwise quite a number of packages that could be built are skipped), as sketched below.
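
One way to declare everything acceptable in one go is to disable the license check entirely (a sketch, assuming the mk.conf location of a bootstrapped Pkgsrc; individual licenses can alternatively be whitelisted via ACCEPTABLE_LICENSES):

# echo "SKIP_LICENSE_CHECK=yes" >> /usr/pkg/etc/mk.conf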

With Ravenports the procedure consists of doing the bootstrap as explained in the documentation for each supported OS and eventually running ‘ravenadm build-everything’. The results for Solaris / illumos were taken from a list of the packages in the official repository at that time. The reason is that I did not have enough spare time to try and install the specific old version of Solaris (from 2009!) required in the VM.

Results

Here are the results of what is likely one of the more comprehensive evaluations of Pkgsrc ever done. Package building on all the platforms took almost two months’ time. The various criteria have been grouped together into the “OS support”, “General” and “Technical evaluation” groups. Unfortunately the tables did not fit well either in Gemtext or in blog-style HTML and are therefore available as images.

Operating System support

Pkgsrc vs. Ravenports: Operating system support

As mentioned above, keep in mind that the number of packages that built with Pkgsrc is the minimum count of buildable ones. Where the numbers look particularly low, that’s usually due to a very basic dependency package being broken, cutting off a substantial part of the tree (e.g. Python on DragonFly BSD and pkgconf on illumos).

Disregarding Linux and illumos support (as those are out of scope for our use case), both score almost evenly. Pkgsrc comes out very slightly ahead [1:0] of Ravenports due to its excellent integration into one of our main platforms (NetBSD).

If both Linux and illumos are to be taken into account, too, Pkgsrc’s score remains the same as two additional supported platforms with two failed bulk builds neutralize each other. Ravenports however gains two points for Linux support including bulk builds and one more point for illumos support (since it can bulk-build for but not on illumos, I’d give a 0 instead of a + rating in this case). This would place Ravenports ahead [1:3].

General criteria

Pkgsrc vs. Ravenports: General

While the former section ended up very surprising for me (having taken for granted that full bulk build support was the rule rather than the exception!), this one is very much what you’d expect.

Pkgsrc, being an established project with a long history and a lot of contributors, can show its strengths to the fullest here. The newcomer does not perform too badly actually, but despite getting ahead in one test (package freshness) it doesn’t stand the slightest chance against the veteran. Pkgsrc wins with a huge lead [12:-1] and Ravenports leaves the field beaten and bruised in crushing defeat.

Technical evaluation

Pkgsrc vs. Ravenports: Technical evaluation

While it looked like the match was already over after the previous round, this one is where being slow and old-fashioned despite being mature puts you at risk. So here’s where the youngster can shine with its modern-day approaches and new tricks! I expected Ravenports to clearly beat Pkgsrc this time. It did – and it even managed to make this round look like the previous one with the roles inverted…

This time it’s Pkgsrc that can win only one test (popularity of languages used). Ravenports however manages to bring out the big guns and eventually leaves Pkgsrc utterly destroyed by an even higher margin [-3:11] than it lost the previous round.

Final results

Pkgsrc vs. Ravenports: Final results

For me this duel between Pkgsrc and Ravenports has been more interesting than I would have thought. After Pkgsrc messed up the jump start that I was expecting, things began to look like a more evenly matched comparison. I still would have lost my money as I’d have bet Pkgsrc to win at least by a tiny margin.

It’s a bit weird but at the same time probably fitting to see that both contenders have such a diverse set of strengths and weaknesses – and that their comparison eventually even ended [10:10] in a draw!

But that’s kind of delivering old news, since I’m writing about the comparison as of late September. In October, Ravenports gained official support for NetBSD, which would shift the results in Ravenports’ favor [10:14]. Giving Pkgsrc another try when the Q4 release happens could provide more insights. I haven’t decided yet. Maybe I’ll do that.

Conclusion

As stated above, Pkgsrc is NetBSD’s ports system. Our tests show that you need to stress this: It’s not simply a statement about its origin. It’s a statement about its main purpose.

NetBSD is where Pkgsrc development takes place. Other operating systems are invited to participate, but maintaining excellent support for any other OS is a huge task. Before I took a closer look at this, I didn’t know that even Joyent (the company behind SmartOS, which uses Pkgsrc as its official means of package management) maintains a downstream fork of Pkgsrc!

They have employees who are doing paid work on Pkgsrc – and obviously they still do not manage to keep upstream Pkgsrc in good enough shape for themselves (leading to the less than impressive number of packages that currently work on illumos).

Ravenports is more like the new kid on the block. It has proven to be a serious effort (being in active development for over 5 years now) but has yet to find its niche. Despite it being technically superior in many ways, it’s held back by the very low number of contributors.

Both solutions can be used for cross-platform package management. So which one should you choose? While they can be used for the same scenario, their goals are very different. Pkgsrc is meant to run even on the most exotic, limited and obsolete platforms and is willing to pay a price for that. Ravenports in contrast focuses entirely on modern platforms and can take advantage of their superior capabilities. Therefore there’s no “one size fits all” in this case; it really depends on what you want to do.

Need to build ports directly? Pkgsrc’s the answer here. Have to install software on AIX? Use Pkgsrc. Running a SPARC64 machine? Pkgsrc again. If however you need the latest software versions, go for Ravenports. Are reliable package builds across multiple platforms what you’re looking for? Give Ravenports a try. Don’t want to invest into build clusters for acceptable performance? Go with Ravenports if you can.

Consider your use case, compile a table of requirements. Then see which solution fits you best. But whichever you pick, always consider that you might have to do some work of your own (and please contribute it back!).

Cross-platform package building: Pkgsrc vs. Ravenports (1/2)

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/cross-platform_package_building_pt1.gmi

This is the first of two articles on cross-platform package management / package building. It covers the basics by discussing why it is actually surprisingly (to many people) difficult to do and what some of the problems are. It also takes a quick look at some strategies to solve the problem.

Package management: Closely platform-dependent

One of the strengths of today’s Unix-like operating systems is that they offer proper package management. This makes installing and maintaining software both simple and convenient – as long as we are talking about homogeneous environments, that is! As soon as you have a heterogeneous environment to work in, things get complicated rather quickly. Packages and package systems are usually closely tied to their platform. It’s not a coincidence that people talk about e.g. “RHEL-like” distributions and “RPM-based” ones kind of interchangeably. The package manager is so close to the core of an OS or a distribution that it can be used to refer to it synonymously.

It’s usually not the package manager itself that makes the difference (unless it is a special one offering features that define a whole distribution using it – like Gentoo’s portage, which allows for fine-grained custom building of applications thanks to its USE flags among other things, or NixOS’s nix, a purely functional package manager that supports concurrent installations of multiple versions, atomic upgrades, etc.). Whenever an application (or something else like a theme pack, documentation, etc.) is to be packaged, the maintainer has to make a couple of decisions – and maintainers of the same software for another OS / distribution might have another opinion on them.

Different package naming schemes

One of the first things that come to mind is that package names may differ between operating systems / distributions. A popular example is the Apache webserver being available as httpd in Linux distributions based on Red Hat’s while in Debian-based ones it’s known as apache2. FreeBSD uses the name apache24 and OpenBSD calls the same webserver apache-httpd. But those are only names; any configuration management system (or even every admin by hand!) can handle that easily. The software is the same after all, right? Yes and no!

Different configuration options and structure

While it’s all built from the code that is released by the same upstream project, all the platforms organize the software differently. Sticking to the Apache example, it’s pretty well known that Debian-based distributions use a mechanism called “sites-enabled” while others like Red Hat do not. This means either embracing the multiple schemes that are native to the platforms you use, or creating your own and bending the default installations on all platforms to use that. Harmonizing configuration like this is not such an uncommon thing, and doing it is not incredibly hard.

It comes at a price, though. Hired a new admin? You could probably expect him or her to be familiar with the standard scheme. But if you’re using a custom one, the new employee will need time to become familiar with it. You also can’t do it once and be done with it. The default configuration is likely to change over time. In the case of our webserver, for example, the recommended ciphers for TLS encryption may change. If you use the default configuration you’ll probably get important changes like this for free when updating. Forsaking it and doing your own means more homework for you to keep the configuration in good shape.

Different paths

Speaking of which: On FreeBSD you will even find the configuration files in another place (/usr/local/etc/apache24) while on Linux it’s commonly something below /etc (like /etc/apache2 or /etc/httpd). Other things like databases are frequently in different places, too. On FreeBSD it’s /var/db/mysql for example, while on Linux it’s usually /var/lib/mysql. By itself that’s just another small detail to take into account. But those add up and should not be neglected entirely.

Different compile-time options

And even worse: The package maintainers for each platform make decisions on compile-time options! So the resulting software will differ even if you go the extra mile and configure them alike with your custom runtime configuration scheme! Even seemingly basic things should not be just taken for granted. Once upon a time, the Apache webserver often came _without_ SSL support – sometimes there was an extra package which had it enabled for people who needed that functionality! Sometimes you had to build the software yourself (or use ports that let you set or unset the option as you please). On Debian, the Apache package has Lua support enabled but not that for LuaJIT. On FreeBSD both are disabled by default. The FreeBSD port offers more than 130 (!) options – it’s not hard to see how much of a difference the choice of the package maintainer can make for more advanced software!

Patches

While probably any reader will understand pretty well by now that our topic is rather far away from all peaceful unicorns and sunny weather, it gets worse still. It’s not uncommon that package maintainers choose to apply patches instead of using the code as upstream provided it. This may be due to an incompatibility (maybe some dependency that this OS / distro ships is too old or too new for this software and so a patch is required to make it play along nicely). It may be because the maintainer feels that a fix that was not deemed important enough to warrant another release should still be applied. It could be because additional features not supported upstream are desired (many maintainers chose to ship a pretty popular but unofficial additional MPM called ITK, for example). Or it could be because of any number of other good or bad reasons. Therefore the software might differ between various platforms even if the exact same compile-time options were chosen…

Versions

And because we of course saved the best for last, the biggest problem is an even more demanding one: Package versions… Not only do the various operating systems / distributions update to newer versions at their own pace, but they may or may not backport fixes or newer features into the packages that they release! Keeping track of this is already a major hassle for a couple of programs – and it becomes a downright daunting task if you need to do it for many! And you have to. It’s _not_ optional. Why? Because newer versions of your software might introduce newer features or configuration directives that you definitely want to use. However you cannot simply enable them for all of your servers as the older versions will probably refuse to even start due to invalid (unknown) settings!

Newer versions may also deprecate or remove previously supported features. There are features that may only be available on certain platforms (maybe additional dependencies are required which are not ported to every platform that you use). All this kind of fun stuff that can totally ruin your weekend when it eventually bites you despite you having been lucky for a long time before it.

Cross-platform package building strategies

The more diverse your environment is, the more the consequences of what was just scratched above are going to make your job look like one of the inner circles of hell. What’s the best way out of this misery?

Well, what about compiling the most important software yourself and deploying it to e.g. /opt? This is technically very much possible, but is it feasible in large scale? You’re almost guaranteed to drown eventually. Don’t give in to the temptations of going down this road! This way lies insanity.

If you’ve only got a few different systems in use and not too many complex programs, you might get away with careful planning and careful configuration management. It won’t be pretty but it’s possible to do. Got several different systems that you need to support? Do yourself a favor and find another solution.

There is in fact a proper solution to this: Using a package framework that supports multiple platforms. Doing so comes with its own set of challenges and pains, but they are much easier to bear. You probably have to learn to use a new package management tool. (Depending on your choice) you might need to understand the concept of a ports tree if you are not already familiar with it. But doing so you will be able to use packages of the same version, built with the same compile-time options (or at least very close if various platforms force diverging settings) and so on across your entire landscape!

Sounds too good to be true? Let’s put two options which claim to be able to do just that to the test! For the Advance!BSD project we plan to use at least four different operating systems (for more information see: Advance!BSD – thoughts on a not-for-profit project to support *BSD pt. 1 and pt. 2). Using the native packages on each one is basically out of the question. Especially since we anticipate that we’ll need to add some software packages of our own to the mix and totally lack the manpower to maintain that across four package systems.

What’s next?

The next article will introduce Pkgsrc and Ravenports and present the results of a two-month evaluation of Pkgsrc on four BSD operating systems plus Linux and illumos. It will also compare the advantages and disadvantages of both contenders in heterogeneous environments.

FreeBSD package building pt. 5: Sophisticated Synth

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/freebsd_package_building_pt5.gmi

In the previous posts of this series, there was an introduction to package building on FreeBSD and we discussed basic Synth usage. The program’s configuration, working with compiler cache and using the tool to update the installed applications were covered as well. We also discussed Synth web reports, serving repositories over HTTP and building package sets, and took a brief look at the logs to find out why some ports failed.

In this article we are going to sign our package repository and explore make.conf options. We will also add an additional profile to build 32-bit packages for an obsolete FreeBSD release, and automate package building with cron.

Changing the structure

So far we’ve only considered building basically one local package set (even though we’ve shared it). If we want a single build server to manage multiple package sets, we will need a somewhat more complex directory structure to allow for separate repositories and such. Should you be building on a UFS system, it’s easy: Just create the additional directories. I’m using ZFS here, though, and need to think about whether I want to create the whole structure as several datasets or just two single datasets with custom mountpoints. As usual there are pros and cons to both. I’m going with the former here:

# rm -r /var/synth
# zfs create zroot/var/synth
# zfs create zroot/var/synth/www
# zfs create zroot/var/synth/www/log
# zfs create zroot/var/synth/www/log/13.0_amd64
# zfs create zroot/var/synth/www/packages
# zfs create zroot/var/synth/www/packages/13.0_amd64
# synth configure

Now we need to adapt the Synth configuration for the new paths:

B needs to be set to /var/synth/www/packages/13.0_amd64 and E to /var/synth/www/log/13.0_amd64. Normally I’d create a custom profile for that, but as I’m covering that a little later in this article, we’re going to abuse the LiveSystem default profile for now.

Next is re-configuring the webserver:

# vi /usr/local/etc/obhttpd.conf

Remove the block return directive in the location “/” block on the synth.local vhost and replace it with:

directory auto index

Then change the location to “/*”. I’m also removing the second location block. Create a new .htpasswd and bring over authentication to the main block if you want to.
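
Put together, the synth.local vhost should now look roughly like this (a sketch based on the configuration from the previous article; adjust names and paths if yours differ):

chroot "/var/synth/www"
logdir "/var/log/obhttpd"

server synth.local {
    listen on * port 80
    root "/log"
    log style combined
    location "/*" {
        directory auto index
    }
}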

# service obhttpd restart

Repository signing

To be able to use signing, we need a key pair available to Synth. Use the openssl command to create a private key, change permissions and then create a public key, too:

# openssl genrsa -out /usr/local/etc/synth/LiveSystem-private.key 2048
# chmod 0400 /usr/local/etc/synth/LiveSystem-private.key
# openssl rsa -pubout -in /usr/local/etc/synth/LiveSystem-private.key -out /usr/local/etc/synth/LiveSystem-public.key

Mind the filenames here! The LiveSystem part refers to the name of the profile we’re using. If you want to sign different repositories resulting from various profiles, make sure that you place the two key files for each of the profiles in /usr/local/etc/synth.

While you’re at it, consider either generating a self-signed TLS certificate or using Let’s Encrypt (if you own a proper domain). If you opted to use TLS, change the webserver configuration once more to have it serve both the log and the package vhosts via HTTPS. There’s an example configuration (obhttpd.conf.sample) that comes with obhttpd in case you want to take a look. It covers HTTPS vhosts.
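
For the self-signed variant, something along these lines would do (a sketch; the file paths are assumptions and need to match whatever your obhttpd.conf references):

# openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /etc/ssl/private/synth.local.key -out /etc/ssl/synth.local.crt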

Alright! Since we changed the paths, we don’t currently have any repository to sign. Let’s build a popular browser now:

# synth build www/firefox

Firefox failed in the configure phase!

Firefox failed to build. This is what the log says:

DEBUG: Executing: `/usr/local/bin/cbindgen --version`
DEBUG: /usr/local/bin/cbindgen has version 0.18.0
ERROR: cbindgen version 0.18.0 is too old. At least version 0.19.0 is required.

Please update using 'cargo install cbindgen --force' or running
'./mach bootstrap', after removing the existing executable located at
/usr/local/bin/cbindgen.

===>  Script "configure" failed unexpectedly.
Please report the problem to gecko@FreeBSD.org [maintainer] and attach the
"/construction/xports/www/firefox/work/.build/config.log" including the output
of the failure of your make command. Also, it might be a good idea to provide
an overview of all packages installed on your system (e.g. a
/usr/local/sbin/pkg-static info -g -Ea).
*** Error code 1

Stop.
make: stopped in /xports/www/firefox



--------------------------------------------------
--  Termination
--------------------------------------------------
Finished: Friday, 11 JUN 2021 at 03:03:06 UTC
Duration: 00:03:06

Oh well! We’ve hit a problem in the ports tree. Somebody updated the Firefox port in our branch to a version that requires a newer cbindgen port than is available in the same branch! Breakage like this does happen sometimes (we’re all human after all). What to do about it? In our case: Ignore it, as it’s only an example. Otherwise I’d advise you to update to a newer ports tree, as these problems are usually quickly remedied.

Synth is asking whether it should rebuild the repository. Yes, we want to do that. Then it asks if it should update the system with the newly built packages. And no, not now. Also note: The synth build command that we used here is interactive and thus not well suited if you want to automate things:

Would you like to rebuild the local repository (Y/N)? y
Stand by, recursively scanning 1 port serially.
Scanning existing packages.
Packages validated, rebuilding local repository.
Local repository successfully rebuilt
Would you like to upgrade your system with the new packages now (Y/N)? n

What else do we have to do to sign the repository? Nothing. Synth has already done that and even changed the local repository configuration to make the package manager verify the signature:

# tail -n 3 /usr/local/etc/pkg/00_synth.conf
  signature_type: PUBKEY,
  pubkey        : /usr/local/etc/synth/LiveSystem-public.key
}

That wasn’t so hard now, was it? You might want to know that Synth also supports using a signing server instead of signing locally. If this is something you’re interested in, do a man 1 synth and read the appropriate section.

Global options with make.conf

FreeBSD has two main configuration files that affect the compilation process when using the system compiler. The more general one is /etc/make.conf; the other, /etc/src.conf, is only used when building FreeBSD from source. Since we’re talking about building ports, we can ignore the latter.

There’s a manual page, make.conf(5), which describes some of the options that can be put into there. Most of the ones covered there are only relevant for building the system. Do by all means leave things like CFLAGS alone if you don’t know what you’re doing! Regarding the ports tree, it’s most useful to set or unset common options globally. It’s very tedious to set all the options for your ports manually like this:

# make -C /usr/ports/sysutils/tmux config-recursive

You need to do this for specific ports that you want to change the options for. But if there are options that you have a global policy for, it’s better to use make.conf. Let’s say we want to never include documentation and examples in our ports. This would be done by adding the following line to /etc/make.conf:

OPTIONS_UNSET+=DOCS EXAMPLES

This affects all ports whether built by Synth or not, as well as each and every Synth profile. Let’s say we also want no foreign language support in the packages of the default Synth profile (but want to keep it in all others). We’d then create the file /usr/local/etc/synth/LiveSystem-make.conf and put the following in there:

OPTIONS_UNSET+=NLS

That setting will build packages without NLS in addition to building without DOCS and EXAMPLES – if “LiveSystem” is the active profile.

If you want to build all ports that support it with e.g. the DEBUG option, add another line:

OPTIONS_SET+=DEBUG

Some common options that you might want to use include:

  • X11
  • CUPS
  • GTK3
  • QT5
  • JAVA
  • MANPAGES

After unsetting DOCS and EXAMPLES globally as well as NLS for the default profile, we’re going to rebuild the installed packages next:

# synth prepare-system

Package rebuild done

Note that Synth only rebuilt the packages that were affected by the changed options either directly or via their dependencies. For that reason only 219 of the 344 packages actually installed were rebuilt. If we now use pkg upgrade, this is what happens (see screenshot):

pkg upgrade will free space due to removing docs, examples and nls files

Ignore the 3 packages getting updated; these are packages that were originally skipped due to the Rust failure. That port built successfully while we were trying to build Firefox, so our latest package run produced three more packages that had not been installed as updates yet.

More interestingly: There’s 31 reinstalls. Most of them due to the package manager detecting changed options and one due to a change to a required shared library. It’s not hard to do the math and figure out that 31 is quite a bit less than 219. It’s a little less obvious that build-time dependencies count towards that greater number while they don’t appear in the count of packages that are eventually reinstalled. Still it’s true that Synth takes a “better safe than sorry” approach and tends to rebuild some packages that pkg(8) will end up not reinstalling. But this is not much of a problem, especially if you’re using ccache.

Alternative profiles

If you want to use Synth to build more than one specific set of packages for exactly one platform, you can. One way to achieve this would be to always change Synth’s configuration. But that’s tedious and error-prone. For that reason profiles exist. They allow you to have multiple different configurations available at the same time. If you’re simply running Synth from the command line like we’ve always done so far, it will use the configuration of the active profile.

To show off how powerful this is, we’re going to do something a little special: Building 32-bit packages for the no longer supported FreeBSD 12.1. Since amd64 CPUs are capable of running i386 programs, this does not even involve emulation. We need to create a couple of new directories first:

# mkdir /var/synth/www/packages/12.1_i386
# mkdir /var/synth/www/log/12.1_i386
# mkdir -p /var/synth/sysroot/12.1_i386

The last one is something that you might or might not be familiar with. A sysroot is a somewhat common term for – well, the root of a system. The sysroot of our running system is /. But we can put the data for other systems somewhere in our filesystem. If we put the base system of 32-bit 12.1-RELEASE into the directory created last, that’ll be a sysroot for 12.1 i386. Technically we don’t need all of the base system and could cherry-pick. It’s easier to simply use the whole thing, though:

# fetch -o /tmp/12.1-i386-base.txz http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/i386/12.1-RELEASE/base.txz
# tar -C /var/synth/sysroot/12.1_i386 -xf /tmp/12.1-i386-base.txz
# rm /tmp/12.1-i386-base.txz
# synth configure

Alright. Now we’re going to create an additional profile. To do so, press > (greater-than key), then choose 2 (i.e. Create new profile) and give it a name like e.g. 12.1-i386. Then change the following three settings:

B: /var/synth/www/packages/12.1_i386
E: /var/synth/www/log/12.1_i386
G: /var/synth/sysroot/12.1_i386

That’s all; after you save the configuration you’re ready to go. Create a list of packages you want to build and let Synth do its thing:

# synth just-build /var/synth/pkglist.12.1_i386

The build will fail almost immediately. Why? Let’s take a look. Building pkg(8) failed and here’s why:

--------------------------------------------------------------------------------
--  Phase: check-sanity
--------------------------------------------------------------------------------
/!\ ERROR: /!\

Ports Collection support for your FreeBSD version has ended, and no ports are
guaranteed to build on this system. Please upgrade to a supported release.

No support will be provided if you silence this message by defining
ALLOW_UNSUPPORTED_SYSTEM.

*** Error code 1

Stop.
make: stopped in /xports/ports-mgmt/pkg

Ok, since we’re trying to build for a system that’s not supported anymore, the ports infrastructure warns us about that. We have to tell it to ignore that. How do we do that? You might have guessed: By using a make.conf for the profile we’re using for this set of packages:

# echo ALLOW_UNSUPPORTED_SYSTEM=1 > /usr/local/etc/synth/12.1-i386-make.conf

Then try again to build the set – and it will just work.

Successfully built all the packages using the i386 profile

Automation & Hooks

Last but not least let’s put everything we’ve done so far together to automate building two package sets. We can make use of cron(8) to schedule the tasks. Let’s add the first one to /etc/crontab like this:

0	23	*	*	7	root	env TERM=dumb /usr/local/bin/synth just-build /var/synth/pkglist-12.1-i386

What does it do? It will run synth at 11pm every Sunday to build all of the packages defined in the package list referenced there. There are two things to note here:

  1. You need to disable curses mode in all the profiles you’re going to use. Synth still expects to find the TERM environment variable to figure out the terminal’s capabilities. You can set it to dumb as done here or to xterm or other valid values. If you don’t set it at all, Synth will not run.
  2. The cron entry as we’re using it here will use the active profile for Synth. It’s better to explicitly state which profile should be used. Let’s add another line to crontab for building the amd64 packages for 13.0 on Friday night:
0	23	*	*	5	root	env TERM=dumb SYNTHPROFILE=LiveSystem /usr/local/bin/synth just-build /var/synth/pkglist-13.0-amd64

In general I’d recommend not calling synth directly from cron but writing small wrapper scripts instead, as sketched below. You could for example back up the current package set before actually starting the new build, or you could snapshot the dataset after the successful build and zfs-send it off to another system.
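
A minimal sketch of such a wrapper, reusing the profile, package list and dataset names from earlier in this article:

#!/bin/sh
# Hypothetical build wrapper for cron: build the 13.0/amd64 set and
# snapshot the package dataset on success.
export TERM=dumb
export SYNTHPROFILE=LiveSystem

/usr/local/bin/synth just-build /var/synth/pkglist-13.0-amd64 || exit 1

# Snapshot the freshly built repository so it can be zfs-sent to another system.
zfs snapshot zroot/var/synth/www/packages/13.0_amd64@$(date +%Y%m%d)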

One last thing that you should be aware of is that Synth provides hooks like hook_run_start, hook_run_end, hook_pkg_failure and so on. If you’re considering using hooks, have a look at the Synth manpage; they are covered well there.
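
For illustration only, such a hook might be a tiny shell script like the following (a sketch – where hook files must be placed and which environment variables Synth passes to them is documented in the manpage; I’m deliberately not relying on any of those details here):

#!/bin/sh
# Hypothetical hook_pkg_failure script: keep a simple failure log.
# Assumption: details about the failed port would be available via
# environment variables described in synth(1); we only log a timestamp.
echo "$(date): a package failed to build" >> /var/log/synth/failures.log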

What’s next?

Next topic would be covering Poudriere. However I’m considering taking a little break from package building and writing about something else instead before returning to this topic.

FreeBSD package building pt. 4: (Slightly) Advanced Synth

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/freebsd_package_building_pt4.gmi

In the previous posts of this series, there was an introduction to package building on FreeBSD and we discussed basic Synth usage. The program’s configuration, working with compiler cache and using the tool to update the installed applications was covered as well.

This article is about some advanced functionality of Synth like using web reports, building package sets and serving repositories as well as taking a look at logs.

Reporting

Synth comes with very useful reporting functionality. For the default LiveSystem profile, Synth writes its logs to /var/log/synth. There it also creates a subdirectory called Report and puts an advanced web report in there. It looks like this:

% ls -1 /var/log/synth/Report 
01_history.json
favicon.png
index.html
progress.css
progress.js
summary.json
synth.png

We’re going to build and set up a web server to make it accessible. I will use OpenBSD HTTPd for a reason that we’re going to talk about in a minute (besides me liking it quite a bit). Let’s use Synth to install it first. Then we’re going to enable it and create a log directory for it to use:

# synth install www/obhttpd
# sysrc obhttpd_enable=YES
# mkdir /var/log/obhttpd

OpenBSD HTTPd successfully installed by Synth

Alright. Remember how I told you not to change Synth’s directory configuration unless you have a reason for it? Now we have one: We’re going to serve both the web report and the packages over HTTP and we’re using OpenBSD HTTPd for that. That webserver is chrooted: not just by default, BTW, but by design! You cannot turn it off. Unless told otherwise, Synth has a directory structure that doesn’t fit this scenario well. So we’re going to change it.

First thing is creating a new directory for the logs and then changing the configuration so Synth uses that:

# mkdir -p /var/synth/www/log
# synth configure

Change setting E to /var/synth/www/log and save. The new directory will of course be empty. We could copy the old files over, but let’s just rebuild HTTPd instead. Using synth build or synth just-build doesn’t work, though; the tool will detect that an up to date package has already been built and that nothing needs to be done. That’s what the force command is handy for:

# synth force www/obhttpd

Web server setup for reports

Now that we have something to serve, we can edit the webserver’s configuration file. Simply delete everything and put something like this into /usr/local/etc/obhttpd.conf:

chroot "/var/synth/www"
logdir "/var/log/obhttpd"

server synth.local {
    listen on * port 80
    root "/log"
    log style combined
    location "/" {
        block return 302 "http://$HTTP_HOST/Report/index.html"
    }
}

This defines where the chroot begins and thus where the unprivileged webserver processes are contained. It also defines the log directory as well as a virtual host (simply called “server” in OpenBSD HTTPd) with the name “synth.local”. Either replace this with a proper domain name that fits your scheme and configure your DNS accordingly or use /etc/hosts on all machines that need to access it to define the name there.

The server part is pretty simple, too. It makes HTTPd bind on port 80 on every usable network interface. The web root is defined relative to the chroot, so in this case it points to /var/synth/www/log. I’ve grown a habit of using the more detailed combined log style; if you’re fine with the default common format, you can leave the respective line out. Finally the configuration block defines a special rule for location “/”, which matches when somebody accesses the virtual host directly (i.e. http://synth.local in this case): it redirects the browser to the report index instead. Requesting a specific file (like e.g. lang___python37.log in the log directory) will not trigger the rule and thus still works. This is just a convenience thing; if you don’t like it, leave it out.

All that’s missing now is starting up the webserver:

# service obhttpd start

You should now be able to point your browser at the vhost’s name (if you made it resolve). Just using the machine’s IP address is also going to work in this case since it’s the default vhost. But better make it reachable using the configured name, as we’re adding another vhost in a moment.

Synth web report for the latest package run

Authentication

But for now what about security? Let’s say you don’t want to share your report with the whole world. One easy means of protecting it is by using HTTP basic auth. OpenBSD HTTPd uses standard .htpasswd files. These can however use various cryptographic hashes for the passwords – whereas HTTPd only supports one: bcrypt.

The first time I tried to do authentication with OpenBSD HTTPd, it drove me completely nuts as I couldn’t get it working. Fortunately I own Michael W. Lucas’ excellent book “Httpd and Relayd Mastery”. After digging it out, the index indicated that I might want to read page 28. I did, banged my head against the table and immediately got it working using that hashing algorithm. Don’t be like me; skip trying to use foreign tools you may be used to and just do it right in the first place. HTTPd comes with its own htpasswd binary. Use that.
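
A sketch of what that looks like (assuming the binary is installed as htpasswd; OpenBSD’s variant takes the target file and the user name and then prompts for the password):

# htpasswd /var/synth/www/.htpasswd synth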

In this example I’m adding a user called “synth”. Use whatever you prefer. Then give the password two times. This leaves you with a valid .htpasswd file that HTTPd could use – if it was allowed to access it! Let’s fix that problem:

# chown root:wheel /var/synth/www/.htpasswd
# chmod 0640 /var/synth/www/.htpasswd

Having the authentication information in place, we only need to add another location block to the webserver’s vhost configuration. Put the following in there after the line that closes the previous location block:

    location "/Report/* {
        authenticate with "/.htpasswd"
    }

Note the htpasswd file’s location! It’s within the chroot (or it couldn’t be accessed by the webserver), but outside the webroot directory. So HTTPd could never accidentally serve it to somebody who knew that it was there and requested the file.

The only thing that remains is restarting the webserver. Next time you visit the report page, you’ll be asked to authenticate first.

# service obhttpd restart

Package repository

So far all of our packages have been created in a directory outside of the webserver’s chroot. If we want to make them available via HTTP, we need to use another path for them. Therefore we’re going to create a directory and reconfigure Synth again:

# mkdir -p /var/synth/www/packages
# synth configure

This time it’s setting B. Change it to /var/synth/www/packages and save. Now let’s build a package that draws in a couple of dependencies:

# synth just-build chocolate-doom

We can watch it now via the web reports while it’s building. Since it’s a new directory where no packages exist, yet, Synth is first going to build the package manager again. During this early stage no report is available, but once that’s finished the reports work.

While we’re rebuilding all packages due to the new package directory, Synth can take advantage of ccache as we haven’t modified its path. Wonder how much of a difference that actually makes? Building llvm10 on its own, one time using the cache and one time (for testing purposes) without it, will show you the difference:

Duration: 00:13:32 (with ccache)
Duration: 02:09:37 (uncached)

Synth web report while it’s building

It gives us all the information that the curses UI holds – and more. The number of entries for completed packages can be changed. You can browse those page-wise back to the very first packages. It’s possible to use filters to e.g. just list skipped or failed packages. You can view / download (whatever your browser does) the log files for all those packages. And there’s even a search (which can be very handy if you’re building a large package set).

Report with only 10 entries per page

As long as packages are being built, the report page also shows the builder information and automatically refreshes the page every couple of seconds. Once it completes, it removes builder info (which would only waste space) and stops the polling. You can always come back later and inspect everything about the latest package run. The next one will overwrite the previous information, though.

Synth report search function

Now that we have a bunch of newly built packages, let’s see what that looks like:

# ls -1 /var/synth/www/packages/All
autoconf-2.69_3.txz
autoconf-wrapper-20131203.txz
automake-1.16.3.txz
binutils-2.33.1_4,1.txz
bison-3.7.5,1.txz
ca_root_nss-3.63.txz
ccache-3.7.1_1.txz
celt-0.11.3_3.txz
chocolate-doom-3.0.1.txz
cmake-3.19.6.txz
curl-7.76.0.txz
db5-5.3.28_7.txz
docbook-1.5.txz
docbook-sgml-4.5_1.txz
docbook-xml-5.0_3.txz
docbook-xsl-1.79.1_1,1.txz
doom-data-1.0_1.txz
evdev-proto-5.8.txz
expat-2.2.10.txz
flac-1.3.3_1.txz
[...]

Showing only ignored packages in the report (none in this case)

The packages are there. But what’s in the main directory?

# ls -l /var/synth/www/packages
total 18
drwxr-xr-x  2 root  wheel  150 Jun  7 23:57 All
drwxr-xr-x  2 root  wheel    3 Jun  7 23:21 Latest

This is not a valid pkg(8) repository. Which is no wonder since we used just-build. So we’re going to have Synth create an actual repository from these packages next:

Searching in the report after the build was completed

# synth rebuild-repository
# ls -l /var/synth/www/packages
total 117
drwxr-xr-x  2 root  wheel    150 Jun  7 23:57 All
drwxr-xr-x  2 root  wheel      3 Jun  7 23:21 Latest
-rw-r--r--  1 root  wheel    163 Jun  8 00:02 meta.conf
-rw-r--r--  1 root  wheel    236 Jun  8 00:02 meta.txz
-rw-r--r--  1 root  wheel  40824 Jun  8 00:02 packagesite.txz

Here we go, that’s all that pkg(8) needs. Synth should have automatically updated your repository configuration to use the new location. Have a look at /usr/local/etc/pkg/repos/00_synth.conf – the URL should point to the new directory.
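
A quick check (the output shown is a sketch of what to expect, assuming the package directory configured above):

# grep url /usr/local/etc/pkg/repos/00_synth.conf
  url      : file:///var/synth/www/packages,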

Serving the repository

The next step is to make the repository available in the network, too. So edit /usr/local/etc/obhttpd.conf once more and add another “server” (i.e. vhost):

server pkg.local {
    listen on * port 80
    root "/packages"
    log style combined
    location * {
        directory auto index
    }
}

One service restart later you should be able to access the repository via a web browser from any machine in the same subnet (if you got your DNS right):

# service obhttpd restart

Looking at the package repository with a browser

This is already it, but let’s prove that it works, too. I’m adding the “pkg.local” name to the machine’s 127.0.0.1 definition in /etc/hosts, then changing the URL in the Synth repo configuration to fetch packages via HTTP:

  url      : http://pkg.local,

I’ve also created a FreeBSD.conf to disable the official repository.
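
That file only needs a single line (assuming the standard location /usr/local/etc/pkg/repos/FreeBSD.conf):

FreeBSD: { enabled: no }

Let’s stop the webserver for a second and then try to update the package DB: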

# service obhttpd stop
# pkg update
Updating Synth repository catalogue...
pkg: Repository Synth has a wrong packagesite, need to re-create database
pkg: http://pkg.local/meta.txz: Connection refused
Unable to update repository Synth
Error updating repositories!

Ok, so there’s no other repository configured anymore and this one is not accessed via the filesystem. So we’re going to start the webserver once more (give it a sec) and then try again:

# service obhttpd start
# pkg update
Updating Synth repository catalogue...
Fetching meta.conf: 100%    163 B  0.2kB/s    00:01
Fetching packagesite.txz: 100%   40 KiB  40.8kB/s   00:01
Processing entries: 100%
Synth repository update completed. 148 packages processed.
All repositories are up to date.

Great! So now we can install DooM on this box or on any other machine running FreeBSD 13.0 which can reach it over the network.

Package sets

So far we’ve only either built all packages for what was already installed on the machine or for single ports that we selected at the command line. But now that we can serve packages over the network, it’s rather tempting to use a powerful build machine to build packages for various other FreeBSD machines, isn’t it? Let’s assume that you’re going to share packages with a KDE lover.

First we should prepare a list of packages that we need, starting with what is installed on our machine.

# pkg info | wc -l
345

Wow, that’s already quite some packages for such a pretty naked system! But we don’t need to consider them all as most of them are just dependencies. Let’s ask pkg(8) for the origin of all packages we explicitly installed (i.e. which were not recorded as automatically installed):

# pkg query -e %a=0 %o > /var/synth/pkglist
# cat /var/synth/pkglist
x11-wm/awesome
devel/ccache
graphics/drm-kmod
devel/git
www/obhttpd
ports-mgmt/pkg
ports-mgmt/portmaster
x11/sakura
x11/setxkbmap
security/sudo
ports-mgmt/synth
sysutils/tmux
x11/xfce4-screenshooter-plugin
x11/xorg-minimal

That’s better! But we certainly don’t need portmaster anymore, so we can take it off the list (and deinstall it actually). Let’s add www/firefox and x11/kde5 for our pal (and sort the list since it’s a bit ugly right now).

Once that’s done, we should be able to do a simple:

# synth build /var/synth/pkglist
Error: port origin 'devel/git' not recognized.
Perhaps you intended one of the following flavors?
   - devel/git@default
   - devel/git@gui
   - devel/git@lite
   - devel/git@svn
   - devel/git@tiny

Oh yes, right! We need to edit our list and specify the flavor to build! I’m going with the lite variant here, so the git line needs to be changed to this:

devel/git@lite

Then we can try again – and yes, it starts building after calculating the required dependencies.

Logs

Whoopsie! After about an hour, stupid me removed the network cable for a moment. This has caused a couple of build failures (see screenshot). The report will display the phase in which the build failed. In this case it’s the fetch phase (and we don’t have to look for the reason as we already know it). Sometimes a distfile mirror is temporarily down or the distfile has been removed. In that case you will have to manually get the files and put them into the distfiles directory. Skipped ports also display the reason, i.e. which dependency failed previously.

Failed and skipped ports due to a connection problem

I’d better re-attach that cable right away and start the build over… Many hours later it has finished. But what’s this? Rust has failed again (and this time it wasn’t me)! And it failed at the stage phase. When this happens it’s usually because a broken port got committed. Update your ports tree and hope that it has been fixed in the meantime. This is not the reason in our case, however.

Another phase, another failure!

But how do we find out what actually happened? Well, by looking at the logs, of course. Here are the last 15 lines of the relevant log:

        finished in 183.141 seconds
  < Docs { host: TargetSelection { triple: "x86_64-unknown-freebsd", file: None } }
Install docs stage2 (Some(TargetSelection { triple: "x86_64-unknown-freebsd", file: None }))
running: "sh" "/construction/xports/lang/rust/work/rustc-1.51.0-src/build/tmp/tarball/rust-docs/x86_64-unknown-freebsd/rust-docs-1.51.0-x86_64-unknown-freebsd/install.sh" "--prefix=/construction/xports/lang/rust/work/stage/usr/local" "--sysconfdir=/construction/xports/lang/rust/work/stage/usr/local/etc" "--datadir=/construction/xports/lang/rust/work/stage/usr/local/share" "--docdir=/construction/xports/lang/rust/work/stage/usr/local/share/doc/rust" "--bindir=/construction/xports/lang/rust/work/stage/usr/local/bin" "--libdir=/construction/xports/lang/rust/work/stage/usr/local/lib" "--mandir=/construction/xports/lang/rust/work/stage/usr/local/share/man" "--disable-ldconfig"
install: creating uninstall script at /construction/xports/lang/rust/work/stage/usr/local/lib/rustlib/uninstall.sh
install: installing component 'rust-docs'
###  Watchdog killed runaway process!  (no activity for 78 minutes)  ###



--------------------------------------------------
--  Termination
--------------------------------------------------
Finished: Wednesday, 9 JUN 2021 at 00:04:47 UTC
Duration: 04:27:03

Ha! The build process was killed by the watchdog! Bad doggy? Sometimes the killed process would in fact eventually have finished. Not this time, though. We have to dig a little deeper. In /var/log/messages of the build machine I find the messages kernel: swap_pager: out of swap space and kernel: swp_pager_getswapspace(4): failed. This machine has 24 GB of RAM and 8 GB of swap space configured – and by building 6 huge ports concurrently, it exceeded these resources! Keep in mind that package building can be quite demanding, especially if you use tmpfs (which you should if you can).
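
If you want to keep an eye on memory pressure while a big build is running, the standard FreeBSD tools suffice. A small illustration (check the manpages for the exact flags your release supports):

# swapinfo -h
# top -b -o res | head -n 12

If swap keeps filling up, reduce the number of builders or the jobs per builder via synth configure, or disable tmpfs for the few huge ports.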

So, there we are. We’ve configured our build server for web reports and serving the repository. We’ve looked at building package sets and covered a few examples of what can go wrong. And that’s it for today.

What’s next?

The last article about Synth will cover make.conf, signing repositories and using cron for automated builds. We’ll also take a brief look at profiles.

FreeBSD package building pt. 3: Intermediate Synth

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/freebsd_package_building_pt3.gmi

In this article we’ll continue exploring Synth. After covering a general introduction to package building as well as Synth’s basic operation in the previous articles, we’re going to look at the program’s configuration and using it for updating the system.

The first thing to do here is to get rid of the old ports tree and replace it with a newer one so that some updates become available:

# zfs destroy zroot/usr/ports
# zfs create zroot/usr/ports
# git clone --depth 1 -b 2021Q2 https://git.freebsd.org/ports.git /usr/ports
# mkdir /usr/ports/distfiles
# synth status

Synth regenerates the flavor index again, then prints its report. This time it does not simply show all packages as “new”; most are either updates or rebuilds.

Synth status with new ports tree

Configuration

Let’s look at how you can change Synth’s behavior next:

# synth configure

Synth’s configuration is pretty straightforward. Options A to G configure the various directories that the program uses. Should you change anything here? If you know what you are doing and have special needs: Maybe. If you have to ask whether to change any such directory, the answer is no. Option H (Compiler cache) is a special one. We’ll get to that in a minute.

With I you set the number of builders that Synth uses. J means how many threads each builder may use; think make -j $SOMENUMBER. When compiling a single piece of software, it’s usually recommended to set the number of jobs equal to the machine’s core count + 1. Take a look at the screenshot above: Doing the math, we’re configuring for 24 cores here – on a system that has 8 (with Hyper-Threading).

Synth’s configuration menu

Why does Synth choose to over-provision the resources so much? The reason is simple: It’s only over-provisioned when more than three builders are active at the same time. Often enough not all builders will be at the build stage (where the compiling happens) at the same time. Most other stages are much lighter on CPU – which would mean that you’re wasting available resources (and thus prolonging the total build time). Also, in the previous post you’ve seen that LLVM and Rust took hours to build (all four remaining builders were idle most of the time!). If the cap had been lower, build times would have increased even more.

So what’s the best setting for builder count and max jobs? There’s no single answer to that. It depends on both your machine and on the set of ports that you build. Play with both values a bit and get a feeling what works best for you. Or leave it at the default that Synth calculated for your system which will probably work well enough.

Options K and L are real speed boosters. They control whether the base system for the builder (“localbase”) and the directory where the program is built are using tmpfs or not. Tmpfs is a memory-backed filesystem. If you disable one or both options, compilation times increase a lot because all the data that needs to be copied on setting up a builder and compiling software will be written to your disk. For an extreme example: On one of my machines, building and testing a port increased from slightly over 30 seconds with tmpfs to over 4 minutes (!) without it. Yes, that machine uses an HDD and it was occupied with other things besides building packages. But there is more than a negligible impact if you disable tmpfs.

So when should you disable it in the first place? If you’re on a machine with little RAM you might have to. Some ports like LLVM or Firefox require lots of RAM to build. If your system starts swapping heavily, disable tmpfs. Ideally build those ports separately and leave tmpfs on for all the rest.

Then there’s option M which toggles the fancy colored text UI on or off. If you turn it off, you’ll get a simple monochrome log-like output of which builder started or finished building which port. Every now and then the info that’s in the top bar of the UI (e.g. number of packages completed, number of packages remaining, skips, etc) gets printed.

Finally we have option N which toggles using pre-built packages on or off. It’s off by default which means that Synth will build everything required for the selected package set. If you enable this option and have e.g. the official FreeBSD repository enabled as well, it will try to fetch suitable packages from there that can be used as the buildtime or runtime dependencies of the packages that still need to be built. If you’re mixing official and custom packages this could be a huge time saver. I admit that I have never used this option.

And what’s this profile thing? Well, ok. The configuration we’ve been looking at is for the default profile. You can create additional ones if you want to. And that’s where the directories that I told you to ignore come into play. You could for example create a profile to use a different ports tree (e.g. the quarterly branch) and switch between the profiles to build two different package sets on one machine. While Synth can do this, that is the point where I’d advise you to try out Poudriere instead. The main benefit of Synth over Poudriere is ease of use, and when you’re trying to do clearly advanced things with Synth you might as well go for the officially supported FreeBSD package builder instead.

Compiler cache

Let’s disable the curses UI just to see what Synth looks like without it and save the config change. If you plan to build packages regularly on your system, you will definitely want to set up the compiler cache. To be able to use it, we first need another package installed, though: ccache. We’re going to build it and then install it manually this time:

# synth just-build devel/ccache
# pkg add /var/synth/live_packages/All/ccache-3.7.1_1.txz

Then we’re going to create a directory for it and go back to Synth’s configuration menu:

# mkdir -p /var/tmp/ccache/synth
# synth configure

Now change H to point to the newly created directory and save. Synth will use ccache from now on. But what does it do? Ccache caches the results of compilation processes. It detects when the same compilation is happening again and provides the cached result instead of actually compiling once more. This means that the first time you compile something it doesn’t make a difference, but after that the duration of building packages will drop significantly. The only cost is that the cached results take up a bit of space. Keep ccache disabled if drive space is your primary concern. In all other cases definitely turn it on!

Ccache directory and config after the system upgrade
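
Once a few builds have gone through the cache, you can ask ccache for its statistics. A quick sketch – note that you have to point it at the directory we configured above, since that is not ccache’s default location:

# env CCACHE_DIR=/var/tmp/ccache/synth ccache -s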

Updating the system

Next is using Synth to update the system.

# synth upgrade-system

After determining which packages need to be built / rebuilt, Synth will start doing so. We’ve turned off the text UI, so now we get only the pretty simplistic output from standard text mode.

Package building in pure text mode

It provides you with the most important information about builders: Which builder started / finished building which package when. You don’t get the nice additional info about which state it’s in and how long it has been busy building so far. As mentioned above, Synth will print a status line every couple of minutes which holds the most important information. But that’s all.

Status lines amidst the builder output

What happens if something goes wrong? I simulated that by simply killing one of the processes associated with the rust builder. Synth reports failure to build rust and prints a status line that shows 17 package skips. In contrast to the curses UI it does not tell you explicitly which ones were skipped, though!

Simulated package build failure

When Synth is done building packages, it displays the tally as usual and does some repository cleanup by removing the old packages. Then it rebuilds the repository.

Tally displayed after completion and repository cleanup

Since we asked Synth to upgrade the system, it invokes pkg(8) to do its thing once the repository rebuild is complete.

Package repository rebuilt, starting system upgrade

And here’s why I strongly prefer prepare-system over upgrade-system: The upgrade is initiated whether there were failed packages or not. And since pkg(8) knows no mercy on currently installed programs when they block upgrades, it will happily remove them by default! To be safe it makes sense to always review what pkg(8) would do before actually letting it do it. Yes, it’s an additional step. Yes, most of the time you’ll be fine with letting Synth handle things. But it might also bite you. You’ve been warned.

System upgrade in progress – it may remove packages!
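
A minimal sketch of the more careful route looks like this (assuming the Synth repository is configured for pkg(8); the -n flag makes pkg(8) perform a dry run):

# synth prepare-system
# pkg update -r Synth
# pkg upgrade -n
# pkg upgrade

Review the dry-run output – especially any planned removals – before letting the real upgrade loose.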

Every now and then you may want to run synth purge-distfiles (unless you have unlimited storage capacity, of course). It will make the tool scan the distinfo files of all ports and then look for distfile archives of obsolete program versions to remove.

Cleaning up old distfiles with purge-distfiles

There might not be a large gain in our example case, but things do add up. I’ve had Synth reclaim multiple GB on a desktop machine that I regularly upgraded by building custom packages. And that’s definitely worth it.

What’s next?

The next article will cover some leftover topics like logs, web report and repository sharing.

FreeBSD package building pt. 2: Basic Synth

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/freebsd_package_building_pt2.gmi

The previous article was an introduction into package building on FreeBSD in general. It also included references to various other articles about package management and working with ports. Familiarity with those topics is assumed here. If you don’t feel confident, have a look there and do a bit of reading first. Or just go ahead and do some research if you hit something that you have trouble understanding – package building is not a beginner’s topic but it ain’t rocket science, either.

In the following section I’m following a “naive approach” of just installing and exploring Synth on a typical desktop system. Afterwards I’m switching to the test system that was described in the previous post so that I can show updating, too.

Synth basics

So let’s get started with Synth now, shall we? The first thing to do is of course installing it:

# pkg install synth

If you want to or need to build it from source instead, the origin is ports-mgmt/synth. It has one compile-time option: Enabling or disabling the watchdog feature. By default it is enabled, which means that Synth supervises the build processes; if one looks like it has stalled, the watchdog will kill it. When you are building on very slow hardware (why are you doing this in the first place?) it may be an anti-feature that causes very complex ports (e.g. Firefox) to be killed even though the build would otherwise eventually finish. In that case you can disable the watchdog option and rebuild Synth. Or you go to the junkyard and get some more capable hardware!

Synth port options

As with every new piece of software, it’s probably a good idea to get an overview of what it can do:

% synth help
Summary of command line options - see synth.1 man page for more details
===============================================================================
synth status              Dry-run: Shows what 'upgrade-system' would build
synth configure           Brings up interactive configuration menu
synth upgrade-system      Incremental rebuild of installed packages on system.
                          Afterwards, the local repository is rebuilt and the
                          system packages are automatically upgraded.
synth prepare-system      Like 'upgrade-system' but ends when repo is rebuilt
synth rebuild-repository  Rebuilds local Synth repository on command
synth purge-distfiles     Deletes obsolete source distribution files
synth status-everything   Dry-run: Shows what 'everything' would build
synth everything          Builds entire ports tree and rebuilds repository
synth version             Displays version, description and usage summary
synth help                Displays this screen
synth status [ports]      Dry-run: Shows what will be rebuilt with given list
synth build [ports]       Incrementally build ports based on given list, but
                          asks before updating repository and system
synth just-build [ports]  Like 'build', but skips post-build questions
synth install [ports]     Like 'build', but upgrades system without asking
synth force [ports]       Like 'build', but deletes existing packages first

synth test [ports]        Just builds with DEVELOPER=yes; pre-deletes pkgs

[ports] is a space-delimited list of origins, e.g. editors/joe editors/emacs.
It may also be a path to a file containing one origin per line.

That’s a nice list and even better: The commands are explained in a way that mere mortals can understand. Basically it can compare installed program versions to those in ports for you with synth status. You can configure synth with synth configure (we’ll look at this later, the default config is fine for now). And it can bulk-build packages and use those to upgrade the system for you with synth upgrade-system. And those are the important ones that you’ll need to begin with.

There’s also synth prepare-system in case you want Synth to only build the packages (maybe you want to do the upgrade manually later). With synth rebuild-repository you can make the tool, well, rebuild the repository from the currently built packages. Since it does this after finishing package builds, you will need it only if you cancelled building somewhere in the middle and want it to rebuild the repo anyway. The synth purge-distfiles command is useful if you use Synth for a while. It will scan for no longer needed distfiles (for obsolete program versions) and potentially free quite a bit of space for you.

And that’s it. The synth status-everything and synth everything commands are only useful if you want to build packages for the entire FreeBSD ports collection. But that’s certainly not basic usage! The various options that act on a single port or a list of ports are also advanced usage. They’ll come in handy if you develop ports, build packages for other machines to use or have special needs. If you plan to keep your machine’s installed packages consistent for production use, know that these commands exist but stay away.

Package status

So much for some theory and on to actually doing something. Let’s ask Synth about the status of our packages:

# synth status
It seems that a blank PORTSDIR is defined in /etc/make.conf
Configuration failed to load.

Alright, this message can be a little confusing as by default there is no such file in FreeBSD! What Synth actually wants to tell you is that it cannot find a ports tree on the system and thus cannot do anything. So let’s get the ports in place right now:

# portsnap auto

With the ports tree available on our system we can try again:

# synth status
Configuration invalid: [C] Distfiles directory: /usr/ports/distfiles
Rather than manually creating a directory at this location, consider
using a location outside of the ports tree. Don't forget to set
'DISTDIR' to this new location in /etc/make.conf though.

This time the problem is pretty straight-forward: The distfiles directory does not exist. Synth gives us some advice in case we want to use a custom location, however I prefer the standard one.

# mkdir /usr/ports/distfiles
# synth status
Regenerating flavor index: this may take a while ...
Scanning entire ports tree.
 progress: 3.32%

Depending on your build machine this can take anything from a couple of minutes to a quarter of an hour or so. Before FreeBSD introduced port flavors, it was a matter of seconds, but for a few years now Synth has to do some additional work up-front. When it’s done it will print something like this:

Querying system about current package installations.
> Installed package ignored, devel/py-evdev package unmatched
> Installed package ignored, devel/py-pyudev package unmatched
> Installed package ignored, devel/py-six package unmatched
> Stand by, comparing installed packages against the ports tree.
> Stand by, building pkg(8) first ... done!
> These are the ports that would be built ([N]ew, [R]ebuild, [U]pgrade):
>   N => print/indexinfo
>   N => devel/gettext-runtime
> [...]

For the first run, Synth has to build all of the packages. Therefore all are marked new. But what’s that “Installed package ignored” thing? That usually happens if a package is installed on the system that was built from a port which no longer exists in the new tree. Sometimes it’s also weirdness that can happen with flavored ports. Disregard this if it only affects packages that you don’t want to use directly. The correct ones will be pulled in as dependencies anyway. Now let’s build our first package set:

# synth prepare-system

After gathering some information, Synth will start building the packages that are currently installed on this machine (which for simplicity’s sake resemble those of the test system). Let’s have a look:

Synth started building packages

By default, Synth shows a nice curses-based text UI that displays a lot of information (see screenshot above). It shows the total count of packages to build, the number of packages that remain to be built and the count of already built packages. In case a port does not successfully build on this system (update ports and try again – if the problem persists either file a bug report or contact the maintainer), Synth displays that, too. Ignored packages are those that don’t work on your particular system; maybe the application is known to not build on the version of FreeBSD you are using or not on your architecture (if for example you’re building on ARM). Skipped count goes up if a failed port was a dependency for others. Obviously Synth cannot build them.

Synth also displays the current system load: A load of e.g. 2.0 means that two cores of your CPU are completely busy. If your CPU supports Hyper-Threading, that basically doubles the available core count. A load higher than 8.0 on a 4-core system with HT means that the system currently has more work than it can handle concurrently. That’s not ideal, but it’s not something to worry too much about, either. Watch swap usage. A little bit of swapping for large ports is not much of a problem. But if your system swaps a lot, that will slow down the package building considerably. Should you manage to run out of swap, package builds will fail. You can adjust Synth’s configuration if you’re unhappy with either load or swap usage. But we’ll get to that.

Synth after about 20 minutes

There’s also the number of packages built per hour and the impulse, as well as the elapsed time. Initially, packages per hour and impulse are identical. The former is the average number of packages built over the whole build time, while the latter is the number of packages built within the last 500 seconds – so the two only start to diverge once the build has been running for longer than that.

But that’s only the top bar. The next part of the UI is for the builders. Builders are clean build environments (“chroot jails”) that are created before building a package and torn down afterwards. The next package gets a fresh new environment. The number of builders determines how many packages can be built concurrently. On the screenshot you can see 6 builders, which are color-coded to be easier to distinguish. This may look different on your machine and here’s why: Synth tries to guess reasonable defaults for your machine. If you have an old dual-core PC, it will use fewer than the six builders it deemed right for the quad-core i7 that I used as my test machine here. Expect to see a much higher number of builders on modern servers with a higher core count.

Idle builders after 5 hours

For each builder you see how long it has been working already, which phase it is currently in (remember the various build targets of the ports infrastructure?), which port the builder is occupied with and how many lines of log output the builder has produced so far. Especially that last piece of information is not going to help you a lot when you begin building packages. After a while you’ll know roughly how many lines of output some of the big ports like LLVM produce and can judge if it’s going to take another two hours to finish or more like half an hour. (And then a new version of LLVM comes out which takes even longer to build, so that your previous idea of “how many lines” is no longer valid. That’s how things go.)

And finally there’s the rest of the screen made up of a list of recently finished packages. If you take a look at the second screenshot, you’ll see some ports where the origin ends with @py37. Here the builder is busy building a flavored port – and it’s building for Python 3.7. What’s @native you ask? Well, Python ports are a typical example of flavored ports but they are not the only ones. The binutils port for example is able to be built as part of a native toolchain or a cross toolchain in case you want to e.g. cross compile packages for riscv on your much more powerful amd64 machine.

First builder has shut down

What’s the deal with idle builders like those on screenshot 4? Idle builders are the ones that Synth has already prepared for another package to build in but has not been able to use so far. Take a look at screenshot 4: There’s 53 more packages to build but only two builders are occupied with work while four are idle. Can you guess why? The two ports currently building are LLVM10 and Rust. And the “problem” is that all other 51 packages that are still on the list depend on either LLVM or Rust directly or indirectly! So while there’s more work for the builders, Synth has to wait for the dependencies to finish building before it can start building the remaining packages.

At some point no further builders will be required. In that case Synth shuts them down (see screenshot 5).

Last package build starting after about 6 hours

When Synth has finished building all the packages requested, it will present you with the final statistics. Then it cleans up the package directory, removing obsolete old packages. Eventually it will build a valid pkg(8) repository for you – and then it’s finally done.

Synth has completed its task

Should you ever want to quit a package run early, try to avoid CTRL-C. If you press CTRL-Q instead, Synth will shut down gracefully. This means that no new builders will be started and the tool exits properly once those that are already running complete.

Pkg(8) repositories

I’m covering the case where you want to use your own package repository instead of the official FreeBSD one. If you want to use both, make sure you read what I pointed my readers to in the previous article. Then configure your new repository as I do here but simply don’t disable the official repo.

The standard FreeBSD repository is configured in /etc/pkg/FreeBSD.conf. Do not make changes there, though! This file belongs to the base system and upgrades may overwrite it. If you want to change the settings, create /usr/local/etc/pkg/repos/FreeBSD.conf (and the missing directories). The latter file will override the former. In case you just want to disable the official package repository, simply put this single line into the file:

FreeBSD: { enabled: no }

Synth should automatically generate a config file there if you use it to upgrade the system. The file is called /usr/local/etc/pkg/repos/00_synth.conf and has the following content:

# Automatically generated.

Synth: {
  url      : file:///var/synth/live_packages,
  priority : 0,
  enabled  : yes,
}

Now you only need to update the repository information:

# pkg update

And that’s it. Pkg(8) will now happily use your local package repository.

What’s next?

The next article will feature Synth configuration, upgrading and advanced usage.

FreeBSD package building pt. 1: Introduction and test system

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/freebsd_package_building_pt1.gmi

In 2017 I started to post a series of well-received articles that was meant to culminate in a tutorial on building packages on FreeBSD. I wrote about the history of package managers, about how to use FreeBSD’s pkg(8) package manager (part 1 and part 2) as well as an introduction to working with ports (part 1 and part 2).

Then I had to stop for technical reasons (after a major change to the ports infrastructure, the tool that I wanted to write about had not been updated to work with the new ports tree, yet!). In 2019 I eventually found the time to publish two more articles that were originally meant to come right after the 2017 posts. They covered using classic tools for ports management (part 1 and part 2).

They were meant to be the stepping stone to what I actually wanted to cover: Package building! The second article ended with:

I’ve planned for two more articles that will cover building from ports the modern way(tm) – and I hope that it will not take me another two years before I come to it…

That was… Yikes! In fall 2019… So I’d better hurry up now. I had hinted that using ports directly was not the recommended thing to do anymore and that while you should use packages unless you need ports, you really should use packages even in the latter case! And that’s what we’re going to do in the next few articles.

Why roll your own packages?

There are valid reasons to not use FreeBSD’s official packages:

  • Most frequently you need build-time options configured differently.
  • Or your organization uses a different default version of a scripting language (Perl, Python, Ruby, …) than what FreeBSD chose at the time.
  • Maybe you’re running a customized version of FreeBSD that’s not officially supported.
  • Perhaps you need programs where the license forbids binary redistribution.
  • Or you use custom ports for some programs that of course cannot be available in official packages then.

Things like that. People choose ports when they need more control over their applications. There are good reasons to avoid using ports the traditional way, too, however:

  • It takes time and resources to build everything from source; especially if you do this on multiple machines, it’s a waste.
  • It makes updates much more complicated (see the second 2019 post mentioned above).
  • It clutters your system with build-time dependencies that need to be installed first so that the actual programs can be built.
  • Depending on what other software your machine has installed, the resulting programs might differ from those of other machines even if they built the same port!

While the former two points are mostly relevant if you manage multiple machines, I’d recommend rolling your own packages even for the single FreeBSD workstation that you use – if the official packages don’t suit you. Let me state this again: Try to go with the packages provided by the FreeBSD project first. Build your own packages if you have to (educating yourself is a completely valid reason to do it anyway, though).

Package builders

When you decide to roll your own packages, you have two options: Synth, the much easier and nicer package builder, and Poudriere, the advanced build tool that FreeBSD uses and provides official documentation for.

Which one should you choose? I’m going to show how to work with both so you can compare them and make an informed decision. If you’re just getting started, you may want to give Synth a try. It is also a good choice if you use DragonFly BSD: The dsynth tool that they have in base was inspired by Synth (and if it weren’t written in Ada, they would certainly just have imported it instead of creating a re-implementation in C). You should also know that the Synth project is in maintenance mode. Its author still fixes bugs and takes pull requests on GitHub, but it’s feature-complete.

The main advantage of Synth is that it’s dead simple to setup and use, too, whereas Poudriere is a somewhat complex beast. Synth also shines when you want to use it to keep one machine up to date with packages as it can do those updates for you. Poudriere on the other hand allows you to do things like maintaining package repositories for multiple versions of FreeBSD as well as multiple architectures from one build machine. If you need that, forget Synth.

Ports and Git

One major change that was made in FreeBSD since the previous article was published is that the project migrated to using Git repositories. FreeBSD development started on CVS but in 2008 the src repository was successfully migrated to using Subversion. In mid 2012 docs and ports also made the switch. Subversion has been used ever since until December 2020 when doc and src transitioned to Git. After some remaining issues were solved, ports also migrated to Git in April 2021. While src changes get pushed back to Subversion for FreeBSD’s branches of 11 and 12, when it comes to ports, an era has ended.

Get rid of that habit of visiting svnweb.freebsd.org and start getting used to working with cgit.freebsd.org instead.

If you are unsure of your Git skills, you may want to at least skim over the Git-primer in FreeBSD’s documentation.

At least get familiar with the concepts. You should for example know that Git is able to do things like a shallow clone; looking things up when you need them is no problem but not being aware of them at all is.

While both Subversion and Git are used for version control and both use repositories, they are fundamentally different. Subversion is the best known version control system of the second generation (so-called (centralized) “networked VCS”). Git is the most popular one of the third generation (“decentralized VCS”).

If you use Subversion, there’s a central repository somewhere and you checkout a certain revision of the versioned files from there. The files that were checked out form your working directory. In Git you clone a remote repository and thus receive all the data needed to get a local copy of the whole repo. The files for the working directory are checked out from the local copy. The major difference is: You have the whole history available locally.

In case you have no need for all that data, you do a shallow clone instead of a regular full clone. To give you an example: If you do a shallow clone of the current ports tree today, the result is about 840 MB in /usr/ports – of which 85 MB in size is the Git repository. A full clone is about 1.7 GB in size of which about 920 MB is for the repo! So if you don’t need the history, save some space on your drive and save some donated bandwidth of the FreeBSD project.
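
Should you later find that you need the history after all, a shallow clone can be deepened retroactively with standard Git means, e.g.:

# git -C /usr/ports fetch --unshallow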

Considerations before you start

While you can certainly start rolling your own packages on a fresh system, it’s also fine to begin doing so on a system that has used the standard FreeBSD ports or packages so far. There’s nothing wrong with that actually. Taking the opposite way and going back from custom packages to the official ones is also possible, of course. That latter case requires some more planning, though. Think about why you started building your own packages in the first place, find out if the standard packages fit your current needs and think about the consequences. If you’re uncertain, you may want to start over with regular packages. If you want to make the switch back, it’s best to re-install all the packages (use the flag -f with pkg upgrade). If you’re using ZFS in a way that supports Boot Environments, create one first.

In fact you can even choose to use both official and custom packages on the same machine: Instead of building the full set of packages that you need, you build just the packages yourself that you need to customize and use the packages from the standard FreeBSD package repositories for the rest. This works by configuring your package repository as an additional one for pkg(8) and giving it a higher priority. If you consider doing this and using multiple package repositories, be sure to

% man 5 pkg.conf

first. It doesn’t hurt to skim over the whole manpage, but make sure that you at least read the section REPOSITORY CONFIGURATION. I used that approach on a weaker machine in the past, but wouldn’t generally recommend it. If your hardware is recent enough, you can compile everything yourself. Otherwise it might make more sense to build on a somewhat beefy system and distribute the packages to your other machine(s). But that’s only me. Maybe mixing packages is the right solution for your use case.
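
Just to illustrate the principle: a custom repository that takes precedence over the official packages could be declared in a file like /usr/local/etc/pkg/repos/10_custom.conf (file name, URL and priority value here are only examples – pkg(8) prefers repositories with higher priority values):

Custom: {
  url      : file:///var/synth/live_packages,
  priority : 10,
  enabled  : yes,
}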

Building a test system

The rest of this post is about building a test system to simulate an environment with pre-installed packages. I do this so that I can show off a few things regarding updates in the next post. If you want to follow along, make sure that you have Git installed and that your directory or dataset /usr/ports as well as /usr/src is empty. I’m assuming that you’re using FreeBSD 13.0.

The first thing to do is getting the ports tree in place. I’m doing a shallow clone of the 2021 first quarter branch (so I can update later). Then I clone the operating system source repository as well:

# git clone --depth 1 -b 2021Q1 https://git.freebsd.org/ports.git /usr/ports
# git clone --depth 1 -b releng/13.0 https://git.freebsd.org/src.git /usr/src

Now we don’t need the packages anymore and forcefully remove them all (including pkg). The reason for this is that we want to build the older versions from the old ports tree that we just cloned.

# pkg delete -af

For convenience we’re going to build all the required programs using portmaster as discussed in a previous article (see the beginning of this post). Of course we need to build and install it first:

# make -C /usr/ports/ports-mgmt/portmaster install clean

Alright. Now we’re building some leaf ports to simulate a very simple development system. These ports draw in a number of dependencies so that we end up with about 350 packages (which is still a very low count for a desktop system). I’m building tmux and sudo simply because I always need to have them available, and git just because we needed it before anyway. The rest are the graphics drivers, a minimal subset of X11 plus the command to set a different keyboard layout, as well as the awesome window manager and the simple but nice GTK+ terminal emulator called sakura. Since I’ll need to take screenshots for the upcoming posts, I figured that I might include an application for that, too.

# portmaster -dG sysutils/tmux security/sudo devel/git graphics/drm-kmod x11/xorg-minimal x11/setxkbmap x11-wm/awesome x11/sakura x11/xfce4-screenshooter-plugin

And that’s it for today.

What’s next?

Next station is taking a look at building packages and updating the system using Synth.

CentOS killed by IBM – a chance to go new ways?

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2020/centos_killed.gmi

On December 8 2020 a Red Hat employee on the CentOS Governing Board announced that CentOS would continue only as CentOS Stream. For the classical CentOS 8 the curtain will fall at the end of 2021 already – instead of 2029 as communicated before! But thanks to “Stream” the brand will not simply go away but remain and add to the confusion. CentOS Stream is a rolling-release distribution taking a middle grounds between the cutting-edge development happening in Fedora and the stability of RHEL.

Many users feel betrayed by this action and companies who have deployed CentOS 8 in production trusting in the 10 years of support are facing hard times. Even worse for those companies who have a product based on CentOS (which is a pretty popular choice e.g. for Business Telephone Systems and many other things) or who offer a product targeted only at RHEL or CentOS. For those the new situation is nothing but a nightmare come true.

What do we do now?

IBM-controlled Red Hat obviously hopes that many users will now go and buy RHEL. While this is certainly technically an option I would not suggest going down that road. Throwing money at a company that just killed a community-driven distribution 8 years early? Sorry Red Hat but nope. And Oracle is never an option, either!

The reaction from the community has been overwhelmingly negative. Many people announced that they will migrate to Debian or Ubuntu LTS, some will consider SLES. Few said that they will consider FreeBSD now – being a happy FreeBSD user and advocate this is of course something I like to read. And I’d like to help people who want to go in that direction. A couple of weeks ago I announced that I’d write a free ebook called The Penguin’s Guide to Daemonland: An introduction to FreeBSD for Linux users. It is meant as a resource for somewhat experienced Linux users who are new to FreeBSD (get the very early draft here – if anybody is interested in this feedback is welcome).

But this post is not about BSD, because there simply are cases where you want (or need) a Linux system. And when it comes to stability, CentOS was simply a very good choice that’s very hard to replace with something else. Fortunately Rocky Linux was announced – an effort by the original founder of CentOS who wants to basically repeat what he once did. I wish the project good luck. However I’d also like to take the chance as an admin and hobby distro tinkerer to discuss what CentOS actually stood for and if we could even accomplish something better! Heresy? Not so much. There’s always room for improvement. Let’s at least talk about it.

Enterprise software

The name CentOS (which stood for Community Enterprise Operating System) basically states that it’s possible to provide an enterprise OS that is community-built. We all have an idea what a community is and while a lot could be written about how communities work and such I’d rather focus on the other term for now. What exactly is enterprise? It helps to make a distinction of the various grades of software. Here’s my take on several levels of how software can be graded:

  • Hobbyist
  • Semi-professional
  • Professional
  • Enterprise
  • Mission critical and above

Hobbyist software is something that is written by one person or a couple of people basically for fun. It may or may not work, collaboration may or may not be desired and it can vanish from the net any day when the mood strikes the decision maker. While this sounds pretty bad that is not necessarily the case. Using a nifty new window manager on your desktop is probably ok. If the project is cancelled tomorrow it just means that you won’t get any more bug fixes or new features and you can easily return to your previous WM. But you certainly don’t want to use such software in your product(s).

Semi-professional software is developed by one person or a team that is rather serious about the project and aiming for professionalism (but commonly falling short due to limitations of time and resources). Usually the software will at least have releases that follow a versioning scheme, and source tarballs will not be re-rolled (re-using the same version number). There will be at least some tests and documentation. Patches as well as feedback and issue reports are almost certainly welcome. The software will be properly licensed and come with things like change logs and release notes. If the project ends, you won’t find out by discovering that the repo on GitHub has been deleted – at the very least an announcement will be made.

Professionally developed software does what the former paragraph talked about and more. It has a more complex structure with multiple people having commit rights, so that a single person being on holiday when a severe bug is found doesn’t mean that nobody can fix things for days. The software has good test coverage and uses CI. There are several branches, with at least the previous one still receiving bug fixes while the main development takes place in a newer branch. Useful documentation is available and there is a table with dates that show when support ends for which version of the software. There’s some form of support available. Such software is most often developed or at least sponsored by companies.

Enterprise products take the whole thing to another level. Enterprise software means that high-quality support options (most likely in various languages) are available. It means that the software is tested extensively and has a long life cycle (long-term support, LTS). There is probably a list of hardware on which the software is guaranteed to work well. And it’s usually not exactly cheap.

“Mission critical” software has very special requirements. For example it could be that it has to be written in SPARK (a very strict Ada dialect), which means that formal verification is possible. Most of us don’t work in the medical or aerospace industries where lives and very expensive equipment may be at stake, and fortunately we don’t have to give such hard guarantees. But without going deeper into the topic I wanted to at least mention that there’s something beyond enterprise.

Community enterprise?

Having a community-run project that falls into the professional category is quite an accomplishment even with some corporate backing. Enterprise-grade projects seem to be very much without reach of what a community is able to do. Under the right circumstances however it is doable. CentOS was such a project: They didn’t need to pay highly skilled professionals to patch the kernel or to backport fixes into applications and such. Thanks to the GPL Red Hat is forced to keep the source to the tools they ship open. The project can build upon the paid work of others and create a community-built distribution.

Whenever this is not the case the only option is probably to start as professional as possible and create something that becomes so useful to companies that they decide to fund the effort. Then you go enterprise. Other ways a project can go are aiming to become “Freemium” with a free core and a paid premium product or asking for donations from the community. Neither way is easy and even the most thoughtful planning cannot guarantee success as there’s quite a bit of luck required, too. Still, good planning is essential as it can certainly shift the odds in favor of success.

Another interesting problem is: How to create a community of both skilled and dedicated supporters of your project? Or really: Any community at all? How do you let people who might be interested know about your project in the first place? It requires a passionate person to start something, a person that can convince others that not only the idea is worthwhile but also that the goal is in fact achievable and thus worth the time and effort that needs to be invested.

A stable Linux OS base

As mentioned above, I’m a FreeBSD user. One of the things that I’ve really come to appreciate on *BSD and miss on Linux is the concept of a base system (Gentoo being FreeBSD-inspired kind of has “system” and “world”, though). It’s the components of the actual operating system (kernel and basic userland) developed together in one repository and shaped in a way to be a perfect match. Better yet, it allows for a clean separation of the OS and software from third party packages whereas on Linux the OS components are simply packages, too. On FreeBSD third party software has its own location in the filesystem: The operating system configuration is in /etc, the binaries are in /{s}bin and /usr/{s}bin, the libraries in /lib and /usr/lib, but anything installed from a package lives in /usr/local. There’s /usr/local/etc, /usr/local/bin, /usr/local/lib and so on.

Coming from a Linux background this is a bit strange initially but soon feels completely natural. And this separation brings benefits with it, like a much, much more solid upgrade process! I maintain servers that were set up as FreeBSD 5.x in the mid 2000s and still happily serve their purpose today with version 12.x. The OS (and hardware) has been upgraded multiple times, but the systems were never reinstalled. This is an important aspect of an enterprise OS if you ask me. I wonder if we could achieve it on Linux, too?

Doing things the BSD way would mean downloading the source packages for the kernel, glibc and all the other libraries and tools that would form a rather minimal OS, extracting them and putting the code in a common repository. Then a Makefile would be written and added that can build all the OS components in order with a single command line (e.g. “make buildworld buildkernel”). Things like ALFS (Automated Linux From Scratch) already exist in the Linux world, so building a complete Linux system isn’t something completely new or revolutionary.
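
Purely as a sketch of the idea – the component list and directory layout are made up, of course – the top-level Makefile might not need to be more complicated than this:

buildworld:
	cd glibc && $(MAKE)      # build each userland component in dependency order
	cd coreutils && $(MAKE)
	cd openssh && $(MAKE)
buildkernel:
	cd kernel && $(MAKE)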

As vulnerabilities are found, fixes are committed to the source repo. It can then be cloned and re-built. Ideally we’d have our own tool “os-update” which would do differential updates to bring the OS to the newest patch level. My suggestion would be to combine the components of a RHEL release – e.g. kernel 4.18, glibc 2.28, etc. following RHEL 8. So more or less like CentOS, but more minimal and focused on the base system. That system would NOT include any package manager! This is because it is intended as a building block for a complete distribution which adds a package manager (be it rpm/dnf, dpkg/apt, pacman or something else) on top and consumes a stable OS with a known set of components and versions, allowing the distributors to focus on a good selection of packages (and giving them the freedom to select the means of package management).

Just to give it a working title, I’m going to call this BaSyL (Base System Linux). For the OS version I would prefer something like the year of the corresponding RHEL release, e.g. BaSyL 2019-p0 for RHEL 8. The patch level increases whenever security updates need to be applied.

Enterprise packages

The other part of creating an enterprise-like distribution is the packages. Let’s call it SUP-R (Stable Unix-like Package Repo) for now. So you’ve installed BaSyL 2019 and need packages. There’s the SUP-R 2019 repo that you can use: It contains the software that RHEL ships. Ideally a team forms that will create and maintain a SUP-R 2024 repo after five years, allowing users to optionally switch to newer package versions. The special thing here would be that this SUP-R 2024 package set would be available for both BaSyL 2019 and BaSyL 2025 (provided that’s when the next version is released). That way BaSyL 2019 users who already made the switch to SUP-R 2024 can stay on that package set when updating the OS and don’t have to do both at the same time! A package repo change requires consulting the update documentation for your programs: e.g. Apache HTTPd, the PostgreSQL database, etc. Most likely there are migration steps like changing configuration and such.

Maintaining LTS package sets is not an easy task, though. However, there are great tools available to us today which make this somewhat easier. Still, it would require either a really enthusiastic community or some corporate backing. Especially the task of selecting software versions that work well together is a pretty big thing. It would probably make sense to keep the package count low to start with (quality over quantity) and think of test cases that we can build to ensure common workloads can be simulated beforehand.

In addition to experienced admins for software evaluation, package maintainers and programmers for some patch backporting, also people writing docs (both in general and for version migration) would be needed. Yes, this thing is potentially huge – much, much more involved than doing BaSyL alone.

Conclusion

I’d love to see a continuation of a community-built enterprise Linux distro worth the title. But at the same time getting to know BSD in addition to Linux made me think a little differently about certain things. De-coupling the OS and the packaging efforts could open up interesting possibilities. At the same time it could even help both projects succeed: Other operating systems might also like (optional) enterprise package sets. And since they’d be installed into a different prefix (e.g. /usr/supr) they would not even conflict with native packages in e.g. Debian or any other glibc-based distribution.

If a portable means of packaging was chosen it would potentially also be interesting to the BSDs and Open-Solaris derivatives. And the more potential consumers the larger the group of people who could have the motivation to make something like this come true.

This article is meant to be giving food for thought. Interested in talking about going new ways? Please share your thoughts!

Illumos (v9os) on SPARC64 SunFire v100

Over the last month or so I’ve written a couple of articles on an old SunFire v100 machine that I’ve owned for a while now. First I took a look at the hardware of the machine and the LOM (Lights Out Management). Then I installed OpenBSD 6.0 from CD and updated all the way to 6.5. Finally I played a bit with OpenBSD to see what it can do and how well it supports SPARC64. This post will be the last SPARC64 one before I visit other topics again.

v9os?

While I was pretty happy with OpenBSD on the SunFire, there’s one reason that I wanted to try out something else, too. That reason has three letters: Z-F-S. The first thing that I tried out when I got the hardware was FreeBSD – but I ran into problems. I managed to circumvent them (might be worth another story in the future), only to find that FreeBSD does not support ZFS on SPARC64!

One option that suggests itself is just putting Solaris on there. I have a copy of Solaris 10 for Sparc, but I prefer to keep things open-source. Also, there’s the problem that my machine is old enough to not have a DVD drive, and it doesn’t support booting from USB and the like.

So it’s illumos. Since I’m really just getting started with the broader Solaris universe, I had to do a little research first. And I was a little surprised that most illumos distros seem to not even support Sparc at all! Of the four that do

  • OpenSXCE seems dead (last release in 2014)
  • DilOS uses Debian packaging (which is not my cup of tea at all)
  • Tribblix sounds really interesting to me, but does not fit on a CD
  • v9os is a minimal Sparc distro that is small enough

As you can see, there wasn’t much choice after all! While v9os is an experimental one-man project that you should probably stay away from for production use, it might be just right for my purposes of tinkering with an old machine.

Installing the OS – first try

There are not many preparations necessary: I downloaded the ISO image and burned it on a CD. Then I connected to my SunFire via serial, powered it on and put the CD into the drive. It takes quite some time, but after a while I can read that v9os is in fact starting.

Booting up v9os from the CD

After the system booted, it gives the user the option to select a keymap.

Keymap selection

Then it shows the installation menu. There you can choose if you want to install, load additional drivers, drop to a shell, change the terminal type or reboot. I go with the first option.

v9os installation menu

After a moment the installer has started and a welcome screen is printed. Unfortunately in my case there’s a problem with the CD, so that four lines of debug info overwrite important information: How to actually proceed with the installation! But this is an OpenSolaris derivative, and so it’s not that hard to figure out that F2 is the key to go on.

v9os installer: Welcome screen

Next it’s selecting the disk to install on. I thought that it all looked good – and didn’t pay much attention to the message “A VTOC label was not found.”. VTOC is the Volume Table Of Contents, the SPARC partition scheme (think MBR/GPT on amd64). We’ll come back to that a little later. 😉

v9os installer: Disk selection

I think that the installer is quite nice. It even offers help pages that give newcomers like me an idea of what they should do for the current step. Great work on that!

v9os installer: Disks help page

Then you can choose to either dedicate the whole disk to v9os or just use a slice. I decide to go the easy route and select the former.

v9os installer: Disk layout selection

Now the installer wants to know the hostname for the new system. The suggested default of v9os is fine for me since I don’t plan to add another machine with that OS to my network anytime soon.

v9os installer: Hostname selection

Finally you can select the time zone – or rather: the zone region.

v9os installer: Time zone selection

Unfortunately things went sideways after that choice and I had to reset the machine…

Ok, after going through the previous steps again, I decided to give the advanced setup a try and selected slicing up the drive.

v9os installer: Slice selection

Unfortunately the result was the same as before: The installer just died. I tried again a few times, playing with different slice setups, but didn’t have any luck.

The installer died… Time to reboot.

At this point I was out of ideas on what else I could try, so I removed the CD and powered down the system.

Writing the label manually

When I powered the system on again, I had forgotten that I had removed the CD – and to my surprise OpenBSD (the system that I had previously installed on the machine) booted up! This meant that the installer had not even changed anything on the disk yet!

My next guess was (and still is) that the v9os installer might have problems with BSD disklabels being present on the drive. I took a look at the disklabel from OpenBSD, just to find out some information about the drive.

OpenBSD’s disklabel information of the system hard drive

Then I booted the v9os install medium again but this time selected the shell option. After a little research I found out how to get some drive information on Solaris with iostat.

v9os shell session: Collecting drive hardware info

Next I decided to give the format utility a try. I don’t know whether v9os stripped out some hardware information or whether the disk is simply so old that it wasn’t properly auto-detected. Either way, I had to do something that I haven’t done in years (and never missed it): Typing in the geometry information by hand!

Typing in disk geometry information (Ah, the (bad!) memories…)

Once the drive has been described to the utility, it shows a menu of what it can do. I hadn’t used that program before and, judging from the name alone, I was a bit surprised at how powerful it seems to be. Things like being able to define profiles must have been pretty useful in the past.

Solaris’ format utility

Since I want to partition the drive, I select that. I’m presented with a sub-menu, giving me some more choices.

Partitioning menu of format

I have no clue what a Solaris partitioning scheme should look like (I need to explore some older versions of that OS sometime!).

Partitioning the drive for Solaris

So I look around a little but eventually accept the proposed default and just hope that this works.

Installing the OS – second try

After restarting the machine again and choosing the installer, it looks like this time there is no missing disklabel. At last! But will it make a difference?

Returning to the installer: Partitioning was detected

And yes! Now the installer continues and gets the data written to disk!

Finally installing the OS!

The process takes quite a while – but that’s due to the slow machine that I’m using. Eventually the installation is finished.

v9os installer: All done!

First steps with v9os

Another reboot and after removing the CD-ROM from the drive, the freshly installed system boots up. A moment later it displays the prompt where I can log in using the user root and the password solaris.

First start of v9os

The first thing that I want to do is to get rid of the serial console. So I set up networking and enable SSH.

Setting up networking and enabling SSH
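
I didn’t record my exact commands, but on an illumos system this boils down to something like the following sketch. The interface name (net0) and the addresses are assumptions for this example, and depending on the age of the system the first subcommand may be spelled create-ip instead of create-if:

# ipadm create-if net0
# ipadm create-addr -T static -a 192.168.1.50/24 net0/v4
# route -p add default 192.168.1.1
# svcadm enable ssh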

Then I disable the automounter to make the home directory writable and create a user for remote SSH login. Finally I enable the machine to do name resolution and give the new user a password.

Adding a user and name resolution capabilities
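
Again a rough sketch with made-up names (user jdoe, nameserver 192.168.1.1 – and the shell path is an assumption, too) rather than my literal session; copying /etc/nsswitch.dns over /etc/nsswitch.conf is the classic Solaris way of enabling DNS lookups:

# svcadm disable autofs
# useradd -m -d /export/home/jdoe -s /usr/bin/bash jdoe
# passwd jdoe
# echo 'nameserver 192.168.1.1' >> /etc/resolv.conf
# cp /etc/nsswitch.dns /etc/nsswitch.conf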

That should suffice to SSH into the box from another machine.

Package management with IPS

Logging in remotely works just fine. As v9os does not have an online package repository, I have to download a compressed copy of the repository from SourceForge.

SSHing into the v9os box and downloading the package repository

I don’t know much about the IPS package system and thus really struggle to make it all work. There is no guide on the v9os site and so I try to put the downloaded file in various locations, decompress it and try everything again. Since that also doesn’t work, I unpack the contents of the archive but still cannot get it right…

Struggling to get the repo working…

After more than an hour of struggling with pkg, reading manpages, doing online research and trying to fit everything together, I finally manage to remove the default publisher that comes with the system and add a new one that eventually works!

Finally figured out how to deal with IPS publishers
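
For anybody fighting the same battle, the working combination was along these lines – the publisher name and repository path here are assumptions for this example, check pkg publisher for what your system actually uses:

# pkg publisher
# pkg unset-publisher v9os
# pkg set-publisher -g file:///export/repo v9os
# pkg refresh --full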

The v9os operating system is one of the strangest Unices that I’ve ever touched, in that it doesn’t even provide the vi editor with the system! But now that I have the repository available, I can simply install vim and find out that using packages does work after all.

Installing packages (vim) works!
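
For reference, installing a package with IPS is as simple as:

# pkg install vim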

This is about how far I wanted to take this quick post on v9os. If I had a faster machine, I might have been tempted to try and build the system from source. But with my old SunFire… No.

While v9os might not be fit for production use, it let me accomplish one thing that OpenBSD couldn’t: I now have an operating system on the machine that is installed on ZFS!

ZFS on SPARC64 with v9os

Conclusion

The v9os operating system is an exotic one for sure. But it’s nice to see that somebody values SPARC64 machines and illumos enough to put the time required to build something like this into such a project. And actually I think it’s not half bad! I didn’t do too much with it, but it seemed stable and except for the installer problem (it would probably just have worked on an empty drive) everything worked fine.

Well, maybe some hints on how to get the package repo in place would have saved me some time… On the other hand, Solaris veterans are likely to get it working with just a few commands. And while it was kind of frustrating for a while, it has also led to at least a basic understanding of what IPS is and how it works. I’m sure that I’d have missed at least some of that if I had just copied some lines from a guide.

I might not end up making v9os my primary operating system (for various obvious reasons). But it’s another nice little part in the mosaic of the illumos world that I’ve started exploring. Also I noticed that I’ve become a little bit more comfortable with using an OpenSolaris-derivative. Compared to my first encounter with OmniOS, it didn’t take me as long to figure out the very basics again. Which is always a good sign.

Using FreeBSD with Ports (2/2): Tool-assisted updating

In the previous post I explained why sometimes building your software from ports may make sense on FreeBSD. I also introduced the reader to the old-fashioned way of using tools to make working with ports a bit more convenient.

In this follow-up post we’re going to take a closer look at portmaster and see how it especially makes updating from ports much, much easier. For people coming here without having read the previous article: What I describe here is not what every FreeBSD admin today should consider good practice (any more)! It can still be useful in special cases, but my main intention is to discuss this for building up the foundation for what you actually should do today.

Building a desktop

Last time we stopped after installing the xorg-minimal meta-port. Let’s say we want a very simple desktop installed on this machine. I chose the most frugal *nix desktop, EDE (the Equinox Desktop Environment, which looks kind of like Win95), because it draws in the things that I need to demonstrate a few interesting points here – and not much more.

Unfortunately in the ports tree that we’re using, exactly that port is broken (the newer compiler in FreeBSD 11.2 is more picky than the older ones and not quite happy with EDE code). So to go on with it, we have to fix it first. I’ve uploaded an additional patch file from a later version of the port and also prepared a patch for the port’s Makefile. If you want to follow along, you can just copy the three lines below to your terminal:

# fetch http://www.elderlinux.org/files/patch-evoke_Xsm.cpp -o /usr/ports/x11-wm/ede/files/patch-evoke_Xsm.cpp
# fetch http://www.elderlinux.org/files/patch_ede_port -o /usr/ports/x11-wm/ede/patch_ede_port
# patch -d /usr/ports/x11-wm/ede -i patch_ede_port

Using Portmaster to build and install EDE

Thanks to build-time dependencies and default options in FreeBSD it’s still another 110 ports to build, but that’s fine. We could remove some unneeded options and cut it down quite a bit. Just to give you an idea: By configuring only one package (doxygen) to not pull in all the dependencies that it usually does, it would be just 55 (!) ports.

But let’s say we’re lazy. Do we have to face all of those configure dialogs (72 in case you are curious)? No, we don’t. That’s why portmaster has the -G flag which skips the config make target and just uses the standard port options:

# portmaster -DG x11-wm/ede

EDE was successfully installed

Using this option can be a huge time-saver if you’re building something where you know that you don’t need to change the options for the application and its dependencies.

System update

Now we have a simple test system with 265 installed but outdated packages. Let’s update it! Remember that unlike e.g. Linux, FreeBSD keeps third-party software installed from packages or ports separate from the actual operating system. We’ll update the latter first:

# freebsd-update upgrade -r 11.3-RELEASE

With this command, we make the updater download the required files for the upgrade from 11.2-RELEASE to 11.3-RELEASE.

Upgrading FreeBSD to 11.3-RELEASE

When it’s done, and we’ve closed the three lists with the removed, updated and new files, we can install the new kernel:

# freebsd-update install

Once that’s done, it’s time to reboot the system so the new kernel boots up. Then the userland part of the operating system is installed by using the same command again:

# shutdown -r now
# freebsd-update install

Kernel upgrade complete

Preparations

Now in our fresh 11.3 system we should first get rid of the old ports tree to replace it with a newer one, right? Wait, hold that rm command for a second and let me show you something really useful!

If you take a look at the /usr/ports directory, you’ll find a file appropriately named UPDATING. And since that’s right what we were about to do, why not take a look at it?

So what is this? It’s an important file for people updating their systems using ports. Here is where ports maintainers document breaking changes. You are free to ignore it and the advice that it gives and there’s actually a chance that you’ll get away with it and everything will be fine. But sometimes you don’t – and fixing stuff that you screwed up might take you quite a bit longer than at least skimming over UPDATING.

# less /usr/ports/UPDATING

But right now it’s completely sufficient to look at the metadata of the first notification which reads:

20180330:
  AFFECTS: users of lang/perl5*
  AUTHOR: mat@FreeBSD.org

The main takeaway here is the date. The last heads-up notice for our old ports tree was on 2018-03-30.

Checking out a newer ports tree

Now let’s throw it all away and then get the new ports tree. Usually I’d use portsnap for this, but in this case I want a special ports tree (the one that would have come with the OS if I got ports from a fresh 11.3 installation), so I’m checking it out from SVN:

# rm -rf /usr/ports/.* /usr/ports/*
# svnlite co svn://svn.freebsd.org/ports/tags/RELEASE_11_3_0 /usr/ports

If you’re serious about updating a production server that you care about, now is the time to read through UPDATING again. Search for the date string that you previously took a note of and then read the messages all the way up to the beginning of the file. It’s enough to read the AFFECTS lines until you hit one message that describes a port which you are using. You can ignore all the rest but should really read those heads-up messages that affect your system.

What software can be updated?

BTW… You know which packages you have installed, don’t you? A huge update like what we’re facing here takes some planning up-front if you want to do it in a professional manner. In general you should update much more often, of course! This makes things much, much easier.
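
If you’re not sure, pkg can tell you. One way to list only the origins of the packages that you installed deliberately (i.e. not automatically as dependencies of something else) is this query:

# pkg query -e '%a = 0' %o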

Updating from ports

Ok, we’re all set. But which software can be updated? You can ask pkg(8) to compare installed packages to the respective distinfo from the corresponding port:

# pkg version -l "<"

If you pipe that into wc -l you will see that 165 of the 265 installed packages can (and probably should) be updated.
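
In other words:

# pkg version -l "<" | wc -l
     165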

Updating software from ports

We’ll start with something really simple. Let’s say, we just want to update pkgconf for now. How do we do that with portmaster? Easy enough: Just like we would have portmaster install it in the first place. If something is already installed, the tool will figure out if an update is available and then offer to update it.

# portmaster -G devel/pkgconf

And what will happen if the port is already up to date? Well, then portmaster will simply re-build and re-install it.

Partial update finished

While partial updates are possible, it’s a much better idea to keep the whole system updated (if at all possible). To do that, you can use the -a flag for portmaster to update all installed software.

Another nice flag is -i, which is for interactive mode. In this mode, portmaster will ask for every port if it should be updated or not. If you’re leaving out critical ports (dependencies) this can lead to an impossible update plan and portmaster will not start updating. But it can be useful for cherry-picking.
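
For example, to walk through all outdated ports and decide for each one individually whether it should be updated:

# portmaster -ai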

Interactive update mode

Now let’s attempt to upgrade all the ports, shall we?

# portmaster -aG

As always, portmaster will show you its plan and ask for your confirmation. There are two things here that probably deserve a word or two. Usually the plan is to update an application, but sometimes portmaster wants to re-install or even downgrade one (see picture)! What’s happening here?

Re-installs mostly happen when a dependency changed and portmaster figured out that it might be a good idea to rebuild the port against the newer version of the dependency, even though there is no newer version available for the actual application.

Upgrading, “downgrading”, re-installing

Downgrades are a different thing. They can happen when you installed something from a newer ports tree and then go back to an older one (something you usually shouldn’t do). But in this case it’s actually a false claim. Portmaster will not downgrade the package – it was merely confused by the fact that the versioning scheme changed (because of the 0 in 2018.4 it thinks that this version is older than the previous 2.1.3…).

Moved ports

If you’re paying close attention to all the information that portmaster gives you, you’ll have seen lines like the following one:

===>>> The x11/bigreqsproto port moved to x11/xorgproto

There’s another interesting file in the ports tree called MOVED. It keeps track of moved or removed ports. Sometimes ports are renamed or moved to another category if the maintainer decides it fits better. Portmaster for example started as sysutils/portmaster and was later moved when the ports-mgmt category was introduced. However you won’t find this information in the MOVED file – because it happened before the time that the current MOVED keeps records for (i.e. early 2007 in this case).

The example above is due to the fact that last year the upstream project (Xorg) decided to combine the protocol headers into one distribution package. Before that there were more than 20 separate packages for them (and before that, once upon a time, all of Xorg had been one giant monolithic release – but I digress…)

Problem with merged ports

The good news here is that portmaster is smart enough to parse the MOVED file and figure out how to deal with this kind of change in the ports tree! The bad news is that this does not work for more complicated things like the merges that we just talked about…

So what now? Good thing you read the relevant UPDATING notification, eh?

20180731:
  AFFECTS: users of x11/xorg and all ports with USE_XORG=*proto
  AUTHOR: zeising@FreeBSD.org

Bulk-deleting obsolete ports and trying again

So let’s first get rid of the old *proto packages with the command that developer Niclas Zeising proposes and then try again:

# pkg version -l \? | cut -f 1 -w | grep -v compat | xargs pkg delete -fy
# portmaster -aG

Required options

Alright, we have one more problem to overcome. There are ports that will fail to build if we run portmaster with the -G flag. Why? Because they have mandatory port options where you need to make a choice – e.g. pick a backend or a certain mechanism.

The “mandatory options” case

One such case is freetype2. Since this one fails, we build it separately and do not skip the config dialog this time:

# portmaster print/freetype2

Once that’s done, we can continue with updating all the remaining ports. After quite a while (because LLVM is a beast) all should be done!

Updating complete!

Default version changes

Did you read the following notice in UPDATING?

20181213:
  AFFECTS: users of lang/perl5*
  AUTHOR: mat@FreeBSD.org

For the big update run we ignored this. And in fact, portmaster did update Perl, but only to the latest version in the 5.26 branch of the language:

# pkg info -x perl
perl5-5.26.3

Why? Well, because that was the version of Perl that was already installed. Actually this is OK if you’re not using Perl yourself and can live without the latest features. However, if you want to (or have to) upgrade to a later Perl major version, we have a little more work to do.

First edit /etc/make.conf and put the following line in there:

DEFAULT_VERSIONS+=perl5=5.28

This is a hint to the ports framework that whenever you’re building a Perl port, you want to build it against this version. If you don’t set it, you will receive a warning when building the other Perl, because in that case you’d be installing an additional Perl version while all the ports keep using the primary one. More likely than not, that’s not what you want.

Upgrading Perl

Next we need to build the newer Perl. The special thing here is that we need to tell portmaster of the changed origin so that the new version actually replaces the old one. We can do this by using the -o flag. Mind the syntax here, it’s new origin first and then old origin and not vice versa (took me a while to get used to it…)!

But let’s check the origin real quick, before we go on. The pkg command above showed that the package is called perl5. This outputs what we wanted to know:

# pkg info perl5
perl5-5.26.3
Name           : perl5
Version        : 5.26.3
Installed on   : Tue Sep 10 21:15:36 2019 CEST
Origin         : lang/perl5.26
[...]

There we have it. Now portmaster can begin doing its thing:

# portmaster -oG lang/perl5.28 lang/perl5.26

Rebuilding ports that depend on Perl

Ok, the default Perl version has been updated… But this broke all the Perl ports! So it’s time to fix them by rebuilding against the newer Perl 5.28. Luckily the UPDATING notice points us to a simple way to do this:

# portmaster -f `pkg shlib -qR libperl.so.5.26`

And that’s it! Congratulations on updating an old test system via ports.

At last: All done!

What if something goes wrong?

You know your system and applications, are proficient with your shell and you’ve read UPDATING. What could possibly go wrong? Well, we’re dealing with computers here. Something really strange can go wrong anytime. It’s not very likely, but sometimes things happen.

Portmaster can help you if you ask for it before attempting upgrades. Before deinstalling an old package, it creates a backup. However, after installing the new version it throws it away. But you can make it keep the backup by supplying the -b flag. Sometimes the old package can come in handy if something goes completely sideways. If you need backup packages, have a look in /usr/ports/packages/portmaster-backup. You can simply pkg add those if you need the old version back (of course you need to be sure that you didn’t update the package’s dependencies – or you need to downgrade them again, too!).
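
A quick sketch of how that might look: update everything while keeping the backup packages around, and – should things go wrong – restore one of them later. The package file name below is made up for illustration; the actual name depends on what was installed before the update:

# portmaster -baG
# pkg add /usr/ports/packages/portmaster-backup/pkgconf-1.5.4.txz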

If you want to be extra cautious when updating that very special old box (that nobody dared to touch for nearly a decade because the boss threatened to call down terrible curses (not the library!) upon the one who breaks it), portmaster will also support you. Use the -w flag and have it preserve old shared libs before deinstalling an old package. I wouldn’t normally do it (and think my example made it clear that it’s really special). In fact I’ve never used it. But it might be good to know about it should you ever need it.

That said, on the more complicated boxes I usually create a temporary directory and issue a pkg create -a, completely backing up all the packages before I begin the update process. Usually I can throw away everything a while later, but having the backups saved me some pain a couple of times already.
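
That routine is nothing more than this (the directory is an arbitrary choice, of course):

# mkdir /root/pkg_backup
# cd /root/pkg_backup
# pkg create -a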

In the end it boils down to: Letting your colleagues call you a coward or being the tough guy but maybe ruining your evening / weekend. Your choice.

And if you need to know even more about the tool that we’ve been using over and over now, just man portmaster.

What’s next?

I haven’t decided on the next topic that I’m going to write about yet. However, I’ve planned for two more articles that will cover building from ports the modern way(tm) – and I hope that it will not take me another two years before I get to it…