One year of flying with the Raven: Ready for the Desktop?

It has been a little over one year now that I’ve been with the Ravenports project. Time to reflect on my involvement, my expectations and my hopes.

Ravenports

Ravenports is a universal packaging framework for *nix operating systems. For the user it provides easy access to binary packages of common software for multiple platforms. It has been the long-standing champion among Repology’s top 10 repositories regarding package freshness (rarely dropping below 96 percent, while all other projects stay below 90!).

For the porter it offers a well-designed and elegant means of writing cross-platform buildsheets that allow building the same version of the software with (completely or mostly) the same compile-time configuration on different operating systems or distributions.

And for the developer it means a real-world project that’s written in modern Ada (ravenadm) and C (pkg) – as well as some Perl for support scripts and make. Things feel very optimized and fast. Not being a programmer though, I cannot really say anything about the actual code and thus leave it to the interested reader’s judgement.

If you’re interested in a more comprehensive introduction to Ravenports, I’ve written one half a year ago.

Platforms

Ravenports was initially developed on DragonFly BSD. When I became aware of it, it had already been ported to work on Linux, too. I liked the idea of the project, but had no DragonFly or Linux boxes available for tinkering and didn’t feel like setting one up. Thus I moved on.

As I checked back a little later, FreeBSD support had been added. Since I had just lost my excuse not to try it out right away, I started playing with it – and was pretty happy. At that time I was having trouble getting a port that I had written into FreeBSD’s Ports Collection and thought that Raven could be an excellent playground to learn something and gain a bit of experience that might help me later with FreeBSD.

The Xfce4 desktop – installed via Raven

I’ve long since changed my mind, though! Raven is rather similar to FreeBSD’s ports system in many ways, but where it differs it’s clearly superior. I also love the cross-platform aspect, so Raven is simply the better place for me to make my home.

This year saw the introduction of Solaris/Illumos support, which I tried out on OmniOS. Darwin support also landed, upping the count of supported platforms to five already! Not too bad for a young project, huh? While Raven does work on all five platforms now, it does so to varying degrees. But more on that later.

General activity

The Ravenports project consists of multiple Git repositories hosted on GitHub. The first one is Ravensource, which most importantly holds the “raw” ports as they are written by the porters. It’s the busiest repo with over 5,200 commits since March 2017 (including almost 500 by me).

Then there’s the actual Ravenports repo that mostly contains the buildsheets which are compiled from Ravensource. It has over 1,400 commits right now.

Installing the xfce-single-core meta-package

Finally there’s the repo for the Ravenadm command-line tool. It’s approaching 900 commits since February 2017.

There’s still more to Raven like the Pkg package manager from FreeBSD (that was modified to add Zstd compression support) or libbsd4sol, a portability library which allows building code on Solaris that uses BSDisms (which was needed to add support for that platform to Raven). Most of the work on all repos was done by John alone.

With over 100 pull requests and more than 20 issues it’s clear now that there’s some interest in the project. Raven is still very small, though, with six people having contributed ports so far. After learning the basics and opening pull requests for half a year, I was granted write access to the source repository. Just recently I was able to push my 100th active port (there have been ports that became obsolete and were removed).

In general I’d say that there could of course be more people around and that the project would benefit from being able to provide more packages – though more than 3,200 is not bad at all! It’s also good that there seems to be a growing user base, which is even more important than having more porters join in. From my point of view, Raven is a healthy and fast-moving project. Still young, but doing well and heading in the right direction.

Major changes

There have been some pretty big changes that happened with Raven over time. Initially John started with a GCC6-based toolchain, only to switch to GCC7 when that was released. That was before my time with the project, but I witnessed the switch to GCC8.

Changing the toolchain certainly is a major interruption, and most people are advised to just wait for the official repository to be re-rolled and then update. I had some bad luck in this regard – literally the day after I finally completed a working (and almost complete) set of basic packages for the FreeBSD_i386 platform, I faced the change to GCC8. Due to a lack of time I still haven’t repeated that effort on i386 (but I still plan to do it sometime).

The Thunar file manager

Other changes that always have a huge impact (causing lots and lots of packages to be rebuilt) are adopting a new version of the popular interpreter languages like Python, Perl and Ruby – and dropping an old one. Ravenports always supports two versions of Perl and Ruby and two versions of Python 3 (as well as 2.7 for now). So when Python 3.7 was released, 3.5 was removed, and Perl 5.24 had to go when 5.28 was added.

Recently the former LLVM port that included everything LLVM-related was split up (LLVM, Clang, lld, openmp). Also, now and then new statements are added to Ravenadm, so that old versions cannot work with a new release of the buildsheet repository (which is called “conspiracy”). But this is pretty easy to work around compared to the changes mentioned before.

So on the whole, Raven has proven that it can easily withstand even big changes. For me this is essential for building faith in a project. And Raven is doing well in this regard.

Desktop-ready?

There are lots of people who will want to use Raven on servers. That’s totally fine of course. But for a project as ambitious as Ravenports, it’s necessary to provide a somewhat comfortable environment for the developers and the users alike. If it doesn’t manage to become a daily driver for people it cannot succeed.

For that reason I decided to work towards good desktop support for the little dev machine that I dedicated to my work on the project. When I started, X11 was already working and Openbox had freshly landed in the repos. So I had a simplistic environment to work with: Openbox + xterm. However, I could not even change my keyboard layout! Therefore I wrote a port for setxkbmap and eventually it was accepted as the first outside contribution to the project.

The Surf web browser

Next I did some work to get the FLTK toolkit and the EDE desktop in. Then I added my favorite terminal emulator, Sakura. This worked out pretty well and the biggest shortcoming at the end of 2017 was that there was no real graphical browser available. A lot has changed since then!

Desktop choices

Today you can choose between multiple window managers, both floating and tiling:

  • twm
  • cwm
  • openbox
  • fluxbox
  • xfwm4
  • pekwm
  • i3

And in case you prefer a real desktop environment, there are also several available:

  • Lumina (moderate, Qt-based)
  • Xfce4 (somewhat light-weight, GTK-based)
  • EDE (extremely frugal and minimalistic, FLTK-based)

Two graphical web browsers are available, Surf (which is deliberately simplistic and does not even support tabs) as well as an old version of Firefox (the last one that builds without Rust). This is certainly not perfect but much better than a year before.

Also other important programs are available, including LibreOffice! Last month the Apache webserver landed – which is a pretty complex port compared to many others.

Shortcomings

Are there packages you’ll miss? Most certainly. However, there’s a wishlist now with ports that people would like to see created (please feel free to add more requests there). And that’s another good step ahead. Currently it’s almost 120 items long. Fortunately there’s been some success, too, and 26 requested ports have been created and taken off the list so far.

There are some future ports that will require lots of effort (hint: help wanted!). The most important one, which blocks some other important ports, is the Rust compiler. There has been some work done on this, but it’s not finished yet. Another real beast is TeX. This totally must be supported at some point. Current versions of Firefox and Chromium are often asked for. And somebody even requested Eclipse (which needs Java!). So there’s definitely more than enough work to do.

Using Raven on Linux works, but there are some flaws. Initially the Pkg package manager used to crash quite often. John traced that back to a bug in the version of SQLite that’s used internally by Pkg: the problem only struck on Linux and was fixed by using a newer version instead. While it’s much better now, there’s still the occasional problem with it.

While the packages from the repo work fine on Solaris 10u8 and above as well as on Illumos, exactly version 10u8 is currently required to build packages. This is due to Solaris not being able to work with older system libraries in the build chroot. It would be great to have an alternative ravensys-root for an Illumos distribution (OmniOS, SmartOS, Tribblix, …) available, so that interested people without access to that specific closed-source Solaris version can develop Raven on that platform.

I don’t know how well Raven works on Darwin. Since I don’t have access to any macOS machines and PureDarwin is not really ready, yet, there’s currently no chance for me to test it. I intend to buy an older MacBook or something in the future, though, if I come across a fair offer and have some money available to spend on my hobby.

Some ports are not available on one platform or the other: on Illumos mostly because they’d require patches to build, and on Linux often because they rely on additional libraries that have not yet been added to Raven. And then there are a lot of packages that are mostly untested. All of these issues can be fixed, of course. All of them require a larger user base, though. So it’s probably the best strategy to keep working on making Raven attractive to more users and address things when the right people show up.

What’s to come?

Currently Raven uses the primordial X11 input drivers (xf86-input-keyboard and xf86-input-mouse) on all platforms. In 2013 Linux pioneered support for generic input drivers by exposing the kernel’s “event devices”. Not much later many Linux distributions adopted xf86-input-evdev. In 2014 there was a GSoC project to add evdev support to FreeBSD. Like many such projects it came a good part of the way but was eventually left unfinished. It was picked up and completed by a FreeBSD developer in 2016.

Xfce’s settings and applications menu

To use it, a special kernel had to be built so it would expose /dev/input device nodes. Then a sysctl had to be set – and eventually X11 had to be patched for emulated udev support… Why would anybody want to do all this just for different input drivers? Multi-touch support is just one valid reason. Another one is that having evdev-based input drivers is half the way to eventually support libinput, too. And that is one of the prerequisites for Wayland!
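Roughly, the manual procedure looked like the sketch below – this is from memory, so treat the exact names and values as assumptions and double-check them. First the kernel options that had to go into a custom kernel configuration, then the sysctl that exposes input events:

options EVDEV_SUPPORT
device evdev
device uinput

# sysctl kern.evdev.rcpt_mask=12

The mask controls which input sources (kbdmux/sysmouse versus real hardware keyboards and mice) feed the /dev/input event devices; 12 is the value I have seen recommended for hardware devices.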

This month FreeBSD finally enabled evdev support in the GENERIC kernel of both -CURRENT and 12-STABLE. That means the upcoming FreeBSD 12.0 (already branched off for release) will not support it out of the box, but most likely a future 12.1 will. DragonFly BSD has also grown support for event devices and people are interested in working towards Wayland. I hope that we’ll be able to get xf86-input-evdev working with our X11 (on DragonFly, FreeBSD and Linux) next year.

I’m taking a little break from Xfce now (but plan to port most of the remaining components later to make it a well-supported DE in Raven). There are a few things I have planned like adding Linux support for OpenVPN (it depends on some libraries and programs that are Linux only which are not yet in Raven). Also I intend to take a look at adding some more Qt5 components and write a few requested ports. And finally I want to write another post next year – a tutorial on using Ravenports and creating new ports.

So keep flying with us – these are exciting times!


Ravenports: A modern, cross-platform package solution

This post is about Ravenports, a universal package system and build framework for *nix systems (DragonFly BSD, FreeBSD, Linux and Solaris at the time of this writing). It’s a relatively young project that began in late February 2017 after a longer period of careful planning. The idea is to provide a unified, convenient experience in a cross-platform way while putting focus on performance, scalability and modern tooling.

What exactly is it and why should you care? If you’ve read my previous post, you know that I consider the old package systems lacking in several ways. For me Raven already does a great job at solving some problems existing with other systems – and it’s still far from tapping its full potential.

Rationale

A lot of people will think now: “We already have quite capable package systems. What’s the point in doing it again?” Yes, in many regards it’s “re-inventing the wheel”… And rightfully so! Most of the known package systems are pretty old now and while new features were of course added, this is sometimes problematic. There is a point where it’s an advantage to start fresh and incorporate modern ideas right from the start. Being able to benefit from the experience and knowledge gained by using the other systems for two decades when designing a new system is invaluable.

Ravenadm running on FreeBSD, OmniOS, Ubuntu Linux and DragonFly BSD

Ravenports was designed and implemented, and is primarily maintained, by a veteran of software packaging. John Marino at one time maintained literally thousands of ports for FreeBSD and DragonFly BSD. In addition to that, he wrote an alternative build tool called Synth. Aiming for higher portability, he modified Synth to work with Pkgsrc (which is available for many platforms) and also ported the modern Pkg package manager from FreeBSD to work with it.

In the end he had too many ideas about what could be improved in package building that would not fit into any existing project. Eventually Ravenports was born when he decided to give it a try and create a new framework with the powerful capabilities that he wanted to have and without the known weaknesses of the existing ones.

How does it compare to xyz?

It probably makes sense to get to know Ravenports by comparison to others. Let’s take a look at some of them first:

1) FreeBSD’s ports system is the oldest such framework. It’s quite easy to use today, very flexible and, since the introduction of Pkg (or “pkg-ng”), it also has a really nice package manager.
2) NetBSD adopted the ports system and developed it according to their own needs. It’s missing some of the newer features that FreeBSD added later but has gained support for an incredible amount of operating systems. Unfortunately it still uses the old pkg_* tools that really show their age now.
3) OpenBSD also adopted the early FreeBSD ports system. They took a different path and added other features. OpenBSD put the focus on avoiding users having to compile their own packages. To do so, they added so-called package flavors. This allows for building packages multiple times with different compile-time options set. Their package tools were re-written in Perl and do what they are meant to. But IMO they don’t compare well to a modern package manager.
4) Gentoo Linux with its portage system has taken flexibility to the extreme. It gives you fine-grained control over exactly how to build your software and really shines in that. The logical consequence is that, while it supports binary packages, this support is rudimentary in comparison.

EDE desktop, pekwm with Menda theme and brand-new LibreOffice

FreeBSD gained support for flavors in December 2017 and NetBSD did some work to support subpackages in a GSoC project the same year. It’s hard to retrofit major new features into an existing framework, though. When Ravenports started in the beginning of 2017, it already had those two features: variant packages (Raven’s name for flavors) and subpackages. As a result they feel completely natural and fit well into the whole framework (which is why they are used extensively).

Ravenports has port options that can be set before building a package. As with NetBSD or OpenBSD, there are generally fewer options available compared to FreeBSD. This is because Raven is geared more towards building binary packages than towards being a ports framework to build on the target machine (which would defeat the goal of always providing a clean build environment). For that reason the options mostly exist to support the variants of the packages. Compared to NetBSD’s Pkgsrc, Ravenports supports far fewer operating systems right now, but it has a much easier (binary!) bootstrap process for all supported platforms. It also offers a far superior package manager. When comparing against FreeBSD, OpenBSD and Gentoo, Ravenports is much more portable across multiple operating systems and – with the exception of FreeBSD – comes with a more modern package manager for binary packages.

Strong points

As Ravenports is not tied to a single operating system, it didn’t have to take into account specific needs of one OS only. In general there are no second-class citizens among the supported platforms. It was also made to be agnostic of the package manager used. Right now it’s using Pkg only, but other formats could be supported, so that binary packages could be installed via pacman, rpm, dpkg, you name it.

Repology: Raven’s package freshness in percent (06/25/2018)

It allows different versions of some software to be installed concurrently. If you e.g. want PHP 7.2 while some of your projects are stuck with 5.6, this is not a problem. It’s also possible to define a default version for databases like MySQL and Postgres as well as for languages like Perl, Python and Ruby. Speaking of MySQL: Raven knows about Oracle MySQL, MariaDB, Percona and Galera. Only the first one is currently available (the ports for the others are missing), but the selection of which product to install is already present and the others can easily be added as needed.

If you build packages yourself, you’ll notice that the whole tooling is fully integrated. Everything was planned right from the beginning to interact well and thus plays together just great. Performance is also something where Raven shines: thanks to being programmed for high concurrency, operations like port scans are amazingly fast (compared to other frameworks you may know).

Repology: Raven’s outdated package count (06/25/2018)

Raven follows a rolling-release model with extremely current package versions. On Repology, a fine tool for package maintainers and people interested in package statistics, Ravenports is the clear leader when it comes to freshness of the package repository: it rarely falls below 98% freshness (while no other repo has managed to even reach 90% – and Repology lists almost 200 repositories!). If it does, it’s usually for less than a day until updates get pushed.

This is only possible because much of the ports maintenance is properly automated. This saves a lot of work and allows keeping software versions current without the need for dozens of maintainers. Custom port collections are supported if you have special needs like sticking to specific program versions. This way Raven can e.g. support legacy versions that should not be part of the main tree. It might also be interesting for companies that want to package their product for multiple platforms but need to keep the source closed. Ravenports supports private GitHub repositories for cases like this. All components of the project itself are completely open source, though, and are permissively licensed.

Also, Raven is not the jealous kind of application. Packages are installed into /raven by default (you can choose to build your packages with a different prefix if you wish) and are thus kept separate from the default system location for software. This makes it possible to use Raven in addition to your operating system’s / distribution’s package manager instead of being forced to replace it.
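In practice the only integration step needed on a typical system is extending the PATH so Raven’s binaries are found – something along these lines for a Bourne-style shell, assuming the default /raven prefix:

export PATH=/raven/bin:/raven/sbin:$PATH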

Shortcomings

If you ask me about permanent problems with Raven: I don’t really see any. However there’s definitely a couple of things where it’s currently way behind other package systems. Considering how young the project is this is probably no wonder.

It’s a “needs more everything” situation. In fact it has the usual chicken-and-egg problem: more available ports would be nice and would potentially attract more users. With more users, more people would probably become porters. And with more porters there’d surely be more ports available… But every new project faces problems like this, and with resolve, dedication and perseverance as well as a fair amount of work, it’s possible to make a project both useful and appealing enough for others to join in. Once that happens, things get easier and easier.

KeePassXC, Geany and the EDE application menu

The Ravenports catalog has over 3,000 entries right now. It’s extremely hard to compare things like the package count, though. John provided an example: FreeBSD has 8 ports for each PostgreSQL version. With 5 supported versions that’s 40 ports. Ravenports has 5 ports with 8 subpackages each. In this case the package count is comparable, but not the port count. Taking flavors and multiversions into account, all repositories look much bigger than they actually are in terms of available software. And how do you measure the quality of packages? What about ports that are used by less than a handful of people? What about those that are extremely outdated? Do you think they should count? It’s probably best to take a look and see if the software that you need is available. It is true, though, that there are of course still many important packages missing. IMO the most important one is Rust – which is not only needed for current versions of Firefox but increasingly important for building other software, too.

Also, Linux support is not perfect yet, and Solaris support even less so. On Solaris systems Raven is currently mostly binary-only, because the Solaris kernel is unable to work with system libraries other than ones matching exactly in version. Packages built on older releases of the OS work fine on newer ones, but for each OS release a specific build environment would need to be created before building packages is possible. This is an issue that needs to be resolved in the future (I guess some help from the Illumos/Solaris community wouldn’t hurt). There are also packages that don’t build on Solaris without patches which are not currently available. In the case of important packages this leads to blockers, since all other ports which depend on such a package cannot be built either: on FreeBSD there are 3,559 packages (including variants and metapackages) available from the repository at the present time. In the Solaris repo it’s only 2,851 packages. That’s certainly a nice start – but don’t expect to run a full-fledged desktop (or even X11 at all) there yet!

In Linux land, distributions that come with glibc version 2.23 or newer work best. On distributions with older glibc versions (e.g. CentOS 7), software will not run as the standard C library is missing some required symbols. Raven will need to be bootstrapped again to support those distros. This is likely to happen before too long, but we’re not there, yet.

Current Firefox ESR version (+ sakura and pcmanfm in the panel)

macOS (which might be supported soon), OpenBSD and NetBSD are not currently supported, nor is Linux with musl libc or µClibc. Also, Raven is currently amd64-only. ARM64 support is planned, and i386 may or may not happen, but neither is available now.

Current status

At this time Raven is probably most interesting for people who love tech and enjoy tinkering with *nix systems, as well as those who like the features and are OK with being early adopters. Yes, in general it’s ready for the latter. At least two people (including me) use Raven’s packages exclusively on one of their machines. I’d say it is ready as a daily driver (if you can live with the limited set of software available – or consider adding more ports). In fact I built a laptop with it that I use e.g. for on-call duty. Since that one is critical, it probably has to be considered “in production use”.

It’s possible to install various text-mode applications with Raven, but X11 is also available. You can choose from multiple window managers or from at least two desktop environments (Lumina and the ultra-light EDE). Xfce4 is partially available (i.e. the panel has already been ported). If you’re looking for web browsers, a current version of Firefox ESR (called “rustless-firefox”) can be installed, as well as Surf, a simple WebKit-based browser. The LibreOffice suite is available in its latest version, too. The same is true for the just-released Perl 5.28 and Python 3.7.
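If you want to try that browser, installing it via the bundled Pkg should be the usual one-liner. The package name below is simply taken from the port name mentioned above, so run a search first in case the repository names it slightly differently:

# pkg search firefox
# pkg install rustless-firefox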

Running Chocolate DooM and Chocolate Heretic

Oh, and if you’re into gaming… It’s not all just serious stuff. Yes, you can install and play DooM!

Conclusion

Ravenports is a fascinating project with lots and lots of possibilities. I had wanted to get into porting with FreeBSD for quite a while but hesitated as I’m not a programmer. Then again, I had been interested in package building for a long time and had played around with it on Arch Linux quite a bit. After my submissions to FreeBSD had been rotting in the bug tracker for months (and still are after almost a year), I chose to give Raven a try in the meantime.

I was already familiar with Pkg and had used Synth before, too. Bootstrapping Raven’s pkg and then installing stuff was as easy as expected. The same was true for building the ports myself. Then I did quite a bit of reading and wrote my first port. It didn’t take more than five minutes after I opened my pull request on GitHub before John responded – and the port was committed not much later. This was such a huge contrast that I decided to do more with Raven.

There was a learning curve, yes, but I received lots of help in getting started. I obviously liked the project enough to become a regular contributor and even got commit access to the ravensource repo later. Currently I’m maintaining just over 80 ports and I hope to write many more in the future. There have been some hard ports along the way (where I learned a lot about *nix), but lots of things are actually pretty easy once you get the hang of it.

Tongue-in-cheek: Make chaos or “make sense”!

If this post got you interested, just give it a try. Feel free to comment here and if you run into problems I’ll try to help. After this general overview of Raven the next post I plan to write will be on actually using it.

Modern-day package requirements

A little rant first: many thanks to the EU (and all the people who decide on tech-related topics without having any idea how tech actually works). Their GDPR is the reason I’ve been really occupied with work this month! Email being a topic that I’m teaching myself while writing the series of posts about it, I’ll have to get back to it as time permits. This means that for May I’m going to write about a topic that I’m more familiar with.

Benefits of package management

I’ve written about package management before, telling a bit about its history and then focusing on how package management is done on FreeBSD. The benefits of package management are so obvious that I don’t see any reason not to content myself with just touching on them:

Package management makes work put into building software re-usable. It helps you to install software and to keep it up to date. It makes it very easy to remove things in a clean manner. And package management provides a trusted source for your software needs. Think about it for just a moment and you’ll come up with more benefits.

Common package management requirements

But let’s take a look at the same topic from a different angle. What do we actually require our package systems to do? What features are necessary? While this may sound like a rather similar question, I assure you that it’s much less boring. Why? Because we’re looking at what we need – and it’s very much possible that the outcome actually is: No, we’re not using the right tool!

Yes, we need package management, obviously. While there’s this strange, overly colorful OS that cannot even get the slashes in directories right, we can easily dismiss that. We’re talking *nix here, anyway!

Ok, ok, there’s OmniOS with its KYSTY policy. That stands for “keep your software to yourself” and is how the users of said OS describe the fact that there are no official packages available for it. While it’s probably safe to assume that the common script kiddies on the web don’t know their way around on Solaris, I’m still not entirely convinced that this is an approach to recommend.

Going down that road is a pretty bold move, though. Of course it’s possible to manage your software stack properly. With a lot of machines and a lot of needed programs this will however turn into an abundance of work (maybe there are companies out there who enjoy paying highly qualified staff to carefully maintain software while others rarely spend more than a couple of minutes per day to keep their stuff up-to-date).

Also if you’re a genius who uses the method that’s called “It’s all in my head!” in the Linux from Scratch book, I’m not going to argue against it (except that this is eventually going to fail when you have to hand things over to a mere mortal when you’re leaving).

But enough of those really special corner cases. Let’s discuss what we actually require our package systems to provide! And let’s do so not from the perspective of a hobby admin but from a business-oriented one. There are three things that are essential and covered by just about any package system.

Ease of use

One of the major requirements we have today is that package management needs to be easy to use. Yes, building and installing software from source is usually easy enough on *nix today. However figuring out which configure options to use isn’t. Build one package without some feature and you might notice much later that it’s actually needed after all. Or even find that you compiled something in that’s getting in the way of something else later! Avoiding this means having to do some planning.

Reading (and understanding!) the output of ./configure --help probably isn’t something you’re going to entrust the newly employed junior admin with. Asking that person to just install MySQL on the new server will probably be OK, though. Especially since package managers will usually handle dependencies, too.

Making use of package management means that somebody else (the package maintainer) has already thought about how the software will be used in most cases. For you this means not having to hire and pay senior admins for work that a junior in your organization can do, too.
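To make that concrete: the self-built route starts with studying all the available knobs, while the packaged route is a single, delegable command (the package name below is just an illustration and differs per repository and version):

% ./configure --help | less
# pkg install mysql57-server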

Fast operations

Time is money, and while “compiling!” is a perfectly acceptable excuse for a dev, it shouldn’t be one for an admin who is asked why the web server still hasn’t been deployed on the new system.

Compiling takes time and uses resources. Even if your staff uses terminal multiplexers (which they should), and are thus able to compile stuff on various systems at the same time, customers usually want software available when they call – and not two hours later (because the admin got a bit confused by the twenty-something tmux sessions and got stuck on one task while a lot of the other compile jobs finished ages ago).

Don’t make your customers wait longer than necessary. Most requests can be satisfied with a standard package. No need to delay things where it doesn’t make any sense.

Regular (security) updates

It’s 2018 and you probably want that new browser version that mitigates some of the Spectre vulnerabilities on your staff’s workstations ASAP. And maybe you even have customers that are using Drupal, in which case… Well, you get the point.

While it does make sense to subscribe to security newsletters and keep an eye on new CVEs, it takes a specialist to maintain your own software stack. When you get word of a new CVE for a program that you’re using, that doesn’t necessarily mean the way you built the software makes it vulnerable. And perhaps you have a special use case where it does, but the vulnerability is not exploitable.

Again this important task is one that others have already done for you if you use packaged software from a popular repository. Of course those people are not perfect either and you may very well decide that you do not trust them. Doing everything yourself because you think you can do better is a perfectly legitimate way of handling things. Chances are however that your company cannot afford a specialist for this task. And in that case you’re definitely better off trusting the package maintainers than carelessly doing things yourself that you don’t have the knowledge for.

Special package management requirements

Some package managers offer special features not found in others. If your organization needs such a feature, this can even mean that a new OS or distribution is chosen for some job because of it. Also, repositories vary greatly in the amount of software they offer, in the software versions they hold and in the frequency of updates.

“Stability” vs. “freshness”

A lot of organizations prefer “stable”, well-tested software versions. In many cases I think of “stable” as a marketing word for “really old”. For certain use-cases I agree that it makes sense to choose a system where not much will change within the next decade. But IMO this is far less often the case than some decision makers may think.

The other extreme is rolling-release systems, which generally adopt the newest software versions after minimal testing. And yes, at one point there was even the “Arch server project” (if I remember the name correctly), which was all about running Arch Linux on a server. In fact this is not as bad an idea as it may seem. There are people who really live Arch and they’ll be able to maintain an Arch server for you. But I think this makes the most sense as a box for your developers who want to play with new versions of the software that you’re using way before it hits your actual dev or even prod servers.

Where possible I definitely favor the “deliver current versions” model. Not even because of the security aspect (patches are backported in the case of the “stable” repositories) but because of the newer features. It’s rather annoying if you want to make use of the jumphost ability of OpenSSH (for which a nice new way of doing it was introduced not too long ago) and then notice you can’t use it because there’s that stupid CentOS box with its old SSH involved!
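The feature I’m alluding to is ProxyJump, which appeared in OpenSSH 7.3. With a reasonably current client it’s a one-liner – or two lines of config – and simply unavailable on a box stuck with an ancient OpenSSH (hostnames are obviously just examples):

% ssh -J jumphost.example.com target.internal.example.com

Or permanently in ~/.ssh/config:

Host target.internal.example.com
    ProxyJump jumphost.example.com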

Number of packages

If you need one or a couple of packages that are not available (or too old) in the package repository of your OS or distribution, chances are that external repos exist or that the upstream project provides packages. That may be ok. However if you find that a lot of the software that you require is not available this may very well be a good reason to think about using a different OS or distribution.

A large number of packages in the repository increases the chance that you get what you need. Still, it can very well be the case that certain packages you require (and which are rather costly to maintain yourself) are only available in another repo.

Package auditing

Some package systems allow you to audit the installed packages. If security is very important for your organization, you’ll be happy to have your package tool recommend to “upgrade or deinstall” the installed version of some application because it’s known to be vulnerable.
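On FreeBSD, for instance, that check is a single command which fetches the current vulnerability database and compares it against everything installed:

# pkg audit -F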

Flexibility

What if you have special needs on some servers and require support for rarely needed functionality to be compiled into some software? With most package systems you’re out of luck. The best thing that you can do is roll your own customized package using a different name.

The ports tree on *BSD or portage on Gentoo Linux really show their power in this case, allowing you to just build the software easily and with the options that you choose.

Heterogeneous environments

So most of the time it makes perfect sense to stick to the standard repository for your OS or distribution. If you have special needs you’d probably consider another one and use the standard repo for that one. But what about heterogeneous environments?

Perhaps your database product only runs on, say, CentOS. You don’t have much choice here. However, a lot of customers want their stuff hosted on Linux, but they demand newer program versions. So a colleague installed several Ubuntu boxes. And another colleague, a really strange guy, slipped in some FreeBSD storage servers! When the others found out that this was not even Linux and started protesting (because “BSD is dying”), those servers were already running too damn well to be replaced with something that doesn’t have equally good ZFS support.

A scenario like that is not too uncommon. If you don’t do anything about it, this might lead to “camps” among the employees; some of them are sure that CentOS is so truly enterprise that it’s the way to go. And of course yum is better than apt-get (and whatever that BSD thing offers – if anything). Some others laugh at that because Ubuntu is clearly superior and using apt-get feels a lot more natural than having to use yum (which is still better than that BSD thing which they refuse to even touch). And then there’s the BSD guy who is happy to have a real OS at his hand rather than “kernel + distro-chosen packages”.

In general, if you are working for a small organization, every admin will have to be able to work with each system that is being used. Proper training for all package systems is probably expensive, and thus managers will quite possibly be reluctant to accept more than two package systems.

Portability

There’s a little-known (in the Linux community) solution to this: Pkgsrc (“package source”). It’s NetBSD’s package management system. But with portability probably being the most important goal of the NetBSD project, it’s portable, too!

Pkgsrc is available for many different platforms. It runs on NetBSD, of course. But it runs on Linux as well as on the other BSDs and on Solaris. It’s even available for commercial UNIX platforms and various exotic platforms.

Because of this very nature, Pkgsrc may be one answer to your packaging needs in heterogeneous environments. It can provide a unified means of package management across multiple platforms. It rids you of the version-jungle headache that comes with using different repositories for different platforms. And it’s free and open source, too!
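As a rough sketch of what that looks like on a non-NetBSD system (paths and the example package are illustrative, adapt them to your setup): unpack the Pkgsrc tree to /usr/pkgsrc, bootstrap it, then build packages with the bmake it installed:

# cd /usr/pkgsrc/bootstrap
# ./bootstrap --prefix /usr/pkg
# cd /usr/pkgsrc/www/curl
# /usr/pkg/bin/bmake install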

Is it the only solution out there? No. Is it the best one? That certainly depends on what you are looking for specifically. But it’s definitely something that you should be aware of.

What’s next?

The next post will be about a relatively new alternative to traditional package management systems that tries to deliver all the strong points in one system while avoiding their weaknesses!

FreeBSD: Building software from ports (2/2)

My previous post discussed what ports are, where they can be found on FreeBSD and what the files a port is composed of look like. This post will now detail how to use ports to build software on FreeBSD (the other BSDs have ports trees that work somewhat similarly but are not identical – there are important differences!).

Packages and ports: A word of warning

The ports system works hand in hand with FreeBSD’s package manager Pkg. It makes little difference whether some software on your machine was installed via a package or directly from ports – packages are in fact built from ports! Still, it is not really recommended to mix packages and ports. In past times it was strongly discouraged. Things have changed since then. I’ve done it a lot – and mostly got away with it. Don’t rely on it, though, especially if you’re new to the whole topic. Feel free to do it on a test system and be completely happy – or face subtle and annoying breakage. You cannot know up front.

What’s the deal here? Modern software is a complex thing. Most programs rely on other programs or external libraries. A lot of programs can be configured at run time in certain ways. There are however decisions about program functionality that have to be made at compile time. The ports system allows you to build software with compile-time options other than the default. Pre-compiled packages have no chance to know that you chose to deactivate an option when you built a library yourself that they make use of. They assume that this feature is present (it was available on the system the package was built on, after all!). And what can one poor program do in that case? Crash, explode, malfunction… A lot of things.

And then there’s the problem of mixing versions which can lead to all kinds of fun. If you stick with either ports or packages, you always have a consistent system with versions that are known to play together well (as long as the maintainers do their job well – we’re all humans and errors do occur).

Just keep that in mind when thinking about mixing programs installed from packages and ports on one system. You can do that. But it doesn’t mean you should. Enabling more options is generally safer than removing ones set by default. It can still have consequences. This is Unix though. Do whatever you see fit – and claim the responsibility. Your choice.

Most basic ports building

Building software from ports is extremely easy. Go to the directory of a port and type make. Yes, that’s all! Let’s assume the port has no unsatisfied dependencies. The ports system will then check whether the source code tarball is present in /usr/ports/distfiles. If it isn’t, it will automatically be downloaded. Then the source code is extracted, everything is prepared for compilation, and the software is compiled.

Building the ‘pkg’ port

On my fresh example system I build the Pkg manager from ports first – it’s needed for every other port anyway. Once everything has finished I get my shell back.

Building of Pkg completed

Installing the program is just as easy: Use make install

Installing the newly built port

That’s it, Pkg is now installed. We’re basically done with that port. However there’s still the “work” directory left over from the building process. To tidy up our port’s directory we can issue make clean.

Cleaning up after the build
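In plain commands, the whole sequence for the Pkg port looks like this:

# cd /usr/ports/ports-mgmt/pkg
# make
# make install
# make clean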

Dependency handling

On to a just slightly more complex example. I want to build and install an old version of the Lua interpreter, which depends on another port, libedit. Of course I could build devel/libedit first and then lang/lua51. In that case it wouldn’t be so bad. But if you think of larger programs with hundreds of dependencies, that approach would be a nightmare.

So what to do about it? Well, nothing actually. The ports system takes care of it automatically! Just have it build Lua and it will figure out that it has to build the dependency first.

Building, installing and cleaning up in one command

The parameters to make that we used above are called make targets, BTW, and can be combined. That means it’s perfectly fine to issue make install clean together as you can see in the picture above.

Dependencies are handled automatically

The clean make target is also applied to all ports that were built as a dependency for the current port. Things like this make ports very convenient to use.
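So the Lua example boils down to a single command – libedit is built and installed automatically along the way:

# cd /usr/ports/lang/lua51
# make install clean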

More on make and targets

Make targets can depend on other make targets. When you issue make install these are the targets that are actually run:

  • make config (more on that in a minute)
  • make fetch (fetch all files needed to build the port)
  • make checksum (check integrity of downloaded file(s))
  • make depends (check for missing dependencies and build/install those)
  • make extract (extract distfile(s) for the port)
  • make patch (apply patches for this port, if any)
  • make build (actually build the port)
  • make install (install the newly-built program)

If you type make checksum for example, all targets up to and including that one will run (that is config, fetch and checksum in that case). Running just make without any target will assume the default target which is equivalent to make build.
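For example, to only download and verify the distfile of a port without building anything, run this from the port’s directory:

# make checksum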

Also make will take an argument to look for the Makefile in another directory if you wish. So instead of doing e.g. this:

# cd /usr/ports/archivers/bzip2
# make install clean

you could also simply do this:

# make -C /usr/ports/archivers/bzip2 install clean

You’re in control: Ports options

So far it’s all nice and well, but there’s no real advantage to using ports instead of packages. May I introduce ports options? Let’s say we want to build BASH. If you issue make in shells/bash, this is what happens:

Build options for BASH

The port ports-mgmt/dialog4ports is fetched and installed. It’s so small that you might miss it but it’s quite important. It’s needed to display the menu in the picture above which lets you set various options for the port.

You can now e.g. choose to not install the documentation if you’re short on space on a small or embedded system (sure, you wouldn’t actually compile on such a system, but that’s only an example, right?). If you don’t want BASH to support any foreign languages, deselect NLS. In case you feel that BASH’s built-in help is useless (did you ever issue the help command when you ran BASH?), you can cut that feature. Things like that.

If you see the option configuration for a port for the first time, you see the default configuration. In general it’s a good idea to leave options alone if you’re in doubt about what they do (do a little research if you have the time). Of course you’re also free to experiment with them. It’s your system.

Once you’re happy, accept your selection and the source tarball is being fetched, extracted, etc. You know the score.

Build options for bison

But what’s that? Another configuration menu (for bison)? And another (m4) and another (texinfo), etc… That’s 8 menus for a rather basic program like BASH! And worse: the build process will run and build dependencies, and whenever a port with options is reached, the process is interrupted and prompts the user.

Now imagine you’re building a whole graphical desktop like MATE… Currently even the basic desktop would build no less than 338 dependency packages on a fresh system! And there are quite a few ports on the list which build rather heavy software that takes its time compiling. It would totally make sense to let it build over night, or at least not require you to keep staring at the screen waiting for the next options selection to confirm, right?

Recursive operations

That’s exactly why recursive operations are supported by the ports system. The standard make target that was implicitly run to open the options dialog is make config. The recursive variant, which runs the same on each and every port that’s listed as a dependency of the current port, is make config-recursive.

If you want to build MATE as mentioned in the previous example, that would start a true marathon of options for you to configure. However it’s still a lot better to be able to do this up front so that the build process can run uninterruptedly afterwards.

Oh, and don’t be surprised if you went through it all only to find that still another configuration dialog pops up later! Why? Most likely you enabled an option on some package that made it depend on another package that’s not a dependency by default. And that package may need to have its options configured, too. So if you changed any options it makes sense to run make config-recursive again until no more new option dialog windows are displayed!
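Using the MATE example: if I recall the origin correctly, the meta-port lives in x11/mate, so answering all options dialogs up front would look like this:

# make -C /usr/ports/x11/mate config-recursive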

Recursively fetching distfiles for security/sudo

You can also do make fetch-recursive to fetch the distfiles for the current port and all dependencies. Again: Keep in mind that enabling more options may lead to new dependencies. If you want to make sure that you have all the distfiles, you might want to run make fetch-recursive again after changing ports options.
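Matching the screenshot above, pre-fetching everything for sudo and its dependencies would be:

# make -C /usr/ports/security/sudo fetch-recursive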

Other things to know

Wondering where all the options are saved? They are stored in text files in /var/db/ports/category_portname. But there’s no need to edit or delete them by hand; if you want to get rid of them, there’s make rmconfig to do that. make rmconfig-recursive also exists if you feel like blowing away a huge number of them.

Ports options in /var/db/ports

Another thing that comes in handy is make build-depends-list which will show you a list of ports that will be built as build dependencies for your current port. If you want to see the runtime dependencies you would use make run-depends-list. And then there’s also make all-depends-list which will show you each and every port that would be installed if you chose to build the current port.
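For example, using BASH again:

# make -C /usr/ports/shells/bash build-depends-list
# make -C /usr/ports/shells/bash all-depends-list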

Showing port dependencies

You should also know that you can deinstall a port by using make deinstall. Yes, it is also possible to remove the package using pkg delete but that will lead to a problem. The ports infrastructure keeps track of installed ports separately and Pkg does not know anything about this. So even if your package is removed, the Ports infrastructure will insist that it is still installed and there’s something very wrong with your system!

Now what to do if you have that case? Use make reinstall to install the package again even though ports thinks that it’s already installed.
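Both targets are run against the port’s directory like everything else, e.g.:

# make -C /usr/ports/shells/bash deinstall
# make -C /usr/ports/shells/bash reinstall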

More on ports?

To be honest, there’s quite a bit more to ports than I could cover here. You may want to man 7 ports to see what other targets are available and what they do. Also we haven’t even touched how to keep your system updated when using ports!

The ports infrastructure is a great means of installing customized programs on your system. It’s quite easy to use as you’ve seen. But things can be made even easier – which is why there are helper tools available. I will write a follow-up article covering those (not the next one, though). But for now enjoy all of those new possibilities with software on your FreeBSD machines!

FreeBSD: Building software from ports (1/2)

In my previous two posts I wrote about using Pkg, FreeBSD’s package manager.

Pre-built binary packages are convenient to use, but sometimes you need some more flexibility, want an application that cannot be distributed in binary form due to license issues, or have some other requirements. Building software by hand is certainly possible – but with all the things involved, this can be a rather tedious process. It’s also slow, error-prone, and there’s often no clean way to get rid of that stuff again. FreeBSD Ports to the rescue!

This first part is meant as a soft introduction to FreeBSD’s ports, assuming no prior knowledge (if I fail to explain something, feel free to comment on this post). It will give you enough background information to understand ports well enough to start using them in the next article.

What “Ports” are

When programmers talk about porting something over, what they originally meant is this: Take an application that was written with one processor architecture in mind (say i386) and modify the source so that it runs on another (arm64 for example) afterwards. The term “porting” is also used when modifying the source of any program to make it run on another OS. The version that runs on the other architecture/OS is called a port of the original program to a different platform.

FreeBSD uses the term slightly differently. There’s a lot of software written e.g. for Linux that will build and work on FreeBSD just fine as it is. Even though it does not require any changes, that software might still be ported to FreeBSD. So in this case “porting” does not mean “making it work at all” but making it easily available. This is done by creating a port for the program. That term doesn’t mean a variant of the source code in this case, but rather a means to give you easy access to that software on FreeBSD.

So what is a port in FreeBSD? Actually, a port is a directory with a bunch of files in it. The heart of it is one file that is basically a recipe, if you will. That recipe contains everything needed to build and install the port (and thus have the application installed on your machine in the end). Following this metaphor you could think of all the ports as a big cookbook. Formally it is known as the Ports Collection. All those files in your filesystem related to ports are referred to as the ports tree.

How to get the Ports tree

There are several options for obtaining a copy of the ports tree. When you install FreeBSD you can decide whether or not to install it, too. I usually don’t do that on systems that use binary packages only. It wastes only about 300 MB of space, but more importantly it consists of almost 170,000 files (watch your inodes on embedded devices!). Take a look at /usr/ports: if that directory is empty, your system is currently missing the ports tree.

The simplest way to get it is by using portsnap:

# portsnap fetch extract

If you want to update the tree later, you can use:

# portsnap fetch update

Another way is to use Subversion. This is more flexible: with portsnap you always get the current tree, while Subversion also allows you to check out older revisions. If you plan to become a ports developer, you will probably want to use Subversion for tools like svn diff. If you just want to use ports, portsnap should suffice. All currently supported versions of FreeBSD contain a light-weight version of Subversion called svnlite.

Here’s how to checkout the latest tree:

# svnlite checkout https://svn.freebsd.org/ports/head /usr/ports

If you want to update it later run:

# svnlite update /usr/ports

Old versions of the tree

You normally shouldn’t need these, but it’s good to know that they exist. Using Subversion you can also retrieve old trees. Be sure that /usr/ports is empty (including Subversion’s dot directories) or Subversion will see that there’s already something there and won’t do the checkout. If, for example, you want the ports tree as it existed in 2016Q4, you can retrieve it like this:

# svnlite checkout https://svn.freebsd.org/ports/branches/2016Q4 /usr/ports

There are also several tags available that allow you to get certain trees. Maybe you want to see which ports were available when FreeBSD 9.2 was released. Get the tree like this:

# svnlite checkout https://svn.freebsd.org/ports/tags/RELEASE_9_2_0 /usr/ports

And if you need the last tree that is guaranteed to work with 9.x there’s another special tag for it:

# svnlite checkout https://svn.freebsd.org/ports/tags/RELEASE_9_EOL /usr/ports

Keep in mind though that using old trees is risky because they contain program versions with vulnerabilities that have since been found! Also mind that it’s NOT a smart thing to simply get the tree for RELEASE_7_EOL because it still holds a port for PHP 5.2 and you thought that it would be cool to offer your customers as many versions as possible. Yes, it may be possible that you can still build it if you invest some manual work. But no, that doesn’t make it a good idea at all.

Oh, and don’t assume that old ports trees will be of any use on modern versions of FreeBSD! The ports architecture changed quite a bit over time, the most notable change being the replacement of the old pkg_* tools with the new Pkg. Ports older than a certain time definitely won’t build in their old, unmodified state today (and I say it again: You really shouldn’t bother unless you have a very special case).

Port organization

Take a look at the contents of /usr/ports on a system that has the tree installed. You will find over 60 directories there. There are a few special ones like distfiles (where tarballs with the programs’ source code get stored – this one might be missing initially) or Mk, which holds include files for the ports infrastructure. The others are categories.

If you’re looking for a port for Firefox, that will be in www. GIMP is in graphics, and it’s probably no surprise that Audacious (a music player) can be found in audio. Some programs’ categories will be less obvious. LibreOffice is in editors, which is not so bad. But help2man, for example, is in misc and not in converters or devel as at least I would expect if I didn’t know. In general, however, after a while of working with ports you will have a pretty good chance of guessing where things are.

Say we are interested in the port for the window manager Sawfish for example. It’s located in /usr/ports/x11-wm/sawfish. Let’s take a closer look at that location and take it apart:

/usr/ports is the “ports directory”.
x11-wm (short for X11 window managers) is the category.
sawfish is the individual port’s name.

When referring to where a port lives, you can omit the ports directory since everybody is assumed to know where it is. The important information when identifying a port is the category and the name. Together those form what is known as the port origin (x11-wm/sawfish in our case).
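By the way, origins are understood by Pkg as well: if you know where a port lives, you can use the origin instead of the package name. A quick sketch (assuming a binary package for Sawfish exists in your configured repository):

# pkg install x11-wm/sawfish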

How to find a port in the tree

There are multiple methods to find out the origin of the port you are looking for. Probably the simplest one is using whereis. If we didn't know that sawfish lives in x11-wm/sawfish, we could do this:

% whereis sawfish
sawfish: /usr/ports/x11-wm/sawfish

This only works, however, if you know the exact name of the port. And there's a little more to it: Sometimes the name of a port and its package differ! This is often the case for Python-based packages. I have SaltStack installed, for example; it's a package called py27-salt:

% pkg info -x salt
py27-salt-2017.7.1_1

If we were to look for that, we wouldn’t find it:

% whereis py27-salt
py27-salt:

So where is the port for the package?

% pkg info py27-salt
py27-salt-2017.7.1_1
Name           : py27-salt
[...]
Origin         : sysutils/py-salt
[...]

Here you can see that the port's name is py-salt! The "27" gets added when the package is created and reflects the version of Python that it's built against. You may also see some py3-xyz ports. In those cases the name reflects that the port cannot be built with Python 2.x. The package will still be called py36-xyz, though (or whatever the default Python 3.x version is at that time)!

When discussing package management I recommended FreshPorts, and it can be useful when working with ports, too. Search for a program's name and it might make it easier for you to find both the package name and the port origin!
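The ports tree itself also offers a search target. It relies on the INDEX file, which you can download first if you haven't built one yourself – something along these lines:

# cd /usr/ports
# make fetchindex
# make quicksearch name=sawfish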

What a port looks like

Let’s take a look at the port for the zstd compression utility:

% ls /usr/ports/archivers/zstd/
distinfo	Makefile	pkg-descr	pkg-plist

So what have we here? The simplest file is pkg-descr. Each package has a short and a long description – this file contains the latter: a detailed description that should give you a good idea whether this port would satisfy your needs:

% cat /usr/ports/archivers/zstd/pkg-descr
Zstd, short for Zstandard, is a real-time compression algorithm providing
high compression ratios.  It offers a very wide range of compression vs.
speed trade-offs while being backed by a very fast decoder.  It offers
[...]

Then there's a file called distinfo. It lists all files that need to be downloaded to build the port (usually the program's source code). It also contains a checksum and size for each file to make sure that the right file is being used (an archive could get corrupted during the transfer – or you could even be handed an archive that somebody tampered with!):

% cat /usr/ports/archivers/zstd/distinfo 
TIMESTAMP = 1503324578
SHA256 (facebook-zstd-v1.3.1_GH0.tar.gz) = 312fb9dc75668addbc9c8f33c7fa198b0fc965c576386b8451397e06256eadc6
SIZE (facebook-zstd-v1.3.1_GH0.tar.gz) = 1513767
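These checks happen automatically during a build, but you can also trigger just the download-and-verify step yourself if you want to see it in action (run as root so the distfile can be stored under /usr/ports/distfiles):

# cd /usr/ports/archivers/zstd
# make checksum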

There’s usually also pkg-plist. It lists all the files that are installed by the port:

% cat /usr/ports/archivers/zstd/pkg-plist 
bin/unzstd
bin/zstd
bin/zstdcat
[...]
lib/libzstd.so.%%PORTVERSION%%
libdata/pkgconfig/libzstd.pc
man/man1/unzstd.1.gz
man/man1/zstd.1.gz
man/man1/zstdcat.1.gz

And finally there's the Makefile. This is where all the magic happens. If you're a programmer or have built software from source before, chances are that you're at least somewhat familiar with a tool called make. It processes Makefiles and does what they tell it to. While it's most often used to compile software, it can actually be used for a wide variety of tasks.

If you don't have at least some experience with them, Makefiles look pretty obscure and writing them seems like a black art. If you've ever looked at a complicated Makefile, you may be worried to hear that using ports means using make. Don't be. The people who take care of the ports infrastructure are the ones who really need to know all the nuts and bolts of make. They have already solved the common tasks so that the porters (the people who create the actual ports) can rely on their work. This is done by including other Makefiles, which hides away all the scariness. And for you as a user things are even simpler, as you can just use what others created for you!

Let’s take a look at the Makefile for our example port:

% cat /usr/ports/archivers/zstd/Makefile 
# Created by: John Marino <marino@FreeBSD.org>
# $FreeBSD: head/archivers/zstd/Makefile 448492 2017-08-21 20:44:02Z sunpoet $

PORTNAME=	zstd
PORTVERSION=	1.3.1
DISTVERSIONPREFIX=	v
CATEGORIES=	archivers

MAINTAINER=	sunpoet@FreeBSD.org
COMMENT=	Zstandard - Fast real-time compression algorithm

LICENSE=	BSD3CLAUSE GPLv2
[...]
post-patch:
	@${REINPLACE_CMD} -e 's|INSTALL_|BSD_&|' ${WRKSRC}/lib/Makefile ${WRKSRC}/programs/Makefile

.include <bsd.port.mk>

Now that doesn't look half bad for a Makefile, does it? In fact it's mostly just defining variables! The only line that looks somewhat complex is the post-patch target (which is also less terrifying than it first looks – if you know sed you can surely guess what it'll do).
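In case you're curious: the & in a sed replacement stands for whatever the pattern matched, so the expression above rewrites INSTALL_ into BSD_INSTALL_ in the two listed Makefiles. A rough stand-alone equivalent (purely illustrative; you'd run it from the extracted source directory under work/) would be:

% sed -i.bak -e 's|INSTALL_|BSD_&|' lib/Makefile programs/Makefile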

There can actually be more files in some ports. If FreeBSD-specific patches are required to build the port, those are included in the ports tree. You can find them in a sub-directory called files located in the port’s directory. Here’s an example:

% ls /usr/ports/editors/vim/files/
patch-src-auto-configure        vietnamese_viscii.vim
patch-src-installml.sh          vimrc

The patches there are named after the files that they apply to. Every patch in the files directory (the files whose names start with patch-) is applied automatically when the port is built; other files in there, like vimrc in this example, are used by the port's Makefile as needed.

What’s next?

Alright. With that we've covered a basic overview of what ports are. The next post will show how to actually use them to build and install software.

FreeBSD package management with Pkg (2/2)

The previous article covered basic operations with FreeBSD’s Pkg tool. This second part will deal with some more advanced (or rather intermediate, actually) functionality.

Good code travels well

My previous two articles have been linked to from the DragonFly Digest (a very valuable resource for BSD and IT topics in general that I've been reading for years now – to which I'd like to say "thanks!") again. Justin Sherrill pointed out that everything applies to DragonFly BSD as well – they adopted Pkg quite a while ago. And in fact you benefit from knowing your way around Pkg in a lot of places:

FreeBSD, obviously, and a lot of FreeBSD-derived operating systems like OPNsense and HardenedBSD, as well as desktop-oriented offspring like GhostBSD and TrueOS.

But as mentioned before, DragonFly BSD uses it, too. And thanks to the new (and extremely exciting, IMO!) Ravenports project it has already come to Linux and will be available on even more platforms in the future! So getting familiar with it is certainly not a waste of time.

Package versioning

Before we start updating packages, let's take a look at the versioning scheme. The way FreeBSD versions its packages can be a bit confusing when you first see it. Here's a sample package with a rather complicated version string:

# pkg search opensmtpd | grep OpenBSD
opensmtpd-5.9.2p1_3,1          Security- and simplicity-focused SMTP server from OpenBSD

opensmtpd-5.9.2p1_3,1 – what does that all mean? Well, first we have the package name: opensmtpd, followed by a minus. Then there’s the upstream version of the program, 5.9.2p1 in this case.

Then there's the underscore and another number: _3 in this case. This indicates that our package is at "revision 3". Any new package starts at revision 0. If a port is revised (perhaps to correct a mistake, add more configure options, etc.) without the upstream version changing, the revision number is bumped. So this port has been revised three times without a change of the actual upstream version.

And finally, separated by a comma, we have what is called the "epoch". It is used in cases where the upstream versioning scheme changes. Any package with an epoch of 1 is considered newer than a package without one. Even higher epoch numbers are considered newer still, but that is rare. When do you need this? Let's assume some project released version 7.2017 but then decided that it would be a good idea to release the next version as 5.0. To Pkg it looks like the first one is newer (as it has the higher version number). In such a case you set an epoch to make Pkg understand that the other one is in fact the more up-to-date package.
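If I'm not mistaken, you can let Pkg itself do such comparisons: pkg version -t takes two version strings and should print <, = or > to tell you which one it considers newer. For our examples it would look roughly like this:

% pkg version -t 5.9.2p1_3,1 5.9.2p1_2,1
>
% pkg version -t 7.2017 5.0,1
<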

Updating packages

I covered updating the repository information before. Update the actual packages with pkg upgrade:

# pkg upgrade
Updating Synth repository catalogue...
Synth repository is up to date.
All repositories are up to date.
Checking for upgrades (30 candidates): 100%
Processing candidates (30 candidates): 100%
Checking integrity... done (0 conflicting)
The following 30 package(s) will be affected (of 0 checked):

Installed packages to be UPGRADED:
        xinit: 1.3.4,1 -> 1.3.4_1,1
        xerces-c3: 3.1.4 -> 3.2.0_2
        virtualbox-ose: 5.1.26 -> 5.1.26_1
        vim: 8.0.0962 -> 8.0.1035
        sudo: 1.8.20p2_3 -> 1.8.21p1
        sqlite3: 3.20.0_2 -> 3.20.1
        rubygem-net-ssh: 4.1.0,2 -> 4.2.0,2
        rubygem-multi_json: 1.12.1 -> 1.12.2
        ruby23-gems: 2.6.12 -> 2.6.13
        pulseaudio: 10.0_4 -> 11.0
        pciids: 20170727 -> 20170825
        p11-kit: 0.23.7 -> 0.23.8
        open-vm-tools: 10.1.5_1,2 -> 10.1.10,2
        nano: 2.8.6 -> 2.8.7
        mesa-libs: 17.1.7 -> 17.1.8
        mesa-dri: 17.1.7 -> 17.1.8
        libreoffice: 5.3.5_1 -> 5.3.6
        libidn2: 2.0.3 -> 2.0.4
        libgcrypt: 1.8.0 -> 1.8.1
        libdrm: 2.4.82,1 -> 2.4.83,1
        hunspell: 1.6.1_1 -> 1.6.2
        harfbuzz-icu: 1.4.8 -> 1.5.1
        harfbuzz: 1.4.8 -> 1.5.1
        gdk-pixbuf2: 2.36.6 -> 2.36.9
        e2fsprogs: 1.43.5 -> 1.43.5_1
        doas: 6.0p0 -> 6.0p1
        chromium: 60.0.3112.101 -> 60.0.3112.113
        atril: 1.18.0_1 -> 1.18.1

Installed packages to be REINSTALLED:
        keybinder-0.3.1 (options changed)
        apache-xml-security-c-1.7.3 (needed shared library changed)

Number of packages to be upgraded: 28
Number of packages to be reinstalled: 2

The process will require 2 MiB more space.

Proceed with this action? [y/N]:

The "Installed packages to be UPGRADED" section is pretty obvious: there's a newer version available for each of them. But there are also two packages in this example that are being reinstalled even though no new version is available. Pkg gives the reason for this in parentheses:

Keybinder will be reinstalled because it was compiled with different compile-time options than before (more about this in the next post). The second one, apache-xml-security-c, depends on xerces-c3 – which is in the list of upgrades – and is therefore rebuilt against the new version of that library.

There are other reasons for packages to be reinstalled; if you upgraded your OS from one major version to another, the reason might be "ABI has changed". It's also possible that some packages will be deinstalled during an upgrade. This is usually because they conflict with another package that is to be installed. All of which means: Do look at what the update is going to do! There is a chance that it would do something that you didn't intend.
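If you want to see what an upgrade would do without touching anything yet, Pkg has a dry-run mode for exactly that:

# pkg upgrade -n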

Will this update cause me trouble?

You can never know for sure. But there is a way to learn about known issues beforehand. For your important applications it is a good idea to read the so-called UPDATING information. These are short texts that might contain a heads-up that is critical to know about. To view them, use pkg updating. Here's an old example showing how bad it could have been to miss it:

# pkg updating apache22                 
20140713:                                          
  AFFECTS: users of www/apache22                   
  AUTHOR: ohauer@FreeBSD.org                       
                                                   
  The default version was changed from www/apache22 to www/apache24,                                   
  pre-build apache modules and web applications will also reflect this!                                
                                                   
  In case ports are build by yourself and apache22 is required                                         
  use the following command to keep apache22 as default.                                               
                                                   
  # echo "DEFAULT_VERSIONS+=apache=2.2" >> /etc/make.conf

Having missed that one would have had very bad effects… For such reasons it's good practice to read the UPDATING info. You don't strictly have to, and you will probably get away without it for quite some time. But it's there for your benefit. So if you choose to ignore it, don't complain when an update finally bites you while you're off guard!
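Tip: the output can get long on a system that hasn't been updated in a while. You can restrict it to entries newer than a given date and to specific packages, something like this (the date and package names are just examples):

# pkg updating -d 20170801 libreoffice chromium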

Blocking updates

Let's stick with the previous example and say that we want to do the update – but LibreOffice should not be touched because we're currently working on an important document and don't want to risk layout breakage (minor updates should be no problem, but bigger updates are known to sometimes cause trouble). What to do in that case?

Let’s lock the package using pkg lock:

# pkg lock libreoffice
libreoffice-5.3.5_1: lock this package? [y/N]: y
Locking libreoffice-5.3.5_1

Attempting the upgrade again, Pkg should now exclude LibreOffice from the upgrade candidates and leave it alone. There are a few good reasons to lock a package – and a lot of bad ones. Resort to locking packages when necessary, but don't trifle with it, because you're effectively cutting yourself off from updates to some packages. Those could have dependencies. Probably dependencies that they share with other packages. You can see how this gets a lot bigger than "just that one package" rather quickly.

Also, if you decide to use locking, make sure to look for locked packages now and then and consider whether each lock is still needed! If not, release it. But how do you find out which packages are locked? Pkg info can help us out:

# pkg info -k -a | grep yes             
libreoffice-5.3.5_1            yes
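Depending on your version of Pkg there should also be a shortcut for this – pkg lock itself can list the currently locked packages:

# pkg lock -l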

Unlocking works just like you’d probably expect it to:

# pkg unlock libreoffice                                                                                                                                                                           
libreoffice-5.3.5_1: unlock this package? [y/N]: y
Unlocking libreoffice-5.3.5_1

Package comments

We’ve locked LibreOffice above – but how do we remember in four months or so why it was locked? This is what we can use an annotation for. Set one with pkg annotate:

# pkg annotate -A libreoffice locked-pkgs "This package was locked on 09/10 until I finally finish the manuscript for my fantasy novel!"                                                           
libreoffice-5.3.5_1: Add annotation tagged: locked-pkgs with value: This package was locked on 09/10 until I finally finish the manuscript for my fantasy novel!? [y/N]: y
libreoffice-5.3.5_1: added annotation tagged: locked-pkgs

The argument -A adds an annotation to the package named next. "locked-pkgs" is a tag – you could call it whatever you want. And the last field is the actual comment string.

Using pkg info with the package name will display the comment among a lot of other information. But it might make more sense to look for all packages that carry an annotation with a certain tag:

# pkg annotate -a -S locked-pkgs
libreoffice-5.3.5_1: Tag: locked-pkgs Value: This package was locked on 09/10 until I finally finish the manuscript for my fantasy novel!

If you no longer need the annotation, delete it like this:

# pkg annotate -D libreoffice locked-pkgs                                                                                                                                                          
libreoffice-5.3.5_1: Delete annotation tagged: locked-pkgs? [y/N]: y
libreoffice-5.3.5_1: Deleted annotation tagged: locked-pkgs

Are those updates important?

Some updates mean new features, others fix critical security holes. How are you supposed to know? The easy way is to ask Pkg! Use pkg audit and it will tell you about known vulnerabilities in the software installed on your system:

# pkg audit
libgcrypt-1.8.0 is vulnerable:
libgcrypt -- side-channel attack vulnerability
CVE: CVE-2017-0379
WWW: https://vuxml.FreeBSD.org/freebsd/22f28bb3-8d98-11e7-8c37-e8e0b747a45a.html

chromium-60.0.3112.101 is vulnerable:
chromium -- multiple vulnerabilities
CVE: CVE-2017-5120
CVE: CVE-2017-5119
CVE: CVE-2017-5118
CVE: CVE-2017-5117
CVE: CVE-2017-5116
CVE: CVE-2017-5115
CVE: CVE-2017-5114
CVE: CVE-2017-5113
CVE: CVE-2017-5112
CVE: CVE-2017-5111
WWW: https://vuxml.FreeBSD.org/freebsd/e1100e63-92f7-11e7-bd95-e8e0b747a45a.html
[...]
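The known vulnerabilities are matched against a local copy of FreeBSD's VuXML database. If that copy is missing or stale, you can tell Pkg to fetch a fresh one before auditing:

# pkg audit -F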

No way back?

Are you not feeling completely confident about an update? Does your customer demand “a way back” in case something goes wrong? You can use pkg create to package already installed software:

# pkg create chromium-60.0.3112.101
Creating package for chromium-60.0.3112.101

In this example I've packaged Chromium before updating so that I could reinstall the old version. Keep in mind, though, that this is just an example. If dependencies changed as well, you might not be able to use the old version even if you reinstall it! If you want to be really, really cautious, you can use pkg create -a to create packages of all the software currently installed on your system!
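A minimal sketch of that all-packages variant – the output directory is just an example, and -o is only needed if you want the results somewhere other than the current directory:

# mkdir -p /var/backups/packages
# pkg create -a -o /var/backups/packages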

By default the package(s) are created in the current directory. I just deinstalled Chromium after creating the package and now want it back. To install software directly from a package file (and not from a repo), use pkg add:

# pkg add chromium-60.0.3112.101.txz
Installing chromium-60.0.3112.101...
Extracting chromium-60.0.3112.101: 100%
Message from chromium-60.0.3112.101:
For correct operation, shared memory support has to be enabled
in Chromium by performing the following command as root :

sysctl kern.ipc.shm_allow_removed=1

To preserve this setting across reboots, append the following
to /etc/sysctl.conf :

kern.ipc.shm_allow_removed=1

Finding the package a file belongs to

In many cases you can probably tell from the path and name of a file which package it belongs to. But sometimes you may wonder: Where does this come from? This is where pkg which really comes in handy. Let's pick a file with a non-obvious name and pretend we don't know what it is. We better ask pkg where it belongs:

% pkg which /usr/local/etc/drirc     
/usr/local/etc/drirc was installed by package mesa-dri-17.1.7

Ah, mesa! We better leave that one alone.

Repositories

If you looked closely at the output of my upgrade command, you will have seen mention of a repo called Synth. I'll cover that in a later post. But there is something you might want to know about the ordinary repos, too. Modern FreeBSD provides two package repositories: quarterly and latest. The latter always holds the newest packages; the former gets version updates every three months and only security fixes in between. The quarterly repository is a good choice for people who don't need the newest software at all times but prefer a slower-moving environment. Since version 10.2 quarterly has been the default.

If you want to use the packages from latest, you have to configure pkg accordingly. Take a look at the file /etc/pkg/FreeBSD.conf to get an idea of what a repo configuration looks like. Then create the necessary directory and another configuration file to override the default:

# mkdir -p /usr/local/etc/pkg/repos
# vi /usr/local/etc/pkg/repos/FreeBSD.conf

Put the following lines in that file:

FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
}

Now use pkg update to refresh the repository database.
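To make sure the switch takes effect right away, you can force a full refresh of the catalogue:

# pkg update -f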

What’s next?

There's a lot more that Pkg can do – and we haven't even touched its main configuration! But these two posts were just meant to introduce you to FreeBSD package management (and chances are that you already know more now than many admins who only occasionally use FreeBSD). I might or might not write about more features of Pkg in the future. But the next stop now: the ports tree.

Edit: I got the patch level wrong with the version schema as leper pointed out in the comments. The wrong claim was removed.

FreeBSD package management with Pkg (1/2)

FreeBSD is a server operating system popular among more experienced administrators. It's also often used in appliances and embedded products, and it makes a nice desktop, too, if you are a bit more proficient with it. Sure, you don't read about it all the time. That's because of two things: 1) Linux usually absorbs most of the attention, and 2) FreeBSD just silently and reliably does its job. FreeBSD is also the basis of some special-purpose open source operating systems.

Are you new to FreeBSD? You've picked a good time to take a look: Version 11.1 has been released recently, and a lot of people who have wanted to give it a spin for a while are grabbing an ISO and trying it out. OPNsense 17.7, a popular router/firewall OS, was also released not too long ago.

If you're coming e.g. from OPNsense, you have a nice WebGUI at your fingertips that makes controlling and maintaining the system easy. That does not mean that you wouldn't benefit from knowing how to do package management by hand, however. OPNsense is built on top of FreeBSD, which means you have all the power of that operating system available when the WebGUI doesn't provide a simple option for what you need to do!

Pkg – where is it?

If you’re using a FreeBSD-based project like OPNsense or a desktop spin like GhostBSD or TrueOS, it’s located in /usr/local/sbin/pkg. With vanilla FreeBSD it’s almost certainly in the same place of course, if you’ve been using the system for a while. What do I mean by that? Well, in the previous post about the history of *nix package management I made the claim that modern FreeBSD comes without a package manager. And that’s actually true! Take a look at the path mentioned above. See what I mean?

FreeBSD uses the directory tree under /usr/local for programs that are installed as add-on packages, i.e. software that is not part of the base system. Yes, that means that even the package manager is a package! How does that work? Well, there's also /usr/sbin/pkg, which is part of the base system. If the real pkg is not present on the system, this small tool is able to bootstrap it as the first package. If the package manager is already installed, it simply acts as a wrapper and passes your command along (thanks to its location and the default PATH it takes precedence over the actual pkg). Bootstrapping is as easy as pressing 'y', so we don't really need to cover it beyond this.
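If you'd rather trigger the bootstrap explicitly instead of waiting for your first pkg invocation, the base system wrapper understands a dedicated command for that:

# pkg bootstrap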

Sounds a bit strange, right? Yes, but there's a good reason for this. Pkg's life began as pkg-ng while FreeBSD still used the old pkg_* tools, so it made sense to develop it outside of the base system. But why wasn't it imported into base when the old tools were retired? Because FreeBSD maintains ABI stability for the base system over the whole life-cycle of a major release (e.g. 11.x). The last point release of the 10 branch, FreeBSD 10.4, is just about a month away from now. If pkg were part of the base system, that release would have to ship with pkg 1.2, since 1.2.4 was the current version when 10.0 was released! The 11 branch would ship with 1.8, as 1.8.7 was the current version when FreeBSD 11.0 was released.

That would have meant slower progress, since no package would have been allowed to depend on pkg features introduced after 1.2 until 10.4 goes EOL – which might be as far away as somewhere near the end of 2019! Fortunately pkg is a package itself and could thus be improved rapidly: All supported versions of FreeBSD have pkg 1.10 today instead of 1.8 – or even 1.2.

How to get help?

If you have no idea how to use pkg, start with pkg help. This will show a list of supported options and commands. And if you want to know even more, you can always type man pkg. There are also man pages for most of the pkg commands, usually named pkg-command. As a shortcut to those, you can also type pkg help command, which will display the matching man page.
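For example, to read up on the search command that we'll use in a minute:

% pkg help search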

In the long run it's not a bad idea to read a bit more about pkg, but if you are just getting started, all the possibilities might overwhelm you. So let's discuss a few practical examples to get you up to speed easily and quickly, shall we?

Finding packages

As long as you just use pkg to gather information, you can run it as any user. Modifying anything needs root privileges, of course.

By default pkg operates on a remote repository. A repo (the common abbreviation) is simply a place where packages are stored and kept accessible together with an index file. To be of any use, pkg needs that index and (with the default configuration) will fetch it – or update it if it deems the local copy too old – before performing any action you might want it to do. If you just want to get or update the index, you can run pkg update:

# pkg update
Updating FreeBSD repository catalogue...
Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
Fetching packagesite.txz: 100%    6 MiB 548.9kB/s    00:11    
Processing entries: 100%
FreeBSD repository update completed. 26602 packages processed.
All repositories are up to date.

Provided you can connect to the repo, this will fetch the latest index and process it. Pkg uses a local SQLite database created from the index so that it can query information from it quickly.

To see if a package in any configured repository is available for installation, use pkg search:

% pkg search bash
bash-4.4.12_2                  GNU Project's Bourne Again SHell
bash-completion-2.5,1          Programmable completion library for Bash
bash-static-4.4.12_2           GNU Project's Bourne Again SHell
bashc-3.2.33.0_1               GNU bash shell extended with visual two-panel file browser
checkbashisms-2.15.10          Check for the presence of bashisms
erlang-mochiweb-basho-2.9.0p2  Erlang library for building lightweight HTTP servers (Basho fork)
mybashburn-1.0.2_4             Ncurses CD burning bash script
p5-Bash-Completion-0.008_1     Extensible system to provide bash completion
p5-Term-Bash-Completion-Generator-0.02.8_1 Generate bash completion scripts

Pkg returns a list of hits, each with the package name and version as well as a short comment to give you an idea of what the package actually is. If you know the package name, or at least part of it, searching is easy enough.

Now let’s assume you are looking for a light-weight web browser but you forgot the name! Since it’s not very likely that the package’s name contains “browser”, how do you search for it? You could simply search the comments e.g. for “web browser”, but that would lead to quite a list. Do you remember anything else about the browser? Let’s say we know that it uses the FLTK toolkit. Let’s see if we can find that package:

% pkg search -c "web browser" | grep -i FLTK                            
dillo-3.0.5                    Fast, small graphical Web browser built upon fltk

There we are, the browser's name is dillo! If even searching the comments doesn't yield what you are looking for, you can resort to pkg search -D keyword. This will search the detailed description that each package comes with. Just be prepared for a lot of hits and a wall of text if you're using common keywords.
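Something like this, best piped into a pager (the keyword is of course just an example):

% pkg search -D "web browser" | less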

Web-based package search

In many cases it’s not a bad idea to use a web-based search. That is what the site FreshPorts provides among other things.

If you're trying to get into FreeBSD, this is a site that you might want to bookmark. Quite possibly it'll come in handy rather often. Also spend a little while exploring what it offers. Getting familiar with it is in fact time well spent.

Installing packages

To install a package, just issue pkg install pkgname:

# pkg install chocolate-doom
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 7 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        chocolate-doom: 2.3.0
        sdl_net: 1.2.8_3
        doom-data: 1.0_1
        sdl_mixer: 1.2.12_12
        smpeg: 0.4.4_14
        timidity: 0.2i_1
        libmikmod: 3.3.8

Number of packages to be installed: 7

The process will require 22 MiB more space.
11 MiB to be downloaded.

Proceed with this action? [y/N]:

Just answer y to make pkg fetch and install those packages (any PC without DooM installed just isn’t quite complete after all – I still stick to that even though I haven’t found time to actually fire up that game in years!).

What’s installed?

To query the package database, use pkg info. Without any further arguments this will return a list of all installed packages (so you probably want to pipe it into a pager like less). Searching for a specific package? You can definitely do that using your standard Unix tools, but hold that grep right now! There’s a better way:

% pkg info -x mate-t
ghostbsd-mate-themes-1.4
mate-terminal-1.18.1
mate-themes-3.22.12

The -x switch enables searching with regular expressions. The search in my example shows all packages whose names contain "mate-t". You may even want to get used to adding -i, too, which enables case-insensitive search – just in case (there aren't that many packages in FreeBSD that contain uppercase letters, but some do).

If you found your package, just use pkg info [packagename] (without the square brackets, of course) to get a whole lot of information about the package! But that’s probably more than you wanted and it makes sense to know a few more switches that just give you some specific information.

A nice one is -D. Forgot what that post-install message was that told you how to actually get your package to work? This option prints it again:

% pkg info -D chromium
chromium-60.0.3112.101:
Always:
For correct operation, shared memory support has to be enabled
in Chromium by performing the following command as root :

sysctl kern.ipc.shm_allow_removed=1
[...]

Use -d to query information about a package’s dependencies:

% pkg info -d galculator
galculator-2.1.4:
	pango-1.40.6
	gtk3-3.22.15
	gtk-update-icon-cache-2.24.29
	gdk-pixbuf2-2.36.6
	cairo-1.14.8_1,2
	glib-2.50.2_4,1
	gettext-runtime-0.19.8.1_1
	atk-2.24.0

With -l you can list all the files that a package installed in your filesystem:

% pkg info -l mksh
mksh-56_1:
        /usr/local/bin/mksh
        /usr/local/man/man1/mksh.1.gz
        /usr/local/share/examples/mksh/dot.mkshrc
        /usr/local/share/licenses/mksh-56_1/ISCL
        /usr/local/share/licenses/mksh-56_1/LICENSE
        /usr/local/share/licenses/mksh-56_1/ML
        /usr/local/share/licenses/mksh-56_1/catalog.mk

Deleting packages

The last one of the basic operations is removing packages. Here in my example I spotted a package called “tracker”, a filesystem indexer that comes with GNOME. If you don’t use that desktop and find it on your system: Kill it with fire! Nuke it from the system with pkg delete:

# pkg delete tracker
Checking integrity... done (0 conflicting)
Deinstallation has been requested for the following 2 packages (of 0 packages in the universe):

Installed packages to be REMOVED:
        tracker-1.6.1_9
        nautilus-3.18.5

Number of packages to be removed: 2

The operation will free 21 MiB.

Proceed with deinstalling packages? [y/N]: y

Deinstalling this package will get rid of Nautilus as well (since tracker is a dependency of it). Since the GNOME team has pretty much ruined that once-decent file manager, I'm not going to shed any tears over the loss and press enter. Pkg will then delete all the files associated with the packages and unregister them in the package database.
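By the way: if you'd like to see beforehand what a removal would drag along, pkg delete also knows a dry-run mode:

# pkg delete -n tracker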

What’s next?

That’s it for the basics in my opinion. The next post will show off a few of the more advanced features that Pkg offers.