Rusted ravens: Ravenports march 2019 status update

It’s been a couple of months since I last wrote about Ravenports, the universal *nix application building framework. Raven is not exactly a slow-moving project, so a lot has happened since then.

Platform support

Raven currently supports DragonFly BSD, FreeBSD, Linux and Solaris/Illumos, the latter being only in the form of binary packages (except for when you have access to an installation of Solaris 10u8 – which can be used to build packages, too).

People following the project will notice the lack of macOS/Darwin support mentioned here. This is not a mistake as support for that platform has been put on hold for now. While Raven has successfully been bootstrapped on macOS before, the developers have lost access to any macOS machines and thus cannot continue support for it.

This does not mean that platform is gone forever. It might be resurrected at a later point in time if given access to a Mac again. The adventurous can even try to bootstrap Raven on different platforms now as the process has been documented (with macOS as the example).

I intended to do some work on bootstrapping Raven on FreeBSD/ARM64 – only to find that FreeBSD unfortunately still has a long way to go before that platform becomes tier 1. At work I had access to server-class ARM64 hardware, but current versions of FreeBSD have trouble booting up and I could not get the network running at all (if you’re interested in the details see my previous post). I’m still hoping for reactions on the mailing list, but until upstream FreeBSD is fixed on ThunderX, trying to bootstrap does not make much sense.

Toolchain and package updates

The toolchain used by Ravenports has been updated to GCC 8.3 and Binutils 2.32 on all four supported platforms (access to Mac was lost before the toolchain update).

As usual, Solaris needed a bit of extra treatment, but an up-to-date compiler and tools are available for it now, too. Even better: The linker from the LLVM project (lld) is available and usable on Solaris/Illumos now as well. Since linking it takes several hours (!) on Solaris, a mostly static lld executable was added to the sysroot package for that platform. This way this long-building package does not have to be rebuilt as often.

Packages have been rebuilt with this bleeding-edge toolchain (plus the usual fallout has been collected and fixed). So if you are using Raven, you are making use of the latest compiler technology with the best in optimization. Of course a lot of effort went into providing the most current versions of the packaged software, too (at least where that is feasible).

On the desktop side of things I’ve added the awesome window manager to the list of available software. It’s actually my WM of choice, but not too many people are into tiling so I postponed this one until after making Xfce available. Work on bringing in more Lua-related ports for advanced configuration is ongoing, but the WM is already usable as it is now.

I’ve also done a bit of less visible work, going back to many ports that I created previously and adding missing license info. This work is not complete yet either, but the situation is improving.

Rust!

One of the big drawbacks of Ravenports, as stated last time, was the lack of the Rust compiler. This effectively meant a showstopper for things like current versions of Firefox, Thunderbird, librsvg, etc. The great news is that this blocker has been mostly removed: Rust is available via Raven for Dragonfly, FreeBSD and Linux! Solaris/Illumos support is pending; any helping hand would be greatly appreciated.

Bringing in Rust was a big project on its own. Adding an initial bootstrap package for Dragonfly alone took months (thank you, Mr. Neumann!). The first working version of the port made Rust 1.31 available. It has since been updated to version 1.32 and 1.33 and John has added functionality to the Raven framework to work with Rust’s crates as well as scripts to assist with future updates. Taking all of that into consideration, Rust support in Raven is already pretty good for the short time that we have it.

Eventually even a port for Firefox landed – as of now it’s marked broken, though. The reason is that while it compiles just fine, the program crashes when actually started. The exact cause is yet unknown. If anybody with some debugging abilities has a little time on their hands, nailing down what happens would be a task that a lot of people will benefit from for sure!

Updated ravenadm

Ravenadm, the Ravenports administration tool, has seen several updates with new features. Some brought internal changes or functionality necessary for new or updated packages. One example is a project-wide fix for ports that build with Meson: Before the change many programs needed special treatment to make Meson honor the rpath for the resulting binaries. Now Raven can take care of this automatically, saving us a whole bunch of sed commands in the specification files. Another new feature is the “Solaris functions” mechanism which can automatically fix certain functions that previously required generating patches. Certainly also very nice to have!
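The kind of special treatment that is no longer needed looked roughly like this – a sketch only, with an invented file name and contents standing in for the build files Meson actually generates:

```shell
# Illustrative only: force an rpath into a generated build file the
# way port specifications used to do with ad-hoc sed commands.
# "meson-demo.txt" and its contents are made up for this sketch.
printf 'name = demo\ninstall_rpath = \n' > meson-demo.txt
# rewrite the rpath line that would otherwise be left empty
sed -i.bak 's|^install_rpath = .*|install_rpath = /raven/lib|' meson-demo.txt
grep '^install_rpath' meson-demo.txt
```

With the project-wide fix, ravenadm handles the rpath itself, so one-off `sed` lines like the above can disappear from the specification files.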

Probably my favorite new feature is that Ravenadm now supports concurrent processes in several cases: While you cannot start a second set of package builds at the same time for obvious reasons, it is now possible to ask Ravenadm in which bucket a certain port lives, sort manifests, and such while building packages! I cannot say how much the previous behavior got in my way while doing porting work… This makes porting much, much more pleasant.

A last improvement that I want to mention here is a rather simple one – however one that has a huge impact. Newer versions of Ravenadm put all license-related texts into the logs! This means you can simply look at the log and see if e.g. the license terms got extracted correctly. Before, you had to use the ENTERAFTER option to enter an interactive build session and look at the extracted file. This is a major convenience for porters.

SSL

Another big and most likely unique feature added to Raven recently is SSL autoselection. Raven has had autoselection facilities for Python, Ruby and Perl for about a year now. These allow multiple versions of the interpreters to be installed in parallel and take care of calling the actual binary with the same parameters, preferring the newest version over older ones (unless configured differently).
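The idea behind such a wrapper can be sketched in a few lines of shell. This is not Raven’s actual implementation – the function name, prefix layout and version list are invented for illustration – but the selection logic is the same: walk the known versions from newest to oldest and pick the first one that is actually installed.

```shell
# select_newest PREFIX NAME VERSION...
# Print the path of the newest installed version of NAME under
# PREFIX/bin, trying the given versions in order (newest first).
select_newest() {
    prefix=$1; name=$2; shift 2
    for ver in "$@"; do
        if [ -x "${prefix}/bin/${name}${ver}" ]; then
            echo "${prefix}/bin/${name}${ver}"
            return 0
        fi
    done
    echo "no ${name} interpreter installed" >&2
    return 1
}
```

A real wrapper would `exec` the chosen binary with `"$@"` instead of printing its path, so the caller transparently runs the preferred version.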

Raven supports LibreSSL, OpenSSL as well as LibreSSL-devel and OpenSSL-devel. Before the change, you could select the SSL library to use in the profile and all packages would be linked against it. Now we have even more flexibility: You can e.g. build all packages against LibreSSL by default and fall back to OpenSSL only for the few packages that really require it!

And in fact Raven takes it all one step further: You can have OpenSSL 1.0.2 and OpenSSL 1.1.1 (which introduced breaking changes) installed in parallel, and on the same system use packages that require the new version alongside ones that cannot use it yet! Pretty nice, huh?

Future work

Of course there are still enough rough edges that require work. Probably the most pressing issue is to get Firefox working so Raven’s users can have access to a convenient and modern browser. There are also quite some programs which need ports created for them. The goal here is to provide the most critical programs to allow Dragonfly to make the switch from Dports to Ravenports for the official packages.

On FreeBSD Filezilla does not currently work: It cannot be built with GCC due to a compiler bug in GCC 7.x and 8.x. Therefore it is a special port that gets built with Clang instead. The problem is that libfilezilla needs to be built with the same toolchain – and it cannot currently be built with Clang using Raven…

Raven on Linux has some packages unavailable due to additional dependencies on that platform. I began adding some Linux-specific ports but lost motivation pretty fast (there are enough other things to do, after all). Also the package manager is still causing pain, randomly crashing.

Solaris is also missing quite a few packages. This is due to additional patches being required for a lot of software to build properly. Ravenports tries to support this platform as well as possible; still, this could surely be improved if anybody using Solaris or an Illumos distribution as their OS of choice would start using Raven and give feedback – or even contribute.

Get in touch!

Interested in Raven? Get in touch with us! There is an official IRC channel now (#ravenports on Freenode) which is probably the best place to talk to other Raven users or the porters and developers. You can of course also send an email.

If you want to contribute, there is now a new “Customsource” repository on GitHub that you can create pull requests against. Feel free to contribute anything from finished ports that might only need polish to WIP ports that are important for you but you got stuck with.

There are many other ways of helping with the project than doing porting work, though. Open issues if you find problems with packages or have an idea. Also tell us if you read the wiki and found something hard to understand. Or if you could use a tutorial for something – just ping me. Asking doesn’t hurt and chances are that I can write something up.

Got something else that I didn’t talk about here? Tell us anyway. We’re pretty approachable and less elitist than you might think people who work on new package systems would be! 😉

ZFS and GPL terror: How much freedom is there in Linux?

There has been a long debate about whether man is able to learn from history. I’d argue that we can – at least to some degree. One of the lessons that we could have learned by now is how revolutions work. They begin with the noblest of ideas that many followers wholeheartedly support and may even risk their lives for. They promise to rid the poor oppressed people of the dreaded current authorities. When they are over (provided they didn’t fail, obviously) they will have replaced the old authorities with – new authorities. These might be “better” (or rather: somewhat less bad) than the old ones if we’re lucky, but they are sure to be miles away from what the revolution promised to establish.

Death to the monopoly!

Do you remember Microsoft? No, not the modern “cloud first” company that runs Azure and bought Github. I mean good old Microsoft that used to dominate the PC market with their Windows operating system. The company that used their market position with Windows 3.x to FUD Digital Research and their superior DR-DOS out of the market by displaying a harmless line of text with a warning about possible compatibility issues. The company that spent time and resources on strategies to extinguish Open Source.

Yes, due to vendor lock-in (e.g. people needing software that only runs on Windows) and to laziness (just using whatever comes installed on a PC), they have maintained their dominance on the desktop. However, its importance has long been in decline: Even Microsoft have acknowledged this by demoting their former flagship product and even considering making it available for free. They didn’t quite take that extreme step, but it’s hard to argue that Windows still has the importance it had from the 1990s up to the early 2010s.

They’ve totally lost the mobile market – Windows Phone is dead – and are not doing too well in the server market, either.

A software Golden Age with Linux?

In both areas Linux has won: It’s basically everywhere today! Got a web-facing server? It’s quite likely running some Linux distro. With most smart phones on the planet it’s Android – using a modified Linux kernel – that drives them. And even in space – on the ISS – Linux is in use.

All of us who have fought against the evil monopoly could now be proud of what was accomplished, right? Right? Not so much. There’s a new monopolist out there, and while it’s adhering to Open Source principles by the letter, it has long since started actually violating the idea by turning it against us.

For those who do not deliberately look the other way, Linux has mostly destroyed POSIX. How’s that? By more or less absorbing it! If software is written with POSIX in mind today, that means it’s written to work on Linux. However, the whole idea of POSIX was to establish a common ground to ensure that software runs across all of the *nix platforms! Reducing it to basically one target shattered the whole vision to pieces. Just ask a developer working on a Unix-like OS that is not Linux about POSIX and the penguin OS… You’re not in for stories about respect and consideration for other systems. One could even say that they have repeatedly acted quite rude and ignorant.

But that’s only one of the problems with Linux. There are definitely others – like people acting all high and mighty and bullying others. The reason for this post is one such case.

ZFS – the undesirable guest

ZFS is today’s most advanced filesystem. It originated on the Solaris operating system and, thanks to Sun’s decision to open it up, we have it available on quite a number of Unix-like operating systems. That’s just great! Great for everyone.

For everyone? Nope. There are people out there who don’t like ZFS. Which is totally fine, they don’t need to use it after all. But worse: There are people who actively hate ZFS and think that others should not use it. Ok, it’s nothing new that some random guys on the net are acting like assholes, trying to tell you what you must not do, right? Whoever has been online for more than a couple of days probably already got used to it. Unfortunately it’s still worse: One such spoilsport is Greg Kroah-Hartman, Linux guru and informal second-in-command after Linus Torvalds.

There have been some attempts to defend the stance of this kernel developer. One was to point at the fact that the “ZFS on Linux” (ZoL) port uses two kernel functions, __kernel_fpu_begin() and __kernel_fpu_end(), which have been deprecated for a very long time, and that it makes sense to finally get rid of them since nothing in-kernel uses them anymore. Nobody is going to argue against that. The problem becomes clear by looking at the bigger picture, though:

The need for functions doing just what the old ones did has of course not vanished. The functions have been replaced with other ones. And those ones are deliberately made GPL-only. Yes, that’s right: There’s no technical reason whatsoever! It’s purely ideology – and it’s a terrible one.

License matters

I’ve written about licenses in the past, making my position quite clear: It’s the author’s right to choose whatever license he or she thinks is right for the project, but personally I would not recommend using pessimistic (copyleft) licenses since they do more harm than good.

While I didn’t have any plans to re-visit this topic anytime soon, I feel like I have to. Discussing the matter on a German tech forum, I encountered all the usual arguments and claims – most of which are either inappropriate or even outright wrong:

  • It’s about Open Source!
  • No it’s absolutely not. ZFS is Open Source.

  • Only copyleft will make sure that code remains free!
  • Sorry, ZFS is licensed under the CDDL – which is a copyleft license.

  • Sun deliberately made the CDDL incompatible with the GPL!
  • This is a claim supported primarily by one former employee of Sun. Others disagree. And even if it was verifiably true: What about Open Source values? Since when is the GPL the only acceptable Open Source license? (If you want to read more, user s4b dug out some old articles about Sun actually supporting GPLv3 and thinking about re-licensing OpenSolaris! The forum post is in German, but the interesting thing there is the links.)

  • Linux owes its success to the GPL! Every Open Source project needs to adopt it!
  • This is a pretty popular myth. Like every myth there’s some truth to it: Linux benefited from the GPL. If it had been licensed differently, it might have benefited from that other license. Nobody can prove that it benefited more from the GPL or would have from another license.

  • The GPL is needed, because otherwise greedy companies will suck your project dry and close down your code!
  • This has undoubtedly happened. Still it’s not as much of a problem as some people claim: They like to suggest that formerly free code somehow vanishes when used in proprietary projects. Of course that’s not true. What those people actually dislike is that a corporation is using free code for commercial products. This can be criticized, but it makes sense to do that in an honest way.

  • Linux and BSD had the same preconditions. Linux prospers while BSD is dying and has fallen into insignificance! You see the pattern?
  • *sigh* Looks like you don’t know the history of Unix…

  • You’re an idiot. Whenever there’s a GPL’d project and a similar one that’s permissively licensed, the former succeeds!
  • I bet you use Mir (GPL) or DirectFB (LGPL) and not X.org or Wayland (both MIT), right?

What we can witness here is the spirit of what I’d describe as GPL supremacism. The above (and more) attacks aren’t much of a problem. They are usually pretty weak and the GPL zealots get enraged quite easily. It’s the whole idea of trading the values of Open Source for religious GPL worship (Thou shalt not have any licenses before me!) that’s highly problematic.

And no, I’m not calling everybody who supports the idea of the GPL a zealot. There are people who use the license because it fits their plans for a piece of software and who can make very sensible points for why they are using it. I think that in general the GPL is far from being the best license out there, but that’s my personal preference. It’s perfectly legitimate to use the GPL and to promote it – it is an Open Source license after all! And it’s also fine to argue about which license is right for some project.

My point here is that those overzealous people who try to actually force others to turn towards the GPL are threatening license freedom and that it’s time to just say “no” to them.

Are there any alternatives?

Of course there are alternatives. If you are concerned about things like this (whether you depend on modules that are developed out-of-kernel or not), you might want to make 2019 the year to evaluate *BSD. Despite repeated claims, BSD is not “dying” – it’s well alive and innovative. Yes, there are areas where it’s lagging behind, which is no wonder considering that there’s a much smaller community behind it and far fewer companies pumping money into it. There are companies interested in seeing BSD prosper, though. In fact even some big ones like Netflix, Intel and others.

Linux developer Christoph Hellwig actually advises to switch to FreeBSD in a reply to a person who has been a Linux advocate for a long time but depends on ZFS for work. And that recommendation is not actually a bad one. A monopoly is never a good thing. Not even for Linux. It makes sense to support the alternatives out there, especially since there are some viable options!

Speaking about heterogeneous environments: Have you heard of Verisign? They run the registry for .com and .net among other things. They’ve built their infrastructure 1/3 on Linux, 1/3 on FreeBSD and 1/3 on Solaris for ultra-high resiliency. While that might be an excellent choice for critical services, it might be hard for smaller companies to find employees who are specialized in those operating systems. But bringing a little BSD into your Linux-only infrastructure might be a good idea anyway and in fact even lead to future competitive advantage.

FreeBSD is an excellent OS for your server and also well fit if you are doing embedded development. It’s free, ruled by a core team elected by the developers, and available under the very permissive BSD 2-clause license. While it’s completely possible to run it as a desktop, too (I do that on all of my machines both private and at work and it has been my daily driver for a couple of years now), it makes sense to look at a desktop-focused project like GhostBSD or Project Trident for an easy start.

So – how important is ZFS to you – and how much do you value freedom? The initial difficulty that the ZoL project had has been overcome – however they are just working around it. The potential problem that non-GPL code has when working closely with Linux remains. Are you willing to look left and right? You might find that there’s some good things out there that actually make life easier.

Ravenports explained: Why not just join XYZ?

As the year comes to an end, I’ve seen quite some interest in my previous post. There has been a question on Reddit what the benefit(s) of Raven over Pkgsrc might be and why the developers don’t simply join an existing effort instead of building something new.

I’ve touched on this topic about half a year ago, but I think the question is worth a detailed reply that fully covers both parts of it. So I’ll try to answer 1) why Ravenports exists in the first place and 2) what sets it apart from Pkgsrc and other ports systems.

Why maintain Ravenports instead of working on Pkgsrc?

Well, obviously because its author felt it was worthwhile to start and maintain the project! Of course that leads to another and more important question – why didn’t John Marino just join e.g. Pkgsrc instead? The answer to that is: Well… He did.

John got his NetBSD commit bit and became a Pkgsrc developer back in the day when DragonflyBSD still used Pkgsrc by default. He maintained a ton of ports there and made sure that other people’s ports still worked on DF after they had been updated. DragonflyBSD had been considered a first-class citizen by Pkgsrc. However there had been two big problems:

1) Being primarily a NetBSD project, Pkgsrc development takes place mostly on NetBSD of course. Things were tested on NetBSD and then committed. There was no testing done on the other supported platforms – which is a completely comprehensible decision given the amount of ports available and the number of supported platforms as well as the need to get software updated in a somewhat timely manner! However this led to frequent breakage. A few suggestions that made sense from the Dragonfly perspective could not be agreed upon taking the whole of Pkgsrc into account. In the end the policy was: “If things in the tree break for your platform, go ahead and fix it.” So basically the answer to problem 1 was: “Throw more manpower at it.”

2) As the small project that DragonflyBSD is, there simply were not too many people available for this task, however. In fact it was largely John alone who did most of the work with some help here and there. It’s impossible to spend resources that you don’t have available!

As you can see problem 1 causes problem 2 – and that one proved to be unfixable. Thus the problems with Pkgsrc grew and there was really not much that could have been done about it. And as the suggestions to somewhat relieve the worst impact were turned down, Dragonfly had to give up Pkgsrc. Please keep in mind that there’s a major difference between how Dragonfly used Pkgsrc and how some other platforms do. Sure, it’s great that you can use Pkgsrc on AIX to obtain some current software. Same thing for many other systems. Dragonfly used Pkgsrc just as NetBSD does, though: As the primary means to get software installed. Large-scale breakage of packages is a no-go in such a case, especially if it happens somewhat often and was bound to happen again and again.

Ok – another project then. Adapt the FPC maybe?

John then brought the new FreeBSD package manager as well as the FreeBSD ports collection over to Dragonfly with a system called “delta ports” or Dports. It’s basically an overlay with patches that Dfly requires to build those ports. Even though the FPC is meant for FreeBSD only and Pkgsrc – being cross-platform – might seem like the more logical candidate, this worked out a lot better and John maintained Dports for years.

In maintaining so many ports for both Pkgsrc and Dports he had quite a few ideas on how to do things better. They wouldn’t fit into the projects as they were organized, though. So he began playing with various things on his own. Then… FreeBSD introduced flavored ports.

Don’t get me wrong here: I’m a FreeBSD user and I’m glad that flavored ports are finally available. However from a technical point of view they are implemented in a way that’s far from perfect. This is no wonder, though: When the ports tree was first introduced, nobody thought of flavors. What we have today is a fine example of a feature implemented as an afterthought. It works, yes, but it meant a disruptive change and broke expectations of all ports-related programs. It also made maintaining Dports much, much more time-intensive – to the point where keeping it up was no longer feasible.

What does Ravenports have to offer over Pkgsrc?

Just like every younger project, Ravenports has the considerable advantage of starting fresh without the burden of choices that seemed right in the past but were probably regretted later. If this is combined with the will to learn from previous attempts to get packaging right as well as considerable experience with those, this has a lot of potential.

Think about it for a moment: FreeBSD’s ports collection shipped with the 1.0 release of the OS – and thus was created back in 1993. Pkgsrc began as a fork of it in 1997. So both were originally designed in a decade that has long passed (and in fact not even in this millennium!). Yes, both have been modernized over time. There are limits to this, however. It can be pretty hard to integrate new features into a structure that was never meant to support anything like that. Do you think anybody in the mid 90s could have thought about the needs of today? Ravenports deliberately does not support some old cruft. It’s meant for the coming decade of the 2020s.

Here’s some strong points where Raven is ahead of Pkgsrc:

  • Tooling:
  • It offers a modern, integrated solution. There’s one control program (“ravenadm”) that deals with everything regarding Ravenports: It’s used to configure the package building system, it fetches the buildsheets (ports) and keeps them up to date, it builds all the packages or a subset thereof, …

  • Pristine package builds:
  • Everything is built in a chroot sandbox specifically assembled for that build process. There is no way that build dependencies clutter your build system (chances are you don’t want to use m4 or automake yourself and thus don’t need them installed on the OS). There’s also no way that installed packages of your system pollute the packages that Raven builds: The isolation prevents e.g. linking against additional stuff that you didn’t mean to.

  • It’s fast:
  • Did you ever run a bulk-build for Pkgsrc packages? Ravenports optimizes build times on modern systems by taking advantage of memory disks and such. The port scan alone makes a huge difference.

  • Potentially package manager agnostic:
  • Currently Raven supports only the Pkg package manager but as all it does is build packages, it was designed to support additional package managers if needed. You actually want it to generate rpm or pacman packages? Not currently implemented but certainly possible if desired.

  • Powerful default package manager:
  • Pkg, a modern tool for package management, is quite capable. If you read its manpages you will find that it’s loaded with useful features. The old pkg_tools that Pkgsrc still uses totally pale in comparison – and rightly so.

  • Easy administration of multiple repos:
  • Need multiple repositories? No problem. Just create profiles for them. E.g. one that uses LibreSSL and another one that links against OpenSSL instead. Also you can choose the default version of Perl, Python, Ruby, etc. to use. And you can choose if MySQL should be Oracle’s MySQL, MariaDB, Galera, etc.

  • Convenient use of custom ports:
  • Can you use custom ports that are not in the official buildsheet collection? Sure thing. You can create directories for your custom ports and even use different ones in different profiles. Want to change an existing port? Just place one with the same name in your custom port directory and it will override the original one. Buildsheets from custom ports are generated automatically so there’s no hassle there. It probably doesn’t get much more convenient!
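The override mechanism can be demonstrated with a tiny sketch. Everything here is illustrative: the paths live under /tmp for the demo, and the real locations are whatever you configured in ravenadm – the point is simply that a port with the same name in the custom directory shadows the official one.

```shell
# Illustrative only: stand-in directories for an official ports tree
# and a custom ports tree; real paths depend on your ravenadm setup.
official=/tmp/raven-demo/official/myport
custom=/tmp/raven-demo/custom/myport
mkdir -p "$official" "$custom"
printf 'version=1.0\n' > "$official/specification"
# start the custom port from a copy of the official one, then tweak it;
# the custom copy is the one Raven would pick up
cp "$official/specification" "$custom/specification"
sed -i.bak 's/^version=1.0$/version=1.1/' "$custom/specification"
```

Since buildsheets for custom ports are generated automatically, a change like the version bump above is all that is needed on the porter’s side.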

  • Variants and subpackages:
  • Package variants (i.e. “flavors”) and subpackages are not an afterthought and are thus used excessively right from the beginning. This makes package management with Raven very flexible.

  • Testing:
  • The Ravenports system has very strict rules for buildsheets. If the ravenadm tool considers a port valid, it is almost guaranteed to actually be fine. Also, packages can not only be mass-built but also tested automatically (Is the RPATH ok? Are all required shared objects available? Is the manifest file complete? Are the required descriptions in place? Is the license ok or lacking? Things like that).

  • Automation:
  • Ravenports tries to automate many things that do not actually need human attention. For example quite often Python-related ports can be auto-generated. This saves time and effort of the maintainers that can be better spent on other things.

  • Modern day development:
  • Want to contribute something? It’s extremely easy. If you have a GitHub account you’re all set: Fork the git repo, make your changes, then commit and push them. Now all that’s left is opening a Pull Request. Yes, that’s all. If you don’t have a GH account, create one. Or send us patches as it was traditionally done. Ravenadm will happily create a template to assist you if you want to contribute a new port.

  • No ports ownership:
  • In Ravenports nobody “owns” a port. If you submitted one you become a contact for it. If somebody wants to make major changes to the port, that person is expected to contact you and communicate the proposals. Small or trivial changes however (like a simple version upgrade) can be done by anybody. This ensures rapid development and very fast adoption of new versions even if the original porter does not currently have the time to maintain everything in a timely manner.

  • Fast releases:
  • Ravensource provides new releases quite often. This way you can get pretty fresh software early on. There is no fixed time frame for it, though: Releases are made when it makes sense. If there have been major changes to the tree the next release might be delayed for testing.

  • Binary bootstrap:
  • Ravenports has a very simple and fast bootstrap process that makes use of binary packages for the respective platform. No system compiler required! Raven brings in its own full toolchain.

There are of course cases where it makes sense to use Pkgsrc and it’s not too hard to find any: E.g. if you need packages for a platform that’s unsupported in Raven or if you need software not yet available there. In the end this is Open Source: We’re all friends and using the right tool for the job makes sense.

Couldn’t Ports/Pkgsrc be modernized?

I’ve used Pkgsrc both in private and at work and I’m pretty happy that it’s available when I need it. But I don’t like the old pkg_tools much. They do their job but they are far from modern programs and really feel like relics today. And while I’m pretty happy with FreeBSD’s ports, those aren’t portable (and for some reason I’ve never been completely happy with Poudriere, FreeBSD’s package builder).

Before finally creating Ravenports, John wrote Synth, a very nice package builder for FreeBSD and DragonflyBSD that supports Ports/Dports. It has been put on hold in favor of Raven, but it is still maintained and I continue to use it on FreeBSD to build my packages.

John also created Pkgsrc-synth. It's a version of Pkgsrc that uses the Pkg package manager. I've never tried it out – but it was discontinued exactly two months ago, as there seems to have been no interest from the Pkgsrc people. I think this is a pity, as Pkg is really nice and has the right license for any BSD project. It could have been a chance to move Pkgsrc in a more modern direction. But meh.

Conclusion

Raven does not exist because everything else sucks. It exists because all the other candidates proved to not quite fit the needs of Ravenports' author. As such it is a chance to keep the good parts of the various precursors that it heavily draws inspiration from. It's a chance to combine these good parts to make something awesome. And it's a chance to implement a lot of new ideas that make sense in modern-day *nix package building but which – for various reasons – cannot have a place in the old projects.

There's still a lot of work to do, but we're getting there. In my previous post I wrote that one of the big shortcomings was the lack of Rust. In the meantime Rust support has landed for DragonFly BSD, FreeBSD and Linux.

If there are any more questions feel free to post them here. I’m not on Reddit and I just saw the above question by accident. So I cannot promise to answer anywhere else than here.

Happy new year everyone!

One year of flying with the Raven: Ready for the Desktop?

It has been a little over one year now that I'm with the Ravenports project. Time to reflect on my involvement, my expectations and my hopes.

Ravenports

Ravenports is a universal packaging framework for *nix operating systems. For the user it provides easy access to binary packages of common software for multiple platforms. It has been the long-lasting champion in Repology's top 10 repositories regarding package freshness (rarely dropping below 96 percent, while all other projects stay below 90!).
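For context, Repology's freshness score boils down to the share of packages that are at their latest known upstream version. A toy sketch of the idea (the function and data are mine, not Repology's actual implementation):

```python
# Toy model of a repository "freshness" metric: the percentage of
# packages whose packaged version equals the newest upstream version.
def freshness(packages):
    """packages: list of (packaged_version, latest_upstream_version) tuples."""
    if not packages:
        return 0.0
    fresh = sum(1 for packaged, latest in packages if packaged == latest)
    return 100.0 * fresh / len(packages)

# Hypothetical snapshot: three of four packages are current.
repo = [("5.28.0", "5.28.0"), ("3.7.0", "3.7.0"),
        ("2.4.33", "2.4.34"), ("1.0.2", "1.0.2")]
print(freshness(repo))  # 75.0
```

A repository only stays above 96 percent on such a metric if nearly every update is packaged within a day or two of its upstream release.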

For the porter it offers a well-designed and elegant means of writing cross-platform buildsheets that allow building the same version of the software with (completely or mostly) the same compile-time configuration on different operating systems or distributions.

And for the developer it means a real-world project that’s written in modern Ada (ravenadm) and C (pkg) – as well as some Perl for support scripts and make. Things feel very optimized and fast. Not being a programmer though, I cannot really say anything about the actual code and thus leave it to the interested reader’s judgement.

If you’re interested in a more comprehensive introduction to Ravenports, I’ve written one half a year ago.

Platforms

Ravenports was initially developed on DragonFly BSD. When I became aware of it, it had already been ported to Linux, too. I liked the idea of the project, but had no DragonFly or Linux boxes available for tinkering and didn't feel like setting one up. Thus I moved on.

As I checked back a little later, FreeBSD support had been added. Since I had just lost my excuse not to try it out right away, I started playing with it – and was pretty happy. At that time I had trouble getting a port that I wrote into FreeBSD's Ports Collection and thought that Raven could be an excellent playground to learn something and get a bit of experience that might help me later with FreeBSD.

The Xfce4 desktop – installed via Raven

I've long since changed my mind, though! Raven is rather similar to FreeBSD's ports system in many ways, but where it differs it's clearly superior. I also love the cross-platform aspect, so Raven is simply the better place for me to call home.

This year saw the introduction of Solaris/Illumos support that I tried out on OmniOS. Also Darwin support landed, upping the count of supported platforms to 5 already! Not too bad for a young project, huh? While Raven does work on all five platforms now it does so to varying degrees. But more on that later.

General activity

The Ravenports project consists of multiple Git repositories hosted on GitHub. The first one is Ravensource, which most importantly holds the “raw” ports as they are written by the porters. It's the busiest repo with over 5,200 commits since March 2017 (including almost 500 by me).

Then there's the actual Ravenports repo that mostly contains the buildsheets which are compiled from Ravensource. It has over 1,400 commits right now.

Installing the xfce-single-core meta-package

Finally there’s the repo for the Ravenadm command-line tool. It’s approaching 900 commits since February 2017.

There’s still more to Raven like the Pkg package manager from FreeBSD (that was modified to add Zstd compression support) or libbsd4sol, a portability library which allows building code on Solaris that uses BSDisms (which was needed to add support for that platform to Raven). Most of the work on all repos was done by John alone.

With over 100 pull requests and more than 20 issues it's clear now that there's some interest in the project. Raven is still very small, though, with 6 people having contributed ports so far. After learning the basics and opening pull requests for half a year, I was granted write access to the source repository. Just recently I was able to push my 100th active port (there have been ports that became obsolete and were removed).

In general I'd say that there could of course be more people around and that the project would benefit from being able to provide more packages – though more than 3,200 is not bad at all! Also it's good that there seems to be a growing user base, which is even more important than having more porters join in. From my point of view, Raven is a healthy and fast-moving project. Still young, but doing well and heading in the right direction.

Major changes

There have been some pretty big changes that happened with Raven over time. Initially John started with a GCC6-based toolchain, only to switch to GCC7 when that was released. That was before my time with the project, but I witnessed the switch to GCC8.

Changing the toolchain certainly is a major interruption and most people are advised to just wait for the official repository to be re-rolled and then update. I had some bad luck in this regard – literally the day after I finally completed a working (and almost complete) set of basic packages for the FreeBSD_i386 platform, I faced the change to GCC8. Due to a lack of time I still haven’t repeated the switch on i386 (but I still plan to do it sometime).

The thunar file manager

Another change that always has a huge impact (causing lots and lots of packages to be rebuilt) is adopting a new version (as well as dropping an old one) of a popular interpreter language like Python, Perl or Ruby. Ravenports always supports two versions of Perl and Ruby and two versions of Python 3 (as well as 2.7 for now). So when Python 3.7 was released, 3.5 was removed, and Perl 5.24 had to go when 5.28 was added.
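This "always two supported versions" policy is effectively a sliding window over releases: adding a new version pushes the oldest one out. A toy model (the function name is mine):

```python
# Toy model of Raven's two-versions policy for interpreters: adding a
# new version keeps only the two highest versions in the supported set.
def rotate_versions(supported, new, window=2):
    """Return the supported set after adding `new`, keeping the `window`
    highest versions (compared numerically, e.g. "5.28" > "5.26")."""
    key = lambda v: tuple(int(part) for part in v.split("."))
    return sorted(set(supported) | {new}, key=key)[-window:]

# Perl: adding 5.28 retires 5.24.
print(rotate_versions(["5.24", "5.26"], "5.28"))  # ['5.26', '5.28']
# Python 3: adding 3.7 retires 3.5.
print(rotate_versions(["3.5", "3.6"], "3.7"))     # ['3.6', '3.7']
```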

Recently the former LLVM port, which included everything regarding LLVM, was split up (LLVM, Clang, lld, openmp). Also, now and then new statements are added to Ravenadm, so that old versions cannot work with a new release of the buildsheet repository (which is called “conspiracy”). But this is pretty easy to work around compared to the changes mentioned before.

So on the whole, Raven has proven that it can easily withstand even big changes. For me this is essential to build faith in a project. And Raven is doing well in this regard.

Desktop-ready?

There are lots of people who will want to use Raven on servers. That’s totally fine of course. But for a project as ambitious as Ravenports, it’s necessary to provide a somewhat comfortable environment for the developers and the users alike. If it doesn’t manage to become a daily driver for people it cannot succeed.

For that reason I decided to work towards good desktop support for the little dev machine that I dedicated to my work on the project. When I started, X11 was already working and Openbox had freshly landed in the repos. So I had a simplistic environment to work with: Openbox + Xterm. However, I could not even change my keyboard layout! Therefore I wrote a port for setxkbmap, and eventually it was accepted as the first outside contribution to the project.

The Surf web browser

Next I did some work to get the FLTK toolkit and the EDE desktop in. Then I added my favorite terminal emulator, Sakura. This worked out pretty well and the biggest shortcoming at the end of 2017 was that there was no real graphical browser available. A lot has changed since then!

Desktop choices

Today you can choose between multiple window managers, both floating and tiling:

  • twm
  • cwm
  • openbox
  • fluxbox
  • xfwm4
  • pekwm
  • i3

And in case you prefer a real desktop environment, there are also several available:

  • Lumina (moderate, Qt-based)
  • Xfce4 (somewhat light-weight, GTK-based)
  • EDE (extremely frugal and minimalistic, FLTK-based)

Two graphical web browsers are available, Surf (which is deliberately simplistic and does not even support tabs) as well as an old version of Firefox (the last one that builds without Rust). This is certainly not perfect but much better than a year before.

Other important programs are available, too, including LibreOffice! Last month the Apache web server landed – a pretty complex port compared to many others.

Shortcomings

Are there packages you'll miss? Most certainly. However, there's a wishlist now with ports that people would like to see created (please feel free to add more requests there). And that's another good step ahead. Currently it's almost 120 items long. Fortunately there's been some success, too, and 26 requested ports have been created and taken off the list so far.

There are some future ports that will require lots of effort (hint: Help wanted!). The most important one, which blocks some other important ports, is the Rust compiler. Some work has been done on this, but it isn't finished yet. Another real beast is TeX. This totally must be supported at some point. Current versions of Firefox and Chromium are often asked for. And somebody even requested Eclipse (which needs Java!). So there's definitely more than enough work to do.

Using Raven on Linux works, but there are some flaws. Initially the Pkg package manager used to crash quite often. John traced that back to a bug in the version of SQLite that's used internally by Pkg: The problem only struck on Linux and was fixed by moving to a newer version. While it's much better now, there's still the occasional problem with it.

While the packages from the repo work fine on Solaris 10u8 and above as well as on Illumos, exactly version 10u8 is currently required to build packages. This is due to Solaris not being able to work with older system libraries in the build chroot. It would be great to have an alternative ravensys-root for an Illumos distribution (OmniOS, SmartOS, Tribblix, …) available so that interested people without access to that specific closed-source Solaris version can develop Raven on that platform.

I don’t know how well Raven works on Darwin. Since I don’t have access to any macOS machines and PureDarwin is not really ready, yet, there’s currently no chance for me to test it. I intend to buy an older MacBook or something in the future, though, if I come across a fair offer and have some money available to spend on my hobby.

Some ports are not available on one platform or the other: on Illumos mostly because they'd require patches to build, and on Linux often because the software relies on additional libraries that have not yet been added to Raven. And then there are a lot of packages that are mostly untested. All of these issues can be fixed, of course – but all of them require a larger user base. So it's probably the best strategy to keep working on making Raven attractive to more users and address things when the right people show up.

What’s to come?

Currently Raven uses the primordial X11 input drivers (xf86-input-keyboard and xf86-input-mouse) on all platforms. In 2013 Linux pioneered support for generic input drivers by exposing the kernel's “event devices”. Not too much later many Linux distributions adopted xf86-input-evdev. In 2014 there was a GSoC project to add evdev support for FreeBSD. Like many projects it came a good part of the way but eventually was left unfinished. It was picked up and completed by a FreeBSD developer in 2016.

Xfce’s settings and applications menu

To use it, a special kernel had to be built so it would expose /dev/input device nodes. Then a sysctl had to be set – and eventually X11 had to be patched for emulated udev support… Why would anybody want to do all this just for different input drivers? Multi-touch support is just one valid reason. Another one is that having evdev-based input drivers is half the way to eventually support libinput, too. And that is one of the prerequisites for Wayland!
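For the record, the steps described above looked roughly like this on pre-12.1 FreeBSD. This is a sketch from memory – the option and sysctl names should be double-checked against the FreeBSD wiki before relying on them:

```shell
# Custom kernel config additions (before evdev landed in GENERIC):
#   options EVDEV_SUPPORT   # expose /dev/input/event* device nodes
#   device  evdev
#   device  uinput
#
# /etc/sysctl.conf – make the legacy keyboard/mouse drivers also
# report their events through evdev:
#   kern.evdev.rcpt_mask=12
#
# Finally, the X server needs to be built with emulated udev support
# so that xf86-input-evdev can enumerate the event devices.
```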

This month FreeBSD has finally enabled evdev support in the GENERIC kernel in both -CURRENT and 12-STABLE. That means the upcoming FreeBSD 12.0 will not support it out of the box, but most likely a future 12.1 will. DragonFly BSD has also grown support for event devices, and people are interested in working towards Wayland. I hope that we'll be able to get xf86-input-evdev working with our X11 (on DragonFly, FreeBSD and Linux) next year.

I'm taking a little break from Xfce now (but plan to port most of the remaining components later to make it a well-supported DE in Raven). There are a few things I have planned, like adding Linux support for OpenVPN (it depends on some libraries and programs that are Linux-only and not yet in Raven). Also I intend to take a look at adding some more Qt5 components and write a few requested ports. And finally I want to write another post next year – a tutorial on using Ravenports and creating new ports.

So keep flying with us – it’s exciting times!

Ravenports: A modern, cross-platform package solution

This post is about Ravenports, a universal package system and building framework for *nix systems (DragonFly BSD, FreeBSD, Linux and Solaris at the time of this writing). It's a relatively young project that began in late February 2017 after a longer period of careful planning. The idea is to provide a unified, convenient experience in a cross-platform way while putting focus on performance, scalability and modern tooling.

What exactly is it and why should you care? If you’ve read my previous post, you know that I consider the old package systems lacking in several ways. For me Raven already does a great job at solving some problems existing with other systems – and it’s still far from tapping its full potential.

Rationale

A lot of people will think now: “We already have quite capable package systems. What’s the point in doing it again?” Yes, in many regards it’s “re-inventing the wheel”… And rightfully so! Most of the known package systems are pretty old now and while new features were of course added, this is sometimes problematic. There is a point where it’s an advantage to start fresh and incorporate modern ideas right from the start. Being able to benefit from the experience and knowledge gained by using the other systems for two decades when designing a new system is invaluable.

Ravenadm running on FreeBSD, OmniOS, Ubuntu Linux and DragonflyBSD

Ravenports was designed, implemented and is primarily maintained by a veteran in packaging software. John Marino at one time maintained literally thousands of ports for FreeBSD and DragonFly BSD. In addition to that, he wrote an alternative build tool called Synth. Aiming for higher portability, he modified Synth to work with Pkgsrc (which is available for many platforms) and also ported the modern Pkg package manager from FreeBSD to work with it.

In the end he had too many ideas about what could be improved in package building that would not fit into any existing project. Eventually Ravenports was born when he decided to give it a try and create a new framework with the powerful capabilities that he wanted to have and without the known weaknesses of the existing ones.

How does it compare to xyz?

It probably makes sense to get to know Ravenports by comparison to others. Let’s take a look at some of them first:

1) FreeBSD’s ports system is the oldest such framework. It’s quite easy to use today, very flexible and since the introduction of Pkg (or “pkg-ng”) it also has a really nice package manager.
2) NetBSD adopted the ports system and developed it according to their own needs. It’s missing some of the newer features that FreeBSD added later but has gained support for an incredible amount of operating systems. Unfortunately it still uses the old pkg_* tools that really show their age now.
3) OpenBSD also adopted the early FreeBSD ports system. They took a different path and added other features. OpenBSD put the focus on avoiding users having to compile their own packages. To do so, they added so-called package flavors. This allows for building packages multiple times with different compile-time options set. Their package tools were re-written in Perl and do what they are meant to. But IMO they don’t compare well to a modern package manager.
4) Gentoo Linux with its portage system has taken flexibility to the extreme. It gives you fine-grained control over exactly how to build your software and really shines in that. The logical consequence is that, while it supports binary packages, this support is rudimentary in comparison.

EDE desktop, pekwm with Menda theme and brand-new LibreOffice

FreeBSD gained support for flavors in December 2017, and NetBSD did some work to support subpackages in a GSoC project in the same year. It's hard to retrofit major new features into an existing framework, though. When Ravenports started in the beginning of 2017, it already had those two features: variant packages (Raven's name for flavors) and subpackages. As a result they feel completely natural and fit well into the whole framework (which is why they are used extensively).

Ravenports knows port options that can be set before building a package. Like with NetBSD or OpenBSD, there are generally fewer options available compared to FreeBSD. This is because Raven is more geared towards building binary packages than towards being a ports framework to build on the target machine (which would defeat the goal of always providing a clean build environment). For that reason the options mostly exist to support the variants of the packages. Compared to NetBSD's Pkgsrc, Ravenports supports far fewer operating systems right now but has a much easier (binary!) bootstrap process for all supported platforms. It also offers a far superior package manager. When comparing against FreeBSD, OpenBSD and Gentoo, Ravenports is much more portable, supports multiple operating systems and – with the exception of FreeBSD – comes with a more modern package manager for binary packages.

Strong points

As Ravenports is not tied to a single operating system, it didn’t have to take into account specific needs that are for one OS only. In general there are no second-class citizens among the supported platforms. Also it was made to be agnostic of the package manager used. Right now it’s using Pkg only but other formats could be supported and thus binary packages be installed via pacman, rpm, dpkg, you-name-it.

Repology: Raven’s package freshness in percent (06/25/2018)

It allows for different versions of some software to be concurrently installed. If you e.g. want PHP 7.2 while some of your projects are stuck with 5.6 this is not a problem. It’s also possible to define a default version for databases like MySQL and Postgres as well as languages like Perl, Python and Ruby. Speaking of MySQL: Raven knows about Oracle MySQL, MariaDB, Percona and Galera. Only the first one is currently available (the ports for the others are missing) but the selection of which product to install is already present and the others can be easily added as needed.

If you build packages yourself you'll notice that the whole tooling is fully integrated. Everything was planned right from the beginning to interact well and thus plays together just great. Performance is another area where Raven shines: thanks to being programmed for high concurrency, operations like port scans are amazingly fast (especially if you know other frameworks for comparison).

Repology: Raven’s outdated package count (06/25/2018)

Raven follows a rolling-release model with extremely current package versions. In Repology, a fine tool for package maintainers and people interested in package statistics, Ravenports is the clear leader when it comes to freshness of the package repository: It rarely falls below 98% of freshness (while no other repo has managed to even reach 90% – and Repology lists almost 200 repositories!). If it does, it’s usually for less than a day until updates get pushed.

This is only possible because much of ports maintenance is properly automated. This saves a lot of work and allows for keeping the software versions current without the need for dozens of maintainers. Custom port collections are supported if you have special needs like sticking to specific program versions. This way Raven can e.g. support legacy versions that should not be part of the main tree. It might also be interesting for companies that want to package their product for multiple platforms but need to keep the source closed. Ravenports supports private GitHub repositories for cases like this. All components of the project itself are completely open-source, though, and are permissively licensed.

Also Raven is not the jealous kind of application. Packages are installed into /raven by default (you can choose to build your packages with a different prefix if you wish) and thus probably separate from the default system location for software. This makes it possible to use Raven in addition to your operating system's / distribution's package manager instead of being forced to replace it.

Shortcomings

If you ask me about permanent problems with Raven: I don’t really see any. However there’s definitely a couple of things where it’s currently way behind other package systems. Considering how young the project is this is probably no wonder.

It’s a “needs more everything” situation. In fact it has the usual “chicken egg problem”: More available ports would be nice and potentially attract more users. With more users probably more people would become porters. And with more porters there’d surely be more ports available… But every new project faces problems like this and with resolve, dedication and perseverance as well as a fair amount of work, it’s possible to achieve the goal of making a project both useful and appealing enough for others to join in. Once that happens things get easier and easier.

KeePassXC, Geany and the EDE application menu

The Ravenports catalog has over 3,000 entries right now. It's extremely hard to compare things like the package count, though. John provided an example: FreeBSD has 8 ports for each PostgreSQL version. With 5 supported versions that's 40 ports. Ravenports has 5 ports with 8 subpackages each. In this case the package count is comparable, but not the port count. Taking flavors and multiversions into account, all repositories look much bigger than they actually are in terms of available software. Also, how do you measure the quality of packages? What about ports that are used by less than a handful of people? What about those that are extremely outdated? Do you think they should count? It's probably best to take a look and see if the software that you need is available. It is true, though, that there are of course still many important packages missing. IMO the most important one is Rust – which is not only needed for current versions of Firefox but increasingly important for building other software, too.
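John's PostgreSQL example is easy to put into numbers, and it shows why raw port counts say little about how much software a repository actually carries:

```python
# FreeBSD: one port per (PostgreSQL version, component) pair.
freebsd_ports = 5 * 8             # 5 supported versions x 8 ports each
freebsd_packages = freebsd_ports  # each port yields one package

# Ravenports: one port per version, split into 8 subpackages.
raven_ports = 5                   # one port per supported version
raven_packages = raven_ports * 8  # 8 subpackages per port

print(freebsd_ports, raven_ports)        # 40 5  -> port counts differ 8x
print(freebsd_packages, raven_packages)  # 40 40 -> package counts match
```

Same software coverage, an eightfold difference in port count – which is why only looking for the concrete software you need gives a fair comparison.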

Also Linux support is not perfect, yet, and Solaris support even less so. On Solaris systems Raven is currently mostly binary-only because the Solaris kernel is unable to work with system libraries other than the ones matching exactly in version. Packages built on older releases of the OS work fine on newer ones, but for each OS release, a specific build environment would need to be created before building packages is possible. This is an issue that needs to be resolved in the future (I guess some help from the Illumos/Solaris community wouldn’t hurt). Also there are packages that don’t build on Solaris without patches which are not currently available. In case of important packages this leads to blockers since all other ports which depend on one such package also cannot be built: On FreeBSD there are 3,559 packages (including variants and metapackages) available from the repository at the present time. In the Solaris repo it’s only 2,851 packages. That’s certainly a nice start – but don’t expect to run a full-fledged desktop (or even X11 at all) there, yet!

In Linux land, distributions that come with glibc version 2.23 or newer work best. On distributions with older glibc versions (e.g. CentOS 7), software will not run as the standard C library is missing some required symbols. Raven will need to be bootstrapped again to support those distros. This is likely to happen before too long, but we’re not there, yet.

Current Firefox ESR version (+ sakura and pcmanfm in the panel)

macOS (which might be supported soon), OpenBSD and NetBSD are not currently supported, nor is Linux with musl libc or μClibc. Also, Raven is currently amd64-only. ARM64 support is planned and i386 might or might not happen, but neither is available now.

Current status

At this time Raven is probably most interesting for people who love tech and enjoy tinkering on *nix systems as well as those who like the features and are ok with being early adopters. Yes, in general it’s ready for the latter. At least two people (including me) use Raven’s packages exclusively on one of their machines. I’d say it is ready as a daily driver (if you can live with the limited set of software available – or consider adding more ports). In fact I built a laptop that I use e.g. for on-call duty with it. Since that one is critical, it probably needs to be considered as “in production use”.

It’s possible to install various text mode applications with Raven, but X11 is also available. You can choose from multiple window managers or from at least two desktop environments (Lumina and the ultra-light EDE). Xfce4 is partially available (i.e. the panel is already ported). If you’re looking for web browsers, a current version of Firefox ESR (called “rustless-firefox”) can be installed as well as Surf, a simple webkit-based browser. The LibreOffice suite is available in its latest version, too. The same is true for the just released Perl 5.28 and Python 3.7.

Running Chocolate DooM and Chocolate Heretic

Oh, and if you’re into gaming… It’s not all just serious stuff. Yes, you can install and play DooM!

Conclusion

Ravenports is a fascinating project with lots and lots of possibilities. I wanted to get into porting with FreeBSD for quite a while but hesitated as I'm not a programmer. Then again I had been interested in package building for a long time and had played around with it on Arch Linux quite a bit. After my submissions to FreeBSD had been rotting in the bug tracker for months (and still are after almost a year), I chose to give Raven a try in the meantime.

I was already familiar with Pkg and had used Synth before, too. Bootstrapping Raven's pkg and then installing stuff was as easy as expected. The same was true for building the ports myself. Then I did quite a bit of reading and wrote my first port. It didn't take more than 5 minutes after I opened my pull request on GitHub before John responded – and the port was committed not much later. This was such a huge contrast that I decided to do more with Raven.

There was a learning curve, yes, but I received lots of help in getting started. I obviously liked the project enough to become a regular contributor and even got commit access to the ravensource repo later. Currently I’m maintaining just over 80 ports and I hope to write many more in the future. There have been some hard ports along the way (where I learned a lot about *nix), but lots of things are actually pretty easy once you get the hang of it.

Tongue-in-cheek: Make chaos or “make sense”!

If this post got you interested, just give it a try. Feel free to comment here and if you run into problems I’ll try to help. After this general overview of Raven the next post I plan to write will be on actually using it.

Modern-day package requirements

A little rant first: Many thanks to the EU (and all the people who decide on tech-related topics without having any idea how tech actually works). Their GDPR is the reason I've been really occupied with work this month! Since email is a topic that I'm teaching myself while writing the series of posts about it, I'll have to get back to it as time permits. This means that for May I'm going to write about a topic that I'm more familiar with.

Benefits of package management

I've written about package management before, telling a bit about the history of it and then focusing on how package management is done on FreeBSD. The benefits of package management are so obvious that I don't see any reason not to content myself with just touching on them:

Package management makes work put into building software re-usable. It helps you to install software and to keep it up to date. It makes it very easy to remove things in a clean manner. And package management provides a trusted source for your software needs. Think about it for just a moment and you’ll come up with more benefits.

Common package management requirements

But let’s take a look at the same topic from a different angle. What do we actually require our package systems to do? What features are necessary? While this may sound like a rather similar question, I assure you that it’s much less boring. Why? Because we’re looking at what we need – and it’s very much possible that the outcome actually is: No, we’re not using the right tool!

Yes, we need package management, obviously. While there’s this strange, overly colorful OS that cannot even get the slashes in directories right, we can easily dismiss that. We’re talking *nix here, anyway!

Ok, ok, there’s OmniOS with its KYSTY policy. That stands for “keep your software to yourself” and is how the users of said OS describe the fact that there’s no official packages available for it. While it’s probably safe to assume that the common kiddies on the web don’t know their way around on Solaris, I’m still not entirely convinced that this is an approach to recommend.

Going down that road is a pretty bold move, though. Of course it’s possible to manage your software stack properly. With a lot of machines and a lot of needed programs this will however turn into an abundance of work (maybe there are companies out there who enjoy paying highly qualified staff to carefully maintain software while others rarely spend more than a couple of minutes per day to keep their stuff up-to-date).

Also if you’re a genius who uses the method that’s called “It’s all in my head!” in the Linux from Scratch book, I’m not going to argue against it (except that this is eventually going to fail when you have to hand things over to a mere mortal when you’re leaving).

But enough of those really special corner cases. Let's discuss what we actually require our package systems to provide! And let's do so not from the perspective of a hobby admin but from a business-oriented one. There are three things that are essential and covered by just about any package system.

Ease of use

One of the major requirements we have today is that package management needs to be easy to use. Yes, building and installing software from source is usually easy enough on *nix today. However figuring out which configure options to use isn’t. Build one package without some feature and you might notice much later that it’s actually needed after all. Or even find that you compiled something in that’s getting in the way of something else later! Avoiding this means having to do some planning.

Reading (and understanding!) the output of ./configure --help probably isn’t something you’re going to entrust the newly employed junior admin with. Asking that person to just install mysql on the new server will probably be OK, though. Especially since package managers usually handle dependencies, too.

Making use of package management means that somebody else (the package maintainer) has already thought about how the software will be used in most cases. For you this means not having to hire and pay senior admins for work that can be done by a junior in your organization, too.

Fast operations

Time is money and while “compiling!” is a perfectly acceptable excuse for a dev, it shouldn’t be for an admin who is asked why the web server still wasn’t deployed on the new system.

Compiling takes time and uses resources. Even if your staff uses terminal multiplexers (which they should) and can thus compile on various systems at the same time, customers usually want software available when they call – not two hours later (because the admin got confused by the twenty-something tmux sessions and was stuck on one task while many of the other compile jobs had finished ages ago).

Don’t make your customers wait longer than necessary. Most requests can be satisfied with a standard package. No need to delay things where it doesn’t make any sense.

Regular (security) updates

It’s 2018 and you probably want that new browser version that mitigates some of the Spectre vulnerabilities on your staff’s workstations ASAP. And maybe you even have customers that are using Drupal, in which case… Well, you get the point.

While it does make sense to subscribe to security newsletters and keep an eye on new CVEs, it takes a specialist to maintain your own software stack. When you get word of a new CVE for a program you’re using, that doesn’t necessarily mean the way you built the software makes it vulnerable. And perhaps you have a special use-case where it is vulnerable but the vulnerability is not exploitable.

Again this important task is one that others have already done for you if you use packaged software from a popular repository. Of course those people are not perfect either and you may very well decide that you do not trust them. Doing everything yourself because you think you can do better is a perfectly legitimate way of handling things. Chances are however that your company cannot afford a specialist for this task. And in that case you’re definitely better off trusting the package maintainers than carelessly doing things yourself that you don’t have the knowledge for.

Special package management requirements

Some package managers offer special features not found in others. If your organization needs such a feature, this can even mean that a new OS or distribution is chosen for some job because of it. Repositories also vary greatly in the amount of software they offer, in the software versions they hold and in the frequency of updates.

“Stability” vs. “freshness”

A lot of organizations prefer “stable”, well-tested software versions. In many cases I think of “stable” as a marketing word for “really old”. For certain use-cases I agree that it makes sense to choose a system where not much will change within the next decade. But IMO this is far less often the case than some decision makers may think.

The other extreme is rolling-release systems, which generally adopt the newest software versions after minimal testing. And yes, at one point there was even the “Arch server project” (if I remember the name correctly), which was all about running Arch Linux on a server. In fact this is not as bad an idea as it may seem. There are people who really live Arch and they’ll be able to maintain an Arch server for you. But I think this makes the most sense as a box for your developers who want to play with new versions of the software that you’re using way before it hits your actual dev or even prod servers.

Where possible I definitely favor the “deliver current versions” model. Not even due to the security aspect (patches are being backported in case of the “stable” repositories) but because of the newer features. It’s rather annoying if you want to make use of the jumphost ability of OpenSSH (for which a nice new way of doing it was introduced not too long ago) and then notice you can’t use it because there’s that stupid CentOS box with its old SSH involved!

Number of packages

If you need one or a couple of packages that are not available (or too old) in the package repository of your OS or distribution, chances are that external repos exist or that the upstream project provides packages. That may be ok. However if you find that a lot of the software that you require is not available this may very well be a good reason to think about using a different OS or distribution.

A large number of packages in the repository increases the chance that you’ll get what you need. Still, it may well be that certain packages you require (and which are rather costly to maintain yourself) are only available in another repo.

Package auditing

Some package systems allow you to audit the installed packages. If security is very important for your organization, you’ll be happy to have your package tool recommend to “upgrade or deinstall” the installed version of some application because it’s known to be vulnerable.

Flexibility

What if you have special needs on some servers and require support for rarely needed functionality to be compiled into some software? With most package systems you’re out of luck. The best thing that you can do is roll your own customized package using a different name.

The ports tree on *BSD or portage on Gentoo Linux really show their power in this case, allowing you to just build the software easily and with the options that you choose.

Heterogeneous environments

So most of the time it makes perfect sense to stick to the standard repository for your OS or distribution. If you have special needs you’d probably consider another one and use the standard repo for that one. But what about heterogeneous environments?

Perhaps your database product only runs on, say, CentOS. You don’t have much choice here. However, a lot of customers want their stuff hosted on Linux but demand newer program versions. So a colleague installed several Ubuntu boxes. And another colleague, a really strange guy, slipped in some FreeBSD storage servers! When the others found out that this was not even Linux and started protesting (because “BSD is dying”), those servers were already running too damn well to be replaced with something that does not have equally good ZFS support.

A scenario like that is not too uncommon. If you don’t do anything about it, this might lead to “camps” among the employees; some of them are sure that CentOS is so truly enterprise that it’s the way to go. And of course yum is better than apt-get (and whatever that BSD thing offers – if anything). Some others laugh at that because Ubuntu is clearly superior and using apt-get feels a lot more natural than having to use yum (which is still better than that BSD thing which they refuse to even touch). And then there’s the BSD guy who is happy to have a real OS at hand rather than “kernel + distro-chosen packages”.

In general, if you are working for a small organization, every admin will have to be able to work with each system that is being used. Proper training for all package systems is probably expensive, and thus managers will quite possibly be reluctant to accept more than two package systems.

Portability

There’s a little-known (in the Linux community) solution to this: pkgsrc (“package source”). It’s NetBSD’s package management system. But since probably the most important goal of the NetBSD project is portability, it’s portable, too!

Pkgsrc is available for many different platforms. It runs on NetBSD, of course. But it runs on Linux as well as on the other BSDs and on Solaris. It’s even available for commercial UNIX platforms and various exotic platforms.

Due to this very nature, pkgsrc may be one answer to your packaging needs in heterogeneous environments. It can provide a unified means of package management across multiple platforms. It rids you of the version-jungle headache that comes with using different repositories for different platforms. And it’s free and open source, too!

Is it the only solution out there? No. Is it the best one? That certainly depends on what you are looking for specifically. But it’s definitely something that you should be aware of.

What’s next?

The next post will be about a relatively new alternative to traditional package management systems that tries to deliver all the strong points in one system while avoiding their weaknesses!

The history of *nix package management

Very few people will argue against the statement that Unix-like operating systems conquered the (professional) world thanks to a whole lot of strong points – one of which is package management. Whenever you take a look at another *nix OS or even just another Linux distro, one of the first things to do (if not the first!) is to get familiar with how package management works there. You want to be able to install and uninstall programs, after all, right?

If you’re looking for another article on using jails on a custom-built OPNsense BSD router, please bear with me. We’re getting there. To make our jails useful we will use packages. And while you can safely expect any BSD or Linux user to understand that topic pretty well, products like OPNsense are also popular with people who are Windows users. So while this is not exactly a follow-up article on the BSD router series, I’m working towards it. Should you not care for how that package management stuff all came to be, just skip this post.

When there’s no package manager

There’s this myth that Slackware Linux has no package manager, which is not true. However Slackware’s package management lacks automatic dependency resolving. That’s a very different thing but probably the reason for the confusion. But what is package management and what is dependency resolving? We’ll get to that in a minute.

To be honest, it’s not very likely today to encounter a *nix system that doesn’t provide some form of package manager. If you have such a system at hand, you’re quite probably doing Linux from Scratch (a “distribution” meant to learn the nuts and bolts of Linux systems by building everything yourself) or have manually installed a Linux system and deliberately left out the package manager. Both are special cases. Well, or you have a fresh install of FreeBSD. But we’ll talk about FreeBSD’s modern package manager in detail in the next post.

Even Microsoft has included Pkgmgr.exe since Windows Vista. While it goes by the name of “package manager”, it pales in comparison to *nix package managers. It is a command-line tool that allows you to install and uninstall packages, yes. But those are limited to operating system fixes and components from Microsoft. Nice try, but what Redmond offered in late 2006 is vastly inferior to what the *nix world had more than 10 years earlier.

There’s the somewhat popular Chocolatey package manager for Windows, and Microsoft said that they’d finally include a package manager called “one-get” (apt-get anyone?) with Windows 10 (or was it “nu-get” or something?). I haven’t read a lot about it on major tech sites, though, and thus have no idea if people are actually using it and whether it’s worth trying out (I would, but I disagree with Microsoft’s EULA and thus haven’t had a Windows PC in roughly 10 years).

But how on earth are you expected to work with a *nix system when you cannot install any packages?

Before package managers: Make magic

Unix began its life as an OS by programmers for programmers. Want to use a program on your box that is not part of your OS? Go get the source, compile and link it, and then copy the executable to /usr/local/whatever. In times when you would have just some 100 MB of storage in total (or even less), this probably worked well enough. You simply couldn’t go on a rampage and install unneeded software anyway, and by sticking to the /usr/local scheme you separated optional stuff from the actual operating system.

More space became available, however, and software grew bigger and more complex. Unix got the ability to use libraries (“shared objects”), ELF executables, etc. To solve the task of building more complicated software easily, make was developed: a tool that reads a Makefile which tells it exactly what to do. Software began shipping not just with the source code but also with Makefiles. Provided that all dependencies existed on the system, it was quite simple to build software again.

Compilation process (invoked by make)

Makefiles also provide a facility called “targets”, which makes a single file support multiple actions. In addition to a simple make invocation that builds the program, it became common to add a target allowing make install to copy the program files into their assumed place in the filesystem. An update meant building a newer version and simply overwriting the files in place.

Make can do a lot more, though. Faster recompiles by looking at the generated file’s timestamp (and only rebuilding what has changed and needs to be rebuilt) and other features like this are not of particular interest for our topic. But they certainly helped with the quick adoption of make by most programmers. So the outcome for us is that we use Makefiles instead of compile scripts.
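To sketch how these pieces fit together, here’s a minimal, entirely hypothetical Makefile for a made-up program called foo (names and paths are invented for illustration; note that make requires recipe lines to be indented with tabs):

```make
# Hypothetical Makefile for a made-up program "foo"
PREFIX ?= /usr/local

# "make" rebuilds foo only if foo.c is newer than the existing binary
foo: foo.c
	cc -o foo foo.c

# "make install" copies the build result into place
install: foo
	install -m 755 foo $(PREFIX)/bin/foo

# "make uninstall" removes it again (not all projects provide this)
uninstall:
	rm -f $(PREFIX)/bin/foo

.PHONY: install uninstall
```

Typing make builds the program, make install puts it into $(PREFIX)/bin, and the timestamp comparison between foo and foo.c is what gives you the fast recompiles mentioned above.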

Dependency and portability trouble

Being able to rely on make to build (and install) software is much better than always having to invoke compiler, linker, etc. by hand. But that didn’t mean that you could just type “make” on your system and expect it to work! You had to read the readme file first (which is still a good idea, BTW) to find out which dependencies you had to install beforehand. If those were not available, the compilation process would fail. And there was more trouble: Different implementations of core functionality in various operating systems made it next to impossible for the programmers to make their software work on multiple Unices. Introduction of the POSIX standard helped quite a bit but still operating systems had differences to take into account.

Configure script running

Two of the answers to the dependency and portability problems were autoconf and metaconf (the latter is still used for building Perl where it originated). Autoconf is a tool used to generate configure scripts. Such a script is run first after extracting the source tarball to inspect your operating system. It will check if all the needed dependencies are present and if core OS functionality meets the expectations of the software that is going to be built. This is a very complex matter – but thanks to the people who invested that tremendous effort in building those tools, actually building fairly portable software became much, much easier!
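As a rough illustration (not taken from any real project), the following shell snippet mimics the kind of probing a generated configure script performs before a build starts:

```shell
#!/bin/sh
# Toy version of what a configure script does: probe the system for
# required tools before the build starts. Purely illustrative.

check_cmd() {
    printf 'checking for %s... ' "$1"
    if command -v "$1" >/dev/null 2>&1; then
        echo yes
    else
        echo no
        echo "error: required tool '$1' not found" >&2
        exit 1
    fi
}

check_cmd tar
check_cmd sed
```

Real configure scripts go much further: they compile tiny test programs to detect headers, library functions and OS quirks, then write the results into config files that the build uses.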

How to get rid of software?

Back to make. So we’re now in the pleasant situation that it’s quite easy to build software (at least when you compare it to the dark days of the past). But what would you do if you want to get rid of some program that you installed previously? Your best bet might be to look closely at what make install did and remove all the files that it installed. For simple programs this is probably not that bad but for bigger software it becomes quite a pain.

However, some programs also came with an uninstall target for make, which would delete all installed files again. That’s quite nice, but there’s a problem: after building and installing a program you would probably delete the source code. Having to unpack the sources again just to uninstall the software is quite some effort. Especially since you need the source for exactly the same version, as newer versions might install more or different files, too!

This is the point where package management comes to the rescue.

Simple package management

So how does package management work? Well, let’s look at packages first. Imagine you just built version 1.0.2 of the program foo. You probably ran ./configure and then make. The compilation process succeeded and you could now issue make install to install the program on your system. The package building process is somewhat similar – the biggest difference is that the install destination is changed! Thanks to this modification, make wouldn’t put the executable into /usr/local/bin, the manpages into /usr/local/man, etc. Instead make would put the binaries into e.g. /usr/obj/foo-1.0.2/usr/local/bin and the manpages into /usr/obj/foo-1.0.2/usr/local/man.

Installing tmux with installpkg (on Slackware)

Since this location is not in the system’s PATH, it’s not of much use on this machine. But we wanted to create a package and not just install the software, right? As a next step, the contents of /usr/obj/foo-1.0.2/ could be packaged up nicely into a tarball. Now if you distribute that tarball to other systems running the same OS version, you can simply untar the contents to / and achieve the same result as running make install after an unmodified build. The benefit is obvious: You don’t have to compile the program on each and every machine!
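The staged-install-plus-tarball idea can be sketched in a few lines of shell. The program, version number and staging paths below are all made up; a fake file stands in for the output of an actual build:

```shell
#!/bin/sh
# Sketch of primitive package creation: "install" into a staging
# directory instead of /, then tar up the result. All paths and the
# program itself are invented for illustration.
set -e

STAGE=/tmp/obj/foo-1.0.2        # plays the role of /usr/obj/foo-1.0.2

# Pretend "make install" with a changed destination put these files there:
mkdir -p "$STAGE/usr/local/bin" "$STAGE/usr/local/man/man1"
printf '#!/bin/sh\necho foo 1.0.2\n' > "$STAGE/usr/local/bin/foo"
chmod 755 "$STAGE/usr/local/bin/foo"
echo ".TH FOO 1" > "$STAGE/usr/local/man/man1/foo.1"

# Package up the staged tree -- this tarball is the "package":
tar -C "$STAGE" -czf /tmp/foo-1.0.2.tgz .

# On another machine you would extract it to / (here: a fake root):
FAKEROOT=/tmp/otherhost
mkdir -p "$FAKEROOT"
tar -C "$FAKEROOT" -xzf /tmp/foo-1.0.2.tgz
ls "$FAKEROOT/usr/local/bin/foo"
```

Extracting the tarball on the target system reproduces exactly the file layout that make install would have created – without compiling anything there.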

So much for primitive package usage. Advancing to actual package management, you would include a list of files and some metadata in the tarball. Then you wouldn’t extract packages by hand but leave that to the package manager. Why? Because it will not only extract all the needed files; it will also record the installation in its package database and keep the file list around in case it’s needed again.

Uninstalling tmux and extracting the package to look inside

Installing using a package manager means that you can query it for a list of installed packages on a system. This is much more convenient than ls /usr/local, especially if you want to know which version of some package is installed! And since the package manager keeps the list of files installed by a package around, it can also take care of a clean uninstall without leaving you wondering if you missed something when you deleted stuff manually. Oh, and it will be able to lend you a hand in upgrading software, too!

That’s about what Slackware’s package management does: It enables you to install, uninstall and update packages. Period.
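A toy version of such a package database takes only a handful of shell commands. This is not how any real tool is implemented – package names, paths and the “database” layout are all invented – but it shows the install/query/uninstall cycle described above:

```shell
#!/bin/sh
# Toy package "manager": extract a package, record its file list in a
# database, and use that list later for a clean uninstall.
set -e

DB=/tmp/pkgdb ; ROOT=/tmp/root
mkdir -p "$DB" "$ROOT"

# Build a throwaway package first (staged tree + tarball):
STAGE=/tmp/stage-bar-2.0
mkdir -p "$STAGE/usr/local/bin"
echo '#!/bin/sh' > "$STAGE/usr/local/bin/bar"
tar -C "$STAGE" -cf /tmp/bar-2.0.tar .

# "Install": extract and record the file list under the package name
tar -C "$ROOT" -xf /tmp/bar-2.0.tar
tar -tf /tmp/bar-2.0.tar | grep -v '/$' > "$DB/bar-2.0"

# "Query": list installed packages
ls "$DB"

# "Uninstall": remove exactly the recorded files, then the DB entry
while read -r f; do rm -f "$ROOT/$f"; done < "$DB/bar-2.0"
rm "$DB/bar-2.0"
```

Because the file list is kept in the database, the uninstall step removes precisely what was installed – no guessing, no leftovers.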

Dependency tracking

But what about programs that require dependencies to run? If you install them from a package, you never ran configure and thus might not have the dependencies installed, right? Right. In that case the program won’t run. As simple as that. This is the time to ldd the program executable to get a list of all libraries it is dynamically linked against. Note which ones are missing on your system, find out which other packages provide them and install those, too.
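On a Linux system this kind of detective work looks something like the following (/bin/ls is just a convenient example; the output differs between systems):

```shell
# Show the shared libraries a binary is linked against; any line
# reading "not found" points at a missing dependency you still need
# to track down and install:
ldd /bin/ls
```

Each “not found” line then sends you off hunting for the package that provides that library.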

Pacman (Arch Linux) handles dependencies automatically

If you know your way around, this works OK. If not… Well, while there are a lot of libraries where you can guess from the name which package they likely belong to, there are others, too. Happy hunting! Frustrated already? Keep telling yourself that you’re learning fast, the hard way. This might ease the pain. Or go and use a package management system that provides dependency handling!

Here’s an example: You want to install BASH on a *nix system that just provides the old bourne shell (/bin/sh). The package manager will look at the packaging information and see: BASH requires readline to be installed. Then the package manager will look at the package information for that package and find out: Readline requires ncurses to be present. Finally it will look at the ncurses package and nod: No further dependencies. It will then offer to install ncurses, readline and BASH for you. Much easier, eh?
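The recursive walk just described can be sketched in shell. The “package database” here is a hard-coded toy table mirroring the BASH example above – a real package manager reads this metadata from the packages themselves:

```shell
#!/bin/sh
# Toy dependency resolver with a hard-coded dependency table
# (bash -> readline -> ncurses). Purely illustrative.

deps_of() {
    case "$1" in
        bash)     echo readline ;;
        readline) echo ncurses ;;
        ncurses)  ;;            # no further dependencies
    esac
}

# Walk dependencies depth-first so they come out in install order:
resolve() {
    for d in $(deps_of "$1"); do
        resolve "$d"
    done
    echo "$1"
}

resolve bash    # -> ncurses, readline, bash (in install order)
```

A real resolver additionally has to deduplicate packages that several programs depend on and detect circular dependencies – but the depth-first idea is the same.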

Xterm and all dependencies downloaded and installed (Arch Linux)

First package managers

A lot of people claim that the RedHat Package Manager (RPM) and Debian’s dpkg are examples of the earliest package managers. While both of them are so old that using them directly is in fact inconvenient enough to justify the existence of other programs that allow using them indirectly (yum/dnf and e.g. apt-get), this is not true.

PMS (short for “package management system”) is generally regarded to be the first (albeit primitive) package manager. Version 1.0 was ready in mid-1994 and used on the Bogus Linux distribution. With a few intermediate steps this led to the first incarnation of RPM, Red Hat’s well-known package manager, which first shipped with Red Hat Linux 2.0 in late 1995.

FreeBSD 1.0 (released in late 1993) already came with what is called the ports tree: A very convenient package building framework using make. It included version 0.5 of pkg_install, the pkg_* tools that would later become part of the OS! I’ll cover the ports tree in some detail in a later article because it’s still used to build packages on FreeBSD today.

Part of a Makefile (actually for a FreeBSD port)

Version 2.0-RELEASE (late 1994) shipped the pkg_* tools: pkg_add to install a package, pkg_info to show installed packages, pkg_delete to delete packages and pkg_create to create them.

FreeBSD’s pkg_add got support for using remote repositories in version 3.1-RELEASE (early 1999). But those tools were really showing their age when they were put to rest with 10.0-RELEASE (early 2014). A replacement had been developed in the form of the much more modern solution initially called pkg-ng, or simply pkg. Again, that will be covered in another post (the next one, actually).

With the ports tree, FreeBSD undoubtedly had the most sophisticated package building framework of its time. It’s still one of the most flexible ones and a joy to work with compared to creating DEB or RPM packages… And since Bogus’s PMS was started at least a month after pkg_install, it’s entirely possible that the first working package management tool was in fact another FreeBSD innovation.