Should you abandon Linux and switch to *BSD?

The popularity of Linux has skyrocketed these days: More and more companies adopt it, and even the just-above-average user who doesn't accept the imposition called "Windows 10" is often open to trying it out. At the same time, however, the popularity of said system seems to be fading among some of the more technical people, operating system enthusiasts and even followers of the FLOSS ideology.

Just recently an article called Why you should migrate everything from Linux to BSD was published, and it has caught some attention and even drawn replies like this one (which itself makes false claims, though, like ZFS not being available on NetBSD).

There have always been posts like this, so that in itself is nothing special. What I think is new, however, is the frequency with which you can find such discussions. The general tone has changed, too. Just a couple of years ago most Linux users wouldn't even have bothered to comment on such a thing. Today they seem much more open to learning about alternatives or are in fact looking for something better than what they have. So what's wrong with Linux?

It’s not about Systemd (alone)…

There are a lot of perfectly valid technical reasons not to want Systemd on your Linux system. But none of them could ever excuse the sheer hatred that this project has attracted. There is a pretty simple explanation for that phenomenon, though: It's the way somebody acts all high and mighty, simply dismissing valid criticism – a prime example of an arrogant attitude.

We have a lot of… let's say… difficult persons in tech. Sometimes you think they have no manners, are being jerks or greatly overestimate their knowledge in certain fields. That's OK – in fact they are often brilliant in some particular area. Most of us have learned to live with that.

And then there are people who think they can dictate the one way to go. Well, there are project leaders who actually can do that, because they are widely respected. But sometimes it's different. When somebody wants to take something away from you (or you merely get that impression), you are likely to go on the defensive.

When all of that unites in one person, you have the perfect boogeyman. Then the technical aspects carry less weight and feeling takes over. Which is not to say that feeling is unimportant: If you no longer feel comfortable with Linux, it might be time to move on.

The GNOME factor

The GNOME desktop is well known among *nix desktop users. It suits the needs of some people but not others. That would be fine, and there would be no problem at all. However, GNOME has a certain reputation which is not exactly nice… Why? Not because the developers didn't accept some feature requests. Not even because they tend to ignore systems that are less mainstream. No. They are notorious for cutting features that already existed! This is what makes a lot of people mad.

I was a GNOME user myself and remember pretty well how much I liked the file manager. To my dismay they removed so many features I needed that the application became useless to me. I didn't want to go looking for something else, but eventually I was forced to as the situation became unbearable. Being more of a calm guy, I didn't go off and insult anybody – but other people did. And things got worse…

Today I’m a former GNOME user. This is *nix and not Windows. Nobody can force tiles upon us against our will. Yes, some projects think they know better than their users and that leads to the latter becoming upset. But as long as there are alternatives, we can move on.

Clumsy leaders

Recently Linus Torvalds spoke out against ZFS. Being the Linux "inventor", he has earned a lot of respect in the *nix community. I also used to hold him in high esteem despite his often ignoble behavior. Over the course of the last few years, however, I've lost a lot of that respect.

The ZFS statement was the final nail in the coffin for me. He says that he always found ZFS to be more of a "buzzword" than anything else and that "benchmarks" didn't make it look that good anyway. This is so far off the mark that I'm ashamed to have ever considered him a technical genius. He obviously has no idea what problem ZFS actually solves! Speed benchmarks are all nice and well, but they are not why people want ZFS! And of course it's far from being a buzzword – if you have valuable data today, you almost certainly want to bet on ZFS.

But not only is he fine with judging something he hasn't even bothered to examine from closer than several hundred meters – he also makes completely unfounded claims that make him look ridiculous. According to Mr. Torvalds, ZFS had "no real maintenance behind it either any more".

Ouch! He hasn’t even heard about OpenZFS, I’d guess. If you’re not in a closed-Solaris environment, this is what people are referring to when they say “ZFS”. Nobody outside the small, isolated isle of Oracle has any interest in ClosedZFS. Yes, Oracle laid off most of their Solaris staff and nobody knows if there is any noteworthy future for that OS. But not too long Solaris 11.4 was released – so even if Linus referred to the situation at Oracle, he’s not exactly right. In the case of OpenZFS, however, he could not be more wrong.

OpenZFS is as alive as it could be. There are regular leadership meetings, many new features are being developed – and just recently a common repository for both Linux and FreeBSD was created, with other operating systems expected to join in! This is a moment of glory for collaboration in Open Source, but Linus didn't hear a thing – or did he not want to? The fact that the second-in-command, GKH, attacked ZFS about a year ago in a pretty questionable way, too, does not bode well.

Why do people leave Linux?

There have always been compelling technical reasons to choose *BSD over Linux (e.g. the complete operating system approach, a much more consistent design, etc.). But I'd say that lately the feeling part of it has become much more important.

I left Linux because I was sick and tired of the stupid fights between hardcore fans of one distro or another and of the unbearable arrogance of many. Yes, I also had the feeling that Linux was heading down the wrong path. I simply was no longer really happy with it and was ready to try something new. There was a learning curve for sure, but the FreeBSD community is extremely friendly, and while people there of course get into disputes too, I got the feeling that I'd describe as "BSD is for grown-ups". That's not to say there aren't any really bright people in the Linux community, but on average I feel that BSD users are more technical.

Others have stated similar reasons. The primary developer of ClonOS (which strives to be for FreeBSD what Proxmox is for Linux) wrote this:

According to the authors of the project, Linux no longer belongs to the common people; it is fully controlled by big commercial organizations, while FreeBSD is developed mostly by enthusiasts. Today Linux is a commercial machine for making money – just as Microsoft Windows was in the '90s. And many Linux users once struggled against the Windows monopoly (the CBSD author being one of them).

Yes, FreeBSD is very far behind Linux in this respect. Just look at the abundance of powerful solutions such as OpenVZ, Docker, Rancher, Kubernetes, LXD, Ceph, GlusterFS, OpenNebula, OpenStack, Proxmox, ISPPanel and a dozen others. All of this was created by commercial companies for Linux, and done very well. However, Linux is oversaturated with similar solutions. Therefore it's much more interesting to create this on FreeBSD, where nothing like that exists yet. This is an excellent challenge to improve and fix things in FreeBSD.

We all love independence and freedom, and FreeBSD today is an independent and free operating system in the hands of ordinary people.

They are not alone. Even convinced followers of the FSF ideas have come to the conclusion that Linux may not be the right platform for them anymore. The people behind Hyperbola GNU/Linux have announced this:

Due to the Linux kernel rapidly proceeding down an unstable path, we are planning on implementing a completely new OS derived from several BSD implementations. This was not an easy decision to make, but we wish to use our time and resources to create a viable alternative to the current operating system trends which are actively seeking to undermine user choice and freedom.

This will not be a “distro”, but a hard fork of the OpenBSD kernel and userspace including new code written under GPLv3 and LGPLv3 to replace GPL-incompatible parts and non-free ones.

Reasons for this include:

Linux kernel forcing adaption of DRM, including HDCP.
Linux kernel proposed usage of Rust (which contains freedom flaws and a centralized code repository that is more prone to cyber attack and generally requires internet access to use.)
Linux kernel being written without security in mind. (KSPP is basically a dead project and Grsec is no longer free software)
Many GNU userspace and core utils are forcing adoption of features without build-time options to disable them, e.g. PulseAudio / SystemD / Rust / Java as forced dependencies.

As such, we will continue to support the Milky Way branch until 2022 when our legacy Linux-libre kernel reaches End of Life.

Will *BSD be a better OS for you?

So the big question is: If you are a Linux user, should you make the switch, too? I won’t unconditionally say yes. It really depends.

Are you happy with the overall situation in Linux? In that case there's no need to migrate anything. However, you might still want to give a BSD of your choice a try. Perhaps you'll find something that you like even better? If you spend a bit of time exploring a BSD, you will find that several problems can be solved in ways other than those you are familiar with. And that will likely make you a better Linux admin, even if you decide to stick with Linux. Or maybe you'll want to use the best tool for the job – which could sometimes be Linux and sometimes a BSD. Getting to know a somewhat similar but at times also quite different *nix system will enable you to make an informed choice.

Not happy with Linux anymore these days? Try out a BSD. If you need help deciding which one might be for you, I've written an article about that topic, too. Do a bit of reading, then install that BSD in a VM and explore. If you go with FreeBSD, make sure you take a look at the handbook (probably also available in your language), as it is a great source of information and one thing that sets FreeBSD apart from almost all Linux distros.

If you like what you found, make a list of your requirements and find out whether your BSD would indeed fulfill your needs. If it doesn't, consider alternatives. Once the path is clear, I recommend taking a look at the community, too. For example, there's the weekly BSDNow! podcast, which is very informative. A lot of people have already written in, confessing that they are still Linux-only users but that the show's topics got them hooked anyway.

Do not rush things. Did you start with Linux, or did you migrate from e.g. Windows? If you came from a different OS, remember that there were frustrating moments when you were new to Linux and had certain misconceptions. You will be going through that again – but looking at the final outcome, it will likely be a pretty rewarding journey.

Also don’t be shy and ask others if you don’t have the time or will to figure out everything yourself. The BSD people are usually pretty approachable and helpful. Feel free to ask me questions here, I might be able to give some answers.

It has been a couple of years now since I replaced the last machine that ran Linux at home. Would I choose to make the switch after all the experience that I gained since then? Oh yes! Anytime.

A glimpse into 2020

When you read this, the old year will be over (well, depending on the time zone you live in). If we’re lucky, this might be the year to get our hands on the first affordable RISC-V hardware that can actually run a Unix-like operating system. It should definitely be the year to get interesting devices like the ARM64-based PinePhone. And it also means that Python 2 is finally dead.

Speaking of that: For me, 2019 was a pretty busy year. On this blog I wrote about quite a few different topics, among them my first attempt at writing something programming-related as I tried to teach myself a little Python. If I had to name an overall theme, I'd say that the past year was the year of hardware architectures. I didn't plan it that way, but that's what happened. I don't actually want to look back in this post, though – on the contrary! But speaking of dead things kind of fits into the next topic (more than I'd like…)!

FreeBSD on SPARC64

The next post I plan to write will be about FreeBSD on the SPARC64 architecture. What I did not know when I decided on that topic is that SPARC64 is more or less a doomed architecture when it comes to FreeBSD. It is in grave danger – people expect support for it to be dropped before FreeBSD 13.0 is released!

The reason is that it is one of the architectures that still require the old GCC 4.2 toolchain (yes, from 2007!) – and that old cruft finally has to go. And while everybody agrees that this is a completely sensible thing to do, SPARC64 doesn't seem to have enough friends among the FreeBSD developers to make the transition to something newer. A few people are trying to get something done (I'm also tinkering and trying to help), but it's far from a safe bet that it'll succeed.

IMHO it would be a real shame to see FreeBSD on SPARC64 die. If it does survive, I'll definitely try to help with QA. Can you help? If so: Please do! I'll give more details on the current status and where the problems are in my January post.

FreeBSD on ARM64

I plan on writing another post about the current status of FreeBSD on ARM64. The topic of making it a Tier 1 architecture has recently been brought up again, and I'd like to join that discussion sooner rather than later. If it weren't for the very unpleasant situation with SPARC64, this would actually be my next post.

Progress has been made on the networking issues on the Cavium ThunderX servers, and I'll also take another look at the PineBook. Most likely I'll also buy a PinePhone and/or one of their tablets. If I do, you will find a review here.

Orchestration and Configuration Management with SaltStack

I've wanted to write about this topic for a while, but now I've at last started to set aside the hardware that I need for the project. Several years ago (gosh…) I wrote a little comparison of Puppet, Chef, Salt and Ansible. After a general introduction I might update and publish that as well.

But that will only be the start of a series of posts introducing Salt. There will be a slight focus on FreeBSD, but in general it will show off how to work with various operating systems and distributions. We’ll start with Salt-SSH using Remote Execution Modules, talk about targeting and get to know grains. Then we’ll progress to the state system, pillar data and so on before switching to the master-minion model.

I’m looking forward to this one. If anybody has any ideas – just tell me, I’m open to suggestions on what to cover.

HardenedBSD

While I ran this derivative of FreeBSD for a couple of weeks on spare hardware (until I needed that machine for something else), I haven't found the time to write about my experience with it yet. I liked it, though, and plan on revisiting the OS. When I do, you'll read about it here for sure.

The project is currently being restructured, so I'll wait a while longer with this topic. It might happen in the second half of the year (time flies by much too fast anyway).

E-Mail

For years now I’ve been putting this one off. In fact I’ve started digging a bit into the topic twice. Two times I got distracted with other topics. Maybe third time is the charm?

To help make this more likely, I registered a domain (with a pretty bad pun in German) in December. Maybe now that I'm paying money, I'll actually live up to the plan to "do moar with mail". I'll be using BSD technology where possible, so expect the mailing stuff to involve OpenSMTPd.

illumos

This one is really a matter of time. I'm still very much interested in the heritage of OpenSolaris and would like to do more things with it. However, I have no idea when I'll find the time for a dedicated illumos project. But there will definitely be some illumos involved in one of the other topics – guess which one! 😉

Ravenports

The Ravenports project is still as fascinating to me as it was when I discovered it. I really wish I could dedicate more time to porting and helping to bring things forward (there are still quite a lot of ports missing that I'd like to have and use).

While things are going well in general and ports are being updated really, really fast most of the time, big changes are rare right now. But big changes are what it makes sense to write about. And while there are some noteworthy things I can think of, I'm still waiting for something else to land. Once that happens, I will dedicate another post to Raven.

Linux vs. FreeBSD

I recently needed to set up a new Linux machine for a customer. Usually my co-workers do that, since I volunteered to take care of our BSD machines. That installation left me totally puzzled: Has the Debian installer become worse – or does my memory fail me and it has always been this bad (and I just didn't notice back when I was into Linux only)?

Since then I've thought about a few things. The conclusion is that I really love FreeBSD. It's not perfect (well, nothing is), but there are so many areas where it's much, much more comfortable to work with (can you say iptables or mdadm? Yuck!). And there is a lot more beauty and even technical genius to find if you take a closer look and compare things.

Yes, Linux is much more advanced in many areas. But that's not much of a surprise given how much more manpower goes into that system. It is a little miracle, though, how the BSDs with their much smaller workforce continue to deliver excellent operating systems on par with or even superior to Linux when it comes to sanity of use. Thank you, *BSD!

Happy new year!

So that’s what I have on my mind right now (I’m not out of ideas, but these are the topics that are on the top of my “would like to write about” list currently). Which of these topics will I be able to deliver and which will I miss? Time will tell. Feel free to comment and tell me what interests you the most.

Happy new year to all of my readers!

Writing a daemon using FreeBSD and Python pt.3

Part 1 of this series covered Python fundamentals, signal handling and logging. We wrote an init script as well as a program that can be daemonized by daemon(8).

In the previous part we modified the program as well as the init script so that it can daemonize itself using the Python daemon module. I also covered a few topics that people totally new to programming (or Python) might want to know to better understand what’s happening.

Part 3 is about exploring a simple means of IPC (inter-process communication) by using named pipes.

Creating a named pipe

What is a named pipe – also known as a FIFO (first in, first out)? It is a way of connecting two processes together, where one can sequentially send data and the other receives it in exactly the same order. It's basically what we Unix lovers use on our command lines all the time when we pipe the output of one program into another. E.g.:

ls | wc -l

In this case the output of ls is piped to wc, which will then print the number of lines to stdout (which could in turn be used as input for another program with another pipe). This kind of pipe between two programs is usually short-lived: When the first program is done sending output and the second one has received all the data, the pipe goes away with the two processes. It also only exists between the two processes involved.

A named pipe, in contrast, is a bit more permanent and more flexible. It has a representation in the filesystem (which is why it's a named pipe). One program creates a named pipe (usually in /var/run) and attaches to the receiving end. Another process can then attach to the sending end and start putting data into it, which will then be received by the former. Named pipes have their own file type character (p), which shows up like this when you ls -l:

prw-rw-r--
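If you want to try this out before we wire it into the daemon, here's a tiny, self-contained experiment (the path /tmp/demo.pipe is just an arbitrary example):

import os

os.mkfifo("/tmp/demo.pipe")   # create the named pipe in the filesystem
# While this script waits, try in other terminals:
#   ls -l /tmp/demo.pipe       <- note the leading 'p' in the file type
#   cat /tmp/demo.pipe         <- attach a reader...
#   echo hi > /tmp/demo.pipe   <- ...and a writer; "hi" appears at the reader
input("Pipe created, experiment away - then press Enter to clean up...")
os.unlink("/tmp/demo.pipe")   # remove the pipe again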

Here’s what the next version of the code looks like:

#!/usr/local/bin/python3.6
 
 # Imports #
import daemon, daemon.pidfile, logging, os, signal, time
 
 # Globals #
IN_PIPE = '/var/run/bd_in.pipe'
 
 # Functions #
def handler_sigterm(signum, frame):
    logging.debug("Caught SIGTERM! Cleaning up...")
    if os.path.exists(IN_PIPE):
        try:
            os.unlink(IN_PIPE)
        except:
            raise
    logging.info("All done, terminating now.")
    exit(0)

def start_logging():
    try:
        logging.basicConfig(filename='/var/log/bdaemon.log', format='%(levelname)s:%(message)s', level=logging.DEBUG)
    except:
        print("Error: Could not create log file! Exiting...")
        exit(1)

def assert_no_pipe_exists():
    if os.path.exists(IN_PIPE):
        logging.critical("Cannot start: Pipe file \"" + IN_PIPE + "\" already exists!")
        exit(1)

def make_pipe():
    try:
        os.mkfifo(IN_PIPE)
    except:
        logging.critical("Cannot start: Creating pipe file \"" + IN_PIPE + "\" failed!")
        exit(1)
    logging.debug("Created pipe \"" + IN_PIPE)

 # Main #
with daemon.DaemonContext(pidfile=daemon.pidfile.TimeoutPIDLockFile("/var/run/bdaemon.pid"), umask=0o002):
    signal.signal(signal.SIGTERM, handler_sigterm)
    start_logging()
    assert_no_pipe_exists()
    make_pipe()

    logging.info("Baby Daemon started up and ready!")
    while True:
        time.sleep(1)

We’re using a new import here: os. It gives the programmer access to various OS-dependent functions (like pipes which are not existent on Windows for example). I’ve also added a global definition for the location of the named pipe.

The next thing that you’ll notice is that the signal handler function got some new code. Before the daemon terminates it tries to clean up. If the named pipe exists the program will attempt to delete it. I’m not handling what could possibly go wrong here as this is just an example. That’s why in this case I just re-raise the exception and let the program error out.

Then we have a new start_logging() function that I moved the logging setup into to unclutter main. Apart from that changed structure, there's really nothing new here.

The next new function, assert_no_pipe_exists(), should be fairly easy to read: It checks whether a file by the name we want to use is already present in the filesystem (be it as a leftover from an unclean exit or by chance from some other program). If one is found, the daemon aborts because it cannot reasonably continue. If the filename is not taken, however, make_pipe() will attempt to create the named pipe.

The other thing I did was move the main part back from being a function directly into the program. And since we're doing small incremental steps, that's it for today's step 1: Fire up the daemon using the init script and you should see that the named pipe gets created in /var/run. Stop the process and the pipe should be gone.

Using the named pipe

Creating and removing the named pipe is a good first step, but now let’s use it! To do so we must first modify the daemon to attach to the receiving end of the pipe:

#!/usr/local/bin/python3.6
 
 # Imports #
import daemon, daemon.pidfile, errno, logging, os, signal, time
 
 # Globals #
IN_PIPE = '/var/run/bd_in.pipe'
 
 # Functions #
def handler_sigterm(signum, frame):
    try:
        os.close(inpipe)
    except:
        pass

    logging.debug("Caught SIGTERM! Cleaning up...")
    if os.path.exists(IN_PIPE):
        try:
            os.unlink(IN_PIPE)
        except:
            raise
    logging.info("All done, terminating now.")
    exit(0)

def start_logging():
    try:
        logging.basicConfig(filename='/var/log/bdaemon.log', format='%(levelname)s:%(message)s', level=logging.DEBUG)
    except:
        print("Error: Could not create log file! Exiting...")
        exit(1)

def assert_no_pipe_exists():
    if os.path.exists(IN_PIPE):
        logging.critical("Cannot start: Pipe file \"" + IN_PIPE + "\" already exists!")
        exit(1)

def make_pipe():
    try:
        os.mkfifo(IN_PIPE)
    except:
        logging.critical("Cannot start: Creating pipe file \"" + IN_PIPE + "\" failed!")
        exit(1)
    logging.debug("Created pipe \"" + IN_PIPE)

def read_from_pipe():
    try:
        buffer = os.read(inpipe, 255)
    except OSError as err:
        if err.errno == errno.EAGAIN or err.errno == errno.EWOULDBLOCK:
            buffer = None
        else:
            raise
 
    if buffer is None or len(buffer) == 0:
        logging.debug("Inpipe not ready.")
    else:
        logging.debug("Got data from the pipe: " + buffer.decode())
    
 # Main #
with daemon.DaemonContext(pidfile=daemon.pidfile.TimeoutPIDLockFile("/var/run/bdaemon.pid"), umask=0o002):
    signal.signal(signal.SIGTERM, handler_sigterm)
    start_logging()
    assert_no_pipe_exists()
    make_pipe()
    inpipe = os.open(IN_PIPE, os.O_RDONLY | os.O_NONBLOCK)
    logging.info("Baby Daemon started up and ready!")

    while True:
        time.sleep(5)
        read_from_pipe()

Apart from one more import, errno, we have three important changes here: First, the cleanup has been extended; second, there is a new function called read_from_pipe(); and third, main has been modified as well. We'll take a look at the latter first.

There’s a ton of examples on named pipes on the net, but they usually use just one program that forks off a child process and then communicates over the pipe with that. That’s pretty simple to do and works nicely by just copying and pasting the example code in a file. But adapting it for our little daemon does not work: The daemon just seems to “hang” after first trying to read something from the pipe. What’s happening there?

By default, reads from the pipe are in blocking mode, which means that on the attempt to read, the system simply waits for data if there is none! The solution is to use non-blocking mode – which, however, means using the raw os.open function (which supports flags to be passed to the operating system) instead of the nice Python open function with its convenient file object.

So what does the line starting with "inpipe" do? It calls os.open and tells it to open IN_PIPE, where we defined the location of our pipe. Then it passes the flags so the operating system knows how to open the file – in this case in read-only and non-blocking mode. We need to open it read-only because the daemon is at the receiving side of the pipe. And yes, we want non-blocking, so the program continues if there is no data in the pipe instead of waiting for it all the time!

What might look a little strange to you is the | character between the two flags. Especially since on the terminal it's known as the pipe character – and we're talking about pipes here, right? In this case, however, it's something completely unrelated: That symbol just happens to be Python's choice for the bit-wise OR operator, which combines the two flag values into one. Let's leave it at that (I'll explain a bit more in a future "Python pieces" section, but this article will be long enough without it).
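If you're curious anyway, here's a quick look at what the operator actually does with the flag values (the numbers shown are FreeBSD's; they differ between operating systems):

>>> import os
>>> os.O_RDONLY                   # "read-only" happens to be the bit value 0
0
>>> os.O_NONBLOCK                 # "non-blocking" is the bit value 4 on FreeBSD
4
>>> os.O_RDONLY | os.O_NONBLOCK   # OR combines the bits into one number
4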

However that’s still not all that the line we’re just discussing does. The os.open() function returns a file descriptor that we’re then assigning to the inpipe variable to keep it around.

What’s left is a new infinite loop that calls read_from_pipe() every 5 seconds.

Speaking of that function, let's take a closer look at what it does. It tries to use os.read to read up to 255 bytes from the pipe into the variable named buffer. We're doing so in a try/except block because the read is somewhat likely to fail (e.g. if the pipe is empty). When there's an exception, the code checks which exact error occurred; if it's EAGAIN or EWOULDBLOCK, we deliberately empty the buffer. If some other error occurred, it's something we didn't expect, so we'd better take the straight way out by re-raising the exception and crashing the program.

On FreeBSD the error numbers are defined in /usr/include/errno.h. If you take a look at it, you'll see that EAGAIN and EWOULDBLOCK are the same thing there, so checking for one of them would be enough. But it's good to know that on some systems these are separate errors, which is why it's good practice to check for both.
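You can verify that from Python itself; on FreeBSD both names map to the same number (35 there – other systems use different values):

>>> import errno
>>> errno.EAGAIN
35
>>> errno.EAGAIN == errno.EWOULDBLOCK
True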

If the buffer either holds the None value or has a length of 0, we assume that the read failed. Otherwise we write the data to the log. To make it readable we have to use decode(), because we will be receiving encoded data (raw bytes rather than a string).
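A quick interpreter session shows the round trip the data takes (the control program below will do the encoding half):

>>> data = str(17).encode('utf-8')   # sender: number -> string -> bytes
>>> data
b'17'
>>> data.decode()                    # receiver: bytes -> readable string
'17'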

All that’s left is the cleanup function. I’ve added another try/except block that simply tries to close the pipe file before trying to delete it. This is example code, so to make things not even more complex, I just silently ignore if the attempt fails.

Control script

OK, great! That was quite a bit to cover, but now we have a daemon that creates a pipe and tries to read data from it. There's just one problem: How can we test it? By creating another, separate program that puts data into the pipe, of course! For that, let's create another file with the name bdaemonctl.py:

#!/usr/local/bin/python3.6

 # Imports #
import os, time

 # Globals #
OUT_PIPE = '/var/run/bd_in.pipe'

 # Main #
try:
    outpipe = os.open(OUT_PIPE, os.O_WRONLY)
except:
    raise

for i in range(0, 21):
    print(i)
    try:
        os.write(outpipe, str(i).encode('utf-8'))  # encode() already yields bytes
    except BrokenPipeError:
        print("Pipe has disappeared, exiting!")
        os.close(outpipe)
        exit(1)
    time.sleep(3)
os.close(outpipe)

Fortunately this one is fairly simple. We do our imports and define a variable for the pipe. We could skip the latter because we only use it in one place, but in general it's a good idea to keep it: Hiding things deep in the code is not a smart move, and defining them at the top of the file increases the maintainability of your code a lot. And since we want to send data this time, we of course name our variable OUT_PIPE.

In the main section we just try to open the pipe file and crash if that doesn't work. Obviously such a case (e.g. the pipe not being there because the daemon isn't running) should be handled better, but I wanted to keep things simple here – it's just an example after all.

Then we have a loop that counts from 0 to 20, outputs the current number to stdout and tries to also send the data down the pipe. If that works, the program waits three seconds and then continues the loop.

To be able to write to the pipe we need bytes, but we only have numbers. We first convert each number to a string and then encode that string as UTF-8, which yields the bytes object that can be sent over the pipe.

When the loop is over, we close the pipe file properly because we, as the sender, are done with it. I also added a little bit of code to handle the case where the daemon exits while the control script is still running and trying to send data over the pipe. This results in a "broken pipe" error. If that happens, we just print an error message, close the file (so as not to leak the file descriptor) and exit with an error code of 1.

So for today we’re done! We can now send data from a control program to the daemon and thus have achieved uni-directional communication between two processes.

What’s next?

I’ll take a break from these programming-related posts and write about something else next.

However, I plan to continue with a fourth part later, covering argument parsing. With that we could e.g. modify our control program to send arbitrary data to the daemon from the command line – which would of course be much more useful than the simple test case we have right now.

Writing a daemon using FreeBSD and Python pt.2

The previous part of this series left off with a running “baby daemon” example. It covered Python fundamentals, signal handling, logging as well as an init script to start the daemon.

Daemonization with Python

The outcome of part 1 was a program that needed external help to actually be daemonized: I used FreeBSD's handy daemon(8) utility to put the program into the background, handle the pidfile, etc. Now we're taking one step forward and trying to achieve the same thing using just Python.

To do that, we need a module that is not part of Python's standard library, so you might have to install the package py36-daemon first if you don't already have it on your system. Here's a small piece of code for you – but don't be fooled by the line count; there's actually a lot going on in there (and a lot of concepts to grasp):

#!/usr/local/bin/python3.6
 
 # Imports #
import daemon, daemon.pidfile
import logging
import signal
import time

 # Functions #
def handler_sigterm(signum, frame):
    logging.debug("Exiting on SIGTERM")
    exit(0)

def main_program():
    signal.signal(signal.SIGTERM, handler_sigterm)
    try:
        logging.basicConfig(filename='/var/log/bdaemon.log', format='%(levelname)s:%(message)s', level=logging.DEBUG)
    except:
        print("Error: Could not create log file! Exiting...")
        exit(1)
 
    logging.info("Started!")
    while True:
        time.sleep(1)

 # Main #
with daemon.DaemonContext(pidfile=daemon.pidfile.TimeoutPIDLockFile("/var/run/bdaemon.pid"), umask=0o002):
    main_program()

I dropped some ballast from the previous version; e.g. overriding SIGINT was a nice thing to try out once, but it's not useful as we move on. The countdown is gone, too: The daemon now keeps running until it is signaled to terminate (thanks to what is called an "infinite loop").

We have two new imports here that we need for the daemonization. As you can see, it is possible to import multiple modules on one line. For readability reasons I wouldn't recommend that in general; I only do it when I import multiple modules that kind of belong together anyway. In the coming examples, however, I might just put everything together to save some lines.

The first really interesting thing about this version is that the main program was moved into a function called main_program. We could have done that before if we had really wanted to, but I did it now so that this code doesn't take attention away from the primary beast of this example. Take a look at the line that starts with the with keyword. Now that's a mouthful, isn't it? Let's break it up into a couple of pieces so that it's easier to chew, shall we?

The value for umask looks a bit strange. It contains an "o" among the numbers, so it has to be a string, doesn't it? But why is it written without quotes then? Well, it is a number: Python uses the 0o prefix to denote octal (base-8) numbers, and 0x would mean hexadecimal (base-16) ones.
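You can convince yourself in the interactive interpreter that these prefixed literals are plain numbers:

>>> 0o002              # octal literal - just a number
2
>>> 0o777              # the classic full-permission mask in octal
511
>>> 0x1ff              # the same value written in hexadecimal
511
>>> oct(493)           # converting a plain number back to octal notation
'0o755'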

Remember that we talked about try/except before (for the logging)? You can expand on that. A try block can not only have except blocks, it can also have a finally block. Statements in such a block are meant to be executed no matter the outcome of the try block. The classical example is that when you open a file, you definitely want to close it again (everything else is a total mess and would make your program an exceptionally bad one).

Closing it when you are done is simple. But what if an exception is raised? Then the code path that properly closes the file might never be reached! You could close the file in every conceivable scenario – but that would be both tedious and error-prone. For that reason there's another way to handle those cases: Close the file in the finally block and you can be sure that it will be closed regardless of what happens in the try or in any except block.

OK, but what does this have to do with our little daemon? Actually, a lot. The try/finally case is so common that Python provides a shortcut in the form of so-called context managers. A context manager is an object that manages a resource for you like this: You request it, it is valid only inside one block (the with one!), and when the block ends, the context manager takes care of properly cleaning up for you without you having to add any extra code (or even without you knowing, if you just copy/paste code from the net without reading explanations like this).

So the with statement in our code above lets Python handle the daemonization process while the main_program function is running. When it ends on the signal, Python cleans everything up and the process terminates – which is great for us. Accept that for now and live with the fact that you might not know just how it does all that; we'll come back to things like this.
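To make the connection clearer, here is the classical file example written both ways (a minimal sketch; the file name is arbitrary):

# Manual cleanup with try/finally:
f = open("/tmp/example.txt", "w")
try:
    f.write("some data\n")
finally:
    f.close()          # runs no matter what happened in the try block

# The same thing with the file object acting as a context manager:
with open("/tmp/example.txt", "w") as f:
    f.write("some data\n")
# f is closed automatically when the block ends - even on an exception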

Updated init script

OK, the one thing left to do here is making the required changes to the init script. We are no longer using the daemon(8) utility, so we need to adjust it. Here is the new one:

#!/bin/sh

. /etc/rc.subr

name=bdaemon
rcvar=bdaemon_enable

command="/root/bdaemon.py"
command_interpreter=/usr/local/bin/python3.6
pidfile="/var/run/${name}.pid"

load_rc_config $name
run_rc_command "$1"

Not too much changed here, but let’s still go over what has. The command definition is pretty obvious: The program can now daemonize itself, so we call it directly. It doesn’t take any arguments, which means we can drop command_args.

However, we need to add command_interpreter instead (one important thing that I had overlooked at first), because the program will look like this in the process list:

/usr/local/bin/python3.6 /root/bdaemon.py

Without defining the interpreter, the init system would not recognize this process as being the correct one. We also need to point it to the pidfile, because in theory there could be multiple matching processes otherwise.

And that’s it! Now we have a daemon process running on FreeBSD, written in pure Python.

Python pieces

This next part is a completely optional excursion for people who are pretty new to programming. We’ll take a step back and discuss concepts like functions and arguments, modules, as well as namespaces. This should help you better understand what’s happening here, if you like to know more. Feel free to save some time and skip the excursion if you are familiar with those things.

Functions and arguments

As you’ve seen, functions are defined in Python by using the def keyword, the function name and – at the very least – an empty pair of parentheses. Inside the parentheses you could put one or more arguments if needed:

def greet(name):
    print("Hi, " + name + "!")

greet("Alice")
greet("Bob")

Here we’re passing a string to the function that it uses to greet that person. We can add a second argument like this:

def greet(name, phrase):
    print("Hi, " + name + "! " + phrase)

greet("Alice", "Great to see you again!")
greet("Bob", "How are you doing?")

The arguments used here are called positional arguments, because their position decides what goes where. Swap them when calling the function and the output will obviously be garbage, as the strings get assigned to the wrong function variables. However, it's also possible to refer to an argument by name, so that the order no longer matters:

def greet(name, phrase):
    print("Hi, " + name + "! " + phrase)

greet(phrase="Great to see you again!", name="Alice")
greet("Bob", "How are you doing?")

This is what is used to assign the values for the daemon context. Technically it’s possible to mix the ways of calling (as done here), but that’s a bit ugly.

We’re not using it, yet, but it’s good to know that it exists: There are also default values. Those mean that you can leave out some arguments when calling a function – if you are ok with the default value.

def greet(name, phrase = "Pleased to meet you."):
    print("Hi, " + name + "! " + phrase)

greet(phrase="Great to see you again!", name="Alice")
greet("Bob", "How are you doing?")
greet("Carol")

And then there’s something known as function overloading. We’re not going into the details here, but you might want to know that you can have multiple functions with the same name but a different number of arguments (so that it’s still possible to precisely identify which one needs to be called)!

Modules

When reading about Python it usually doesn't take long before you come across the word module. But what is a module? Luckily that's rather easy to explain: It's a file with the .py extension and Python code in it. So if you've been following this daemon tutorial, you've been creating Python modules all along!

Usually modules are what you might refer to as libraries in other languages. You can import them and they provide you with additional functions. You can use modules that come with Python by default (that collection is known as the standard library, so don't get confused by the terminology), additional third-party modules (there are probably millions) or modules that you wrote yourself.

It’s fairly easy to do the latter. Let’s pick up the previous example and put the following into a file called “greeter.py”:

forgot_name = "Sorry, what was your name again?"

def greet(name, phrase = "Pleased to meet you."):
    print("Hi, " + name + "! " + phrase)

Now you can do this in another Python program:

import greeter

greeter.greet("Carol")
print(greeter.forgot_name)

This shows that after importing we can use the greet() function in this program, even though it's defined elsewhere. We can also access variables defined in the imported module (greeter.forgot_name in this case).

Namespaces

Ever wondered what that dot means (when it's not used in a filename)? You can think of it as a hierarchical separator. The standard Python functions (e.g. print) are available in the global namespace and can thus be used directly. Others live in a different namespace, and to use them it's necessary to give that namespace as well as the function name, so that Python understands what you want and finds the function. One example that we've used is time.sleep().

Where does this additional namespace come from? Well, remember that we did import time at the top of the program? That created the “time” namespace (and made the functions from the time module available there).

There’s another way of importing; we could import either everything (using an asterisk (*) character, but that’s considered poor coding) or just specific functions from one module into the global namespace:

from time import sleep
sleep(2)
exit(0)

This code will work because the “from MODULE import FUNCTION” statement in this example imported the sleep function so that it becomes available in the global namespace.

So why do we go through all the hassle of having multiple namespaces in the first place? Can't we just put everything into the global one? Sure, we could – and for simple programs that's in fact an option. But consider the following case: Python provides the built-in open function. It's used to open a file and get a nice object back that makes accessing or manipulating data really easy. But then there's also os.open, which is not as friendly but lets you do more advanced things, since it uses the raw operating system functionality. See the problem?

If you import the functions from os into the global namespace, you have a name clash in the case of open. This is not an error, mind you: You can actually do that, but you should know what happens. The function imported later overrides the one that previously went by that name, effectively making the original inaccessible. This is called "shadowing" the original function.
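Here's a deliberately silly sketch of shadowing (using the built-in len instead of open so that it stays short):

print(len("hello"))    # the built-in len: prints 5

def len(thing):        # defining our own len shadows the built-in one
    return 42

print(len("hello"))    # now prints 42 - the original is no longer reachable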

To avoid problems like this it’s often better to have your own separate namespace where you can be sure that no clashes happen.

What’s next?

In the next part we’ll take a look at implementing IPC (inter-process communication) using named pipes (a.k.a “fifos”).

Writing a daemon using FreeBSD and Python pt.1

Being a sysadmin by profession, I don't code. At least not often enough, or with output of high enough quality, that programmers would accept calling it coding. I do write and maintain shell scripts, and I also write new formulas for configuration management with SaltStack.

The latter is Python-based, and after hearing mostly good things about that language, I've been trying to do some simple things with it for a while now. And guess what: It's just so much more convenient than shell code! I'll definitely keep doing some simple tasks in Python, just to gain some experience with it.

Not too long ago I thought about a little project I wanted to try, and decided to go with Python again. Thinking about what the program should do, I figured that a daemon would make a nice fit. But how do you write a daemon? Fortunately it's especially easy on FreeBSD. So let's go!

Python

The first thing I did was create a new file called bdaemon.py (for "baby daemon") and use chmod to make it executable. Here's what I put into it as a first test:

#!/usr/local/bin/python3.6

 # Imports #
import time

 # Globals #
TTL_SECONDS = 30
TTL_CHECK_INTERVAL = 5

 # Functions #

 # Main #
print("Started!")
for i in range(1, TTL_SECONDS + 1):
    time.sleep(1)
    if i % TTL_CHECK_INTERVAL == 0:
        print("Running for " + str(i) + " seconds...")
print("TTL reached, terminating!")
exit(0)

This very simple program has the shebang line that points the operating system to the right interpreter. Then I import Python's time module, which gives me access to a lot of time-related functions. Next I define two global variables that control how long the program runs and at which interval it produces output.

The main part of the program first outputs a starting message on the terminal. It then enters a for loop, that counts from 1 to 30. In Python you do this by providing a list of values after the in keyword. Counting to 5 could have been written as for i in [1, 2, 3, 4, 5]: for example.

With range we can have Python create a list of sequential numeric values on the fly – and since it’s much less to type (and allows for dynamic list creation by setting the final number via a variable), I chose to go with that. Oh, BTW: In Python the last value of those ranges is exclusive, not inclusive. This means that range(1, 5) leads to [1, 2, 3, 4] – if you want the 5 included in the list, you have to use range(1, 6)! That’s why I add 1 to the TTL_SECONDS variable.

I use time.sleep to create a delay in the loop body. Then I check whether the remainder of dividing the current running time by the defined check interval is zero (% is the modulus operator, which gives the remainder of a division). If it is, the program produces more output.
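Both details – the exclusive end of range and the modulus operator – are easy to check in the interactive interpreter:

>>> list(range(1, 5))   # the end value is exclusive
[1, 2, 3, 4]
>>> 10 % 5              # remainder 0: divisible, so we print output
0
>>> 7 % 5               # remainder of 7 divided by 5
2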

Mind the indentation: In Python it is used to create code blocks. The for statement is not indented, but it ends with a colon – that means it starts a code block. Everything up to (but not including) the second-to-last print statement is indented by four spaces and thus part of that block. Said print statement is indented two levels (8 spaces); that's because it belongs to another block of its own, started by the if statement before it. We could create a third, fourth and so on level of indentation if we required further blocks beneath the if block.

Eventually the program will print that the TTL has been reached and exit with an error code of 0 (which means that there was no error).

Have you noticed the str(i) part in one of the print statements? That is required because the counter variable i holds numeric values and we're printing data of a different type. To be able to concatenate (that's what the plus sign does in this case!) the variable's contents to the rest of the data, it needs to match that type. We achieve this by converting the number to a string (think of converting the number 5 to the literal "5", which can be part of a line of text – it looks similar but is actually a different thing).
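If you want to see why the conversion is needed, try it without (a two-line sketch):

# print("Running for " + 5 + " seconds...")     # TypeError: str + int!
print("Running for " + str(5) + " seconds...")  # works: str + str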

Oh, and the pound signs start comments that are ignored by Python. And that's already it for some fundamental Python basics – hopefully enough to understand this little example code (if not, tell me!).

Signals

The next thing to explore is signal handling. Since a daemon is essentially a program running in the background, we need a way to tell it e.g. to quit. This is usually done using signals. You can send some of them to a program running in the terminal by hitting key combinations, while all of them can be sent with the kill command.

If you press CTRL-C for example, you’re sending SIGINT to the currently running application, telling it “abort operation”. A somewhat similar one is SIGTERM, which kind of means “hey, please quit”. It’s a graceful shutdown signal, allowing the program to e.g. do some cleanup and then shut down properly.

If you use kill -9, however, you’re sending SIGKILL, the ungraceful shutdown signal, that effectively means “die!” for the process targeted (if you’ve ever done that to a live database or another touchy application, you know that you really have to think before using it – or you might be in for all kinds of pain for the next few hours).

#!/usr/local/bin/python3.6

 # Imports #
import signal
import time

 # Globals #
TTL_SECONDS = 30
TTL_CHECK_INTERVAL = 5

 # Fuctions #
def signal_handler(signum, frame):
    print("Received signal" + str(signum) + "!")
    if signum == 2:
        exit(0)

 # Main #
signal.signal(signal.SIGHUP, signal_handler)
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGQUIT, signal_handler)
signal.signal(signal.SIGILL, signal_handler)
signal.signal(signal.SIGTRAP, signal_handler)
signal.signal(signal.SIGABRT, signal_handler)
signal.signal(signal.SIGEMT, signal_handler)
#signal.signal(signal.SIGKILL, signal_handler)
signal.signal(signal.SIGSEGV, signal_handler)
signal.signal(signal.SIGSYS, signal_handler)
signal.signal(signal.SIGPIPE, signal_handler)
signal.signal(signal.SIGALRM, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
#signal.signal(signal.SIGSTOP, signal_handler)
signal.signal(signal.SIGTSTP, signal_handler)
signal.signal(signal.SIGCONT, signal_handler)
signal.signal(signal.SIGCHLD, signal_handler)
signal.signal(signal.SIGTTIN, signal_handler)
signal.signal(signal.SIGTTOU, signal_handler)
signal.signal(signal.SIGIO, signal_handler)
signal.signal(signal.SIGXCPU, signal_handler)
signal.signal(signal.SIGXFSZ, signal_handler)
signal.signal(signal.SIGVTALRM, signal_handler)
signal.signal(signal.SIGPROF, signal_handler)
signal.signal(signal.SIGWINCH, signal_handler)
signal.signal(signal.SIGINFO, signal_handler)
signal.signal(signal.SIGUSR1, signal_handler)
signal.signal(signal.SIGUSR2, signal_handler)
#signal.signal(signal.SIGTHR, signal_handler)

print("Started!")
for i in range(1, TTL_SECONDS + 1):
    time.sleep(1)
    if i % TTL_CHECK_INTERVAL == 0:
        print("Running for " + str(i) + " seconds...")
print("TTL reached, terminating!")
exit(0)

For this little example I've added a function called signal_handler – because that's what it is for. And in the main program I installed that handler for quite a lot of signals. To be able to do that, I needed to import the signal module, of course.

If you run this program, it will handle every signal you can send on a FreeBSD system (run kill -l to list all available signals on a Unix-like operating system). Why are some of them commented out? Well, try commenting those lines back in! Python will complain and stop your program. This is because not all signals are allowed to be handled.

SIGKILL, for example, is by its nature something you don't want to allow to be overridden with custom behavior! While your program can choose to handle e.g. SIGINT and even ignore it, SIGKILL means that the process absolutely must shut down immediately.

Try running the program and sending some signals while it's running. On BSD systems you can e.g. press CTRL-T to send SIGINFO; the operating system prints some information about the current load, and then the program has the chance to output additional information (some programs tell you which file they are currently processing, what percentage of a copy operation has finished, etc.). If you send SIGINT, this program terminates as it should.

Logging

There’s another thing that we have to consider when dealing with processes running in the background: A daemon detaches from the TTY. That means it can no longer receive input the usual way from STDIN. But we investigated signals so that’s fine. However it also means a daemon cannot use STDOUT or STDERR to print anything to the terminal.

So where does the data go that a daemon writes to e.g. STDOUT? It goes to the system log. If no special configuration for it exists, you will find it in /var/log/messages. Since we expect quite a bit of debug output during the development phase, we don't really want to clutter /var/log/messages with all of that. So to write a well-behaved little daemon, there's one more topic we have to look into: Logging.

#!/usr/local/bin/python3.6

 # Imports #
import logging
import signal
import time

 # Globals #
TTL_SECONDS = 30
TTL_CHECK_INTERVAL = 5

 # Functions #
def handler_sigterm(signum, frame):
    logging.debug("Exiting on SIGTERM")
    exit(0)

def handler_sigint(signum, frame):
    logging.debug("Not going to quit, there you have it!")

 # Main #
signal.signal(signal.SIGINT, handler_sigint)
signal.signal(signal.SIGTERM, handler_sigterm)
try:
    logging.basicConfig(filename='bdaemon.log', format='%(levelname)s:%(message)s', level=logging.DEBUG)
except:
    print("Error: Could not create log file! Exiting...")
    exit(1)

logging.info("Started!")
for i in range(1, TTL_SECONDS + 1):
    time.sleep(1)
    if i % TTL_CHECK_INTERVAL == 0:
        logging.info("Running for " + str(i) + " seconds...")
logging.info("TTL reached, terminating!")
exit(0)

The code has been simplified a bit: Now it installs handlers for only two signals – and we're using two different handler functions. One overrides the default behavior of SIGINT with a dummy function, effectively refusing the expected behavior for testing purposes. The other handles SIGTERM the way it should. If you are fast enough on another terminal window, you can figure out the PID of the running program and kill -15 it.

Logging with Python is extremely simple: You import the module for it, call a function like logging.basicConfig – and start logging. The line here sets the filename of the log to bdaemon.log (for "baby daemon") in the current directory, changes the default format to display just the log level and the actual message, and defines the lowest level that should be logged.

There are various pre-defined levels like debug, info, warning, critical, etc. But what's that try and except thing? Well, the logging module will attempt to create the logfile (or append to it, if it already exists). This is an operation that could fail. Perhaps we're running the program in a directory where we don't have permission to create the file? Or maybe a directory of that name exists for whatever reason? In both cases Python cannot create the file and an error occurs.

If such a thing happens, Python doesn’t know what to do. It knows what the programmer wanted to do, but has no clue on what to do if things fail. Does it make sense to keep the program running if something unexpected happened? Probably not. So it throws an exception. If an unhandled exception occurs, the program aborts. But we can catch the exception.

By putting the function that opens the file in a try block, we're telling Python that we expect it could fail. And with except we can catch an exception and handle expected problems. There are a lot of exception types; by not specifying any, we're catching all of them. That might not be the best idea, because maybe something entirely different happened while we were only expecting that the logfile could not be created. But let's keep it simple for now.
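If you do want to be more precise: File creation problems surface as OSError (or one of its subclasses such as PermissionError), so a slightly stricter variant of the block above could look like this (a sketch, not what the final program uses):

try:
    logging.basicConfig(filename='bdaemon.log',
                        format='%(levelname)s:%(message)s',
                        level=logging.DEBUG)
except OSError as err:
    # Only file-related problems are handled here; anything truly
    # unexpected still aborts the program with a full traceback.
    print("Error: Could not create log file! (" + str(err) + ") Exiting...")
    exit(1)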

The one remaining thing to do is to change all print statements to use logging instead. Depending on how important a log entry is, we can use the different levels, from least important (DEBUG) to most important (CRITICAL).

You can either wait for the program to finish and then take a look at the log, or you can open a second terminal and tail -f bdaemon.log there to watch the output while the program is running.

Alright! With this we have everything required to daemonize the program next. Let’s write a little init script for it, shall we?

Init

Init scripts are used to control daemons (start and stop them, tell them to reload their configuration, etc.). There are various init systems in use across Unix-like operating systems. FreeBSD uses the standard BSD init system called rc.d. It works with little (or not so little if you need to manage very complex daemons) shell scripts.

Since a lot of the functionality of the init system is the same across most of these scripts, rc.d handles all the common cases in shell files of its own that are then used in each of the scripts. In Python this would be done by importing a module; the shell scripting term is to source another shell script (or fragment).

Create the file /usr/local/etc/rc.d/bdaemon with the following contents:

#!/bin/sh

. /etc/rc.subr

name=bdaemon
rcvar=bdaemon_enable

command="/usr/sbin/daemon"
command_args="-p /var/run/${name}.pid /path/to/script/bdaemon.py"

load_rc_config $name
run_rc_command "$1"

Yes, you need root privileges to do that. Daemons are system services and so we’re messing with the system now (totally at beginner level, though). Save the file and you should be able to start the program as a daemon e.g. by running service bdaemon onestart!

How’s that? What does that all mean and where does the daemonization happen? Well, the first line after the shebang sources the main rc fragment with all the required functions (read the dot as “source”). Then it defines a name for the daemon and an rcvar.

What is an rcvar? Well, by putting “bdaemon_enable=YES” into your /etc/rc.conf you could enable this daemon for automatic startup when the system is coming up. If that line is not present there, the daemon will not start. That’s why we need to use “onestart” to start it anyway (try it without the “one” if you’ve never done that and see what happens!).
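
If you decide that the daemon should start automatically, you can either add the line to /etc/rc.conf manually or use sysrc(8) to do it for you:

# sysrc bdaemon_enable=YES
# service bdaemon start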

Then the command to run as well as the arguments for that command are defined. And eventually two helper functions from rc.subr are called which do all the actual complex magic that they thankfully hide from us!

Ok, but what is /usr/sbin/daemon? Well, FreeBSD comes with an extremely useful little utility that handles the daemonization process for others! This means it can help you if you want to use something as a background service but you don’t want to handle the actual daemonization yourself. Which is perfect in our case! With it you could even write a daemon in shell script for example.

The “-p” argument tells the daemon utility to handle the PID file for the process as well. This is required for the init system to control the daemon. While our little example program is short-lived, we can still do something while it runs. Try out service bdaemon onestatus and service bdaemon onestop on it for example. If there was no PID file present, the init system would claim that the daemon is not running, even if it is! And it would not be able to shut it down.
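
For example (run these while the program is still within its 30 second lifetime; the PID will of course differ on your system):

# service bdaemon onestart
# service bdaemon onestatus
bdaemon is running as pid 1234.
# service bdaemon onestop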

There we go. Our first FreeBSD daemon process written in Python! One last thing that you should do is change the filename for the logfile to use an absolute path like /var/log/bdaemon.log. If you want to read more about the daemon utility, read its manpage, daemon(8). And should you be curious about what the init system can do, have a look here.

What’s next?

While using /usr/sbin/daemon is perfectly fine, you might feel that we kind of cheated. So next time we’ll take a brief look at daemonizing with Python directly.

I also want to explore IPC (“inter-process communication”) with named pipes. This will allow for a little bit more advanced daemon that can be interacted with using a separate program.

Using FreeBSD with Ports (2/2): Tool-assisted updating

In the previous post I explained why sometimes building your software from ports may make sense on FreeBSD. I also introduced the reader to the old-fashioned way of using tools to make working with ports a bit more convenient.

In this follow-up post we’re going to take a closer look at portmaster and see how it especially makes updating from ports much, much easier. For people coming here without having read the previous article: What I describe here is not what every FreeBSD admin today should consider good practice (any more)! It can still be useful in special cases, but my main intention in discussing it is to build up the foundation for what you actually should do today.

Building a desktop

Last time we stopped after installing the xorg-minimal meta-port. Let’s say we want a very simple desktop installed on this machine. I chose the most frugal *nix desktop, EDE (the Equinox Desktop Environment, which looks kind of like Win95), because it draws in the things that I need for demonstrating a few interesting points here – and not much more.

Unfortunately in the ports tree that we’re using, exactly that port is broken (the newer compiler in FreeBSD 11.2 is more picky than the older ones and not quite happy with EDE code). So to go on with it, we have to fix it first. I’ve uploaded an additional patch file from a later version of the port and also prepared a patch for the port’s Makefile. If you want to follow along, you can just copy the three lines below to your terminal:

# fetch http://www.elderlinux.org/files/patch-evoke_Xsm.cpp -o /usr/ports/x11-wm/ede/files/patch-evoke_Xsm.cpp
# fetch http://www.elderlinux.org/files/patch_ede_port -o /usr/ports/x11-wm/ede/patch_ede_port
# patch -d /usr/ports/x11-wm/ede -i patch_ede_port

Using Portmaster to build and install EDE

Thanks to build-time dependencies and default options in FreeBSD it’s still another 110 ports to build, but that’s fine. We could remove some unneeded options and cut it down quite a bit. Just to give you an idea: By configuring only one package (doxygen) to not pull in all the dependencies that it usually does, it would be just 55 (!) ports.

But let’s say we’re lazy. Do we have to face all of those configure dialogs (72 in case you are curious)? No, we don’t. That’s why portmaster has the -G flag which skips the config make target and just uses the standard port options:

# portmaster -DG x11-wm/ede

EDE was successfully installed

Using this option can be a huge time-saver if you’re building something where you know that you don’t need to change the options for the application and its dependencies.

System update

Now we have a simple test system with 265 installed but outdated packages. Let’s update it! Remember that unlike e.g. Linux, FreeBSD keeps third party software installed from packages or ports separate from the actual operating system. We’ll update the latter first:

# freebsd-update upgrade -r 11.3-RELEASE

With this command, we make the updater download the required files for the upgrade from 11.2-RELEASE to 11.3-RELEASE.

Upgrading FreeBSD to 11.3-RELEASE

When it’s done, and we’ve closed the three lists with the removed, updated and new files, we can install the new kernel:

# freebsd-update install

Once that’s done, it’s time to reboot the system so the new kernel boots up. Then the userland part of the operating system is installed by using the same command again:

# shutdown -r now
# freebsd-update install

Kernel upgrade complete

Preparations

Now in our fresh 11.3 system we should first get rid of the old ports tree to replace it with a newer one, right? Wait, hold that rm command for a second and let me show you something really useful!

If you take a look at the /usr/ports directory, you’ll find a file appropriately named UPDATING. And since that’s right what we were about to do, why not take a look at it?

So what is this? It’s an important file for people updating their systems using ports. Here is where ports maintainers document breaking changes. You are free to ignore it and the advice that it gives and there’s actually a chance that you’ll get away with it and everything will be fine. But sometimes you don’t – and fixing stuff that you screwed up might take you quite a bit longer than at least skimming over UPDATING.

# less /usr/ports/UPDATING

But right now it’s completely sufficient to look at the metadata of the first notification which reads:

20180330:
  AFFECTS: users of lang/perl5*
  AUTHOR: mat@FreeBSD.org

The main takeaway here is the date. The last heads-up notice for our old ports tree was on 2018-03-30.

Checking out a newer ports tree

Now let’s throw it all away and then get the new ports tree. Usually I’d use portsnap for this, but in this case I want a special ports tree (the one that would have come with the OS if I got ports from a fresh 11.3 installation), so I’m checking it out from SVN:

# rm -rf /usr/ports/.* /usr/ports/*
# svnlite co svn://svn.freebsd.org/ports/tags/RELEASE_11_3_0 /usr/ports

If you’re serious about updating a production server that you care about, now is the time to read through UPDATING again. Search for the date string that you previously took a note of and then read the messages all the way up to the beginning of the file. It’s enough to read the AFFECTS lines until you hit one message that describes a port which you are using. You can ignore all the rest but should really read those heads-up messages that affect your system.

What software can be updated?

BTW… You know which packages you have installed, don’t you? A huge update like what we’re facing here takes some planning up-front if you want to do it in a professional manner. In general you should update much more often, of course! This makes things much, much easier.
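
If you’re not sure, pkg(8) can give you both the full list and a quick count:

# pkg info
# pkg info | wc -l
     265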

Updating from ports

Ok, we’re all set. But which software can be updated? You can ask pkg(8) to compare the versions of the installed packages against those of the corresponding ports:

# pkg version -l "<"

If you pipe that into wc -l you will see that 165 of the 265 installed packages can (and probably should) be updated.
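
Here’s the complete pipe:

# pkg version -l "<" | wc -l
     165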

Updating software from ports

We’ll start with something really simple. Let’s say we just want to update pkgconf for now. How do we do that with portmaster? Easy enough: Just like we would use portmaster to install it in the first place. If something is already installed, the tool will figure out whether an update is available and then offer to update it.

# portmaster -G devel/pkgconf

And what will happen if the port is already up to date? Well, then portmaster will simply re-build and re-install it.

Partial update finished

While partial updates are possible, it’s a much better idea to keep the whole system updated (if at all possible). To do that, you can use the -a flag for portmaster to update all installed software.

Another nice flag is -i, which is for interactive mode. In this mode, portmaster will ask for every port if it should be updated or not. If you’re leaving out critical ports (dependencies) this can lead to an impossible update plan and portmaster will not start updating. But it can be useful for cherry-picking.

Interactive update mode

Now let’s attempt to upgrade all the ports, shall we?

# portmaster -aG

As always, portmaster will show you its plan and ask for your confirmation. There are two things here that probably deserve a word or two. Usually the plan is to update an application, but sometimes portmaster wants to re-install or even downgrade them (see picture)! What’s happening here?

Re-installs mostly happen when a dependency changed and portmaster figured out that it might be a good idea to rebuild the port against the newer version of the dependency, even though there is no newer version available for the actual application.

Upgrading, “downgrading”, re-installing

Downgrades are a different thing. They can happen when you installed something from a newer ports tree and then go back to an older one (something you usually shouldn’t do). But in this case it’s actually a false claim. Portmaster will not downgrade the package – it was merely confused by the fact that the versioning scheme changed (because of the 0 in 2018.4 it thinks that this version is older than the previous 2.1.3…).

Moved ports

If you’re paying close attention to all the information that portmaster gives you, you’ll have seen lines like the following one:

===>>> The x11/bigreqsproto port moved to x11/xorgproto

There’s another interesting file in the ports tree called MOVED. It keeps track of moved or removed ports. Sometimes ports are renamed or moved to another category if the maintainer decides it fits better. Portmaster for example started as sysutils/portmaster and was later moved when the ports-mgmt category was introduced. However you won’t find this information in the MOVED file – because it happened before the time that the current MOVED keeps records for (i.e. early 2007 in this case).

The example above is due to the fact that last year the upstream project (Xorg) decided to combine the protocol headers into one distribution package. Before that there were more than 20 separate packages for them (and before that, once upon a time, all of Xorg had been one giant monolithic release – but I digress…)

Problem with merged ports

The good news here is that portmaster is smart enough to parse the MOVED file and figure out how to deal with this kind of change in the ports tree! The bad news is that this does not work for more complicated things like the merges that we just talked about…

So what now? Good thing you read the relevant UPDATING notification, eh?

20180731:
  AFFECTS: users of x11/xorg and all ports with USE_XORG=*proto
  AUTHOR: zeising@FreeBSD.org

Bulk-deleting obsolete ports and trying again

So let’s first get rid of the old *proto packages with the command that developer Niclas Zeising proposes and then try again:

# pkg version -l \? | cut -f 1 -w | grep -v compat | xargs pkg delete -fy
# portmaster -aG

Required options

Alright, we have one more problem to overcome. There are ports that will fail to build if we run portmaster with the -G flag. Why? Because they have mandatory port options where you need to choose from things like a backend or a certain mechanism.

The “mandatory options” case

One such case is freetype2. Since this one fails with -G, we build it separately and do not skip the config dialog:

# portmaster print/freetype2

Once that’s done, we can continue with updating all the remaining ports. After quite a while (because LLVM is a beast) all should be done!

Updating complete!

Default version changes

Did you read the following notice in UPDATING?

20181213:
  AFFECTS: users of lang/perl5*
  AUTHOR: mat@FreeBSD.org

For the big update run we ignored this. And in fact, portmaster did update Perl, but only to the latest version in the 5.26 branch of the language:

# pkg info -x perl
perl5-5.26.3

Why? Well, because that was the version of Perl that was already installed. Actually this is ok if you’re not using Perl yourself and can thus live without the latest features. However if you want to (or have to) upgrade to a later Perl major version, we have a little bit more work to do.

First edit /etc/make.conf and put the following line in there:

DEFAULT_VERSIONS+=perl5=5.28

This is a hint to the ports framework that whenever you’re building a Perl port, you want to build it against this version. If you don’t do that, you will receive a warning when building the newer Perl, because in that case you’re installing an additional Perl version while all the ports keep using the primary one. More likely than not, that is not what you want.

Upgrading Perl

Next we need to build the newer Perl. The special thing here is that we need to tell portmaster of the changed origin so that the new version actually replaces the old one. We can do this by using the -o flag. Mind the syntax here, it’s new origin first and then old origin and not vice versa (took me a while to get used to it…)!

But let’s check the origin real quick, before we go on. The pkg command above showed that the package is called perl5. This outputs what we wanted to know:

# pkg info perl5
perl5-5.26.3
Name           : perl5
Version        : 5.26.3
Installed on   : Tue Sep 10 21:15:36 2019 CEST
Origin         : lang/perl5.26
[...]

There we have it. Now portmaster can begin doing its thing:

# portmaster -oG lang/perl5.28 lang/perl5.26

Rebuilding ports that depend on Perl

Ok, the default Perl version has been updated… But this broke all the Perl ports! So it’s time to fix them by rebuilding against the newer Perl 5.28. Luckily the UPDATING notice points us to a simple way to do this:

# portmaster -f `pkg shlib -qR libperl.so.5.26`

And that’s it! Congratulations on updating an old test system via ports.

At last: All done!

What if something goes wrong?

You know your system and applications, are proficient with your shell and you’ve read UPDATING. What could possibly go wrong? Well, we’re dealing with computers here. Something really strange can go wrong anytime. It’s not very likely, but sometimes things happen.

Portmaster can help you if you ask for it before attempting upgrades. Before deinstalling an old package, it creates a backup. However after installing the new version it throws it away. But you can make it keep the backup by supplying the -b flag. Sometimes the old package can come in handy if something goes completely sideways. If you need backup packages, have a look in /usr/ports/packages/portmaster-backup. You can simply pkg add those if you need the old version back (of course you need to be sure that you didn’t update the package’s dependencies – or you need to downgrade them again, too!).
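
A quick sketch of what that could look like (the package file name shown here is made up, of course):

# portmaster -bG devel/pkgconf
# ls /usr/ports/packages/portmaster-backup
pkgconf-1.3.0_1.txz
# pkg add /usr/ports/packages/portmaster-backup/pkgconf-1.3.0_1.txz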

If you want to be extra cautious when updating that very special old box (that nobody dared to touch for nearly a decade because the boss threatened to call down terrible curses (not the library!) upon the one who breaks it), portmaster will also support you. Use the -w flag and have it preserve old shared libs before deinstalling an old package. I wouldn’t normally do it (and think my example made it clear that it’s really special). In fact I’ve never used it. But it might be good to know about it should you ever need it.

That said, on the more complicated boxes I usually create a temporary directory and issue a pkg create -a, completely backing up all the packages before I begin the update process. Usually I can throw away everything a while later, but having the backups saved me some pain a couple of times already.
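
That is quickly done and needs nothing but pkg(8) and some disk space:

# mkdir /tmp/pkg-backup
# cd /tmp/pkg-backup
# pkg create -a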

In the end it boils down to: Letting your colleagues call you a coward or being the tough guy but maybe ruining your evening / weekend. Your choice.

And if you need to know even more about the tool that we’ve been using over and over now, just man portmaster.

What’s next?

I haven’t decided on the next topic that I’m going to write about. However I’ve planned for two more articles that will cover building from ports the modern way(tm) – and I hope that it will not take me another two years before I come to it…

Using FreeBSD with Ports (1/2): Classic way with tools

Installing software on Unix-like systems originally meant compiling everything from source and then copying the resulting program to the desired location. This is much more complicated than people unfamiliar with the subject may think. I’ve written an article about the history of *nix package management for anyone interested to get an idea of what it once was like and how we got what we have today.

FreeBSD pioneered the Ports framework which quickly spread across the other BSD operating systems, which adapted it to their specific needs. Today we have FreeBSD Ports, OpenBSD Ports and NetBSD’s portable Pkgsrc. DragonflyBSD currently still uses DPorts (which is meant to be replaced by Ravenports in the future) and there are a few others like e.g. mports for MidnightBSD. Linux distributions have developed a whole lot of packaging systems, some closer to ports (like Gentoo’s Portage where even the name gives you an idea where it comes from), most less close. Ports however are regarded as the classic *BSD way of bringing software to your system.

While the ports frameworks have diverged quite a bit over time, they all share a basic purpose: They are used to build binary packages that can then be installed, deinstalled and so on using a package manager. If you’re new to package management with FreeBSD’s pkg(8), I’ve written two introduction articles about pkg basics and pkg common usage that might be helpful for you.

Why use Ports?

OpenBSD’s ports for example are maintained only to build packages from, and regular users are advised to stay away from them and just install pre-built packages. On FreeBSD this is not the case: Using Ports to install software on your machines is nothing uncommon. You will read or hear pretty often that people in general recommend against using ports. They point to the fact that it’s more convenient to install packages, that it’s so much quicker to do so and that especially updating is much easier. These are all valid points and there are more. You’re now waiting for me to claim that they are wrong after all, aren’t you? No, they are right. Use packages whenever possible.

But what if it’s not possible to use the packages provided by the FreeBSD project? Well, then it’s best to… still use packages! That statement doesn’t make sense? Trust me, it does. We’ll come back to it. But to give you more of an idea of the advantages and disadvantages of managing a FreeBSD system with ports, let’s just go ahead and do it anyway, shall we?

Just don’t do this to a production system but get a freshly installed system to play with (probably in a VM if you don’t have the right hardware lying around). It still is very much possible to manage your systems with ports. In fact at work I have several dozen legacy systems, some of which began their career with something like FreeBSD 5.x and have been updated ever since. Of course ports were used to install software on them. Unfortunately machines that old have a tendency to grow very special quirks that make it not feasible to really convert them to something easier to maintain. Anyways… Be glad if you don’t have to mess with it. But it’s still a good thing to know how you’d do it.

So – why would you use Ports rather than packages? In short: When you need something that packages cannot offer you. Perhaps you need a feature compiled in that is not activated by default and thus not available in FreeBSD’s packages. Perhaps you absolutely need the latest version in ports and cannot wait for the next package run to complete (which can take a couple of days). Maybe you require software licensed in a way that binary redistribution is prohibited. Or you’ve got that 9-STABLE box over there which you know quite well you shouldn’t run anymore. But because you rely on special hardware that offers drivers only for FreeBSD 9.x… Yeah, there are always things like that and sometimes it’s hard to argue them away. But when it comes to packages – since FreeBSD 9 has been EOL for quite some time, there are no packages being built for it anymore. You’ll come up with any number of other reasons why you can’t use packages, I have no doubts about that.

Using Ports

Alright, so let’s get to it. You should have a basic understanding of the ports system already. If you haven’t, then I’d like to point you to another two articles written as an introduction to ports: Ports basics and building from ports.

If you just read the older articles, let me point out two things that have happened since then. In FreeBSD 11.0 a handy new option was added to portsnap. If you do

# portsnap auto

it will automatically figure out if it needs to fetch and extract the ports tree (if it’s not there) or if fetching the latest changesets and updating is the right thing to do.

The other thing is a pretty big change that happened to the ports tree in December 2017 not too long after I wrote the articles. The ports framework gained support for flavors and flavored ports. This is used heavily for Python ports which can often build for Python 2.7, 3.5, 3.6, … Now there’s generally only one Port for both Python2 and Python3 which can build the py27-foo, py36-foo, py37-foo and so on packages. I originally intended to cover tool-assisted ports management shortly after the last article on Ports, but after that major change it wasn’t clear if the old tools would be updated to cope with flavors at all. They were eventually, but since then I never found the time to revisit this topic.
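
If you ever need to build a specific flavor of a port directly, you can set FLAVOR when invoking make (a hypothetical example – which flavors exist depends on the port and on your ports tree):

# make -C /usr/ports/devel/py-pip FLAVOR=py36 install clean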

Scenario

I’ve set up a fresh FreeBSD 11.2 system on my test machine and chosen to install the ports tree that came with this old version. Yes, I deliberately chose that version, because in a second step we’re going to update the system to FreeBSD 11.3.

Using two releases for this has the advantage that they’re two fixed points in time that are easy to follow along with if you want to. The ports tree changes all the time thanks to the heroic efforts that the porters put into it. Therefore it wouldn’t make sense to just use the latest version available, because I cannot know what will happen in the future when you, the reader, might want to try it out. Also I wanted to make sure that some specific changes happened in between the two versions of the ports tree so that I can show you something important to watch out for.

Mastering ports – with Portmaster

There are two utilities that are commonly used to manage ports: portupgrade and portmaster. They pretty much do the same thing and you can choose either. I’m going to use portmaster here, which I prefer for the simple reason that it’s more light-weight (portupgrade brings in Ruby, which I try to avoid if I don’t need it for something else). But there’s nothing wrong with portupgrade otherwise.

Building portmaster

You can of course install portmaster as a package – or you can build it from ports. Since this post is about working with ports, we’ll do the latter. Remember how to locate ports if you don’t know their origin? Also, you can have make work on a port without changing your shell’s working directory to the port’s location:

# whereis portmaster
portmaster: /usr/ports/ports-mgmt/portmaster
# make -C /usr/ports/ports-mgmt/portmaster install clean

Portmaster build options

As soon as pkg and dialog4ports are in place, the build options menu is displayed. Portmaster can be built with completions for bash and zsh, both are installed by default. Once portmaster is installed and available, we can use it to build ports instead of doing so manually.

Building / installing X.org with portmaster

Let’s say we want to install a minimal X11 environment on this machine. How can we do it? Actually as simple as telling portmaster the origin of the port to install:

# portmaster x11/xorg-minimal

Port options…

After running that command, you’ll find yourself in a familiar situation: Lots and lots of configuration menus for port options will be presented, one after another. Portmaster will recursively let you configure the port and all of its dependencies. In our case you’ll have 42 ports to configure before you can go on.

Portmaster in action

In the background portmaster is already pretty busy. It’s gathering information about dependencies and dependencies of dependencies. It starts fetching all the missing distfiles so that the compilation can start right away when it’s the particular port’s turn. All the nice little things that make building from ports a bit faster and a little less of a hassle.

Portmaster: Summary for building xorg-minimal (1/2)

Once it has all the required information gathered and sorted out, portmaster will report to you what it is about to do. In our case it wants to build and install 152 ports to fulfill the task we gave it.

Portmaster: Summary for building xorg-minimal (2/2)

Of course it’s a good idea to take a look at the list before you acknowledge portmaster’s plan.

Portmaster: distfile deletion?

Portmaster will ask whether to keep the distfiles after building. This is ok if you’re building something small, but when you give it a bigger task, you probably don’t want to sit and wait to decide on distfiles before the next port gets built… So let’s cancel the job right there by hitting Ctrl-C.

Portmaster: Ctrl-C report (1/2)

When you cancel portmaster, it will try to exit cleanly and will also give a report. From that report you can see what has already been done and what still needs to be done (in form of a command that would basically resume building the leftovers).

Portmaster: Ctrl-C report (2/2)

It’s also good to know that there’s the /tmp/portmasterfail.txt file that contains the command should you need it later (e.g. after you closed your terminal). Let’s start the building process again, but this time specify the -D argument. This tells portmaster to keep the distfiles. You could also specify -d to delete them if you need the space. Having told portmaster about your decision helps avoid interrupting the build all the time.

# portmaster -D x11/xorg-minimal

Portmaster: Build error

All right, there we have an error! Things like this happen when working with older ports trees. This one is a somewhat special case. Texinfo depends on one file being fetched that changes over time but doesn’t have any version info or such. So portmaster would fetch the current file, but that has silently changed since the distfile information was put into our old ports tree. Fortunately it’s nothing too important and we can simply “fix” this by overwriting the file’s checksum with the one for the newer file:

# make makesum -C /usr/ports/print/texinfo
# portmaster -D x11/xorg-minimal

Manually fetching missing distfile

Next problem: This old version of mesa is no longer available in the places that it used to be. This can be fixed rather easily if you can find the distfile somewhere else on the net! Just put it into its place manually. Usually this is /usr/ports/distfiles, but there are ports that use subdirectories like /usr/ports/distfiles/bash or even deeper structures like e.g. /usr/ports/distfiles/xorg/xserver!

# fetch https://mesa.freedesktop.org/archive/older-versions/17.x/mesa-17.3.9.tar.xz -o /usr/ports/distfiles/mesa-17.3.9.tar.xz
# portmaster -D x11/xorg-minimal

Portmaster: Success!

Eventually the process should end successfully. Portmaster gives a final report – and our requested program has been installed!

What’s next?

Now you know how to build ports using portmaster. There’s more to that tool, though. But we’ll come to that in the next post which will also show how to use it to update ports.

Summer Sun and microsystems

July is coming to an end so there should be a new article on the Eerie Linux blog! Here it is. It’s not about a single technical topic, though. More of a “meta” article. I’m writing about what I’d like to be writing about soon! No, I haven’t been on vacation or anything. Sometimes things just don’t work out as planned. More on that in a second.

Illumos!

Have you noticed that I changed the header graphic? In 2016 I added beastie and puffy. And now there’s also the Illumos phoenix. Being a Unix lover, I’ve long felt that I’m really missing something in knowing next to nothing about the former Sun platform. Doing just a little bit of experimentation, I’ve become somewhat fascinated with the Solaris / Illumos world.

It’s Unix, so I know my way around for the most basic things. But it’s quite different from the *BSDs and Linux that I work with on a daily basis – and from what I can say so far, definitely not for the worse. The system left an impression on me of being well engineered and offering interesting or even beautiful solutions for common problems! I can’t remember the last time when something in Linux-space struck me as being beautifully crafted – which is no wonder given the fact that all the tools are developed separately and are only bundled together to form a complete operating system by the distributions. While it usually works, the aspect of a consistent OS with a closely coordinated userland was one of the strong points that drew me towards *BSD. Obviously the same is true for the various Illumos distributions.

Since I’m really just starting my journey, I don’t have that much to say, yet. I plan on doing an interview with an OmniOS user who switched over from FreeBSD not too long ago and hope that I’ll be able to arrange something. Other than that I’ll have to do some reading and will probably take a look at the official Solaris, too, even though I’m all for Open Source. When I feel capable of writing something remotely useful, I plan on presenting the various Illumos distributions and their strong points. That might take me some time, though, but from now on several Solaris/Illumos topics are on my todo list.

Illumos people reading this: If you’d like me to write about your OS, please help me by making some suggestions of topics for beginners! That would be very much appreciated.

BSD router

The article series about the custom-built BSD router is still the most popular one on my blog. Given that so many people are interested in this topic, I wanted to write a follow-up for quite some time now. Since the newest version of OPNsense (19.7) was released this month, I decided to finally pick that topic. Would have been nice to do a fresh reinstall and cover what has changed.

Also there have been many new firmware releases for the APU2 since I covered it. I was rather excited when I learned (in January or so) that they had finally enabled the ECC feature for the RAM! Other stuff happened, too, so I’d definitely have something to play with and to write about.

There’s only one problem… As of exactly this month, I don’t have a working APU2 anymore. If you have children, keep your stuff out of their reach! It doesn’t make that great a toy, anyway.

So this topic has to be postponed due to lack of hardware. But I will get a new one sometime and then write about it. I already have some ideas on what to do there for a second mini series.

Ravenports

Another thing that I’d have liked to write about is my favorite packaging system, Ravenports. There’s a pretty big thing coming soon (I hope), but it’s not there, yet. So no new issue of the “raven report” this month!

Other than that big change, I still have the other issue on my todo list: Re-bootstrap on FreeBSD i386. I’ve done that once (before the toolchain update to gcc 8) and will do it again as I find the time. Of course right now it makes sense to wait for the anticipated new component to land first!

I’ve also been thinking about writing a bootstrap script. Probably I’ll give it a shot – and if it can do i386, I’ll have that covered, too.

HardenedBSD

This has been on my list for over a year now, too. I took a quick look at that project and like what they do very much. It’s basically FreeBSD with lots and lots of security enhancements, built and maintained by a small team of people dedicated to making FreeBSD more secure. Why a separate project, then? For the simple reason that FreeBSD is a huge project where several parties have various interests in the OS and where getting very invasive changes in is sometimes not that easy. So HardenedBSD is developed in parallel to give the developers all the freedom to make changes as they please.

Apart from all the security-related stuff, there are a few other things where the system differs from upstream FreeBSD, too. I’ll need to explore this a little more and start actually doing stuff with HardenedBSD. Then I’ll definitely write about it here on the blog.

Ports and packages

In 2017 I wrote about package management and Ports on FreeBSD in a mini series of articles. It ended with building software from ports. Actually it wasn’t meant to end right there, as I had at least two more articles in the pipe. I’ll be concentrating on this one and hope to have a blog post out in early August.

Other stuff

My interest in the ARM platform has not vanished and I want to do more with it (and then probably blog about it, too). Also I want to revisit FreeBSD jails and still have to get my series on email done… And I’m not done with the “power user” series, either.

So there’s more than enough topics and far too little time. Let’s see what I can get done this year and what will have to wait!

ARM’d and dangerous pt. 2: FreeBSD on the Pinebook (aarch64)

In February I wrote about real aarch64 server hardware. My interest in the ARM platform has not decreased – and this month I finally got my Pinebook shipped.

[update] I’ve updated the post from 05-24 to the 05-31 version.[/update]

Aarch64

I’ve always been interested in alternatives to common solutions – not just with open source operating systems, but also with architectures. Sure, ARM is pretty much ubiquitous when it comes to mobile devices. But when it comes to servers or PCs, there’s a total dominance of the amd64 (“x86_64”) architecture. However in times of Meltdown, Spectre, Zombie Load and such it might well be worth taking a closer look at alternative platforms, even if that’s not your primary interest.

Arm Holdings is a UK-based company that designs processors and licenses them. There have been multiple revisions of the architecture, like ARMv6 and ARMv7. Both are 32-bit architectures used e.g. in the popular Raspberry Pi and Raspberry Pi 2. ARMv8 is the first 64-bit version and is available in the form of the Raspberry Pi 3 among others. The incredible success of those small single-board computers has led to a whole lot of spin-offs.

Pine64

While a lot of the competitors specialize in extremely cheap Pi clones with a few improvements (and too often with their own problems), one of the better alternatives comes from Pine64. They don’t just sell hardware that works with one custom-compiled Linux kernel which rarely (or never) gets any upgrades. On the contrary: They are trying to build a community around the hardware dedicated to do cool things with it and eventually blaze a trail for high-quality ARM-based alternative hardware.

One really compelling offer is that of the Pinebook. While it’s nothing special to use one of the little single-board computers for a media station or things like that, a real laptop is something quite different. Especially as the company behind Pine64 decided to sell it at cost to members of the Linux and *BSD communities! It really is a $99 (plus shipping) 11.6″ laptop. Definitely a nice idea to steer the community. Especially since they announced a Pinebook Pro, a tablet, a phone and so on during this year’s FOSDEM. I’m not sure when they will be available, though.

There’s only one “problem” with the Pinebook: You cannot buy it regularly. If you are interested in purchasing one, you need to register on the site and when a new batch is to be made, you get a coupon code that can be used to actually place an order. Sometime last year I decided that this sounded pretty interesting and requested a coupon. I got notified in February or so and eventually the laptop was shipped in May.

Pinebook

Here’s some info on the specs (see here for the full specifications):

  • Allwinner A64 Quad Core SOC with Mali 400 MP2 GPU
  • 2GB LPDDR3 RAM
  • 16GB of eMMC (upgradable)
  • 2x USB 2.0 Host
  • Lithium Polymer Battery (10000mAH)
  • Stereo Speakers
  • WiFi 802.11bgn + Bluetooth 4.0

Also they offered a 1366×768 display. When my Pinebook arrived, I was in for a surprise, though: They had upgraded this model to use a 1920×1080 IPS panel, which is really nice! The low resolution of the original model was one of the things that made me think twice before buying. Glad that I chose to go ahead.

A Pinebook 1080p

The Pinebook comes with KDE Neon preinstalled. It’s apparently a Debian-based distro with the latest Plasma Workspaces desktop. I opened Firefox and was able to browse the net – but I didn’t buy this laptop to use boring Linux. 😉 Let’s try something more exciting where not all the devices work, yet!

Preparation

The Pinebook is new enough that it’s not supported with any FreeBSD release at this time. So you have to use the development branch known as -CURRENT. Right now it’s 13-CURRENT and it’s the exciting branch where all the latest features, fixes and improvements go. The FreeBSD project provides snapshots for developers or people who are willing to test bleeding edge code. Needless to say that by doing this you install an OS that is not at all meant for daily usage. It should not hold important data or anything. This is tinkering with some cool stuff – no more no less.

You need a computer that can make use of micro SD cards (or SD cards by using an adapter that usually comes with micro SDs). A 4 GB card suffices but anything bigger is also fine.

Get a snapshot and the checksum file from here (e.g. CHECKSUM.SHA512-FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447 and FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img.xz)

Since there are differences with the various ARM-based platforms, make sure you get the right one for the Pinebook. Each file contains a date (2019-05-31 in this case) and a revision number (r348447) so you know when the snapshot was created and what version of the code in Subversion it was built from.

Once you downloaded both files, verify the checksum and decompress the archive:

% shasum -c CHECKSUM.SHA512-FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447
% xz -dvv FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img.xz

Now dd the image onto a micro SD card. Be absolutely sure that you dd over the right device – if you don’t, you can easily lose data that you wanted to keep! In my case the right device is mmcsd0, substitute it for yours. Get the image written with the following command (FreeBSD versions before 12.0 do not support the “progress” option – leave out the “status=progress” in this case):

# dd if=FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img of=/dev/mmcsd0 bs=1m status=progress

FreeBSD

Now insert the micro SD card into your Pinebook and turn it on. It should boot off of the card and right into FreeBSD. You might see some scary looking messages like lock order reversals and things like that which you are probably not used to. Welcome to -CURRENT! Since you are testing a development snapshot, this version of FreeBSD has some diagnostic features enabled that are helpful in pinning down bugs (and eventually fixing them). If you’re just an advanced user (like me) and not a developer, ignore them.

Log in as root with the password root. Take a look around and if everything seems to work, power the device off again:

# shutdown -p now

On first boot, the partition holding the primary filesystem is grown to the maximum available space and the filesystem is, too. Let’s put the card back into the other computer and have a look at the partitions:

# gpart show mmcsd0
=>      63  15493057  mmcsd0  MBR  (7.4G)
        63      2016          - free -  (1.0M)
      2079    110502       1  fat32lba  [active]  (54M)
    112581  15378491       2  freebsd  (7.3G)
  15491072      2048          - free -  (1.0M)

# gpart show mmcsd0s2
=>       0  15378491  mmcsd0s2  BSD  (7.3G)
         0        59            - free -  (30K)
        59  15378432         1  freebsd-ufs  (7.3G)

As you can see, the FreeBSD slice is now > 7 GB in size, even though the original image was a lot smaller to fit onto a 4 GB card. Now that we have enough usable space, let’s mount the filesystem (the only one on the second slice) and copy the image over:

# mount /dev/mmcsd0s2a /mnt
# cp FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img /mnt
# umount /mnt

Putting FreeBSD on the Pinebook

Put the micro SD card back into the Pinebook and boot off of it. Let’s see how many storage devices there are:

# geom disk list
Geom name: mmcsd0
Providers:
1. Name: mmcsd0
   Mediasize: 7932477440 (7.4G)
   Sectorsize: 512
   Stripesize: 4194304
   Stripeoffset: 0

[...]

Geom name: mmcsd1
Providers:
1. Name: mmcsd1
   Mediasize: 15518924800 (14G)
   Sectorsize: 512
   Stripesize: 512
   Stripeoffset: 0

[...]

Alright, mmcsd0 is the SD card and mmcsd1 is the internal eMMC, which is a bit bigger. To “install” FreeBSD on the device (and erase Linux) just dd the image onto the internal storage, shut down the system and eject the card:

# cd /
# dd if=FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img of=/dev/mmcsd1 status=progress
# shutdown -p now

Now the depenguinization of your device is complete! Power it on again and it’ll boot FreeBSD off the eMMC.

If you get tired of this system, try another. Go to the pine64 website and browse through the “Partner Projects” tab. You’ll certainly find other interesting operating systems or distributions to install.

Status of FreeBSD on the Pinebook

So what works on FreeBSD currently and what doesn’t? I’ve read about screen flickering on the console and things like that. But from what I can say, those issues are gone. The console works well even on my 1080p model. X11 works as well and I’ve tested various desktop environments. Building packages from ports works, but of course this is not the kind of hardware that’s best suited for that.

What did NOT work is e.g. Firefox – it crashes. Also sound is not working, yet.

I didn’t test WLAN or Bluetooth since I’m using a USB to LAN adapter to access the net. For the dmesg output see the end of this post.

What’s next?

I’ll stick to FreeBSD on the Pinebook and if that is your special interest, too, feel free to contact me. My current plan is to write about configuring a fresh system, packages, and a project that I started for the Pinebook (lite packages and the corresponding ports option “light”).

dmesg.boot

------
KDB: debugger backends: ddb
KDB: current backend: ddb
Copyright (c) 1992-2019 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
	The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 13.0-CURRENT r348447 GENERIC arm64
FreeBSD clang version 8.0.0 (tags/RELEASE_800/final 356365) (based on LLVM 8.0.0)
WARNING: WITNESS option enabled, expect reduced performance.
VT(efifb): resolution 1920x1080
KLD file umodem.ko is missing dependencies
Starting CPU 1 (1)
Starting CPU 2 (2)
Starting CPU 3 (3)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
arc4random: WARNING: initial seeding bypassed the cryptographic random device because it was not yet seeded and the knob 'bypass_before_seeding' was enabled.
random: entropy device external interface
MAP 47ef4000 mode 2 pages 24
MAP b8f2f000 mode 2 pages 1
MAP b8f31000 mode 2 pages 1
MAP bdf50000 mode 2 pages 16
kbd0 at kbdmux0
ofwbus0: 
clk_fixed0:  on ofwbus0
clk_fixed1:  on ofwbus0
simplebus0:  on ofwbus0
rtc0:  mem 0x1f00000-0x1f003ff irq 52,53 on simplebus0
rtc0: registered as a time-of-day clock, resolution 1.000000s
regfix0:  on ofwbus0
regfix1:  on ofwbus0
ccu_a64ng0:  mem 0x1c20000-0x1c203ff on simplebus0
ccu_sun8i_r0:  mem 0x1f01400-0x1f014ff on simplebus0
psci0:  on ofwbus0
aw_sid0:  mem 0x1c14000-0x1c143ff on simplebus0
iichb0:  mem 0x1f03400-0x1f037ff irq 57 on simplebus0
iicbus0:  on iichb0
gic0:  mem 0x1c81000-0x1c81fff,0x1c82000-0x1c83fff,0x1c84000-0x1c85fff,0x1c86000-0x1c87fff irq 49 on simplebus0
gic0: pn 0x2, arch 0x2, rev 0x1, implementer 0x43b irqs 224
gpio0:  mem 0x1c20800-0x1c20bff irq 23,24,25 on simplebus0
gpiobus0:  on gpio0
aw_nmi0:  mem 0x1f00c00-0x1f00fff irq 54 on simplebus0
iichb1:  mem 0x1f02400-0x1f027ff irq 55 on simplebus0
iicbus1:  on iichb1
gpio1:  mem 0x1f02c00-0x1f02fff irq 56 on simplebus0
gpiobus1:  on gpio1
axp8xx_pmu0:  at addr 0x746 irq 59 on iicbus0
gpiobus2:  on axp8xx_pmu0
generic_timer0:  irq 4,5,6,7 on ofwbus0
Timecounter "ARM MPCore Timecounter" frequency 24000000 Hz quality 1000
Event timer "ARM MPCore Eventtimer" frequency 24000000 Hz quality 1000
a10_timer0:  mem 0x1c20c00-0x1c20c2b irq 8,9 on simplebus0
Timecounter "a10_timer timer0" frequency 24000000 Hz quality 2000
aw_syscon0:  mem 0x1c00000-0x1c00fff on simplebus0
awusbphy0:  mem 0x1c19400-0x1c19413,0x1c1a800-0x1c1a803,0x1c1b800-0x1c1b803 on simplebus0
cpulist0:  on ofwbus0
cpu0:  on cpulist0
cpufreq_dt0:  on cpu0
cpu1:  on cpulist0
cpu2:  on cpulist0
cpu3:  on cpulist0
aw_thermal0:  mem 0x1c25000-0x1c250ff irq 10 on simplebus0
a31dmac0:  mem 0x1c02000-0x1c02fff irq 11 on simplebus0
aw_mmc0:  mem 0x1c0f000-0x1c0ffff irq 15 on simplebus0
mmc0:  on aw_mmc0
aw_mmc1:  mem 0x1c10000-0x1c10fff irq 16 on simplebus0
mmc1:  on aw_mmc1
aw_mmc2:  mem 0x1c11000-0x1c11fff irq 17 on simplebus0
mmc2:  on aw_mmc2
ehci0:  mem 0x1c1a000-0x1c1a0ff irq 19 on simplebus0
usbus0: EHCI version 1.0
usbus0 on ehci0
ohci0:  mem 0x1c1a400-0x1c1a4ff irq 20 on simplebus0
usbus1 on ohci0
ehci1:  mem 0x1c1b000-0x1c1b0ff irq 21 on simplebus0
usbus2: EHCI version 1.0
usbus2 on ehci1
ohci1:  mem 0x1c1b400-0x1c1b4ff irq 22 on simplebus0
usbus3 on ohci1
gpioc0:  on gpio0
uart0:  mem 0x1c28000-0x1c283ff irq 31 on simplebus0
uart0: console (115384,n,8,1)
pwm0:  mem 0x1c21400-0x1c217ff on simplebus0
pwmbus0:  on pwm0
pwmc0:  on pwm0
iic0:  on iicbus1
gpioc1:  on gpio1
gpioc2:  on axp8xx_pmu0
iic1:  on iicbus0
aw_wdog0:  mem 0x1c20ca0-0x1c20cbf irq 58 on simplebus0
cryptosoft0: 
Timecounters tick every 1.000 msec
usbus0: 480Mbps High Speed USB v2.0
usbus1: 12Mbps Full Speed USB v1.0
ugen0.1:  at usbus0
ugen1.1:  at usbus1
uhub0:  on usbus1
uhub1:  on usbus0
usbus2: 480Mbps High Speed USB v2.0
usbus3: 12Mbps Full Speed USB v1.0
ugen2.1:  at usbus2
uhub2:  on usbus2
ugen3.1:  at usbus3
uhub3:  on usbus3
AW_MMC_INT_RESP_TIMEOUT 
uhub0: 1 port with 1 removable, self powered
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
uhub3: 1 port with 1 removable, self powered
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
mmc0: No compatible cards found on bus
aw_mmc0: Spurious interrupt - no active request, rint: 0x00000004

aw_mmc1: Cannot set vqmmc to 33000003300000
uhub1: 1 port with 1 removable, self powered
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
uhub2: 1 port with 1 removable, self powered
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
mmc1: No compatible cards found on bus
aw_mmc1: Spurious interrupt - no active request, rint: 0x00000004

aw_mmc2: Cannot set vqmmc to 33000003300000
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_DATA_END_BIT_ERR
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
mmcsd0: 16GB  at mmc2 52.0MHz/8bit/4096-block
mmcsd0boot0: 4MB partion 1 at mmcsd0
mmcsd0boot1: 4MB partion 2 at mmcsd0
mmcsd0rpmb: 4MB partion 3 at mmcsd0
Release APs...done
CPU  0: ARM Cortex-A53 r0p4 affinity:  0
Trying to mount root from ufs:/dev/ufs/rootfs [rw]...
 Instruction Set Attributes 0 = 
 Instruction Set Attributes 1 = 
         Processor Features 0 = 
         Processor Features 1 = 
      Memory Model Features 0 = 
      Memory Model Features 1 = 
      Memory Model Features 2 = 
             Debug Features 0 = 
             Debug Features 1 = 
         Auxiliary Features 0 = 
         Auxiliary Features 1 = 
CPU  1: ARM Cortex-A53 r0p4 affinity:  1
CPU  2: ARM Cortex-A53 r0p4 affinity:  2
CPU  3: ARM Cortex-A53 r0p4 affinity:  3
WARNING: WITNESS option enabled, expect reduced performance.
ugen2.2:  at usbus2
uhub4:  on usbus2
random: randomdev_wait_until_seeded unblock wait
uhub4: 4 ports with 1 removable, self powered
ugen2.3:  at usbus2
ukbd0 on uhub4
ukbd0:  on usbus2
kbd1 at ukbd0
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
ums0 on uhub4
ums0:  on usbus2
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
ums0: 5 buttons and [XYZT] coordinates ID=1
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
ugen2.4:  at usbus2
random: randomdev_wait_until_seeded unblock wait
random: unblocking device.
GEOM_PART: mmcsd0s2 was automatically resized.
  Use `gpart commit mmcsd0s2` to save changes or `gpart undo mmcsd0s2` to revert them.
lock order reversal:
 1st 0xffff000040a26ff8 bufwait (bufwait) @ /usr/src/sys/kern/vfs_bio.c:3904
 2nd 0xfffffd00011a1400 dirhash (dirhash) @ /usr/src/sys/ufs/ufs/ufs_dirhash.c:289
stack backtrace:
#0 0xffff0000004538a0 at witness_debugger+0x64
#1 0xffff0000003f7f9c at _sx_xlock+0x7c
#2 0xffff00000068879c at ufsdirhash_add+0x38
#3 0xffff00000068b0d8 at ufs_direnter+0x3c4
#4 0xffff000000691b14 at ufs_rename+0xb7c
#5 0xffff000000755b60 at VOP_RENAME_APV+0x90
#6 0xffff0000004c1678 at kern_renameat+0x304
#7 0xffff000000718448 at do_el0_sync+0x4fc
#8 0xffff0000006ff200 at handle_el0_sync+0x84
lo0: link state changed to UP

Rusted ravens: Ravenports March 2019 status update

It’s been a couple of months since I last wrote about Ravenports, the universal *nix application building framework. Raven is not exactly a slowly moving project, so a lot has happened since then.

Platform support

Raven currently supports DragonFly BSD, FreeBSD, Linux and Solaris/Illumos, the latter being only in the form of binary packages (except for when you have access to an installation of Solaris 10u8 – which can be used to build packages, too).

People following the project will notice that macOS/Darwin support is missing from this list. This is not a mistake, as support for that platform has been put on hold for now. While Raven has successfully been bootstrapped on macOS before, the developers have lost access to any macOS machines and thus cannot continue support for it.

This does not mean that the platform is gone forever. It might be resurrected at a later point in time if the developers are given access to a Mac again. The adventurous can even try to bootstrap Raven on different platforms now, as the process has been documented (with macOS as the example).

I intended to do some work on bootstrapping Raven on FreeBSD/ARM64 – only to find that FreeBSD unfortunately still has a long way to go before that platform is tier 1. At work I had access to server-class ARM64 hardware, but current versions of FreeBSD have trouble booting up and I could not get the network running at all (if you’re interested in the details see my previous post). I’m still hoping for reactions on the mailing list, but until upstream FreeBSD is fixed on ThunderX, trying to bootstrap does not make much sense.

Toolchain and package updates

The toolchain used by Ravenports has been updated to GCC 8.3 and Binutils 2.32 on all four supported platforms (access to Mac was lost before the toolchain update).

As usual, Solaris needed a bit of extra treatment, but an up-to-date compiler and tools are available for it now, too. Even better: The linker from the LLVM project (lld) is available and usable on Solaris/Illumos now as well. Since it takes several hours (!) to link lld on Solaris, a mostly static lld executable was added to the sysroot package for that platform. This way this long-building package does not have to be rebuilt as often.

Packages have been rebuilt with this bleeding-edge toolchain (plus the usual fallout has been collected and fixed). So if you are using Raven, you are making use of the latest compiler technology with the best in optimization. Of course a lot of effort went into providing the most current versions of the packaged software, too (at least where that is feasible).

On the desktop side of things I’ve added the awesome window manager to the list of available software. It’s actually my WM of choice, but not too many people are into tiling, so I postponed this one until after making Xfce available. Work on bringing in more Lua-related ports for advanced configuration of it is ongoing, but the WM is already usable as it is now.

I’ve also done a bit of less visible work, going back to many ports that I created previously and added in missing license info. This work is also not completed, yet, but the situation is improving, of course.

Rust!

One of the big drawbacks of Ravenports, as stated last time, was the lack of the Rust compiler. This effectively meant a showstopper for things like current versions of Firefox, Thunderbird, librsvg, etc. The great news is that this blocker has been mostly removed: Rust is available via Raven for Dragonfly, FreeBSD and Linux! Solaris/Illumos support is still pending; any helping hand would be greatly appreciated.

Bringing in Rust was a big project on its own. Adding an initial bootstrap package for Dragonfly alone took months (thank you, Mr. Neumann!). The first working version of the port made Rust 1.31 available. It has since been updated to versions 1.32 and 1.33, and John has added functionality to the Raven framework to work with Rust’s crates as well as scripts to assist with future updates. Taking all of that into consideration, Rust support in Raven is already pretty good for the short time that we have had it.

Eventually even a port for Firefox landed – as of now it’s marked broken, though. The reason is that while it does compile just fine, the program crashes when actually started. The exact cause for this is yet unknown. If anybody with some debugging skills has a little time on their hands, nailing down what happens would be a task that a lot of people would benefit from for sure!

Updated ravenadm

Ravenadm, the Ravenports administration tool, has seen several updates with new features. Some brought internal changes necessary for new or updated packages. One example is a project-wide fix for ports that build with Meson: Before the change, many programs needed special treatment to make Meson honor the rpath for the resulting binaries. Now Raven can automatically take care of this, saving us a whole bunch of sed commands in the specification files. Another new feature is the “Solaris functions” mechanism, which can automatically fix certain functions that previously required generating patches. Certainly also very nice to have!
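To give you an idea of what the old per-port treatment amounted to: outside of Raven, one common way to make a Meson-built binary find its libraries is to force the library path into the linker flags by hand. This is only a sketch of the principle, not Raven’s actual specification syntax:

# Hypothetical pre-fix workaround (illustration only): force Raven's
# library path into the rpath at link time so installed binaries
# can find /raven/lib without per-port sed hacks on meson.build.
LDFLAGS="-Wl,-rpath,/raven/lib" meson setup build --prefix=/raven
ninja -C build
ninja -C build install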

Probably my favorite new feature is that Ravenadm now supports concurrent processes in several cases: While you cannot start a second set of package builds at the same time for obvious reasons, it is now possible to ask Ravenadm in which bucket a certain port lives, sort manifests, and such while building packages! I cannot say how much the previous behavior got in my way while doing porting work… This makes porting much, much more pleasant.
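In practice that means you can now do something like the following while a build is running (the subcommand spellings are from my memory of ravenadm – double-check with ravenadm help):

# Terminal 1: a long-running set of package builds
ravenadm build firefox

# Terminal 2, at the same time – queries like these no longer block:
ravenadm locate firefox   # show which bucket the port lives in
ravenadm sort firefox     # sort the port's manifest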

A last improvement that I want to mention here is a rather simple one – but one with a huge impact. Newer versions of Ravenadm put all license-related texts into the logs! This means you can simply look at the log and see if e.g. the terms got extracted correctly. Previously you had to use the ENTERAFTER option to enter an interactive build session and inspect the extracted file. This is a huge improvement for porters.
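So checking a port’s licensing now boils down to a quick look at the build log. The log path below is an assumption based on a default ravenadm setup – adjust it to your builder configuration:

# Search the build log for the extracted license text
# (the log directory is an assumption; yours may differ)
grep -n -i -A 5 'license' /var/ravenports/primary/logs/firefox.log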

SSL

Another big and most likely unique feature added to Raven recently is SSL autoselection. Raven has had autoselection facilities for Python, Ruby and Perl for about a year now. These allow multiple versions of the interpreters to be installed in parallel: A wrapper takes care of calling the actual binary with the same parameters, preferring the newest version over older ones (unless configured differently).
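Conceptually such a wrapper is quite simple. Here is a minimal sketch in plain sh of how an autoselection wrapper for Python could work – my illustration of the principle, not Raven’s actual implementation (the override file and version list are assumptions):

#!/bin/sh
# Illustration only: call the newest installed interpreter unless
# the user pinned a version in an (assumed) override file.
PREFIX=/raven
OVERRIDE="$PREFIX/etc/python_select"    # hypothetical override file

if [ -r "$OVERRIDE" ]; then
    exec "$PREFIX/bin/python$(cat "$OVERRIDE")" "$@"
fi

for v in 3.7 3.6 2.7; do                # newest version first
    if [ -x "$PREFIX/bin/python$v" ]; then
        exec "$PREFIX/bin/python$v" "$@"
    fi
done

echo "autoselect: no python interpreter installed" >&2
exit 1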

Raven supports LibreSSL and OpenSSL as well as LibreSSL-devel and OpenSSL-devel. Before the change, you selected the SSL library in the profile, and all packages were linked against it. Now we have even more flexibility: You can e.g. build all packages against LibreSSL by default and fall back to OpenSSL only for the few packages that really require it!

And in fact Raven takes it all one step further: You can have OpenSSL 1.0.2 and OpenSSL 1.1.1 (which introduced breaking changes) installed in parallel and run packages on the same system where some require the new version and others cannot use it yet! Pretty nice, huh?
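If you are curious which SSL library a given binary actually ended up linked against, ldd will tell you. The two binaries here are just examples; only the /raven prefix is Raven’s standard installation location:

# Inspect example binaries that may be linked against different SSL versions
for bin in /raven/bin/curl /raven/bin/mutt; do
    printf '%s:\n' "$bin"
    ldd "$bin" | grep -Ei 'libssl|libcrypto'
done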

Future work

Of course there are still enough rough edges that require work. Probably the most pressing issue is to get Firefox working so Raven’s users can have access to a convenient and modern browser. There are also quite a few programs that still need ports created for them. The goal here is to provide the most critical programs to allow Dragonfly to make the switch from Dports to Ravenports for the official packages.

On FreeBSD, Filezilla does not currently work: It cannot be built with GCC due to a compiler bug in GCC 7.x and 8.x. Therefore it is a special port that gets built with Clang instead. The problem is that libfilezilla needs to be built with the same toolchain – and that cannot currently be built with Clang using Raven…

Raven on Linux has some packages unavailable due to additional dependencies on that platform. I began adding some Linux-specific ports but lost motivation pretty fast (there are enough other things to do, after all). Also, the package manager is still causing pain by crashing randomly.

Solaris is also missing quite a few packages. This is due to additional patches being required for a lot of software to build properly. Ravenports tries to support this platform as well as possible; however, this could surely be improved if anybody using Solaris or an Illumos distribution as his or her OS of choice would start using Raven, give feedback or even contribute.

Get in touch!

Interested in Raven? Get in touch with us! There is an official IRC channel now (#ravenports on Freenode) which is probably the best place to talk to other Raven users or the porters and developers. You can of course also send an email.

If you want to contribute, there is now a new “Customsource” repository on GitHub that you can create pull requests against. Feel free to contribute anything from finished ports that might only need polish to WIP ports that are important for you but you got stuck with.
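If you have never contributed via pull requests before, the usual GitHub flow applies (the fork URL and branch name below are placeholders for your own):

# Fork the Customsource repository on GitHub first, then:
git clone https://github.com/<your-user>/Customsource.git
cd Customsource
git checkout -b my-new-port
# ... add your port or fix ...
git add .
git commit -m "Add port for <something>"
git push origin my-new-port    # then open the pull request on GitHub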

There are many other ways of helping the project than doing porting work, though. Open issues if you find problems with packages or have an idea. Also tell us if you read the wiki and found something hard to understand. Or if you could use a tutorial for something – just ping me. Asking doesn’t hurt and chances are that I can write something up.

Got something else that I didn’t talk about here? Tell us anyway. We’re pretty approachable and less elitist than you might think people who work on new package systems would be! 😉