The history of *nix package management

Very few people will argue against the statement that Unix-like operating systems conquered the (professional) world thanks to a whole lot of strong points – one of which is package management. Whenever you take a look at another *nix OS or even just another Linux distro, one of the first things to do (if not the first!) is to get familiar with how package management works there. You want to be able to install and uninstall programs after all, right?

If you’re looking for another article on using jails on a custom-built OPNsense BSD router, please bear with me. We’re getting there. To make our jails useful we will use packages. And while you can safely expect any BSD or Linux user to understand that topic pretty well, products like OPNsense are also popular with people who are Windows users. So while this is not exactly a follow-up article on the BSD router series, I’m working towards it. Should you not care for how that package management stuff all came to be, just skip this post.

When there’s no package manager

There’s this myth that Slackware Linux has no package manager, which is not true. However Slackware’s package management lacks automatic dependency resolving. That’s a very different thing but probably the reason for the confusion. But what is package management and what is dependency resolving? We’ll get to that in a minute.

To be honest, it’s not very likely today to encounter a *nix system that doesn’t provide some form of package manager. If you have such a system at hand, you’re quite probably doing Linux from Scratch (a “distribution” meant to learn the nuts and bolts of Linux systems by building everything yourself) or have manually installed a Linux system and deliberately left out the package manager. Both are special cases. Well, or you have a fresh install of FreeBSD. But we’ll talk about FreeBSD’s modern package manager in detail in the next post.

Even Microsoft has included Pkgmgr.exe since Windows Vista. While it goes by the name of “package manager”, it pales in comparison to *nix package managers. It is a command-line tool that allows you to install and uninstall packages, yes. But those are limited to operating system fixes and components from Microsoft. Nice try, but what Redmond offered in late 2006 is vastly inferior to what the *nix world had more than 10 years earlier.

There’s the somewhat popular Chocolatey package manager for Windows, and Microsoft said that they’d finally include a package manager called “OneGet” (apt-get anyone?) with Windows 10 (or was it “NuGet” or something?). I haven’t read a lot about it on major tech sites, though, and thus have no idea if people are actually using it and if it’s worth trying out (I would, but I disagree with Microsoft’s EULA and thus haven’t had a Windows PC in roughly 10 years).

But how on earth are you expected to work with a *nix system when you cannot install any packages?

Before package managers: Make magic

Unix began its life as an OS by programmers for programmers. Want to use a program on your box that is not part of your OS? Go get the source, compile and link it, and then copy the executable to /usr/local/whatever. In times when you had just some 100 MB of storage in total (or even less), this probably worked well enough. You simply couldn’t go on a rampage and install unneeded software anyway, and by sticking to the /usr/local scheme you separated optional stuff from the actual operating system.

More space became available, however, and software grew bigger and more complex. Unix got the ability to use libraries (“shared objects”), ELF executables, etc. To make building more complicated software easier, make was developed: a tool that reads a Makefile telling it exactly what to do. Software began shipping not just with the source code but also with Makefiles. Provided that all dependencies existed on the system, it was quite simple to build the software again.

Compilation process (invoked by make)

Makefiles also provide a facility called “targets” which made a single file support multiple actions. In addition to a simple make statement that builds the program, it became common to add a target that allowed for make install to copy the program files into their assumed place in the filesystem. Doing an update meant building a newer version and simply overwriting the files in place.
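A minimal Makefile along those lines might look like this (the program name foo, its files and the installation paths are purely illustrative, not taken from any specific project; note that recipe lines must be indented with tabs):

```make
PREFIX ?= /usr/local

foo: foo.c
	cc -O2 -o foo foo.c

install: foo
	install -m 755 foo $(PREFIX)/bin/foo
	install -m 644 foo.1 $(PREFIX)/man/man1/foo.1

uninstall:
	rm -f $(PREFIX)/bin/foo $(PREFIX)/man/man1/foo.1

clean:
	rm -f foo
```

A plain make builds the default (first) target, make install copies the results into place, and if the author provided one, make uninstall removes them again.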

Make can do a lot more, though. Faster recompiles by looking at the generated files’ timestamps (and only rebuilding what has changed and needs to be rebuilt) and other features like this are not of particular interest for our topic. But they certainly helped with the quick adoption of make by most programmers. The outcome for us is that we use Makefiles instead of compile scripts.

Dependency and portability trouble

Being able to rely on make to build (and install) software is much better than always having to invoke compiler, linker, etc. by hand. But that didn’t mean that you could just type “make” on your system and expect it to work! You had to read the README file first (which is still a good idea, BTW) to find out which dependencies you had to install beforehand. If those were not available, the compilation process would fail. And there was more trouble: Different implementations of core functionality in various operating systems made it next to impossible for programmers to make their software work on multiple Unices. The introduction of the POSIX standard helped quite a bit, but operating systems still had differences to take into account.

Configure script running

Two of the answers to the dependency and portability problems were autoconf and metaconf (the latter is still used for building Perl where it originated). Autoconf is a tool used to generate configure scripts. Such a script is run first after extracting the source tarball to inspect your operating system. It will check if all the needed dependencies are present and if core OS functionality meets the expectations of the software that is going to be built. This is a very complex matter – but thanks to the people who invested that tremendous effort in building those tools, actually building fairly portable software became much, much easier!

How to get rid of software?

Back to make. So we’re now in the pleasant situation that it’s quite easy to build software (at least when you compare it to the dark days of the past). But what would you do if you want to get rid of some program that you installed previously? Your best bet might be to look closely at what make install did and remove all the files that it installed. For simple programs this is probably not that bad but for bigger software it becomes quite a pain.

Some programs also came with an uninstall target for make, however, which would delete all installed files again. That’s quite nice, but there’s a problem: After building and installing a program you would probably delete the source code. Having to unpack the sources again just to uninstall the software is quite some effort if you didn’t keep them around – especially since you need the source for exactly the same version, as newer versions might install more or different files!

This is the point where package management comes to the rescue.

Simple package management

So how does package management work? Well, let’s look at packages first. Imagine you just built version 1.0.2 of the program foo. You probably ran ./configure and then make. The compilation process succeeded and you could now issue make install to install the program on your system. The package building process is somewhat similar – the biggest difference is that the install destination is changed! Thanks to the modifications, make wouldn’t put the executable into /usr/local/bin and the manpages into /usr/local/man. Instead make would put the binaries into e.g. /usr/obj/foo-1.0.2/usr/local/bin and the manpages into /usr/obj/foo-1.0.2/usr/local/man.

Installing tmux with installpkg (on Slackware)

Since this location is not in the system’s PATH, it’s not of much use on this machine. But we wanted to create a package and not just install the software, right? As a next step, the contents of /usr/obj/foo-1.0.2/ could be packaged up nicely into a tarball. Now if you distribute that tarball to other systems running the same OS version, you can simply untar the contents to / and achieve the same result as running make install after an unmodified build. The benefit is obvious: You don’t have to compile the program on each and every machine!
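Sketched as shell commands, the staging and packaging steps could look like this (the directory names and the tiny foo “program” are made up for illustration; real-world build systems usually provide a DESTDIR variable for exactly this kind of staged install):

```shell
# Simulate the staged install: files land under a private prefix
# instead of the live system.
mkdir -p /tmp/stage/foo-1.0.2/usr/local/bin
printf '#!/bin/sh\necho "foo 1.0.2"\n' > /tmp/stage/foo-1.0.2/usr/local/bin/foo
chmod +x /tmp/stage/foo-1.0.2/usr/local/bin/foo

# Package it up: paths inside the tarball are relative, so extracting
# it at / reproduces what "make install" would have created there.
tar -czf /tmp/foo-1.0.2.tgz -C /tmp/stage/foo-1.0.2 .

# On the target machine one would then run (as root):
#   tar -xzf foo-1.0.2.tgz -C /
tar -tzf /tmp/foo-1.0.2.tgz    # inspect the contents
```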

So much for primitive package usage. Advancing to actual package management, you would include a list of files and some metadata in the tarball. Then you wouldn’t extract packages by hand but leave that to the package manager. Why? Because it not only extracts all the needed files – it also records the installation in its package database and keeps the file list around in case it’s needed again.

Uninstalling tmux and extracting the package to look inside

Installing using a package manager means that you can query it for a list of installed packages on a system. This is much more convenient than ls /usr/local, especially if you want to know which version of some package is installed! And since the package manager keeps the list of files installed by a package around, it can also take care of a clean uninstall without leaving you wondering if you missed something when you deleted stuff manually. Oh, and it will be able to lend you a hand in upgrading software, too!

That’s about what Slackware’s package management does: It enables you to install, uninstall and update packages. Period.

Dependency tracking

But what about programs that require dependencies to run? If you install them from a package you never ran configure and thus might not have the dependency installed, right? Right. In that case the program won’t run. As simple as that. This is the time to ldd the program executable to get a list of all libraries it is dynamically linked against. Note which ones are missing on your system, find out which other packages provide them and install those, too.
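For example, to see which shared objects a binary needs (using /bin/ls here as a stand-in for the program in question – missing libraries show up as “not found” in the output, and the exact libraries and paths vary between systems):

```shell
# Print the shared libraries a dynamically linked executable requires.
ldd /bin/ls
```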

Pacman (Arch Linux) handles dependencies automatically

If you know your way around, this works OK. If not… Well, while there are a lot of libraries where you can guess from the name which package they likely belong to, there are others, too. Happy hunting! Got frustrated already? Keep telling yourself that you’re learning fast the hard way. This might ease the pain. Or go and use a package management system that provides dependency handling!

Here’s an example: You want to install BASH on a *nix system that just provides the old Bourne shell (/bin/sh). The package manager will look at the packaging information and see: BASH requires readline to be installed. Then the package manager will look at the package information for that package and find out: Readline requires ncurses to be present. Finally it will look at the ncurses package and nod: No further dependencies. It will then offer to install ncurses, readline and BASH for you. Much easier, eh?
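The recursive lookup described above can be sketched in a few lines of portable shell. The deps/ files here are a made-up metadata format (one dependency per line), not how any real package manager stores its data:

```shell
# Hypothetical metadata: one file per package, listing its dependencies.
mkdir -p /tmp/deps
printf 'readline\n' > /tmp/deps/bash
printf 'ncurses\n'  > /tmp/deps/readline
: > /tmp/deps/ncurses                     # ncurses has no dependencies

# Depth-first resolution: queue a package's dependencies before the
# package itself and skip anything that is already queued.
resolve() {
    case " $RESOLVED " in *" $1 "*) return ;; esac
    for dep in $(cat "/tmp/deps/$1"); do
        resolve "$dep"
    done
    RESOLVED="$RESOLVED $1"
}

RESOLVED=""
resolve bash
echo "install order:$RESOLVED"    # -> install order: ncurses readline bash
```

A real package manager does essentially the same walk over its package database, additionally checking versions and conflicts along the way.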

Xterm and all dependencies downloaded and installed (Arch Linux)

First package managers

A lot of people claim that the RedHat Package Manager (RPM) and Debian’s dpkg are examples of the earliest package managers. Both are indeed so old that using them directly is inconvenient enough to justify the existence of other programs that allow using them indirectly (yum/dnf and e.g. apt-get) – but the claim is not true.

PMS (short for “package management system”) is generally regarded to be the first (albeit primitive) package manager. Version 1.0 was ready in mid 1994 and used on the Bogus Linux distribution. With a few intermediate steps this led to the first incarnation of RPM, Red Hat’s well-known package manager, which first shipped with Red Hat Linux 2.0 in late 1995.

FreeBSD 1.0 (released in late 1993) already came with what is called the ports tree: A very convenient package building framework using make. It included version 0.5 of pkg_install, the pkg_* tools that would later become part of the OS! I’ll cover the ports tree in some detail in a later article because it’s still used to build packages on FreeBSD today.

Part of a Makefile (actually for a FreeBSD port)

Version 2.0-RELEASE (late 1994) shipped the pkg_* tools. They consisted of a set of tools like pkg_add to install a package, pkg_info to show installed packages, pkg_delete to delete packages and pkg_create to create packages.

FreeBSD’s pkg_add got support for using remote repositories in version 3.1-RELEASE (early 1999). But those tools were really showing their age when they were put to rest with 10.0-RELEASE (early 2014). A replacement had been developed in the form of the much more modern solution initially called pkg-ng or simply pkg. Again, that will be covered in another post (the next one, actually).

With the ports tree, FreeBSD undoubtedly had the most sophisticated package building framework of its time. It’s still one of the most flexible ones and bliss to work with compared to creating DEB or RPM packages… And since Bogus’s PMS was started at least a month after pkg_install, it’s entirely possible that the first working package management tool was in fact another FreeBSD innovation.


Eerie Linux: 5 years of bloggin’!


The Eerie Linux blog quietly turned 5 just last month. I thought a while about what kind of anniversary post I should write to celebrate the fifth birthday. I was even thinking of closing the blog on that day, or at least announcing that I would no longer be able to write posts regularly. I decided against it. While I don’t make any promises, I will try to keep the blog up for now.

The June marathon

In the end I decided not just to hold back that birthday post (this one) but to do something special instead: write a full article every five days! It was a lot of work, but June 2017 saw 6 posts averaging over 1,600 words, with one falling just short of 2,000. I put a lot of detail into those posts and also included quite some pictures.

It has been a fun experience but also an exhausting one. I was always pressed for time, even though I tried to prepare as much material as possible on weekends when the target date fell during the week. Still, I almost never managed to complete a whole article before the day it was due and often had to finish it in the late hours of the evening after work. But now it’s done and I’m happy about that! 😉

5 years of blogging

A lot has happened in the last 5 years. When I started the blog in June 2012, I had quite some time on my hands but I wasn’t sure if I would always find enough topics to write about. This has changed completely. Free time is pretty scarce these days but there’s just so much going on in technology and related areas that I have a very, very long list of things that I’d like to write about – and that list grows faster than I can write and publish articles.

I’ve also moved houses three times over these years – and still haven’t missed a single month completely. Each and every month has had at least one new article and I’m a bit proud of that because a lot of times it really hasn’t been easy.

Since 2013 I have gotten the most page hits from the US every year, with Germany coming second. Ranks 3 and below vary.

2012

After thinking about starting a blog for over a year, in 2012 I actually started it. I had been using SuSE and Ubuntu Linux on the desktop for a while and wanted to know more about the operating system. And I figured that it would make sense to pick an ambitious but realistic project and write about it as the journey continued.

In my first half-year I wrote 24 posts: introducing myself, finding a suitable distro (looking into Gentoo first but then settling for Arch), thoughts on graphical toolkits and so on. The most important articles were part of a series on installing and comparing 20 Linux desktop environments.

The 6 months of 2012 saw just over 1,000 page views and I even got my first “likes” and comments. However, I had no idea if that was good for a blog of this kind. Considering that it was public and that the whole world could potentially visit the blog, it seemed pretty low. Especially if you consider the many hours that went into the posts. “There must be thousands of Linux blogs out there, and who could read them all?”, I thought. But I went on doing what I was doing because of my own interest in Linux topics. And I also continued to blog about it. If somebody would read and enjoy it: excellent. If not, it had at least made me write an English text, which is quite valuable for a non-native speaker.

2013

In retrospect, 2013 was an interesting year. I got the most comments and “likes” that I ever got in a single year. And page hits increased to just over 6,600! You can imagine that I was extremely happy that there actually proved to be some interest in what I was doing. I already had less time now and managed to write 22 posts in the whole year, compared to 24 in just 6 months the year before.

I continued to explore and compare applications built with the Qt and GTK toolkits, and these proved to be my most popular articles. But I also decided to take a little peek into the bigger world of *nix and have a shy first look at Hurd and BSD. My focus remained completely on Linux, though (little did I know that this would come to an end in the future!). Then I dug into package building and learned a lot by trying to update an old and no longer supported Linux distro. Finally I got my domain elderlinux.org and made the first step towards my original goal: building my own Linux distribution (you have to have done that once, right? And if only for learning purposes).

2014

In 2014 things started to decline. The page hits rose slightly to over 6,800, but that was it. I published 14 posts, yet all of the top ten most popular ones were written in previous years. I didn’t notice that at the time, though. I managed to cover a wide variety of topics, including my first post on hardware (writing about the new RISC-V platform that I still keep an eye on).

The most important achievement of the year was that I completed my Arch:E5 project. My own distribution was Arch-derived but did a lot of things differently. It used the de-blobbed Linux-libre kernel, was based on a different libc, replaced systemd with runit and used LLVM/Clang as the default compiler, among other things. It also used a more modular repository architecture compared to mainline Arch Linux. I took this project pretty far: In the end I had a nice self-hosted distro that even came with two desktop environments to choose from. I learned a lot by doing this, but since nobody else seemed to be interested in it (to be honest, I didn’t reach out on the Arch forums or anything, though!), I ended the project and continued to explore other things.

2015

This was the year things changed. Page hits dropped: At about 6,500 hits, fewer people visited my blog than even in 2013. I only wrote one post per month (with the exception of April, where one was an April Fools article and another set things straight again). Only two posts of this year made it into the top 10 of most popular posts: One about the “Truly Ergonomic Keyboard” (which obviously brought some people to my blog who would probably not be interested in most other articles that I wrote) and another one that was a “FreeBSD tutorial for Linux users” (which received unusual attention thanks to being featured on FreeBSDNews).

I didn’t intend it to be, but 2015 was the first year on the blog that was totally dominated by *BSD topics. Since I had started to seriously explore FreeBSD and OpenBSD, this looks like a natural thing. I wrote an April Fools post about Arch Linux’s Pacman coming to OpenBSD and then tried to prove that it actually works. Then a friend asked me about FreeBSD and I decided to write a little introduction series. And then the year was more or less over.

2016

After the disappointment of declining public interest in my blog, I didn’t expect much from 2016. Especially as I had been venturing deeper into *BSD territory – and liked it enough to continue writing about it. This was obviously even more niche than Linux, and how many people would want to read that stuff, especially from a beginner? I was in for a surprise: the blog got more than 7,100 hits that year, with four new posts (all of which were featured on FreeBSDNews) making it into the top 10 this time! I had hoped to reach 7,000 hits in 2014, and after it looked like things weren’t going in a good direction, this was a pretty rewarding experience.

I wrote about various *BSD topics: A howto on setting up a dual-boot FreeBSD/OpenBSD with full disk encryption, a little comparison of documentation in Linux and (Free)BSD, a short introduction to Vagrant and a series on getting started with Bacula on FreeBSD. And finally in December an article on using TrueOS for over three months as my daily driver. This post would spark a lot of interest in 2017, making it the top ranked popular post at the time I write this.

2017

In the first half of this year I have already written 14 articles, including two series that a lot of work went into: the adventures of reviving and updating an ancient FreeBSD 4.11 system with Pkgsrc, and building a home router with OPNsense/pfSense. And now, after only 6.5 months, page hits have already climbed to over 6,700! The most recent 3 months have each totaled more than 1,000, a mark that I had never reached before.

And that’s all before FreeBSD News, Lobsters and even DragonFlyDigest linked to either my pfSense vs. OPNsense article or even to the whole BSD home router series! That made the stats really skyrocket over the previous two weeks. It definitely looks like there are quite some other people out there that don’t think *BSD is boring!

Current stats

Daily blog stats 07/2017

Before the great rush I was receiving about 20 to 60 page hits each day. The new record is now 425 hits on Jul 18 after Lobste.rs picked up the pfSense vs. OPNsense comparison!

Weekly blog stats 07/2017

Weekly hits were between 140 and 370 from January to July. And then there was this week that saw 1,200 page hits – as much as the whole month of May this year, which had been the absolute monthly record before!

Monthly blog stats 07/2017

Between January 2016 and June 2017, the blog received between 440 (January ’16) and 1,200 (May ’17) hits per month. And then July happened, with over 2,700 hits!

Yearly blog stats 07/2017

The best blogging year so far had been 2016 with 7,100 hits – now, at the end of July 2017, this blog has already seen over 8,800 hits. I’m pretty confident of reaching the magic mark of 10,000 this time (wow!).

The future?

Of course I cannot say for sure. But I’ve found my place in the FreeBSD community and made a comfortable home with GhostBSD. After becoming part of the small team that develops this OS, I’ve faced quite some challenges, and without any doubt there are more to come. But it is a great learning experience, and being a part of it (albeit a small one) feels very rewarding.

And even though time is a very limiting factor I currently don’t feel like taking a break any longer! I will definitely continue to explore more BSD and write about it. Next station: Some preparations for an article on using jails on the newly installed OPNsense router (or anywhere else!). Thanks for reading – and see you soon.

Building a BSD home router (pt. 3): Serial access and flashing the firmware

Part 1 of this article series was about why you want to build your own router, and how to assemble the APU2 that I chose as the hardware to build this on. Part 2 gave some Unix history and explained what a serial console is.

In this post we will prepare a USB memstick to update the BIOS and connect to the APU2 using the serial console. Then we’ll flash the latest firmware on there.

Cables and serial connections

In the ol’ days you would simply connect the COM port on one machine to the COM port on the other. Today a lot of newer laptops don’t even have a serial port (though if yours still has one of those funny devices that you’d access through /dev/fd0, chances are pretty high that it also has a COM port!). Fortunately, USB to serial adapter cables exist, solving that problem.

The APU2 has a male DB9 (9 pins) serial port. RS-232 is the common standard for serial communication. According to it, some pins are used to transfer information while others are used to receive information. Now if you connect two machines with a straight serial cable, both will talk on the same pins and listen on the same pins. So both will send data over pins that nobody listens on and never receive anything on the other pins. This is not really useful. To make the connection work, you need a crossed-over cable (a so-called null modem cable or adapter). This means that the receiving pins on one end are paired with transmitting pins on the other end and vice versa.

I thought that I would never need a null modem cable at home. I still don’t think that I will ever need a straight serial cable. And I could in fact have taken one home from work and returned it the next day. However, I could already see what would have happened in that case: When I get some time to tinker with my APU it will be on the weekend, and I won’t have the cable in reach when I need it. So I got my own. And while I was at it, I decided to not only get a USB to RS-232 DB9 serial adapter cable (look for the PL2303 chipset translating USB to serial: It’s well supported across a wide range of operating systems including FreeBSD). I also bought a null modem adapter and a gender changer. So now I’m completely flexible with my gear. However, you probably just want to get a null modem cable USB/female DB9 (or ask somebody who has one if you could borrow it).

Another thing that you have to know is the baud rate (modulation rate) for your connection. A higher baud rate means a faster connection. As long as both connected machines agree on the baud rate, everything is fine. If they disagree, this can lead to displaying garbage instead of the actual console or in seemingly nothing happening at all.

To flash or not to flash

At the time of this writing, PC Engines have released 5 updates for the APU2’s firmware and if you like the improvements, it makes sense to put the newest version on there. They recommend booting TinyCore Linux and then using flashrom to flash the BIOS.

Flashrom is available for FreeBSD, too. I would imagine that it works as well. However I have no experience with that and flashing stuff always bears the risk of bricking your device (for which case PC Engines offers a small rescue device). If I had thought about this right at the beginning, I would probably have tried it out. But my APU2 is already updated and since this post is public and not just for me… Well, let’s just do this by the book and use Linux for that.

If you’re a little anxious and don’t feel well about flashing at all, leave it be; in general the old BIOS will do, too. Flashing according to a guide does not have a high risk of bricking your device but even a small risk is a risk. Disclaimer: You decide. And it’s all your responsibility, of course.

Alright, we need to prepare a USB stick with TinyCore and the ROM on it. PC Engines even offer a howto guide showing how to create this using FreeBSD. That guide works, but it clearly shows that those guys know Linux a lot better than (modern) FreeBSD. For that reason I’m going to modify it slightly here and use today’s tools.

Preparing the BIOS updater memstick

First we’re going to download the TinyCore tarball and the zipped ROM (you might want to check whether a newer version than shown here is out) and install the syslinux package (containing that Linux bootloader):

% su -
# mkdir -p /tmp/apu2 && cd /tmp/apu2
# fetch http://pcengines.ch/file/apu_tinycore.tar.bz2 http://pcengines.ch/file/apu2_v4.0.7.rom.zip
# pkg install syslinux

Now attach your USB memstick to the system and, after a couple of seconds, take a look at dmesg if you don’t know which device it was attached as:

# dmesg | tail
[...]
da0 at umass-sim0 bus 0 scbus2 target 0 lun 0
da0: < USB DISK 2.0 PMAP> Removable Direct Access SPC-4 SCSI device
[...]

So for me it’s da0 in this case, and I need to replace the “X” that I use in the following commands with “0”. You should use whatever number you figured out. PC Engines’ howto suggests zeroing out the stick, and if it isn’t a new and unused one, that makes sense. In my case this leads to an error:

# dd if=/dev/zero of=/dev/daX bs=1m
dd: /dev/da0: Operation not supported

# mount
[...]
/dev/da0s1 on /media/disk (ext2fs, local, nosuid)

Looks like this is because GhostBSD automatically mounted da0s1 when it found an EXT2 filesystem from a previous task in there. So let’s unmount that and try again:

# umount /media/disk
# dd if=/dev/zero of=/dev/daX bs=1m

Depending on the speed and size of your memstick (and the generation of your USB port) this can take quite some time to finish. And since dd normally works quietly, you might want to know how far it has come. Fortunately FreeBSD implements the SIGINFO signal. Just press CTRL+T while it is running and it will print some status information like this:

load: 1.52  cmd: dd 92084 [physwr] 379.15r 0.00u 0.56s 0% 3188k
2512+0 records in
2512+0 records out
2634022912 bytes transferred in 379.208179 secs (6946113 bytes/sec)

When it’s done, we can create an MBR partitioning scheme on the now empty stick as well as an MBR partition and write the bootcode on the stick so that we can boot off of it at all:

# gpart create -s mbr daX
da0 created
# gpart add -t fat32 daX
da0s1 added
# gpart bootcode -b /boot/boot0 daX
bootcode written to da0

The next thing to do is to put a FAT32 filesystem on the partition and force-install the Syslinux bootloader there, which will be chainloaded by the bootcode that we wrote into the MBR.

# newfs_msdos /dev/daXs1
newfs_msdos: trim 40 sectors to adjust to a multiple of 63
[...]
# syslinux -if /dev/daXs1

Now we need to mount the new filesystem and put the OS on there:

# mount -t msdosfs /dev/daXs1 /mnt
# tar xjf apu_tinycore.tar.bz2 --no-same-owner -C /mnt

We’re writing to a FAT filesystem – and as the primary filesystem of a once popular single-user OS, it simply does not support concepts like file ownership. That’s why we need “--no-same-owner” here (otherwise we’d see harmless but unnecessary warnings). As the next step we’ll add the ROM image and check its integrity – we don’t want to flash garbage onto our APU and brick it, do we?

# unzip -d /mnt apu2_v4.0.7.rom.zip
# grep -c `md5 -q /mnt/apu2_v4.0.7.rom` /mnt/apu2_v4.0.7.rom.md5
1

Make sure that the last command outputs 1 (in which case the calculated md5 hash matches)! If it does not, delete the zip file, download it again and extract it over the corrupt ROM file. Finally sync the filesystem, unmount it and remove the USB stick:

# sync
# umount /mnt

The memstick is ready. If you want to test it, boot some other PC or laptop from it. If you can read the line “Booting the kernel” and then nothing seems to happen anymore, it means that it’s working. TinyCore is configured to use the serial console, which is why you don’t see anything on your screen and your keyboard doesn’t do anything after that. Just turn your computer off and unplug the USB stick.

Attaching the serial console and flashing the BIOS

Alright, back to the APU2 (finally). Put the memstick into one of the USB ports and attach your serial cable to the COM port. Now connect the other end of the null modem cable with another computer running FreeBSD (or Linux or whatever – this should all work, you’ll just have to figure out how to connect to a serial console on your OS).

Open a terminal, become root and attach the serial console (cuaU0 is the USB to serial adapter, 115,200 the baud rate):

% su -
# cu -l /dev/cuaU0 -s 115200
Connected

Now connect the APU2 to power and see what happens! If you don’t do anything, the BIOS should load TinyCore from the memstick after a couple of seconds:

Connected to the serial console – and booting Linux

If nothing happens, you might have the wrong cable (is it really a crossover cable?). Or maybe you’ve mistyped the baud rate? The “Connected”, by the way, only means that your host system has attached to the serial line. You’ll get that message even when the other end is not connected to anything or the machine at the other end is turned off.

Once Linux has loaded, use flashrom to update the APU’s firmware as PC Engines show in their howto. Then reboot.

Preparing to flash the BIOS

The APU2’s BIOS supports the serial console. That means that even before the machine has booted an operating system with serial console capability, you can access and configure the BIOS of the headless machine:

Serial access to the BIOS settings

Another nice thing that comes with the APU is the Memtest program. If you want to know whether your new hardware is actually good or might have bad RAM, put it to the test for a couple of hours or overnight:

The firmware comes with Memtest

If you’re using cu as in my example, you can close the serial connection using the character sequence ~. (tilde and dot).

What’s next?

You now know how to access your box using the serial console. Next time we’ll make use of that skill again to put pfSense as the first of two options on the APU. The other option is OPNsense which will be covered in a later post. Both are FreeBSD-based router/firewall operating systems.

Back and forth: Linux and *BSD

This is kind of the post that I wanted to write much earlier this year. After running a Linux-only environment at home for years, I had become less and less happy with the general direction things seemed to be heading. I had run FreeBSD and OpenBSD on real hardware (old laptops) and several versions of PC-BSD in VirtualBox over the years. In January I decided to step forward and install PC-BSD (10.2) on my primary computer for daily usage. It remained a short episode – and this post will describe why. When TrueOS was released to the public I decided to try that out right away. But that will be another post.

Initial contact

I cannot remember when I first read about the BSDs. That must have been many years ago when I became interested in reading a bit about UNIX. I remember Beastie and Puffy and I remember that I failed to install a system in a VM because it was somehow too complicated. It was likely OpenBSD, and the chance is quite high that I quit during the partitioning, which probably was way over my head at that time.

While I never lost interest in it (Unix fascinated me) I decided to “learn Linux first” as that was the system I had chosen to run my computers with. As the Linux world was big enough for years (trying out the various desktops, doing a lot of distro hopping, …) I touched *BSD only rarely. Basically it was limited to installing PC-BSD in a VM when I found out that a new version was released. It seemed to be nice but I didn’t see any benefit over my Linux systems and so I stuck with that.

After studying something entirely different, I had made the decision to break up and get into IT instead, even though I was well beyond the age at which you usually start an apprenticeship. In my country that means that you apply to a company to work as an apprentice there half of the week and go to school the other days. Being somewhat of a Linux nerd I had only applied to companies that I knew weren’t using Windows – I had left that mess and was determined to avoid it in the future as far as possible. In the end I signed a contract of apprenticeship with a hosting company, moved into the area and started learning Linux a lot more deeply than I had before. And… I came in contact with FreeBSD.

Being a hosting company that had been founded in the nineties, it had of course started on FreeBSD. Even though the focus of the company had shifted to Linux years ago, there were still about 100 servers running FreeBSD. My colleagues generally disliked those servers – simply because they were different. And our CIO declared that he hated them and would love to get rid of them, as FreeBSD was totally obsolete these days. If it hadn’t been for our boss having a soft spot for them (as that had been what he started with and also what he had come to know best over the years) there would definitely have been far fewer FreeBSD servers around.

Digging into FreeBSD

Now for whatever reason I do have a heart for underdogs, and so I began to take quite an interest in those odd systems. Nobody wanted to touch those dinosaurs if they didn’t really have to. However somebody had to take care of them anyway, right? They were production servers after all! I volunteered. There were moments where I kind of regretted this decision, but now in hindsight it was an excellent choice. I’ve learned a ton of little things that made me understand *nix – and even IT in general – quite a bit better compared to what I would know now if I had followed the straight Linux path.

I also found out that only very few of the things my colleagues hated about our FreeBSD boxes were things to actually blame FreeBSD for. By far the biggest problem was that they had simply been neglected for something like a decade. Our Linux systems used configuration management; the FreeBSD machines were still managed by hand (!). We had some sophisticated tooling on Linux, while on the BSD boxes there were crude old scripts to (kind of) do the same job. Those systems were not consistent at all: some at least had sudo, others made you use su if you needed to run privileged commands… Things like that. A lot of things like that. So it wasn’t exactly a miracle that the BSDs were not held in very high regard.

As I said, I didn’t really see any real advantage of BSD before. Linux even seemed to be easier! Think network interfaces, for example: “eth + number” is easier than “abbreviation of interface driver + number”. But Linux has since moved to “enp0s3” and the like… And when you think again, it does make a lot of sense to see from the name what driver an interface uses. Anyway: I began to like that OS! FreeBSD’s ports framework was really great and I realized the beauty of rc.conf (Arch Linux did away with their central config file to get systemd. What a great exchange… – not!). Also I liked the idea of a base system quite a bit, and /rescue was just genius. Would my colleagues lose their contempt for our BSD servers if they were configured properly? I thought (and still think) so.

My apprenticeship was nearing its end and I had to choose a topic for the final project work. I was advised NOT to do something Linux-related because the examiners… *cough* lacked experience in that field (in the past an apprentice had even failed because the examiners had no idea what they were doing. He went to court and it was decided in his favor. A re-examination by people who knew Linux got him an A!). Now things like that make me angry and call upon the rebel in me. I handed in a FreeBSD topic (evaluating Puppet, Chef, SaltStack and Ansible for orchestration and configuration management of a medium-sized FreeBSD server landscape).

So for servers I was already sold. But could *BSD compete on the desktop, too? I built two test systems and was rather happy with them. However I wanted to try out a BSD system optimized for desktop usage. Enter PC-BSD.

Working with PC-BSD

I was called nuts for making that switch just days before the final presentation of the written project work (“you need to pass this – your entire career depends on it!!”). But I didn’t want to do a presentation on a FreeBSD topic using a Linux machine! Well, in fact I had been too optimistic, as the installation turned out to be… rather problematic due to a lot of bad surprises. To be fair: Most of them weren’t PC-BSD’s fault at all. The BIOS of my computer is broken in that it does not support booting off GPT partitions in non-UEFI mode. This led to my drives disappearing after installation – and to me wondering whether my classmates were right… Never change a running system! Especially not if you’re pressed for time!

After I found out what the problem was, installing to MBR was an easy thing to do. I still needed every single night that I had left, but I got everything to work at least to the level that allowed me to hold my presentation. Another thing was that I had enabled deduplication on my ZFS pool. “24 gigs of memory should be enough to use that feature!”, I thought. Nobody had told me that it slows down file deletion so much that deleting about 2 GB of data meant going off to do something else while ZFS was doing its thing. Even worse: The system was virtually unresponsive while doing that, so you could forget about browsing the web or anything like that in the meantime. But truth be told, this was my own mistake due to my very own ignorance about ZFS, and I can hardly blame PC-BSD for it.

I kept PC-BSD on my laptop for about 1.5 months before I needed to return to Linux – and I would in fact have returned even earlier had I had the time to reinstall. While some issues with PC-BSD vexed me, too, I could have lived with most of them. But my wife complained all the time, and that of course meant the end of my PC-BSD journey.

So what were (some of) the issues with it? My wife mostly uses the PC to check email when our children are occupied with something for a moment. For her the very long boot time was extremely annoying. And really, it took several times as long as the Linux system before it (and that was still one with Upstart!). Keeping one user logged in and quickly changing to another user wasn’t possible – which meant that I had to shut down my multiple virtual machines and log out completely if my wife just wanted to quickly check mail or something. Not cool. Things like that.

And then there were a few things that annoyed me. It drew power from the battery much, much faster than the previous Linux system. When watching a video, the screen saver kept interrupting it. Firefox had strange issues from time to time and liked to crash. Working with EXT4 formatted disks was a pain. And so on and so forth.

Of course there were good parts, too. I had a real FreeBSD system at my hands with access to ports. Two firewalls (that are nothing like the mess that is netfilter/iptables!) to choose from. Excellent documentation. Nice helper tools (like the automounter, wifi manager, disk manager, etc.). Several supported desktops to choose from. And of course the well-thought-out update system that I liked a lot. Thinking about it, there are a lot of good parts actually. Unfortunately even a ton of nice-to-haves has a hard time outweighing things perceived as no-gos. That’s life.

I had intended to update to 10.3 and then write a complete blog post about PC-BSD. My wife didn’t like the idea much, though. In addition to that I had little spare time and no alternative spare hardware, so there wasn’t a chance for me to actually do that.

Interlude: Linux

So it was back to Linux. With systemd this time. I’m not exactly friends with that omnivorous set of tools, which annoyed me perhaps just not enough to switch the system over to runit or OpenRC. Other than that, life was good again (as my wife was happy and I could do my work). But there was one thing in the short period of time with PC-BSD that had changed everything: I had caught the ZFS bug!

Fortunately there’s ZFSonLinux, right? So I installed that and created a pool to use for my data. In general that worked, but it’s quite a bit more hassle to set up compared to FreeBSD, where you basically get it for free without having to do anything special! If you don’t want to compile all ZFS-related packages yourself for each new kernel, there’s a third-party package repository for Arch; ZFS is not in the official ones. At some point the names of the packages changed and the update failed. I didn’t find anything about that and had to figure out for myself what had happened.

Another kernel and ZFS update that I did in the morning seemed to succeed. But when I came home, my wife told me that when she logged in, she was logged out again almost instantly. I booted the computer and logged in – the same thing happened. What was that? No error message, no nothing. The system simply dropped me back at the login manager… So I switched to text mode to take a look at what might be wrong with the system. Long story short: My pool “homepool”, which held all users’ home directories, was not available! And worse: zpool import said that there were no pools available for import… With the update, ZFS had stopped working! That hit me at the wrong moment, when I had very little time, and so I had to downgrade as the quickest solution.

In the end I chose to compile the Solaris Porting Layer and the other packages myself. This was not so bad actually, but knowing that on FreeBSD I’d have access to ZFS provided by the operating system without having to do anything (and that nobody was going to break it without it probably being fixed again in no time) vexed me. Of course there were other things, too, and since I was using FreeBSD on other boxes, I wanted it back on my main desktop machine as well.

What’s next?

I installed TrueOS and used it for over three months. The next post will be a critical writeup about TrueOS.

Documentation: Linux vs. FreeBSD – a real-world example

With every operating system there comes a time when you need help with something (unless you’re the absolute über-guru, that is). If you are in need of help, there are many ways to get it. You can ask an experienced colleague or friend if available. If not, you can search the web. There is a very high chance that the information you need is out there, somewhere. If not, you could ask for help and hope that somebody answers. Well, or you could consult the documentation!

In most cases somebody has been right there before and asked for help on the net and somebody else gave an answer. That answer may or may not be correct, of course. And in fact it might even have been correct at some point in time but is no longer valid. This is a very common thing and we have learned to optimize our searches to more or less quickly find the answers that we need. After getting used to that habit, “google it” (replace with $search_engine if you – like me – try to avoid using Google services when possible) is probably the most common way to deal with it once you hit a problem on unfamiliar ground. So while users of Unix-like systems are usually aware of the existence of manpages, I’d say that especially younger people tend to avoid them. And really: You don’t need them. Except for when you do!

Public WLAN

Last week I had two appointments in another city. So I took one day off from work, got up early in the morning and drove about 1.5 hours to the first one. The second one was a few hours later and so I was left with something extremely precious: Free time! To make it even better, neither my children nor my wife were around. The perfect opportunity to get something done!

One of my hobbies aside from computer stuff is writing. In addition to shorter stuff, I also have a fantasy novel (called “Albsturm”) that I’m writing on as time permits (which it hardly ever did during the last two years). And so I figured that it would be a good idea to take a laptop computer with me and spend some hours writing (hint: Like always, I didn’t write a single sentence!). I have two reasonably new laptops that I could choose from, one running Arch Linux, the other one FreeBSD and OpenBSD. The latter is the smaller one and for that simple reason I took that one with me.

It was a warm day and I decided to sit down at a café, have a drink and do my stuff there. When I found one, I saw a sticker which told me that public WLAN was available there. Hm. Other than writing I also had a more or less urgent email to write. It should be a quickie, just a few lines. So I thought that I should probably start with that.

Offline!

The only problem was that I had no idea whatsoever on how to connect to the WLAN using FreeBSD or OpenBSD! In fact I had no idea how to do it on Linux, either. I’m an “all cable guy”. It feels like about two decades ago that I had my first wireless mouse. I really liked it – until the batteries ran out of charge in a very bad moment and I didn’t have any replacement ready. Wireless stuff may be convenient as long as it works, but I prefer reliability over that. And I also like to set up basic things once (which means that I wouldn’t like to have to change a WLAN channel if my neighbor gets a new access point which occupies the same one that I had used before – stuff like that).

The three or four times that I had used WLAN before was on a Linux box using the graphical Network Manager which does all the magic behind the scenes. Yes, I’m aware that PC-BSD has its own tool which does the same job and GhostBSD has another for people like me who prefer a GTK application over a Qt one. I had neither PC-BSD nor GhostBSD on my laptop however. Just vanilla FreeBSD (with EDE as the desktop) and OpenBSD without any desktop (because I didn’t have time to install one, yet).

So there I was, offline and looking for a way to go online. Obviously “google it!” or some variant of that did not apply here. Sure, the adventure could have ended just there. But I am a weirdo who refuses to take a mobile with him everywhere he goes like most other people seem to do these days. Now if that’s shocking for you or you just cannot believe that someone who deals with tech does not have his mobile in reach all the time: Just imagine that I had one but it ran out of power (I’ve seen this happen to friends often enough to know that it’s quite common)! ;-)

Ok, what now? Thinking about it for a second, I realized that I had made a mistake when installing my system. You don’t install doc when you’re setting up a new system, right? The (absolutely excellent) FreeBSD handbook is available online after all. So why should you? Yeah. So am I on my own here? No! It’s me and a man’s man(1)! Will that suffice to go online?

Help!

Thanks to my previous exposure to help systems, this was the moment where I could have felt a cold chill (which would actually have felt good due to the warm weather). Remember the Windows 9x “help” system? I cannot remember a single time when it actually helped me. It either found nothing even remotely connected to my problem or it gave some generic advice like “ask the network administrator” (I AM the “network administrator”, dammit! I’m the guy who plugged those four cables into the switch and gave static IPs to the PCs!). It was utterly useless – and in a later version they “improved” their help by adding a stupid yellow dog… (When PC people talk about “the good old times” this is what you should remind them of :p)

But let’s not waste any more time on the horrible demons of the past and skip to the friendly daemons of today! I’ve used manpages a few times on Linux systems. This was a much better experience but still a vain effort often enough. The worst thing: For a lot of commands there are both a manpage and an info page – and those two are not identical at all! With a bit of bad luck you skimmed through one help text but the relevant information is only present in the other. Even though I can see the limitations of the older manpage system and understand the intent to create something better… No, sorry. If GNU really wanted to go with info pages instead of manpages they should just have created manpages which point the reader at the info page for each command. Just don’t make me read both because they have different information in them!

FreeBSD has a natural advantage here due to its whole-system approach. If you install third-party packages (say GNU’s coreutils) you will be in for the same mess. But everything that belongs to the base system (and that’s a whole lot of stuff!) is a consistent effort down to the manpages. And from what you hear or read on the net, the BSDs pride themselves in dedicating a fair amount of time to write documentation that’s actually useful! Does the result live up to that claim? We’ll see.

Where to start?

Manpages… Ok, sure. Just what should I start looking for? As I said, I didn’t know too much about the topic. Hm! I couldn’t think of anything quickly, so I actually did an apropos wlan. It wasn’t a serious search and I didn’t really expect anything to show up. Here’s the output of that command on a Linux box:

apropos: nothing appropriate

So was I right there? No! I was in for a first pleasant surprise. Here’s the output on my FreeBSD machine:

snmp_wlan(3) - wireless networking module for bsnmpd 1
wlan(4) - generic 802.11 link-layer support
wlan_acl(4) - MAC-based ACL support for 802.11 devices
wlan_amrr(4) - AMRR rate adaptation algorithm support for 802.11 devices
wlan_ccmp(4) - AES-CCMP crypto support for 802.11 devices
wlan_tkip(4) - TKIP and Michael crypto support for 802.11 devices
wlan_wep(4) - WEP crypto support for 802.11 devices
wlan_xauth(4) - External authenticator support for 802.11 devices
wlandebug(8) - set/query 802.11 wireless debugging messages

Not bad, huh? 9 hits compared to… 0! I had nowhere better to go, so I read wlan(4). It provided a fair amount of insight into things that I was not too interested in at that moment. But it had a rather big SEE ALSO section (something I feel is kind of lacking in the Linux manpages that I’ve read so far). This proved extremely useful since a lot of device drivers were mentioned there, and I figured that this would actually be a good place to really start.

Dmesg told me that my machine has an “Intel Centrino Advanced-N 6205” and that the corresponding driver was iwn. However ifconfig showed no iwn0 interface. There were only em0 and lo0 there. How’s that? I figured that it probably had to be set up somehow. And had I not just read about the generic wlan driver?

The wlan module is required by all native 802.11 drivers

The same manpage also pointed me to ifconfig(8) which makes sense if you want to do interface related stuff (unless you’re on newer Linux systems which sometimes do not even have ifconfig and you have to use the ip utils).

The ifconfig(8) manpage is a really detailed document that helped me a lot. So it’s only

ifconfig wlan0 create wlandev iwn0

and my wlan interface appears in the list shown by running just ifconfig! That was pretty easy for something I would never have figured out by myself.

Let’s go on!

The first step of what could have been a painful search turned out to be so surprisingly easy that I was in a really light mood. So instead of just getting things to work somehow(tm), I decided to do it right instead. It was only one simple command so far, but I wouldn’t want to enter it again after each reboot. So it was time to find out how to have the init system take care of creating my interface during system startup.

Phew. That could be a tough one. What obscure configuration file (or worse: systemd “unit file”) could WLAN configuration stuff go into? Hey, this is FreeBSD! Want init to do something for you? Have a look at /etc/rc.conf!

Hm. Sometimes configuration files have their own manpages, right? But even if there was one, it could hardly cover everything. Would somebody take the time and put what I need in there? Ah, let’s just give it a shot and man 5 rc.conf. Yes, there’s a manpage for it. But not just a manpage. I mean… Wow, just wow. I’m still amazed by the level of detail everything is described with! Should you ever take a look, you’ll be in for a treat of over 2400 lines! Does it cover WLAN interface creation? You bet it does! And it holds more information about that topic than fits on one terminal screen. In my case it boils down to:

wlans_iwn0="wlan0"

Really simple again – which is really encouraging if you’re new to a topic (on an operating system you’re only slowly getting familiar with because you have to spend most of your time with Linux machines).

The manpage also mentions wpa_supplicant(8) and after reading a bit about it and wpa_supplicant.conf(5), I had my system automatically make a WLAN connection during startup: It received and ack’d a DHCP offer and got an IP. Great!
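For reference, the whole persistent setup from those manpages boils down to two small configuration fragments. Treat the interface name (iwn0) and the network block below as placeholders for your own hardware and network, not as my literal configuration:

```
/etc/rc.conf:
wlans_iwn0="wlan0"
ifconfig_wlan0="WPA DHCP"

/etc/wpa_supplicant.conf:
network={
        ssid="your-network-name"
        psk="your-passphrase"
}
```

The “WPA DHCP” part is what tells the rc startup scripts to run wpa_supplicant on the interface and then request a lease via DHCP once the association succeeds.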

Held captive by the portal

Time to fire up a browser and surf to some website to see if it works… Oh my. What’s that? A captive portal redirects me to a page with payment information! That’s not quite what I’d call “free WLAN”! What happened? The page says “Telekom” but the sticker said that the hotspot was provided by another company. So I must be connected to the wrong one…

So it’s reading ifconfig(8)’s manpage again. Turns out that ifconfig wlan0 scan returns a list of available networks. So far so good. Of course the manpage also explains how to manually connect to a network of your choice. But this is where things went into the wrong direction for me.

The SSID of the network that I wanted to connect to was too long to fit into the column that ifconfig reserves for the output… Gosh. Now how would I connect to that one? Guess the rest of the name? Probably not a good tactic. What else? I could not connect to the network that I knew was free and I didn’t want to just randomly try connecting to the others.

Autoconnection makes its decision by signal strength. It’s rather unfortunate that the stupid paid Telekom one had a better signal where I was sitting. But by blocking that one network there’d be a good chance that the right one would have the second best signal, right? So I only had to somehow blacklist the Telekom network.

To make a long story short: This proved to be a dead end. I still have no idea if it’s even possible to blacklist a network using wpa_supplicant.conf. It probably isn’t, and the only way to go is to define the desired network with a higher priority than the undesired one. It took me quite some time to give up on this path that seemed to lead nowhere.
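For illustration, a priority-based wpa_supplicant.conf might look like the sketch below – the SSIDs and settings are made-up placeholders, not my actual configuration. Both networks are open hotspots (key_mgmt=NONE), and the higher priority value wins when both are in range:

```
network={
        ssid="the-free-hotspot"
        key_mgmt=NONE
        priority=10
}

network={
        ssid="the-paid-hotspot"
        key_mgmt=NONE
        priority=1
}
```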

What now? It looked like I had to somehow get my hands on the complete SSID of the right network. But how to do that? I could of course always have asked a waitress, as she probably either knew it or at least could have asked somebody who did. However, after spending quite some time on the matter, I wanted to figure it out myself. Finally I came across ifconfig(8) again, and it mentions the -v flag to show the long SSIDs (and some more info)!

With the full SSID known to me I could adjust my wpa_supplicant.conf – and after a reboot the system picked the right network. My browser was led to a different captive portal and, after I read and accepted the license terms (which were quite reasonable), I was free to surf wherever I wanted.

I quickly wrote and sent the mail that lead me to this adventure in the first place. Then I shutdown -p now my system, put the laptop in my car and drove to my second appointment.

Summary

I’ve had some Linux experience for almost two decades now and have used it on a daily basis for about half of that time. In contrast to that, I’m really new to the BSDs, having seriously used FreeBSD for less than a year. I probably know less than 1% of the common tasks on that OS – and even less on the topic of WLAN, which I avoided as far as I could.

Getting my laptop to connect to the net via WLAN in a café using just the manpages because I was offline until I reached my goal seemed like a painful adventure full of potential pitfalls. Instead it proved to be an unexpectedly pleasant ride in unfamiliar territory.

There are many sources on the net that say BSD has far superior documentation compared to Linux. And I was impressed enough to add another one by writing this post. So if you’re a *BSD user and you need help, I can only advise you to take the time to read some manpages instead of searching or even asking on the net. It is much more rewarding to figure things out yourself using the documentation, and the chance is quite high that you’ll learn another useful thing or two along the way!

Can the same thing (connecting to a WLAN without graphical tools) be done on Linux? Certainly. How would you do that? I have no idea. Is there an easy way to figure things out just using the manpages? I kind of doubt it. With a lot more time on your hands: Probably. But after learning what real documentation tastes like, I don’t feel like trying it right now. I may do it in the future to complete the comparison. Or maybe not.

School, exams and… BSD!

Alright, January is already almost over, so there’s not much use in wishing my readers a happy new year, right? I wanted to have this month’s blog post out much earlier and in fact wanted to write about a completely different topic. But after January 27th it was pretty obvious for me what I’d have to write about – On that day I passed my final exam and now I’m a Computer Science Expert by profession. Time to take a look back at the apprenticeship and the status of *nix in German IT training today.

Spoiler: It’s Microsoft, Microsoft and again Microsoft. Only then there’s one drop of Linux in the ocean. I had left the (overly colorful) world of Windows in 2008. When I started the apprenticeship I was determined not to eat humble pie and come crawling back to that. While it was at times a rather tough fight, it was possible to do. And I’m documenting it here because I want to encourage other people to also take this path. The more people take the challenge the easier it will become for everyone. Besides: It is absolutely necessary to blaze the trail for better technology to actually arrive in mainstream business. This is of great importance if we do not want to totally fall behind.

Detours

I didn’t take the straight way into IT. While I had been hooked with computers since I was a little child, I also found that I had a passion to explain things to others. I gave private lessons after school for many years and after passing the Abitur (think of the British A levels) I chose to go to the university to become a teacher.

It took me a very long time of struggle to accept that I could not actually do that for a living. I am in fundamental opposition to how the German school system is being ruined and I could not spend all my work life faithfully serving an employer that I have not even the least bit of respect for.

The situation is as follows: We once had a school system in Germany that aimed at educating young people to be fit for whatever their life holds. The result was people who could stand on their own feet. Today the opposite is true: A lot of people who leave school have no idea how to find their way in life. Playing computer games is the only thing that a lot of young men (and an increasing number of women) actually do. They have not developed any character, they have no passion for anything (and thus no goals in life) and they often haven’t learned any empathy at all (and thus keep hurting other people – not even out of bad will but out of total ignorance).

At the same time things taught in school aim purely at making people available as workmen as soon as possible. Sounds contradictory? Sure thing. At the university I enjoyed the benefits of the old system where there was relatively large academic freedom and you were encouraged to take your time to learn things properly, to do some research if you hit topics of interest to you and to take courses from other faculties, etc. And this is pure insanity: All that is largely gone. New students are forced to hasten through their studies thanks to tight requirements (which semester to take which course in – very schoolish, no freedom at all)… In the name of “comparability” we did away with our own academic degrees only to adopt the inferior “master” (as well as the even more inferior “bachelor”).

Secondary schools are lowering their standards further and further so that almost anybody can get their A levels and flood the universities. At the same time there are not enough people remaining for other paths of education – and those who are far too often are completely useless to the companies: People who can be described as unreliable at best are of no use at all. I did not want to be part of that madness and so I finally decided to get out and do what I probably should have done right from the start.

Vocational school: Windows

The German vocational school system is a bit special: You only go to school one or two days a week (this varies between semesters). What about the other days? You spend them at a company you apply to before you can start the apprenticeship. That way you get to know the daily work routine right from the start (which is a really good thing). School is meant to teach some general skills, and at work you learn the practical things.

On my first day at vocational school, I kind of felt… displaced. Why? Well, going back to school after having taught children myself takes a moment to adjust to. I enjoyed teaching in general (even though there are always horrible classes as well ;)) but becoming a student again afterwards is really strange. At least for a while.

The subject matter was extremely easy for me. But being almost 30 years old when I started the apprenticeship of course meant that I had a lot more knowledge and experience than the typical 18- or 20-year-old student. That was a good thing for me, since I also have a wife and two children and had to drive about 1.5 hours to school and the same distance back – which meant I had far less time for homework and studying than the others. In fact I only found a few hours to study for the preliminary exam and again for the final exam. But that was it.

We had PCs with Windows XP and were required to work with them. Most of my classmates protested because they were used to Windows 7. I simply installed Cygwin, changed the panel position to top, and things were pretty much OK for me (it's only for a few hours, right?). A while later we got new PCs with Windows 8(.1?) and new policies. The latter made it impossible for me to use Cygwin. Since I had never touched anything after Windows XP, I took my time to take a look at that system. In fact I tried to be open to new things, and since a lot of time had passed since I left Windows, I no longer had any strong feelings towards it. Still, Windows 8 managed to surprise me: It was even worse than I had thought possible…

The UI was just plain laughable. I have no idea how anybody could do any actual work with it using the mouse. Now, I'm a console guy and I need no mouse to do stuff (if I at least have Cygwin, that is). But that UI must have been a joke, right?

Then I found out that Windows still was not capable of even reading an ext2 file system. Oh my. So I decided to format one USB key as FAT32 for school. But guess what? When I attached it, Windows popped up a message that it was installing drivers – which then failed… I removed the USB key and inserted it again. Same story. A classmate told me to try another USB connector. I thought he was fooling me, but he insisted, so I did it (expecting him to laugh at me any second). To my great surprise, this time the driver could be installed! But the story does not end here. No drive icon appeared in the Explorer. I removed the USB key again and reattached it once more. Nothing. My classmate took it out yet again and plugged it into the former connector (the one where installing the driver had failed). And this time the drive appeared in the Explorer! It was at that moment that I realized not too much had changed since XP – despite the even uglier looks. Bluescreens, program crashes and cryptic error messages that I had not seen in years were all back.

I decided that I could not work like that and chose to bring a laptop each school day. Just about all my classmates were fine with Windows, however. But speaking of classmates: We lost five of them in the first two years. Two simply never showed up again, two more were fired by their companies (for various kinds of misbehavior) and thus could not continue their apprenticeship, and the last one had a serious problem with alcohol (at just 17 years old) and was also fired.

BYOD: Linux desktop

My laptop was running Linux Mint; it came with Mint pre-installed when I bought it. My wife got used to that system and did not like my idea of installing a different one (I mainly use Arch Linux as a desktop at work and on other PCs at home), so Linux Mint stayed on there.

There were a few classmates interested in Linux in general. They quickly became the ones I spent most of my time in school with. Three of them already had some experience with it, but that was it. One of them decided about a year ago that it was time to switch to Linux. I introduced him to Arch and he has been a happy Antergos (an Arch-based distro) user ever since. Another classmate was also unhappy with Windows at home. I answered a few questions and helped with the usual little problems, and she successfully made the switch and runs Mint now.

Some teachers couldn’t quite understand how one could be such a weirdo as to not own even a single Windows PC. We were supposed to finish some project planning using some Microsoft software (I forgot the name of it). I told the teacher that the required software wouldn’t run on any of my operating systems. Anything other than Windows obviously was unthinkable for him, and he replied that in that case I’d really have to update! I explained to him that this was not the case, since I ran a rolling-release distro that was not just up to date but in fact bleeding edge.

When he understood that I only had Linux at home, he asked me to install Windows in that case. I told him that I didn’t own any current version of Windows. He rolled his eyes and replied that I could sign up for some Microsoft service (“DreamSpark” or something?) where each student or apprentice could get it all for free. Then I objected that this would be of no use, since I could not install Windows even if I had a license, because I did not agree to Microsoft’s EULA. For a moment he did not know what to say. Then he asked me to please do it at work then. “Sorry”, I replied, “we don’t use Windows in the office either.” After that he just walked away saying nothing.

We were required to learn some basics of object-oriented programming – using C#. So I installed Mono as well as MonoDevelop and initially followed the course.

Another Laptop: Puffy for fun!

I got an older laptop for a really cheap price from a classmate and put OpenBSD on it. After having played a bit with that OS in virtual machines, I wanted to run it on real hardware, and this seemed to be the perfect chance to do so. OpenBSD with full disk encryption and everything worked really nicely, and I even got MonoDevelop running on there (even though it was an ancient version). So after a week I decided to use that laptop in school because it was much smaller and lighter (14″ instead of 18.3″!) – and also cheaper. ;)

After upgrading to OpenBSD 5.6, however, I realized that the mono package had been updated from 2.10.9p3 to 3.4.0p1, which broke the ancient (2.4.2p3 – from 2011!) version of MonoDevelop. Now I had the option of bringing the big Linux laptop again or downgrading to OpenBSD 5.5. I decided to go with option 3 and complain about .NET instead. By now the programming course teacher already knew me, and I received permission to do the exercises in C++ instead! He just warned me that I’d be mostly on my own in that case and that I’d of course have to write the classroom tests in C# just like everyone else. I could live with that, and it worked out really well. Later, when we started little GUI programs with WinForms, I would have been out of luck even on Linux with Mono anyway. So I did those with C++ and the FLTK toolkit.

Around Christmas I visited my parents for a few days. My mother’s computer (a Linux machine I had set up for her) had stopped working. As my father decided that he’d replace it with a new Windows box (as that’s what he knows), I gave up my OpenBSD laptop: I installed Linux on it again and gave it to my mother as a replacement, to spare her having to re-learn everything on a Windows computer…

Beastie’s turn

So for the last couple of weeks I was back on Linux. However, the final exam consists of two parts: a written exam and an oral one. The latter is mostly a presentation of a 35-hour project that we had to do last year. I took the chance and chose a project involving FreeBSD (comparing configuration management tools for use on that particular OS). We also had to hand in documentation for that project.

Six days before the presentation was to be held, I decided that it would suck to present a FreeBSD project using Linux. So I announced to my wife that I’d install a different OS now, did a full backup, inserted a PC-BSD 10.2 CD and rebooted. What happened then is a story of its own… With FreeBSD 10.3 just around the corner, I’ll wait until it is released and write about my experiences with PC-BSD in a future blog post. Just this much for now: I have PC-BSD installed on the laptop – and that’s what I’m using to write this post.

The presentation also succeeded, more or less (I had a problem with LibreOffice). But the big issue was that I obviously chose a topic that was too much for my examiners. My documentation was “too technical” (!) for them, and they would have liked to see “a comparison with other operating systems, like Windows (!)” – which simply was far beyond the scope of my project… I ended up with a mediocre mark for the project, which is in complete contrast to my final grade at the vocational school (where I missed a perfect average by 0.1).

OK, I cannot say that this came completely unexpectedly. I had been warned. Just a few years earlier, another apprentice had chosen a Linux topic and even failed the final exam! He took action against the examiners, and the court decided in his favor. His work was reviewed by people with Linux knowledge – and all of a sudden he was no longer failing but in fact got a 1 (the German equivalent of an A)! I won’t sue anybody since I passed. Still, my conclusion here is that we need more people who dare to put *nix topics on the list. I would do it again anytime. If you’re in the same situation: Please consider it.

Oh, and another small success: The former classmate who runs Antergos also tried out FreeBSD on his server after I recommended it. He has come to like jails, the ports system and package auditing, among other things. One new happy *BSD user may not be much, but it’s certainly a good thing! Also, all of my former classmates now at least know that *BSD exists. I’ve held presentations about it and mentioned it on many occasions. Awareness of *nix systems and what they can do may lead to people giving them a try some time in the future.

Top things that I missed in 2015

Another year of blogging comes to an end. It has been quite full of *BSD stuff – so much so that I’d even say: as far as this blog is concerned, it has been a BSD year. This was not actually planned, but it isn’t a real surprise either. I haven’t given up on Linux (which I use on a daily basis as my primary desktop OS), but it’s clear that I’m fascinated with the BSDs and will try to get into them further in 2016.

Despite it being a busy year, there were quite a few things that I would have liked to do and blog about that never happened. I hope to be able to do some of them next year.

Desktops, toolkits, live DVD

One of the most “successful” (in terms of hits) article series was the desktop comparison that I did in 2012. A lot has happened in that field since then, and I really wanted to do it again. Some desktops are no longer alive, others have become available since then, and it is a sure thing that the amount of memory needed has changed as well… ;)

Also, I’ve never been able to finish the toolkit comparison, which I stopped in the middle of writing about GTK-based applications. It was started in 2013, so it would also be about time. However, my focus has shifted away from the original intent of finding tools for a light-weight Linux desktop. I’ve become involved with the EDE project (“Equinox Desktop Environment”), which uses the FLTK toolkit, so people could argue that I’m not really unbiased anymore. Then again… I chose to become involved because EDE was the winner of my last test series – and chances are that the reasons for that are still valid.

And then there’s the “Desktop Demo DVD” subproject that never really took off. I had an Arch-based image with quite a few desktops to choose from, but there were problems: Trinity could not be installed alongside KDE, Unity for Arch was not exactly in good shape, etc. But the biggest issue was the fact that I did not have webspace available to store a big ISO file.

My traffic statistics show that there has been constant interest in the article about creating an Arch Linux live CD. Unfortunately it is completely obsolete, since the tool that creates the CD has changed substantially. I’d really like to write an updated version at some point.

In fact I wanted to start over with the desktop tests this summer and had made a start on it. However, VirtualBox hardware acceleration for graphics was broken on Arch, and since that is a real blocker, I could not continue (has this been resolved since?).

OSes

I wrote an article about HURD in 2013, too, and wanted to revisit a HURD-based system to see what has happened in the meantime. ArchHURD has been in a coma for quite some time. Just recently there was a sign of life, however. I wish the new developer the best of luck and will surely do another blog post about it once there’s something usable to show off!

The experiments with Arch and an alternative libc (musl) were stopped due to a lack of time, but could be taken further. It was an interesting project that I’d like to continue at some time, in some form. I also had some reviews of interesting but lesser-known Linux distros in mind. Not sure if I’ll find time for that, though.

There has been a whole lot going on around both FreeBSD and OpenBSD. Still, I would have liked to do more in that field (exploring jails, ZFS, etc.). But those are things I’ll do in 2016 for sure.

Hardware

I’ve played a bit with a Raspberry Pi 2 and built a little router with it using a security-oriented Linux distro. It was a fun project to do, and maybe it is of use to somebody.

One highlight that I’m looking forward to messing with is the RISC-V platform, a very promising effort to finally give us a CPU that is actually open hardware!

Other things

There are a few other things that I want to write about and hope to find time for soon. I messed with some version control tools a while back, and this would make a nice series of articles, I think. Also, I have something about DevOps in mind and want to do a brief comparison of some configuration management tools (Puppet, Chef, SaltStack, Ansible – and perhaps some more). If there is interest in that, I might pick it up and document some examples on FreeBSD or OpenBSD (there’s more than enough material for Linux around, but *BSD is often a rather weak spot). We’ll see.

Well, and I still have one article about the GPL vs. the BSD license(s) in store that will surely happen next year. That, and a few topics about programming that I’ve been thinking about writing up for a while now.

So – goodbye 2015 and welcome 2016!

Happy new year everyone! As you can see, I have not run out of ideas. :)