Setting up a FreeBSD/OpenBSD dual-boot with full disk encryption

A bit over a month ago, I bought my first refurbished laptop. Previously I had used a ThinkPad (owned by the company I work for) for on-call duty. It’s running a Linux distro which would not be my first choice at all, it has a small screen and – it’s not my property. I wanted my own laptop, and since we’re allowed to use whatever distro we prefer, I thought I’d be going with Arch.

(If you’re just interested in the commands to enter, have a look at the end of this post where I put a list of them.)

*BSD in production

On second thought: Why not use *BSD? For me it would mean using a *BSD desktop “in production” after only running it privately before. Thanks to the great BSDNow! show I now feel confident enough to give it a try. The company that I work for runs some FreeBSD servers, too, so it’s not something entirely strange and unknown. So I asked whether using BSD for on-call duty was OK. The answer was what I expected: if I thought it would work, I was welcome to try it. The only requirement was that I encrypt the disk (the same rule would apply to Linux, too, of course).

Next question: Which BSD to use? Since I’m just getting into *BSD, I’m not really familiar with all of them yet. NetBSD and DragonFly BSD would certainly be interesting, but since I need that box for work, they’re not an option. I need something that I know well enough to be able to work with. Of course it would be best if I could learn something at the same time… So, what’s the best way to learn more? Probably tracking -CURRENT! But what if something breaks? I cannot afford that. And which BSD to use anyway? I work with some FreeBSD servers, so more in-depth FreeBSD knowledge would make sense. Then again I’ve really come to like Puffy and all he stands for…

A hard decision! In the end I decided not to decide – and to just install both instead. This also has the advantage of giving me a second system should -CURRENT ever break!

Hardware: HP EliteBook

I bought an HP EliteBook 8470p. Why didn’t I go with Lenovo even though those are known to work best with *BSD and I obviously need something that seriously works? Well, there’s one reason for me: the ThinkPad keyboards just totally suck. I have no idea who came up with that sad idea of “Hey, let’s just put the Fn key where Ctrl belongs and vice versa!”. No idea whatsoever. But I know for sure that it drives me insane. It’s no fun at all when you’re working on-call at four AM, barely awake, and nothing happens when you have to Ctrl-C something quickly. I could never get used to it!

So for that very reason it had to be some other hardware. I had this older HP laptop that a friend sold me for a few bucks a while ago. I can’t remember which model exactly and cannot look it up since I don’t have it anymore. (When my mother’s old computer died as I was over on a visit, my father thought about replacing it with a Windows box since that’s the only thing that he knows. To avoid that, I set up said old HP laptop that I had with me as a replacement and gave it to her. She’s been using it happily since.) That laptop had been a pleasant experience when I had OpenBSD on it and so I decided to give that EliteBook a try.

It works fairly well for most things. On FreeBSD there used to be a problem with the Intel video driver, but since I’m running 11-CURRENT, video works great, even when I quit X11. WiFi is detected according to dmesg, but for some reason no iwn0 shows up when I run ifconfig. I haven’t had time to look into that further, however. On OpenBSD the backlight gets turned off when I quit X, which leaves the screen rather dark. Since I usually quit X only to shut down the computer afterwards anyway, that’s a minor issue. WiFi is correctly detected and I confirmed it to work. Suspend works when I close the laptop, but after it wakes up the keyboard no longer works. These are the only issues that I have run into so far.

What is the exact use case?

FreeBSD can use ZFS while OpenBSD cannot. I’m not sure whether FreeBSD’s and OpenBSD’s UFS/FFS filesystems are compatible (I think OpenBSD’s implementation lacks quite a few of the newer features). The encryption methods used by the two systems, however, are definitely not compatible. So it doesn’t matter anyway in this case and I’m free to choose whichever filesystem I want.

Since I’ll be compiling FreeBSD-CURRENT now and then (and in general plan to do some things that like to have plenty of memory available), I decided to go with UFS. Yes, there are scenarios where ZFS is simply overkill! There’s only one drive in the laptop, it’s not extremely big and it won’t hold any important data. I have no need for any particular ZFS feature on that system, so going with UFS should be fine. (That, plus the fact that I’m still reading Lucas’ and Jude’s excellent book on ZFS and intend to play with that filesystem on another machine.)

Prior to version 5.9 (released after I originally wrote this), OpenBSD only really supported the MBR partitioning scheme, so going with that was an easy choice. I’ll stick with MBR for now since I’d need some time to play with the newer scheme first. I’m going to do everything again in a VM so I can take screenshots for this article.

Installing FreeBSD

The installation begins just like an ordinary FreeBSD install: boot the installer medium and make your way through the setup questions. When the installer asks about partitioning, however, we’re going to do that by hand.

Choosing to partition by hand

The plain Bourne shell is not very comfortable for interactive use, so it generally makes sense to switch to a more advanced shell (like tcsh) for convenience features like auto-completion. Should you not know which drives your machine has, camcontrol can help you. If you want to start with a clean drive, you can zero out everything with dd (when I bought my laptop it had Windows 7 on it that I wanted to get rid of).
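For example, to get a list of the disks the kernel has found (on my laptop the drive simply shows up as ada0, which is what the command summary at the end of this post assumes):

camcontrol devlist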

Zeroing out the disk

If you’re not familiar with what partitions and slices are, you may want to have a look at an older post where I wrote up a little excursion about that topic.

First an MBR is created and then two slices are added to it. The first one gets 100 gigabytes, the other gets the rest (which is also about 100 GB in my case). Both slices are aligned to the 4k sectors of the hard drive. Then a BSD disklabel is added inside the first slice. After that, boot0 (a simple boot manager) is written to the drive and the standard bootcode to the first slice. Finally the first slice is marked active for booting.

Slicing the disk
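If you want to double-check the result before moving on, gpart can show you the current layout (ada0 is of course just the device name on my machine):

gpart show ada0

You should see the two freebsd slices you just created.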

Now three partitions are created inside the BSD label: one for boot (which will hold the kernel and cannot be encrypted), one for swap and one for the system (which will be encrypted). glabel is used to give these partitions more meaningful names than ada0s1a and the like. Since the system partition will be encrypted, it makes sense to write some garbage all across it first so that it is impossible to tell which parts hold data and which do not. This takes quite a while and you could of course skip it. But as long as your patience lives up to your paranoia, that little bit of extra security is worth the wait!

Creating and labeling BSD partitions

Next the system partition is initialized with GELI, one of FreeBSD’s two military-grade encryption methods. I only use a passphrase to unlock it, but you can also use a key file (or both) if you wish. After attaching the new GELI partition, a new GEOM provider, system.eli, is available, which presents the decrypted data to you (and your programs).

Creating and attaching the GELI partition

Now it’s time to format the two data partitions (the swap partition does not need any formatting). You could also use journaled UFS for the boot partition, but that’s usually not necessary.

Creating filesystems

Copy over the boot directory and add two lines to loader.conf so that you’ll get the chance to unlock your GELI partition during system startup. What remains is writing an fstab. Notice that for some reason I forgot to put swap.eli in there on my screenshot (even though that’s what I have in my script). That entry makes the system use a one-time key for swap on every boot, ensuring that any data remaining on the swap partition is useless once the system has been powered down. You do not have to initialize GELI for this; FreeBSD knows what to do when it sees swap.eli.

Mount the decrypted system partition on /mnt as that’s where the installer expects it. And don’t forget to create the /clear directory: fstab references it and the system would not boot up correctly if it were missing. Then exit the shell and continue with the installer.

Copying over /boot and writing loader.conf and fstab

Once the installation has finished, the installer will ask whether you wish to make any final modifications. Answer yes and it will drop you into a shell in a chroot of your new system. Delete /boot (that directory lives on the encrypted system partition and the bootloader could not find the kernel there anyway) and replace it with a symlink pointing to /clear/boot instead. This step is not strictly required, but if you skip it, you won’t be able to update your system the normal way. If you prefer to mount the real /boot by hand whenever you upgrade, that’s fine, too, of course.

Choosing to make final modifications

Exit the shell, reboot and remove the boot media during the restart. Your boot manager (boot0) will offer you two FreeBSD systems. Hit F1 to boot FreeBSD. Don’t hit F2 yet; there’s no system there so far.

Installing OpenBSD

The OpenBSD installer is neither pretty nor does it offer any kind of menu system. It is, however, simple, effective and straightforward. Choose to install OpenBSD, set your keymap, enter a hostname, configure the network and set a root password.

Hostname, network and password configuration

Choose whether to run an SSH server by default, whether to prepare the system for X11 and whether you want the display manager XDM to be started automatically. Create a user now or do so later. When asked for the timezone, enter ! instead to drop into a shell.

Going to a shell

If you don’t know your disks, look in the dmesg for the device name. Now use fdisk to change the type of the second MBR partition from A5 (FreeBSD) to A6 (OpenBSD). Then use disklabel to create a swap partition and a main partition. Make absolutely sure that the latter has the type RAID!

Partitioning for OpenBSD

Create the encrypted softraid volume with bioctl, then exit the shell. Now enter the correct timezone and choose the newly created softraid disk for the installation! Dedicate the whole softraid disk to OpenBSD, but edit the partitions to fit your needs. You do not need a swap partition on the softraid because we created a separate one on the real disk, remember? For that reason, after OpenBSD has formatted the partitions you created, the installer will ask whether you want to add any other disks before the actual installation starts. You do, because of that swap area.

Preparing crypto softraid

Once the installer has finished, reboot the machine. The boot manager now says “F1 – FreeBSD” and “F2 – BSD”. The second one is your OpenBSD: the manager only knows the partition type and has no clue which system actually lives there.

Plain text summary

Here’s what you could type in for the shell parts of both installers:

FreeBSD


In the partitioning shell:
tcsh
dd if=/dev/zero of=/dev/ada0 bs=1m
gpart create -s mbr ada0
gpart add -a 4k -t freebsd -s 98G ada0
gpart add -a 4k -t freebsd ada0
gpart create -s bsd ada0s1
gpart bootcode -b /boot/boot0 ada0
gpart bootcode -b /boot/boot ada0s1
gpart set -a active -i 1 ada0
gpart add -t freebsd-ufs -s 2G ada0s1
gpart add -t freebsd-swap -s 4G ada0s1
gpart add -t freebsd-ufs ada0s1
glabel label clear /dev/ada0s1a
glabel label swap /dev/ada0s1b
glabel label system /dev/ada0s1d
dd if=/dev/random of=/dev/label/system bs=1m
geli init -b -s 4096 -l 256 /dev/label/system
geli attach /dev/label/system
newfs /dev/label/clear
newfs -j /dev/label/system.eli
mount /dev/label/clear /media
cp -Rp /boot /media
echo 'vfs.root.mountfrom="ufs:/dev/label/system.eli"' >> /media/boot/loader.conf
echo 'geom_eli_load="YES"' >> /media/boot/loader.conf
echo '/dev/label/system.eli / ufs rw 1 1' >> /tmp/bsdinstall_etc/fstab
echo '/dev/label/swap.eli none swap sw 0 0' >> /tmp/bsdinstall_etc/fstab
echo '/dev/label/clear /clear ufs rw 1 1' >> /tmp/bsdinstall_etc/fstab
mount /dev/label/system.eli /mnt
mkdir /mnt/clear
exit
exit

In the "final modifications" chroot:

rm -r /boot
ln -s /clear/boot /boot

OpenBSD


i
de
puffy
em0
dhcp
none
done
password
no
yes
no
no
!
dmesg | grep [ws]d0
fdisk -e sd0
setpid 1
A6
quit
disklabel -E sd0
a b
ENTER
4G
swap
a a
ENTER
ENTER
RAID
w
q
bioctl -c C -l /dev/sd0a softraid0
exit
Europe/Berlin
sd1
whole
e
Your layout here
w
q
sd0
OpenBSD
w
q
done
http
none
openbsd.cs.fau.de
pub/OpenBSD/5.9/amd64
done
done

Precomp (or: How to compress already compressed data?)

It’s a kind of strange feeling, but while half of the IT world seems to either be burning already or trembling with fear, I can choose freely whatever topic I want to write about this month. I haven’t had a Windows box for almost a decade now and the people I work with or keep in contact with are also mostly *nix-only. So this post is not about encryption or ransomware at all. It is about useful, respectable compression. Or more precisely: the art of re-compressing already compressed data!

In January Precomp, a precompression utility, was open-sourced! The first two sections tell a bit about how I became interested in this topic and in Precomp. Skip them if you don’t want to read that kind of stuff.

Compressing compressed data?

When I was young and new to PCs, I once tried to compress a ZIP archive with ACE (a lesser-known archiver that once was comparable to the more popular RAR). I knew that ACE offered stronger compression and so I thought that this should make the file smaller. Just imagine my surprise when it turned out that I was wrong!

I guess that most of us have a story like that to tell, a story from our childhood when compression was nothing short of magic. Later I began to understand that even though it does in fact start with “m”, it’s not magic but math (a subject that I totally sucked at in school – but fortunately I grasped enough to get a rough idea of how compression works ;)). After that there was no surprise anymore: compressed data is not a good fit for any general-purpose compression method, even if it was compressed with a weak algorithm.

How to work around that? Well, decompressing the ZIP file and creating a new ACE archive does the trick in the case mentioned above. Of course things are not always that straightforward. If they were, I wouldn’t have much to write about right now and this post would be really, really short!

For whatever reason, compression continued to fascinate me and I loved compressing things down to sizes as tiny as possible. It was fun to try out new experimental compression programs specialized in specific types of files. I did that for years – until I had to stop due to a lack of time.

Games

Let’s fast forward some years from that failed compression experiment with ACE; I had replaced DOS 6.22 with Win95, which I had replaced with Win98 (SE), which I had replaced with WinME, … One day I wanted to install Quake ]|[ Arena (yes, friends, I once was 1337, er, young enough to spell it like that!) on my main computer to get into it again for a LAN party the next weekend. So I went looking for the darn CD. It took me a while but I finally found the CD case. I opened it up and… the CD itself was missing. Oh great! Since I didn’t feel like looking through all the other cases to find out into which I might have put it accidentally, I decided to just copy it off an older computer which already had it installed (id were nice people: I don’t remember which version of Q3A it was, but there eventually was an official patch which also removed the CD check for the game, so there was no need for a crack or anything).

Now, different versions of Windows didn’t always play together too well on the LAN, and since my Quake installation was on a computer with an older Windows (and I didn’t have another cable at hand), I decided that I’d just burn it to CD. It turned out, however, that the other machine didn’t just have vanilla Q3A installed but the expansion set as well. Together it was obviously too big to fit on one CD. There would have been easy solutions: leave out the resource files for the expansion, burn two CDs, put the hard drive into the new computer, … Sure, easy solutions are nice and all. But sometimes they are also boring! And when you’re young and have some free time, you don’t do boring stuff. So of course I opted for the more challenging solution: get it all onto one CD!

Quake 3’s resource containers go by the file extension .pk3 and, more importantly, are in fact ZIP files without any compression. This meant that they could be compressed well because there was no ZIP compression getting in the way. But guess what: even after applying the most extreme compression programs, the result simply would not fit onto one CD…

Bad luck, eh? Well, not really. Unpacking the container files was in fact the solution in this case. Not because of weak compression, but because it enabled me to test each contained file separately with all the compressors and to group together the files that compressed best with one compression utility or another! I think I was able to shrink it down to just a couple of megs over the CD limit. There were blank CDs with 800 MB capacity as well, so it would have fit onto one of those – but I didn’t have one. So I replaced the id video with an empty video file and I was set.

Since I liked doing these things, I began making backups like that for a lot of my favorite games: ripping apart (and later rebuilding) resource containers, converting between file formats, decompressing whatever could be decompressed before applying stronger compression, and so on.

How Precomp works

The more I got into free and open source things, the more I wondered if some of them wouldn’t benefit from better compression. A friend and former classmate of mine invented Precomp and I of course was among the first to make use of it and provide feedback. But what is Precomp?

Precomp is what the name says: a pre-compressor. It is not directly meant to reduce the size of files. On the contrary: it can make some files even bigger than the original input. But that’s a good thing, really! How’s that? Well, it’s meant to prepare files for compression so that eventually they can be compressed to a smaller size than the original file could be – without losing data, of course!

What Precomp does is look for streams in its input file that are compressed with a method known to Precomp. It then decompresses and recompresses each stream so the result can be compared with the original. If the two are identical, Precomp writes the decompressed stream (plus the information needed to recompress it properly) to its output file.

While this sounds quite simple in theory, it is in fact a bit more complex. The reason lies in the flexibility of some compression algorithms. Have you ever zipped up a file? Then you know that there are a lot of parameters you can provide which affect how the file will be compressed: “fast”, “normal”, “strong” or “maximum” compression? What about the dictionary size? A lot of things like that. Any combination of compression parameters will result in a valid zip stream that can be decompressed by any zip-compatible utility. Replacing such a stream with a compatible one is fairly easy. Reproducing the exact, bit-for-bit identical stream is not.

To be truly lossless, Precomp uses trial and error on each stream. If it can figure out the combination of parameters that results in the original stream: great! If not, that stream has to be left untouched.
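To give an idea of what this looks like in practice, here is a minimal sketch of a typical run (the switch names are from memory and may differ between Precomp versions, so check the program’s built-in help):

precomp game.pk3
xz -9 game.pcf
unxz game.pcf.xz
precomp -r game.pcf

The first command writes game.pcf with all recognized streams unpacked, the second lets a strong general-purpose compressor work on that prepared file, and the last two steps get the bit-for-bit identical original back.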

What Precomp can do

Early versions of Precomp were only available on Windows but there have been Linux versions for quite a while as well. I also use it on FreeBSD without any problems. The .PCF files are platform-independent. You can restore the original file on Windows from a file precompressed on Linux or BSD and vice versa.

While Precomp originally was only a pre-compressor for zlib streams (which are used in a variety of file formats like ZIP, GZIP, PNG, PDF, …), it can do more things now. It can use bzip2 to compress its input file after precompression. It can losslessly compress some JPEG pictures to smaller sizes (thanks to an external library). And in the current development version there’s even support for compressing MP3 music files further (also using an external lib)!

Currently, Precomp relies on temporary files for all the extracted streams and thus puts heavy load on your hard drive (and is a bit slow due to that bottleneck). SSDs obviously perform better, but it totally makes sense to use a memdrive if you can spare some RAM for it. I’ve forked the project on Github and added an experimental shell script to assist with the creation of such a memdrive. It’s currently FreeBSD only (I’ve migrated all of my boxes to *BSD and currently have no Linux machine remaining but will set up one for cases like that some time in the future). Feel free to take a look at it if you’re into portable shell scripting and please do tell me if you have any suggestions!
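If you just want to try the memdrive idea by hand before looking at the script, something along these lines should work on FreeBSD (run as root; size and mount point are only examples):

mkdir /tmp/precomp-ram
mdmfs -s 2g md /tmp/precomp-ram

Precomp writes its temporary files to the working directory (at least the versions I used did), so run it from inside that directory and the temporary files land in RAM. When you are done, unmount it and detach the memory disk with mdconfig -d -u and the unit number (mount shows the md device) to free the memory again.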

Precomp is not at all at the limit of its possibilities. There are a lot of things that can be tweaked, optimized or added. If you feel like that could be a fun project – go ahead and play with it, it’s on Github. Or perhaps you have an idea what this could be useful for? Please help yourself and use it. It’s free software after all (Apache licensed).

School, exams and… BSD!

Alright, January is already almost over, so there’s not much use in wishing my readers a happy new year, right? I wanted to have this month’s blog post out much earlier and in fact wanted to write about a completely different topic. But after January 27th it was pretty obvious for me what I’d have to write about – On that day I passed my final exam and now I’m a Computer Science Expert by profession. Time to take a look back at the apprenticeship and the status of *nix in German IT training today.

Spoiler: It’s Microsoft, Microsoft and again Microsoft. Only then is there a drop of Linux in the ocean. I had left the (overly colorful) world of Windows in 2008. When I started the apprenticeship I was determined not to eat humble pie and come crawling back to it. While it was at times a rather tough fight, it was possible to do. And I’m documenting it here because I want to encourage other people to also take this path. The more people take the challenge, the easier it will become for everyone. Besides: it is absolutely necessary to blaze the trail for better technology to actually arrive in mainstream business. This is of great importance if we do not want to fall behind completely.

Detours

I didn’t take the straight way into IT. While I had been hooked on computers since I was a little child, I also found that I had a passion for explaining things to others. I gave private lessons after school for many years, and after passing the Abitur (think of the British A levels) I chose to go to university to become a teacher.

It took me a very long time of struggle to accept that I could not actually do that for a living. I am in fundamental opposition to how the German school system is being ruined and I could not spend all my work life faithfully serving an employer that I have not even the least bit of respect for.

The situation is as follows: We once had a school system in Germany that aimed at educating young people to be fit for whatever their life holds. The result was people who could stand on their own feet. Today the opposite is true: a lot of people who leave school have no idea how to find their way in life. Playing computer games is the only thing that a lot of young men (and an increasing number of women) actually do. They have not developed any character, they have no passion for anything (and thus no goals in life) and they often haven’t learned any empathy at all (and thus keep hurting other people – not even out of bad will but out of total ignorance).

At the same time the things taught in school aim purely at making people available as workers as soon as possible. Sounds contradictory? Sure thing. At the university I enjoyed the benefits of the old system, where there was relatively large academic freedom and you were encouraged to take your time to learn things properly, to do some research if you hit topics of interest to you, to take courses from other faculties, etc. And this is pure insanity: all of that is largely gone. New students are forced to hasten through their studies thanks to tight requirements (which semester to take which course in – very schoolish, no freedom at all)… In the name of “comparability” we did away with our own academic degrees only to adopt the inferior “master” (as well as the even more inferior “bachelor”).

Secondary schools are lowering their standards further and further so that almost anybody can get their A levels and flood the universities. At the same time there are not enough people left for the other paths of education – and those who remain are far too often completely useless to the companies: people who can be described as unreliable at best are of no use at all. I did not want to be part of that madness, so I finally decided to get out and do what I probably should have done right from the start.

Vocational school: Windows

The German vocational school system is a bit special: you only go to school one or two days a week (this varies between semesters). What about the other days? You spend them at a company that you apply to before you can start the apprenticeship. That way you get to know the daily work routine right from the start (which is a really good thing). School is meant to teach some general skills, and at work you learn the practical things.

On my first day at vocational school, I kind of felt… displaced. Why? Well, going back to school after having taught children yourself takes a moment to adjust to. I enjoyed teaching in general (even though there are always horrible classes as well ;)), but becoming a student again afterwards is really strange. At least for a while.

The subject matter was extremely easy for me. Then again, being almost 30 years old when I started the apprenticeship of course meant that I had a lot more knowledge and experience than the typical 18- or 20-year-old student. That was a good thing for me, since I also have a wife and two children and had to drive about 1.5 hours to school and the same distance back. Which meant that I had far less time for homework or studying than the others. In fact I only found a few hours to study for the preliminary exam as well as for the final exam. But that was it.

We had PCs with Windows XP and were required to work with that. Most of my classmates protested because they were used to Windows 7. I simply installed Cygwin, changed the panel position to the top and things were pretty much OK for me (it’s only for a few hours, right?). A while later we got new PCs with Windows 8(.1?) and new policies. The latter made it impossible for me to use Cygwin. Since I had never touched anything after Windows XP, I took my time to have a look at that system. In fact I tried to be open to new things, and since a lot of time had passed since I left Windows, I no longer had any strong feelings towards it. Still, Win 8 managed to surprise me: it was even worse than I had thought possible…

The UI was just plain laughable. I have no idea how anybody could do some actual work with it using the mouse. Now, I’m a console guy and I need no mouse to do stuff (if I at least have Cygwin that is). But that must have been a joke, right?

Then I found out that Windows still was not capable of even reading an EXT2 file system. Oh my. So I decided to format one USB key to FAT32 for school. But guess what? When I attached it, Windows popped up a message that it was installing drivers – which then failed… I removed the USB key and inserted it again. Same story. A classmate told me to try another USB connector. I thought he was fooling me, but he insisted, so I did it (expecting him to laugh at me any second). To my big surprise, this time the driver could be installed! But the story does not end here. No drive icon appeared in the Explorer. I removed the USB key again and reattached it once more. Nothing. My classmate took it out yet again and plugged it into the former connector (the one where installing the driver had failed). And this time the drive appeared in the Explorer! It was at that moment that I realized not too much had changed since XP – despite the even uglier looks. Bluescreens, program crashes and cryptic error messages that I had not seen in years were all back.

I decided that I could not work like that and chose to bring a laptop each school day. Just about all my classmates were fine with Windows, however. But speaking of classmates: we lost five of them in the first two years. Two simply never showed up again, two more were fired by their companies (due to various misbehavior) and thus could not continue their apprenticeship, and the fifth had a serious problem with alcohol (being just 17 years old) and was also fired.

BYOD: Linux desktop

My laptop was running Linux Mint. When I bought it, it came with Mint pre-installed. My wife got used to that system and did not like my idea to install a different system (I mainly use Arch Linux as a desktop at work and on other PCs at home) and so Linux Mint stayed on there.

There were a few classmates interested in Linux in general. They quickly became the ones I spent most of my time in school with. Three of them already had some experience with it, but that was it. One of them decided about a year ago that it was time to switch to Linux. I introduced him to Arch and he’s been a happy Antergos (an Arch-based distro) user since then. Another classmate was also unhappy with Windows at home. I answered a few questions and helped with the usual little problems, and she successfully made the switch and runs Mint now.

Some teachers couldn’t quite understand how one could be such a weirdo and not even have one single Windows PC. We were supposed to finish some project planning using some Microsoft software (I forgot the name of it). I told the teacher that the required software wouldn’t run on any of my operating systems. Anything other than Windows obviously wasn’t conceivable to him, and he replied that in that case I’d really have to update! I explained to him that this was not the case, since I ran a rolling-release distro which was not just up to date but in fact bleeding edge.

When he understood that I only had Linux at home, he asked me to install Windows in that case. Now I told him that I didn’t own any current version of Windows. He rolled his eyes and replied that I could sign up for some Microsoft service (“dream spark” or something?) where each student or apprentice could get it all for free. Then I objected that this would be of no use since I could not install Windows even if I had a license because I did not agree to Microsoft’s EULA. For a moment he did not know what to say. Then he asked me to please do it at work then. “Sorry”, I replied, “we don’t use Windows in the office either.” After that he just walked away saying nothing.

We were required to learn some basics about object-oriented programming – using C#. So I got mono as well as monodevelop and initially followed the course.

Another Laptop: Puffy for fun!

I got an older laptop for a really cheap price from a classmate and put OpenBSD on it. After having played a bit with that OS in virtual machines, I wanted to run it on real hardware, and this seemed to be the perfect chance to do so. OpenBSD with full disk encryption and everything worked really nicely, and I even got monodevelop on there (even though it was an ancient version). So after a week I decided to use that laptop in school because it was much smaller and lighter (14″ instead of 18.3″!) – and also cheaper.😉

After upgrading to OpenBSD 5.6, however, I realized that the mono package had been updated from 2.10.9p3 to 3.4.0p1, which broke the ancient (2.4.2p3 – from 2011!) version of monodevelop. Now I had the option of bringing that big Linux laptop again or downgrading OpenBSD to 5.5. I decided to go with option 3 and complain about .NET instead. By then the programming course teacher already knew me and I received permission to do the exercises in C++ instead! He just warned me that I’d be mostly on my own in that case and that I’d of course have to write the classroom tests on C# just like everyone else. I could live with that and it worked out really well. Later, when we started little GUI programs with winforms, I would have been out of luck even on Linux and mono anyway. So I did these with C++ and the FLTK toolkit.

Around Christmas I visited my parents for a few days. My mother’s computer (a Linux machine I had set up for her) had stopped working. As my father decided that he’d replace it with a new Windows box (since that’s what he knows), I gave up my OpenBSD laptop: I installed Linux on it again and gave it to my mother as a replacement, to spare her having to re-learn everything on a Windows computer…

Beastie’s turn

So for the last couple of weeks I was back on Linux. However, the final exam consists of two parts: a written exam and an oral one. The latter is mostly a presentation of a 35-hour project that we had to do last year. I took the chance and chose a project involving FreeBSD (comparing configuration management tools for use on that particular OS). We also had to hand in documentation of that project.

Six days before the presentation was to be held, I decided that it would suck to present a FreeBSD project using Linux. So I announced to my wife that I’d install a different OS on the laptop, did a full backup, inserted a PC-BSD 10.2 CD and rebooted. What happened then is a story of its own… With FreeBSD 10.3 just around the corner, I’ll wait until that is released and write about my experiences with PC-BSD in a future blog post. Just this much for now: I have PC-BSD installed on the laptop – and that’s what I used to write this post.

The presentation also went more or less well (I had a problem with LibreOffice). But the big issue was that I obviously chose a topic that was too much for my examiners. My documentation was “too technical” (!) for them and they would have liked to see “a comparison with other operating systems, like Windows (!)” – which simply was far beyond the scope of my project… I ended up with a mediocre mark for the project, which is in complete contrast to my final grade from the vocational school (where I missed a perfect average by 0.1).

OK, I cannot say that this came completely unexpectedly. I had been warned. Just a few years earlier, another apprentice chose a Linux topic and even failed the final exam! He took action against the examiners and the court decided in his favor. His work was reviewed by people with Linux knowledge – and all of a sudden he was no longer failing but in fact got a 1 (the German equivalent of an A)! I won’t sue anybody since I have passed. Still, my conclusion is that we need more people who dare to bring *nix topics to the table. I would do it again anytime. If you’re in the same situation: please consider it.

Oh, and another small success: the former classmate who runs Antergos also tried out FreeBSD on his server after I recommended it. He has come to like jails, the ports system and package audit, among other things. One new happy *BSD user may not be much, but it’s certainly a good thing! Also, all of my former classmates now at least know that *BSD exists. I’ve given presentations about it and mentioned it on many occasions. Awareness of *nix systems and what they can do may lead to somebody giving them a try some time in the future.

Top things that I missed in 2015

Another year of blogging comes to an end. It has been quite full of *BSD stuff, so I’d even say: regarding this blog, it has been a BSD year. That was not actually planned, but it isn’t a real surprise, either. I’ve not given up on Linux (which I use on a daily basis as my primary desktop OS), but it’s clear that I’m fascinated with the BSDs and will try to get into them further in 2016.

Despite being a busy year, there were quite a few things that I would have liked to do and blog about that never happened. I hope to be able to do some of these things next year.

Desktops, toolkits, live DVD

One of the most “successful” (in terms of hits) article series was the desktop comparison that I did in 2012. A lot has happened in that field since then and I really wanted to do it again. Some desktops are no longer alive, others have become available since then, and it is a sure thing that the amount of memory needed has changed as well…😉

Also, I’ve never been able to finish the toolkit comparison, which I stopped in the middle of writing about GTK-based applications. It was started in 2013, so it would also be about time. However, my focus has shifted away from the original intent of finding tools for a light-weight Linux desktop. I’ve become involved with the EDE project (“Equinox Desktop Environment”), which uses the FLTK toolkit, so people could argue that I’m not really unbiased anymore. Then again… I chose to become involved because it was the winner of my last test series – and chances are that the reasons for that are still valid.

And then there’s the “Desktop Demo DVD” subproject that never really took off. I had an Arch-based image with quite a few desktops to choose from, but there were a few problems: Trinity could not be installed alongside KDE, Unity for Arch was not exactly in good shape, etc. But the biggest issue was the fact that I did not have webspace available to store a big ISO file.

My traffic statistics show that there has been constant interest in the article about creating an Arch Linux live CD. Unfortunately it is completely obsolete since the tool that creates the images has changed substantially. I’d really like to write an updated version at some point.

In fact I wanted to start over with the desktop tests this summer and had already begun. However, VirtualBox hardware acceleration for graphics was broken on Arch, and since this is a real blocker I could not continue (has this been resolved since?).

OSes

I wrote an article about HURD in 2013, too, and wanted to revisit a HURD-based system to see what has happened in the meantime. ArchHURD has been in a coma for quite some time. Just recently there was a sign of life, however. I wish the new developer the best of luck and will surely do another blog post about it once there’s something usable to show off!

The experiments with Arch and an alternative libc (musl) were stopped due to a lack of time and could be taken further. It has been an interesting project that I’d like to continue in some form at some point. I also had some reviews of interesting but lesser-known Linux distros in mind. Not sure if I’ll find time for that, though.

There has been a whole lot going on with both FreeBSD and OpenBSD. Still, I would have liked to do more in that field (exploring jails, ZFS, etc.). But those are things I’ll do in 2016 for sure.

Hardware

I’ve played a bit with a Raspberry Pi 2 and built a little router with it using a security-oriented Linux distro. It was a fun project and maybe it is of use to somebody.

One highlight that I’m looking forward to mess with is the RISC-V platform, a very promising effort to finally give us a CPU that is actually open hardware!

Other things

There are a few other things that I want to write about and hope to find time for soon. I messed with some version control tools a while back and this would make a nice series of articles, I think. I also have something about devops in mind and want to do a brief comparison of some configuration management tools (Puppet, Chef, SaltStack, Ansible – and perhaps some more). If there is interest in that I might pick it up and document some examples on FreeBSD or OpenBSD (there’s more than enough material for Linux around, but *BSD is often a rather weak spot). We’ll see.

Well, and I still have one article about GPL vs. BSD license(s) in store that will surely happen next year. That and a few topics about programming that I’ve been thinking about writing for a while now.

So – goodbye 2015 and welcome 2016!

Happy new year everyone! As you can see, I have not run out of ideas.:)

Thea: The gain of giving away for free

This post is inspired by the game Thea: The Awakening. No, Eerie Linux has not mutated into a games blog. Yes, I will give a short description of the game. But what this post is really about is some thoughts about software development in the past, today and what could be a more open future.

Why Thea? Because the developers did something very uncommon: They decided to give the game away for free – if you’re a Linux user that is!

Thea: The Awakening

The game in question is a turn-based strategy game with a strong focus on survival. There’s a nice background story: The world had turned to darkness (playing the game you will discover why) and is haunted by creatures and spirits of the dark. Now the sun is rising again and the gods have returned but both are very weak and darkness will not give up without a fierce fight. Slavic mythology makes for a very nice and rather uncommon setting.

In case you want to give it a try, you can find a download link here. And yes, it is really completely free. You don’t need to buy the Windows version first or something.

I’ve successfully run the game on the Mint laptop that I share with my wife and can confirm that it works well. No luck on a 32-bit machine that I installed Arch on to give the 32-bit version of the game a try. It won’t start and the console messages give no clues why this may be. So if you’re still stuck with 32-bit only systems, you’re probably out of luck.😉

The developers stated that they have not even tested the Linux version themselves! So what works and what doesn’t? Most things seem to work surprisingly well, in fact. Sound, graphics, even the intro video. I’ve experienced graphical glitches, with some white pixels appearing for a second (nope, not an AMD video card – it’s Intel!). But this happens only rarely and is a fairly minor issue. Far more annoying is the fact that you cannot really use the keyboard: a key press registers but the release event doesn’t… This is a known issue with the version of the Unity engine that Thea uses. It may or may not be addressed in a future release. You can, however, get the keys released by ALT-TABbing out of the game and back in. That way you can at least always access the menu.

You choose one of the gods when starting a game. I’ve played scenarios for multiple gods now. The main story (“Cosmic Tree”) gets pretty repetitive soon since it’s always the same. This is also true for a lot of the other quests. However, the game has options to skip a lot of the text in case you already know it, which certainly was a good idea. Some of the quests differ depending on which god you chose, which keeps things interesting story-wise. Maps, resources, encounters, etc. are randomly generated for each game. Together with the challenging survival aspect, plenty of crafting combinations to try and interesting gameplay, this gives Thea a rather high replay value.

Software development models

I’d like to distinguish a few development approaches here and sum each one up by giving the model, as I see it, a name. These are no official models (I’m not a game developer) but an attempt to capture the whole thing in one heading each.

The shareware model

There was once a time when software was developed in a purely closed manner. It was developed internally and, when it was ready, a release was made and advertised. The good thing was that games were often cut into “episodes” and the first one given away as shareware, so people could try out the game for free and might decide to buy the full product.

The public relations model

Advertising grew bigger and bigger as well as more and more aggressive. Top titles were often announced as soon as development began, and some material was released along the way to keep people hooked. This worked in some cases and failed in others (say, Duke Nukem Forever, announced in 1996).

It was a reasonable move to try to build up an audience interested in a certain title early. The problem with it is mainly twofold: you cannot keep people hooked for an arbitrary amount of time, and such a continuing advertising campaign costs a whole lot of money long before you start earning anything from sales.

These problems lead to another one, however: they put very high pressure on the developers to meet deadlines and stay on schedule. And sometimes the people in charge may even decide to release a half-baked product, which almost always is a very bad idea… (What was the latest example? That Batman game perhaps?)

The community-aware model

It’s not a new insight that it is rather helpful for any title to have a large community. Some studios provide forums in an attempt to simplify the building up of a community. And it’s also common knowledge today that feedback from that community is extremely valuable: knowing your audience better helps a lot in providing the perfect product, after all!

The most important point of this model is that interaction with the players is now bidirectional: there’s advertising targeting them, but you certainly want to have (and honor) the feedback they provide. It also makes sense to design the game and/or provide tools that make it as easy as possible to create mods for it. This can be a huge plus when it leads to a bigger, more active and longer-living community!

Independent of any single title, a studio can earn itself a good name by opening up the source code of older games. This may require some cleaning up first, but some studios have also released code as-is (which can be rather terrible). Usually the community figures out what to do with it, and before long the game is ported to new platforms and receives technical updates and enhancements. This has made some titles effectively immortal: there are still new episodes, mods and total conversions for Wolfenstein being released. Yes, for a game from 1992 with extremely “poor graphics” (320×200, 8-bit) by today’s standards! And there’s not one week without new maps for the mighty DooM (1993).

The community-supported model

There’s this interesting trend of “early access” games: players are given the opportunity to playtest games before they are ready for release. People know they have to expect bugs, but they can try out a game they are interested in early and, if they are committed to it, report bugs as they encounter them.

This is a classic win-win situation: the developers get broad testing done for free and the players get an early peek at the game. Oh, and any form of interaction is of course always a good thing.

The community-backed model

That’s a rather new thing and basically means that some developers try to get their game crowd-funded. This can succeed or fail; there are examples of both. But while crowd-funding clearly has a lot of impact on development, I’d say it’s more of a special case than a general model.

The future?

MuHa Games have made a clever move with Thea, because the gain from giving the title away for free on Linux is really considerable. How’s that? Well, if there were no Linux version, Linux people wouldn’t have bought the game either. So giving it away is no actual loss: the number of people of the “hey, I would have bought it for Windows, but why should I since I can play it for free on Linux!” kind is most likely extremely small – if they exist at all.

No loss is fine, but where’s the actual gain? Well, there’s the “Just bought the Windows version. Besides: I don’t run Windows at all” type of guy. These people alone should suffice to cover the costs of the additional effort to package a Linux release and upload it somewhere. But that’s not the main point at all: can you say “free advertising”? People talk about the game and people write about the game, many of whom would not have done so if it had just been an ordinary release. With the free Linux version, MuHa managed to make the game stand out (and that is not too easy today).

For these reasons, giving it away proves to be a very sensible PR move! I do not mind whether that was intended or not; it doesn’t change the facts.

Community-assisted model?

So what could the future hold? I can imagine that getting the community to engage even more would be a big benefit. From a studio’s perspective, fans do unpaid work because they love the product. And from the fans’ perspective, it’s just cool to be part of one of your favorite games and help improve it.

What could this look like? My vision is to blend closed source development with what we have learned from open source development. It’s cool that people playtesting a game can report bugs via forum or email. But when will the first project set up a public bug tracker along with a tutorial on how to use it for bug reports and maybe (sensible) feature requests?

Then: what about translation? Open source has achieved very, very good results using translation frameworks like Transifex. Right now Thea is only available in English. My native language is German and I would not have minded at all dedicating some time to translating a few strings (I got a nice game for free, after all!). There’s a lot of potential in this.

And along with that, it would totally make sense to avoid using proprietary containers for files. I did not bother trying to extract text out of whatever format it is that MuHa uses for Thea. In 1999 id Software did a clever thing for Quake III Arena: they used container files called “.pk3” – which were simply renamed, uncompressed ZIP files. The benefit is obvious: everybody can extract the resources, modify them and put things back together. Great! I noticed a lot of spelling mistakes in Thea. If I had had access to the game text, you’d have received a series of patches from me (and by applying them you’d instantly see which ones are still valid, fixing the mistakes as you go). Wouldn’t that be a great way to improve the game?

Licensed Open Source model?

Can open source work for a commercial game? Well, why not? Open source alone means just that: the source is open. It does not say under which license and it does not say that it’s free of charge. Now, I generally support as much freedom as possible – but that last word there is important. A more open development would be a nice improvement, IMO. There’s no reason to demand more than that.

In this model the customers pay for the game data, without which you obviously cannot play the game, but the program source is open (or perhaps semi-open, where it is included with the copy of the game you get when you buy it and you’re free to distribute a series of patches, but not the source itself). I’m pretty sure that this can work. One potential problem here may be deadlines. Often the code in commercial games must be horrible – not because the programmers suck but because unrealistic deadlines blow. A lot of studios may hesitate to open up their code for that very reason…

Addressing that problem could also be easy, however: you sell games in early access? Buyers get the code and know that it’s early and may not be in perfect shape (and they can actually help improve it). Again, both sides win: the studio gets code review and maybe some patches, plus some people may even attempt to port the game to platforms unsupported by the studio. The players get better games they can help to improve, can take modding to the next level, and even get a chance to see what coding is like and build up some reference work if they intend to work in that industry!

There’s one other issue, though. In many cases studios will want to hide some things from competitors. That may be old (and at some point hopefully obsolete) thinking but we have to accept it as a present fact. So what about this? Well, those things could be put into libraries… It’s far better to have the program code open and make it use closed libraries than having nothing open at all!

Time for change

Who’s stepping forward to make the next move in game development? I’m really curious whether something in the direction of what I wrote here happens any time in the future. For each step there’s good press to catch for free again, you know?😉 Perhaps some small studio dares to make the move.

Update: I wrote this in a hurry on 11/30 to rush out my November post. And then I once again forgot to make it public. But now it is…

Exploring FreeBSD (3/3) – a tutorial from the Linux user’s perspective

This is the third and last post of a series of introducing FreeBSD to Linux users. You might want to take a look at the first post (talking about some things different from Linux) and the second one (about binary updating and package, user and service management) if you have not done so already.

If you’re all new to FreeBSD (or the BSDs in general) I tried to sum up the most important things to know about this OS family in another post. And if you want to know how to install FreeBSD (and what disklabels are as well as some other *BSD specific stuff), there’s yet another post dealing with it.

So what are we up to this time? There are a few topics left that I want to write about (and quite some more that I would like to touch on, too – but it doesn’t make sense to try and put too much into too little space): Updating binary packages, the ports system and updating “world” (the OS itself) from source.

Updating packages

In the last post we installed bash via FreeBSD’s package system (pkg). About one month has passed since then and a new version of bash has been released in the meantime (just as I had hoped!). So let’s see how to update packages, right?

The most common case is that you want to update all your packages. There are two commands you should know in this regard:

# pkg update

This updates the repository catalogue so that the system knows which package versions are available in the remote repo. You don’t normally have to run this explicitly since FreeBSD will automatically fetch the latest catalogue if it thinks that the local one is too old.

# pkg upgrade

This will tell you which packages can be updated and perform the actual update if you choose to do it.

Updating binary packages

In this case, a new version of the package management tool was also released. Pkg must be updated before any other updates can happen but other than that it works just like any other update does.

The ports system

What are “ports”? The process of making a piece of software (for which the source code is available) build on a system it was not necessarily written for is called porting. Depending on the piece of software this can be easy (the program builds out of the box) or extremely challenging (a lot of code needs to be patched to make it work). To make things easier for everybody, FreeBSD developed the ports system, which is basically a directory for each application that has been ported, containing a Makefile as well as some support files. Together these contain everything needed to build the respective application on FreeBSD simply by issuing make inside that directory. The directories make up what is known as the ports tree.

Fetching a port snapshot

The ports system originated in early FreeBSD and quickly spread to the other BSDs as well. Even on Linux there are people who like the concept: Gentoo Linux, for example, is based on Portage, which builds on the very same idea (but works rather differently in the end). Well, since I told you to deselect the ports tree during the installation, you do not have it on your system. So let’s first get it in place!

Getting the ports

All newer versions of FreeBSD offer the portsnap command which makes that very easy:

# portsnap fetch

If you do not have the ports tree on your system, this downloads a snapshot, verifies it and also fetches any patches for ports changed after the snapshot was created. If you already have a ports tree, the same command fetches the newest patches so that you receive any changes made in the meantime.

# portsnap extract

With this command you tell the system to actually unpack the snapshot and populate the ports tree. Only use it the first time you install the ports tree on your system; it doesn’t make sense to use it afterwards!

# portsnap update

You do not need this if you have just installed the ports tree for the first time. It is used to update the local ports tree after downloading any patches with fetch. If you wish you can also combine the two parameters to update the ports tree (portsnap fetch update).
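To sum that up, the typical portsnap usage looks like this (a minimal sketch):

# portsnap fetch extract      (first time only: download and unpack the snapshot)
# portsnap fetch update       (from then on: download patches and apply them)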

Extracting the ports tree

You could also get the ports tree via Subversion. But portsnap is just so convenient to use that there’s barely any reason to do so.

Finding your way around the ports tree

So now let’s take a look at it! Where are all the files? They are in subdirectories of /usr/ports. We’ve installed bash in the last post using binary packages. Where would we find it in case we wanted to build it from ports? Being a shell, /usr/ports/shells/bash is quite a logical place, don’t you think? And where would you look for, say, the ruby interpreter? You’ll find multiple versions of it in /usr/ports/lang/ruby2x (ruby 2.0, 2.1, 2.2).

If you work with the ports tree for a while you’ll get at least an idea where things belong to. But what is the best way to locate a specific port? You can use the whereis command followed by the program name and it will tell you where the port lives! Just make sure you type in the right name. You won’t find php for example. But you will find the port if you look for php55 or php56 instead.
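Here’s what that looks like for our bash example (the exact output depends on what is installed on your system, so take this as a sketch):

# whereis bash
bash: /usr/local/bin/bash /usr/ports/shells/bash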

Finding applications in the ports tree

Still having trouble? Perhaps the page FreshPorts can help you. You can search there and chances are good that you find what you are looking for and can find out the category and port name that way.

Building from ports

The first question is of course: Why should you build programs from ports? The ports system was invented to automate the build process when there were no binary packages available and you had to build every program from source. Today you can easily work with FreeBSD without ever touching ports.

But when does it make sense to use ports? The simple answer: If you have special needs! The binary packages are pre-built and there’s no way to change any compile-time options. If you’ve ever manually built a program on e.g. Linux, there’s a good chance that you have met configure, which takes options like --prefix=/usr --without-package-xz --enable-newest-feature and so on. If you need some program feature that the pre-packaged program does not come with on FreeBSD, you can use ports. Or if you do not want a certain feature built in which is selected by default, you can also use ports.

Selecting build options for a port

For ports that offer build options (those the port’s maintainer thought were interesting), you will be given a nice dialog window in which you can select or deselect them. Just navigate into the directory of the ports tree where the files for the application you want to build live and issue make.

This will bring up the configuration window if there are any options to set. Please note that your selection will be saved so you are not asked the next time you build the port. If you changed your mind and want to reconfigure the options, you can use make config.
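For our bash example the whole thing could look like this (a minimal sketch):

# cd /usr/ports/shells/bash
# make config      (just show / change the build options without building anything)
# make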

Building links from ports

If you order the port to “make”, the source code will be downloaded from a known location (you do not have to do this yourself), decompressed, patched if necessary and then built. Once the build is complete, you can use make install to install the program and make clean to clear the build directory of files remaining from the build.

It is also possible to combine several commands which the program make takes (these are called targets and are defined in a file called Makefile). So you can build, install and clean up one port by issuing make install clean.

You also don’t have to worry about dependencies. If a port needs other programs (or libraries) which are not present on the system, they will automatically be built from ports, too. And one more important thing: Don’t hesitate to mix binary packages and ports on your system. You don’t have to choose one and stick with that all the time. In fact the ports produce custom binary packages which are then installed using the normal package system. That’s why pkg is aware of any program that you installed via ports and can for example remove it from the system if you tell it to. You could also go to the port’s directory and use make deinstall.
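Putting the pieces together for bash, a typical session might look like this (a sketch; pkg will know about the port-built package just like about any binary one):

# cd /usr/ports/shells/bash
# make install clean
# pkg info bash          (the freshly built port shows up in pkg)
# make deinstall         (or: pkg delete bash)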

Recursive operations

If you want to build a complex program that has lots and lots of dependencies (like e.g. LibreOffice), it is a good idea to let FreeBSD build it overnight. There is, however, a big problem that you’ll face if you try out large unattended builds: Every now and then, when a new port is built as a dependency, FreeBSD displays the configuration window and pauses until you make your choice…

This is why there are recursive targets: You can use make config-recursive and the ports system will go through all the dependencies and display the configuration. So you can select all the options that you need at once before you use just make to build all those programs.
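Such an overnight build could then be prepared like this (a sketch; editors/libreoffice is where the port usually lives, but verify the path on your system):

# cd /usr/ports/editors/libreoffice
# make config-recursive
# make install clean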

Recursive source fetching

Mind one thing, though: If you enable more options, you may want to run config-recursive again. Why? Because the options that you selected may have pulled in new dependencies which are not yet configured. Running config-recursive will only display the configuration dialog for ports that were not configured previously. If you need to re-configure all ports, you can use the make rmconfig-recursive target to delete the stored configuration for the port and all dependencies and configure them again afterwards.

And in case you want to pre-load all source tarballs before starting an unattended build, there are the make fetch and make fetch-recursive targets. In very rare cases it can happen that all the sources that one port knows for its tarballs are no longer available (this is more likely to happen if you’re using a no longer supported version of FreeBSD and/or an out-of-date ports tree). You can fix this if you simply find another source of the needed file on the net and download it to the /usr/ports/distfiles/ directory where all those source tarballs for the ports live.
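So a fully prepared unattended build might be set up like this (again just a sketch):

# make config-recursive      (answer all option dialogs up front)
# make fetch-recursive       (download all required distfiles)
# make install clean         (then let it build overnight)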

Updating the system from source

Just like with ports the first question ought to be: Why should you? And in this case the answer is even clearer: You probably shouldn’t. There are people who like to build from source and that’s ok. But if binary updates work for you, in general you should stick to them.

When do you need to compile the system from source? Well, obviously this is the method of choice if you are a developer who needs to build the absolutely newest code. But if that’s the case you’re probably not reading this tutorial anyways, right?

So – why should you do it? There are basically three main cases:

  • To have it done once
  • Because you want to aim for the stable branch
  • You want to customize e.g. your kernel configuration

Do not laugh at the first one. It is a perfectly valid reason. While building FreeBSD from source is extremely easy, it is good to have done it at least once. It will help you to get a little bit closer to your system.

FreeBSD comes in several branches. You can decide to follow another branch and compile the code for it. We’ll talk about that in a minute.

And last but not least: if you have special requirements and want to customize your system accordingly. E.g. you may decide to compile your firewall of choice (FreeBSD offers three of them) into the kernel. In that case you have to build from source.

Getting the source

We cannot discuss scenario three (customizing FreeBSD) here. That would require its own post (or even more). Besides – I’m not too knowledgeable in that field.

Installing the certificate bundle

Let’s assume we want to follow the stable branch. First we need the appropriate source code. FreeBSD uses Subversion for version control, and a slimmed-down version of it (“svnlite”) comes with the base system.

You may want to install the certificate bundle first so using a secure connection does not result in an error because the certificate is unknown. To do that you can simply use the following command: # pkg install ca_root_nss.

Next we need to check out the current version of the stable code with svn. FreeBSD source code always goes into /usr/src.

Checking out system source with svn

Start the checkout process with

# svnlite checkout https://svn.freebsd.org/base/stable/10 /usr/src

and wait for Subversion to finish. This can take quite a while because the source code is quite large.

Once it’s done, you’re set. Go to /usr/src and issue make buildworld. This will build the userland part of FreeBSD (and – depending on your CPU – take a long time to finish).

System source checkout completed

What gets built goes into /usr/obj, btw. So it is kept separate from the source code, and anything in /usr/obj can easily be removed anytime before doing a clean new build.
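If your machine has several CPU cores, you can speed things up considerably by letting make run jobs in parallel; the value after -j is up to you (a sketch):

# cd /usr/src
# make -j4 buildworld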

Building the FreeBSD userland from source

When the world build has completed, it’s time to build the kernel as well: make buildkernel – this does not take such a long time to complete.

Now both parts of the system need to be installed with make installkernel and make installworld. Always remember the correct order:

  1. Build world
  2. Build kernel
  3. Install kernel
  4. Install world

The reason that “buildworld” needs to run first is that it uses the system compiler to bootstrap the new compiler, which is then used to build the whole userland and, after that, the kernel. And the reason that the kernel should be installed first is that after updating the userland you really should reboot. You’ll probably get away without rebooting if you just updated within the same release version, but updating to a new release from source means that you cannot count on the system to just keep running like before due to incompatible changes. In theory you are even encouraged to boot into single user mode to do the update! But I have not found that this is really required. Just mind the right order and stick to it.
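Here is the whole sequence again in one place (a sketch following exactly the order explained above):

# cd /usr/src
# make buildworld
# make buildkernel
# make installkernel
# make installworld
# shutdown -r now        (reboot so the system runs the new kernel)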

Building the FreeBSD kernel from source

After rebooting you should find that the system is running on the new kernel. Now we’re on FreeBSD stable. However… That does not at all mean what you’re probably thinking it does!

New kernel is running

FreeBSD branches

I’ve stated before that there are multiple branches of FreeBSD, one of which is stable. Let’s take a look at what they are.

First there’s release. If you followed this tutorial along, version 10.1 was the system that we started with. Uname denotes the kernel as 10.1-RELEASE. Release is just that: a certain release. It will stay as-is forever, no changes applied to it.

Then there’s the patch branch or “releng”. This is “release + patches” and in fact the most stable branch available due to error corrections and security fixes. Uname will report something like 10.1-RELEASE-p12 as the kernel. The patch branch is meant for conservative production systems.

We’ve already touched stable and even updated to it. If the patch branch is the most stable version, why is this one called “stable”? Yes, it is a bit confusing, I know. The reason is that this branch receives new features (which the patch branch does not) but the APIs are kept stable. Hence the name. This branch is not officially recommended for production use but the company that I work for has used servers with stable for years and they behaved absolutely fine.

Finally there’s current (called “head” in the repository). This is where the development takes place. If you’re not a developer or somebody who wants to test the newest features as early as possible, this is not for you.

What’s left

I would very much have liked to cover file flags & secure levels as well as jails. I’d also have liked to write about tools like portmaster and system components like the three firewalls. But that might or might not happen in a future post…

What’s next?

In exactly one month I’m going to write my final exams to become a qualified IT specialist. So I’ll have to see what topic (if any) I manage to write about next month. Since I’ve always wanted to write the followup to my post about licenses, this may be a good candidate.

Exploring FreeBSD (2/3) – a tutorial from the Linux user’s perspective

This is the second post of the “Exploring FreeBSD” tutorial. If you didn’t do so already, you may want to read the first part before this one. And if you are completely unfamiliar with FreeBSD, the posts about installing FreeBSD and the general introduction of the OS may also be of interest to you.

In the first part we have configured SSH insecurely to allow root login, set up port forwarding in VirtualBox and briefly explored commands and the default shell on FreeBSD. Now it’s time to continue our journey!

Since some screenshots show a lot of lines I sometimes cut out the relevant part to save a bit of space.

Updating the system

FreeBSD is an operating system well-known for its reliability. But of course, just like any other complex system, it is not perfect – there are bugs and security holes. These are addressed rather quickly since FreeBSD takes such issues seriously. To always have the most secure and stable system currently available you need to perform updates. As updating is important, let’s get right to it!

FreeBSD binary update

In FreeBSD there are several branches, but we will ignore this for now and save it for the last part of this series. There are also two supported methods of updating the operating system: using binary updates and building from source. We’ll just cover the former here and take a look at the latter in the next post.

Updating your system within the current release basically boils down to issuing two commands (well, actually one command and two parameters): freebsd-update fetch and freebsd-update install.
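In practice that’s really all there is to it (a minimal sketch):

# freebsd-update fetch
# freebsd-update install
# shutdown -r now        (only needed if the kernel was affected)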

Summary of files to update

Once the fetching is complete, freebsd-update will display a list of changes that will be applied to the system. There you can take a look at which files will be replaced with newer ones. If you agree with the changes, you can install the actual update and – if the kernel was affected as well – reboot.

Installing the update and rebooting

Updating the operating system is actually as easy as that. Keep in mind however that with the BSDs the operating system (“world”) and the installed software (“packages”) are separate from each other! We’ll cover updating packages in the next post.

Release upgrade from 10.1 to 10.2

But what if you want to update to a new release? We’re lucky here: Since this series of articles started with installing FreeBSD 10.1, a new release has happened: 10.2! So let’s update to the new release version, shall we?

Again the freebsd-update command is our friend. It can also do a binary upgrade from one release to another. It’s freebsd-update upgrade this time and with the -r 10.2 option we choose to upgrade to that particular release version.
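A sketch of the whole procedure (the handbook spells the target release out in full, and freebsd-update will tell you if it wants install run again after the reboot):

# freebsd-update -r 10.2-RELEASE upgrade
# freebsd-update install
# shutdown -r now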

The release upgrade needs a lot of patches and files!

This process takes a whole lot longer because there are of course far more files affected. But it is essentially the same process: Fetching patches, fetching new files and displaying three lists: files that will be removed, added and modified if the upgrade is installed.

After the upgrade: Rebooting again

After installing the upgrade, just reboot the system. A moment later you should be greeted by your new 10.2 FreeBSD system!

The updated 10.2 system

Adding users

User management is surprisingly easy on FreeBSD. There’s the powerful pw command that can do just about anything user related. And for adding users there’s also the convenient adduser script that makes adding users to the system a breeze.

Adding a new user

Most things are completely self-explanatory. What may however be new to you is the login class concept. Chances are that you won’t need them, but it’s nevertheless good to know that they exist and what they are. On a system with multiple users it could be that some of these users want to use e.g. different locale settings. FreeBSD allows you to create login classes that control localization – only affecting users who have that login class set for their account.

Take a look at the available shells. Missing something? In that case the shell is not installed on the system. If it was, adduser would offer it to you. All shells present on the system are recorded in /etc/shells by the way. Feel free to cat it out and compare it to what adduser offers!

Now let’s switch to the new user and try to become root again. Nope, sudo is not part of the base system. We could install it, but for now we’ll go without it. Fortunately we know the root password. So let’s su to root!

Only “wheel” members are allowed to “su”!

“Sorry”? Now what’s that? Certainly not a very helpful error message! Well, you just met another peculiarity of FreeBSD: You need to be part of the wheel group for su to allow you to become root.

So let’s try that out. And indeed: after logging out, adding the user to the group using pw usermod [username] -G wheel and logging in again – su lets us become root.
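In case you want to repeat that, here’s the command spelled out (“jdoe” is just a hypothetical user name; use your own, and note that -G sets the complete list of additional groups):

# pw usermod jdoe -G wheel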

Package management

Traditionally programs have been built from source in an automated manner using something called the ports tree. We’ll cover that in the last part of this article series.

The other choice obviously is to use binary packages. FreeBSD used what are called the pkg_* tools for a long time. Up to the still supported FreeBSD 9.x, these are the default package management utilities. You use pkg_add -r to add packages to the system, pkg_info to display information about them, pkg_delete to remove them, and so forth.

On FreeBSD 8.4 and 9.x the new pkg-ng tool could be optionally used. Since release 10.0 it is the new standard tool for dealing with packages. It is however not part of the base system and thus does not come installed by default.

It will however be binary-bootstrapped (a package manager is needed to install the first package, too, right?) when you first try to use it. For that process it doesn’t make a difference whether you provide any parameters to pkg or not.

Bootstrapping the pkg binary package manager

Pkg-ng uses just the unified pkg binary which allows for subcommands. Once you have it on your system, you can use pkg install to add packages to your system, pkg info to view package-related information and so on.
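A few more subcommands that you’ll probably use often (see pkg help for the full list; this is just a sketch):

# pkg search bash        (search the repository catalogue)
# pkg delete bash        (remove an installed package)
# pkg autoremove         (remove dependencies that nothing needs anymore)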

Let’s just install the BASH shell for now. It is as easy as typing pkg install bash. Pkg will list the package and its dependencies and ask for confirmation. If you give it, it will download and install the packages. That’s really all there is to it.

Package management is of course an important topic. The new pkg-ng tool is so easy to use however, that I won’t make any additional examples here. Be sure to have a look at the man pages or read the handbook article.

Installing a familiar shell: Bash

User management again

Ok, we have BASH installed now. Just a moment ago I told you that it should now be available if you add a new user. Now what to do if our already created user should get it as the default shell?

What we could do is pw userdel on it and then re-create it. We could even manually mess with the /etc/passwd file. But this is not the best way to do it. In fact it is much simpler than that.

About to change the shell for the current user

There’s a program that lets you conveniently change user information of existing users. It goes by multiple names and allows editing different user information. In our case we want to use chsh – “change shell”. It will fire up an editor and let you edit the user’s login shell among other things.

DO MIND that FreeBSD keeps packages separate from the base system! There is no /usr/bin/bash! If you enter that as the path to your login shell, the user won’t be able to login anymore. In FreeBSD you’ll find it in /usr/local/bin/bash. The same is of course true for other shells that are not part of the base system and for any other software that is installed from packages in general!
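If you’d rather skip the editor, chsh also accepts the new shell directly on the command line (“jdoe” again being a hypothetical user name):

# chsh -s /usr/local/bin/bash jdoe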

Altering user information

It is also noteworthy that you should NOT change the login shell for root. Leave it as it is and start bash manually if you really have to. On a toy system it may not be much of an issue, but there is no need to grow bad habits in the first place. If you ever need to repair a damaged system and you have changed the default shell for root, you have a good chance that it will bite you.

BASH is now the new shell for my user

System services

We have one more topic to deal with in this article. Knowing how to manage users and adding packages is nice, but being unable to mess with services kind of makes FreeBSD useless for you. So let’s take a brief look at how FreeBSD works with services.

You can simply run ps -aux like on Linux to take a look at what is running on your system. But how do you manipulate daemons properly?

For a while now (it was introduced sometime during the 8.x releases, if I remember correctly) FreeBSD has come with the service command. It is a valuable tool that you should start using right away. Sure, you can run the init scripts by hand, too. But service has the advantage that you don’t have to think about whether something belongs to the base system (and is thus located in /etc/rc.d) or not (in which case you’d have to look in /usr/local/etc/rc.d)!

Taking a look at services

It also provides a few more nice features. First let’s have a look at which system services are currently enabled (run automatically on startup). To do this, simply type service -e.

If you want to know all services on the system (which you could start), use service -l. This produces a long list, so it might be a good idea to use a pager like less or grep for something if you already know what you are looking for. In our example let’s look for the ntp daemon: service -l | grep ntp. No surprise: It’s called ntpd.

It won’t keep running without the right parameter because the clock offset on my system is too big. But we’re not covering ntpd here, right? It’s just an example and you can of course use other services as well.

How to enable a service?

First let’s ask the system about ntpd’s status: service ntpd status. Now that error message tells us that ntpd is not enabled in the system configuration. It could still be started manually but as long as it is not enabled in the rc config file, FreeBSD keeps reminding you of that fact (which is actually a good thing).

Just like it suggested, we can use service ntpd onestatus. Truthfully it tells us that the daemon is not running. We can start it despite not being enabled in rc.conf using service ntpd onestart. Now onestatus lets us know that it’s running. Keep in mind however that such a manually started process will not be started when the system boots!

We could stop the daemon again using the service command. But to show off the init script way we’ll do it without it one time: /etc/rc.d/ntpd onestop.

And now finally let’s take a look at the configuration file and how we can enable any service. Fire up any editor on /etc/rc.conf and add the line ntpd_enable="YES" to it. That’s all in fact. In case you want to give any parameters to the daemon, you can do so by adding an optional line like the following: ntpd_flags="-x".
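With that line in place the regular targets work and the daemon will also be started at boot (a quick sketch):

# service ntpd start
# service ntpd status
# service ntpd stop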

Where services are configured: The “rc.conf” file

There’s a lot more to know about services and I encourage you to take a look at the FreeBSD handbook on that topic. But for our short introduction of the very basics, that’s it.

What’s next?

The next post will be the last one of the FreeBSD introduction. It will deal with the ports system, updating from source and a few other things.