Version control (pt. 2): Generations and intended use

In the previous post I gave a little introduction to version control, explaining a few basics that help to understand the topic. This post assumes you’re familiar with those things that version control systems have in common. So now it’s time to discuss what sets them apart – not individually, yet, but in terms of characteristics some of them share with each other.

Generations

Each version control system (VCS) has its unique advantages and disadvantages. However, there are some traits that are common to several of them. And since the programs sharing those traits were typically released rather close to each other, it makes sense to speak of generations of version control systems.

So far there are three of them, with the first obviously being the oldest and the third generation the newest. What may surprise you is the fact that the earlier generations, even though much more limited, have not completely disappeared. How come? Well, just keep that question in mind while we take a look at those generations. Perhaps you can see where tools of older generations may still make sense!

The previous article discussed manual “version control” and its limitations. In short: It can work for you if your project is rather small, you’re working on it alone and you’ve got the discipline to always make proper backups before bigger changes. In a world of ever more sophisticated software it’s quite unlikely that many projects meet all of those requirements. Therefore it makes perfect sense to develop programs that assist you in doing proper version control on your projects.

The first generation

The first generation did exactly that: It preserved any changes you made to a single file. One characteristic trait of the first generation is in fact that it works on a per-file basis: Version control is entirely separate for each and every file that you choose to record changes for. For each file it manages, the VCS creates a “history file” which contains all the differences from one version to the next plus a comment.

Originally, it was common to use first-generation VCS on multiuser systems. If multiple users can work on the same project at the same time (via different logins), conflicts can easily arise. If two people make changes to the same file, the one who saves last “wins” – overwriting all changes that somebody else may have made in the meantime. To avoid that, locking was invented: Files can be locked while they are being edited. If somebody locks a file for editing and then gets distracted by something else, the file simply remains locked. An administrator can however break a lock if something like this happens.
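To make that concrete, here’s a minimal sketch using RCS, a typical first-generation tool (file name made up):

ci -u main.c      # first check-in creates the history file main.c,v
co -l main.c      # check out the file and lock it for editing
ci -u main.c      # check the changes back in (you will be asked for a comment)
rlog main.c       # show all recorded revisions and their comments
rcs -u main.c     # break a stale lock (something an administrator may do)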

Today VCS of the first generation are more or less obsolete, but there are niches where they have managed to survive and are still being used. One example is the management of configuration files on *nix systems without centralized configuration management. These configuration files are only relevant to the system they live on, so the missing networking capabilities (which the second generation introduced) are no real disadvantage. And since such configuration files are usually separate entities unrelated to other files, even the single-file limitation is no problem here.

The second generation

There are two important aspects that set tools of the second generation apart from those of the first: They offer network capabilities and they can manage multiple files in one project! The latter ability solves a whole bunch of problems which made first-generation tools hard to work with on anything but very small projects. Managing each file separately does not sound too bad at first. But think about it for a minute.

Let’s imagine we work on a simple project. Nothing too fancy: A few source code files, one header file. Currently the program is broken and we decided to go back to a working version. Good thing that we have version control, right? Right… Sort of. The file main.c is currently at revision 96, foo.c at revision 44, bar.c at 24 and baz.h at revision 7. See the problem? After we found out that revision 89 broke the program and we reverted back to 88, how do we find out which revision numbers of the other files belong to revision 88 of main.c? Yes, there are timestamps, so we can work out which revision each of our files was at back when main.c was at revision 88 and the program still worked. Maybe that’s not even too bad when we only have four files. But what if we have 20? 100? It’s cumbersome and really a waste of time. Keep scenarios like this in mind and you’ll definitely come to appreciate the ability to manage multiple files together in one project, where the revision number increases no matter which file – or how many files – changed!

Now that larger projects are possible because the whole project is managed together in one repository, it makes sense to use the network as well. This networking capability is achieved by providing one centralized repository from which all project members (or even everybody interested in the project) can check out a local working copy of the latest revision (or any older one if needed). Changes are made locally and then committed back into the centralized repository. Since tools of the second generation only allow a check-in if nobody else checked anything in in the meantime (if somebody did, you need to update your working copy first and merge your changes with the remote ones wherever both touched the same file), locking is not necessary anymore!
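In the terms of Subversion, probably the best-known second-generation tool, that workflow looks roughly like this (server URL and file name hypothetical):

svn checkout https://svn.example.org/project/trunk project
cd project
vi foo.c                          # change something in the working copy
svn commit -m "Fix off-by-one in foo.c"
# if the commit is rejected because somebody else was faster:
svn update                        # merge the remote changes into your working copy
svn commit -m "Fix off-by-one in foo.c"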

Today tools of the second generation still play an important role. Their appeal is declining, however, due to a few shortcomings which the third generation tries to address.

The third generation

The big innovation common to all tools of the third generation is that they work in a decentralized way. Users usually don’t check out files from a central (probably remote) repository. Instead they clone the full repository and then check out the files from their local clone. Since the local repository is exactly the same as the original one, there’s no longer one central repository – at least from a technical point of view. And while cloning requires transferring a lot more data over the wire (especially for large projects), there are some huge benefits to it.

If you have a local clone, you can work on the project even when you’re not online. You can see the complete history and check out earlier revisions if you need to – all without having to access a central repository. If you’re online, you can always sync your local clone with the original repo (pull down changes) or even the original one with your local repository (push up changes) if you have write access.
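With Git, probably the best-known third-generation tool, the same ideas read roughly like this (repository URL hypothetical):

git clone https://git.example.org/project.git
cd project
git log                 # browse the complete history – works offline
git checkout HEAD~5     # inspect an earlier revision from the local clone
git pull                # fetch changes from the original repository
git push                # publish your commits (write access required)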

One of the biggest advantages of decentralized tools is that they make forking much easier. While forking a project was rather frowned upon in the past, you’ll often see projects asking you to fork and play with their code today (e.g. the well-known “Fork me on GitHub”). Experience has shown that quite a few people fork a project, add a feature they need and then give their code back to the project (this is done by creating a pull request, which invites the administrators of the original project to pull in the changes if they want).

Which one to use?

If you like software history, there’s nothing wrong with trying out tools of the older generations. But if you’re just starting out with version control and you want to learn something now, it makes sense to choose a tool of the third generation. Which one would I recommend? I can only give the usual answer to such a question: It depends. Each one has its strengths and weaknesses. In the next blog posts we’ll take a closer look at some of the open source VCS of all generations. This might help you to choose the right one for your purpose.

Version control (pt. 1): An introduction

This post is the first part of a series on version control. It provides an introduction by explaining what that actually is, why you should probably use it and how it works in general.

Important terms

Version control (also: revision control) is a means of preserving various versions of a file or of multiple files. This can be done in a lot of different ways but over time some best-practices have emerged that are more or less followed in all modern version control systems.

There are lots of cases where version control makes sense. One of the most common ones is software development where using version control is virtually mandatory. That is why there’s even a separate term describing this form of version control: source (code) control or source code management.

We can group the various version control systems into two categories: local and network-based systems. The latter can be further differentiated into centralized and distributed ones.

In short: Why you may need it

Depending on the choice of tool there are various situations where you can benefit from version control:

  • Version control enables you to quickly return to an older, known-good state in case something broke
  • Version control gives you the possibility to precisely document any changes you made and thus provides both traceability and a quick overview of changes
  • Version control solves a lot of the problems that occur if multiple people work on the same files at the same time
  • Version control makes it simple to clearly find out the author of any change

Or as a former colleague of mine put it (in a very vivid way):
Why to use version control!

Manual version control

The simplest form of version control is working with a backup copy. You make a copy of e.g. a configuration file before making a change to the live file. Afterwards you test your changes and if they seem to work you either delete the backup or keep it for reference. If the changes had undesired effects, the original is overwritten with the backup (and the latter usually deleted again). This is actually a (rather primitive but sometimes sufficient) form of version control: Thanks to the backup copy you have two versions of the file at hand!

Backup copy – we’ve all done it

Another variant is to make a copy at fixed intervals (or whenever it seems appropriate). Often people prepend the date to the file name. If all you want to accomplish is that e.g. the data as it was on the first of each month is preserved for one year, that’s also a sufficient method (along with rotating the backup copies so that you don’t keep more of them around than you need).
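In shell terms, both variants boil down to something like this (file names made up):

cp httpd.conf httpd.conf.bak               # safety copy before editing
cp data.db "$(date +%Y-%m-%d)_data.db"     # date-stamped copy for archiving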

Those means of manual version control are however pretty limited. And worse: There’s plenty of room for making mistakes!

Manual version control

How a version control system (VCS) works

Let’s think about a simplified versioning process by pretending to do things by hand. For each recorded change of a file you’d make a copy of the file and keep all the old versions of it. A VCS does not forget a single version of a file it monitors! That’s what it is meant to do, after all. Each new version of the file gets a comment that is meant to briefly sum up the changes that were made.

Keeping dozens or even hundreds of copies around because one file has that many revisions would really clutter your disk. Also it would not make sense to store nearly the same file twice if only one line was changed! That’s why local VCS, which also version on a per-file basis, keep a “history file” around for each versioned file. That file records only the changes between the various versions, as well as the comments.
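You can mimic that idea with standard tools: store only the difference between two versions and reconstruct the newer one when needed (file names made up):

diff -u report.txt.v1 report.txt.v2 > change.diff   # record only what changed
patch report.txt.v1 < change.diff                   # turn version 1 back into version 2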

Network-based VCS are able to organize a whole project (multiple files together instead of each one separately). They likewise record only the changes (plus comments) for each revision rather than full copies of the files. All of that data is collected in a so-called repository.

If a new team member wants to start working on the project, he or she first needs to get the files of that project. For centralized VCS this is done by checking out the most current revisions of all the project files from the remote repository. By doing so, a local working copy of the project files is created to be worked on by the user. When using a distributed VCS the remote repository is cloned instead (thus receiving the full repository with all revisions and not just the most current version of each file). The working copy is then checked out from the local repository clone.

At the beginning of a new project there is no repository, yet. In this case an empty repository is created and checked out, and the new working directory is populated with files. Next, those files are placed under version control (which means that the VCS is told to watch them and record changes). Then all changes (which here means all of the files, since they are all new right now) are placed into the repository by doing a commit.
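With a centralized tool like Subversion, that initial dance could look roughly like this (repository path and file locations made up):

svnadmin create /srv/repos/project           # create the empty repository
svn checkout file:///srv/repos/project wc    # check out an (empty) working copy
cd wc
cp ~/project-files/* .                       # populate the working directory
svn add *                                    # place the files under version control
svn commit -m "Initial import"               # record everything with a commit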

After every change made to the project you do a commit again, recording the changed state inside the repository. Other project members can then get a current copy from the repository. This way it’s easy to work together on the same project without the risk of (unknowingly) getting in the way of somebody else.

OpenBSD/FreeBSD (ZFS) dual-boot & thoughts about GPT/EFI

In the previous post I wrote about how to get a computer up and running with a dual-boot of FreeBSD and OpenBSD while using full disk encryption. This worked quite well, but a bit later I decided that it would be a good time to do the FreeBSD installation again – this time going the modern way and using ZFS on root. This led to a surprisingly large number of problems. In the end I got a working system that uses ZFS, so I’ve actually got an alternative howto for FreeBSD (the OpenBSD part remains the same as before and can be looked up in the previous post). But it’s all a bit different from what I first thought it would be.

Doing things today’s way: GPT

GPT (GUID Partition Table) is a more modern partitioning scheme meant to replace old MBR-style partitioning, freeing us from limitations we had to live with in the past. It can deal with drives of up to 8 zettabytes (10²¹ bytes) instead of being limited to 2 terabytes (10¹² bytes). That limitation used to be no problem and still isn’t too bad for most home users, since drives bigger than 2 TB are not that cheap, yet. But it obviously won’t be too long before this becomes a common issue.

OpenBSD bootloader chainloaded by boot0

A more serious limitation is that MBR only supports 4 partitions. Yes, we all used to call these “primary partitions” and made use of “extended partitions” nested inside one of them to get around that limit. BSD users created disklabels instead to embed more partitions inside disk slices (“MBR partitions”). With GPT this trouble is a thing of the past and you are free to create just about as many partitions as you think you need. There’s no need for embedding BSD disklabels or doing any MBR trickery – which is nice.

GPT also supports naming partitions. By using such labels we can do away with the old glabel mechanism that FreeBSD offers and use native GPT labeling instead. That advantage comes with a little disadvantage, though: By default FreeBSD’s gpart shows such partitions multiple times – once by their label and once by GPT-id. This can be a bit confusing when you first come to the GPT world. Fortunately there’s a simple solution: By setting a sysctl you can simply disable GPT-ids if you opt for labels (which you should, since they are much more meaningful).
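If memory serves, this is the knob in question – it’s a loader tunable, so it goes into /boot/loader.conf:

echo 'kern.geom.label.gptid.enable="0"' >> /boot/loader.conf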

GPT is a requirement for machines that use EFI booting. The good news is that all recent x86 hardware comes with EFI (and thus GPT) support. The bad news is that while some machines do support GPT when booting from EFI, they don’t if you choose to boot from legacy BIOS emulation. So if you want to stick to BIOS you may want to check if it is capable of booting a system on a GPT partition. My EliteBook 8470p is fine with GPT partitions in BIOS mode. So I was good to give GPT a try.

Problems with GPT

FreeBSD works great with GPT (and has done so for quite some time now), so there’s nothing wrong with installing it on GPT partitions. The first obstacle that I encountered was the boot manager. FreeBSD comes with the nice and simple boot0 tool that I used in my previous howto. Too bad that it’s MBR only! So continuing down the GPT path means using another boot manager. Usually GRUB is used for that purpose. And while I would certainly prefer to go without a boot manager that has “grand” in its name, I think that using it is acceptable.

The FreeBSD bootloader

A bigger problem awaits, however. OpenBSD didn’t support GPT prior to the current version 5.9. Since it does now, everything should be fine, right? Wrong. Unless I missed something, OpenBSD supports GPT only in conjunction with EFI booting! I did not find a way to use GPT partitions with OpenBSD in BIOS mode. Should anybody have more information on this, I’d very much like to know whether it is possible or whether support for it is planned for the future.

Choosing EFI?

This means that not too far down the road there’s already a solid blocker. I had next to no knowledge about the EFI complex and had successfully avoided the topic in the past. So my two options were to either give up on GPT and simply stick with MBR, or to make the bold move and go for EFI. I would prefer to try out things one after another but meh… I decided to read a bit about the basics of EFI and then give it a try. To be honest, I didn’t like what I read too much. Sure, there are quite a few interesting things that EFI can do. But then again I do not believe in unnecessary complexity. And above all: I don’t like the security implications of it. Not a bit. Trust is not something that I give away for free. Trust has to be earned. Unfortunately closed source vendors have done very little in the past that makes me think I want to trust them. Anyway… EFI is the (near?) future and it will not be easy to avoid it altogether in the next few years.

Alright. So I turned off legacy BIOS emulation on my machine and booted FreeBSD (there are separate EFI images for FreeBSD 10.3 – make sure you pick one of those if you want to go with EFI!) into the installer. Everything worked smoothly and after a couple of minutes my new FreeBSD system had booted. It was so simple and dull that there’s not too much I could write about (which is a good thing since everything really just worked).

Next step: Installing OpenBSD. I read that the memstick image supports EFI booting and I can confirm that installing works just as well as with FreeBSD. Again the new system works like a charm – the OpenBSD people have obviously done a good job for 5.9. OpenBSD was able to cope with GPT just like you would expect.

FreeBSD desktop: EDE, pcmanfm, terminator, smplayer

So far everything looked good. Final step: Make the computer offer a means of booting either system! Honestly, I did not expect this step to be a show-stopper – but it turned out to be exactly that. Most people seem to use the EFI boot manager rEFInd. You can install it from Windows, OS X and Linux. Yes, that’s it. Now, I was not surprised that it has not been ported to OpenBSD. But it did surprise me that it’s not available for FreeBSD!

It may be quite a simple thing to toss in a Linux CD and install an EFI bootloader – or maybe not. I have no clue if there are any other obstacles waiting. But this was the point where I stopped. I wanted a dual-boot BSD system and I don’t want to have to use Linux to get there. I’ve got my pride, too, you know. :p No seriously, at that point I decided to give up on EFI for now and continue another time. Maybe that will be worth its own blog post. Who knows.

Back to BIOS / MBR… for now

Here we are, back to using BIOS / MBR. I wanted a pure ZFS setup for FreeBSD but another problem showed up: The bootcode to load a kernel from ZFS is quite a bit more complicated than its UFS equivalent. For that reason it doesn’t fit into the boot sector of a partition but needs its own small partition instead. However, with gpart the type freebsd-boot seems to be supported only for GPT…

To get anything up and running again (my on-call laptop was still broken, after all, and my next duty was around the corner!) I settled on a small UFS partition to hold the unencrypted /boot.

Manual install: FreeBSD with ZFS, GELI

The auto_ashift adjustment is needed for ZFS to adhere to 4k alignment. And to be able to set that sysctl, the ZFS kernel module has to be loaded. That’s a little detail but it took me a moment to figure out what was going on. Everybody advises to set that sysctl, but the installer kept telling me that there was no such sysctl… Which makes sense once you know that the ZFS module is not loaded by default.

For the layout of the datasets I followed the defaults as used by the automated root-on-ZFS installation (zpool history is a very interesting command! If you don’t know it – try it!). I just changed the order so that similar datasets follow each other, which avoids a bit of typing by bringing back the previous command and editing it.
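In case you want to try that right away (using the pool name from the listing below):

zpool history zroot    # replays every command that was ever run against the pool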

It’s more likely than not that this is not the best way to do things. But it does work. Any suggestions or other comments are welcome of course!


In the partitioning shell:
tcsh
dd if=/dev/zero of=/dev/ada0 bs=1m
gpart create -s mbr ada0
gpart add -a 4k -t freebsd -s 98G ada0
gpart add -a 4k -t freebsd ada0
gpart create -s bsd ada0s1
gpart bootcode -b /boot/boot0 ada0
gpart bootcode -b /boot/boot ada0s1
gpart set -a active -i 1 ada0
gpart add -t freebsd-ufs -s 2G ada0s1
gpart add -t freebsd-swap -s 4G ada0s1
gpart add -t freebsd-zfs ada0s1
glabel label clear /dev/ada0s1a
glabel label swap /dev/ada0s1b
glabel label system /dev/ada0s1d
newfs /dev/label/clear
dd if=/dev/random of=/dev/label/system bs=1m
geli init -b -s 4096 -l 256 /dev/label/system
geli attach /dev/label/system
kldload zfs
sysctl vfs.zfs.min_auto_ashift=12
zpool create -o altroot=/mnt -O compress=lz4 -O \
atime=off -m none -f zroot /dev/label/system.eli
zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
zfs create -o mountpoint=/usr -o canmount=off zroot/usr
zfs create -o mountpoint=/var -o canmount=off zroot/var
zfs create -o mountpoint=/tmp -o exec=on -o setuid=off \
zroot/tmp
zfs create zroot/usr/home
zfs create zroot/usr/src
zfs create -o setuid=off zroot/usr/ports
zfs create -o setuid=off zroot/var/tmp
zfs create -o exec=off -o setuid=off zroot/var/audit
zfs create -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/log
zfs create -o atime=on zroot/var/mail
zfs set mountpoint=/zroot zroot
exit
exit

In the "final modifications" chroot:
mkdir /realboot
mount /dev/label/clear /realboot
mv /boot /realboot
ln -s /realboot/boot /boot
echo 'geom_eli_load="YES"' >> /boot/loader.conf
echo 'zfs_load="YES"' >> /boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot/ROOT/default"' >> \
/boot/loader.conf
echo '/dev/label/swap.eli none swap sw 0 0' >> \
/etc/fstab
echo '/dev/label/clear /realboot ufs rw 1 1' >> \
/etc/fstab
sysrc zfs_enable="YES"

Setting up a FreeBSD/OpenBSD dual-boot with full disk encryption

A bit over a month ago, I bought my first refurbished laptop. Previously I used a ThinkPad (owned by the company I work for) for on-call duty. It’s running a Linux distro which would not be my first choice at all, it has a small screen and – it’s not my property. I wanted my own laptop and since we’re allowed to use whatever distro we prefer, I thought that I’d be going with Arch.

(If you’re just interested in the commands to enter, have a look at the end of this post where I put a list of them.)

*BSD in production

On second thought: Why not use *BSD? For me it would mean using a *BSD desktop “in production” after only ever running it privately. Thanks to the great BSDNow! show I now feel confident enough to give it a try. The company that I work for runs some FreeBSD servers, too, so it’s not something entirely strange and unknown. I asked whether using BSD for on-call duty was OK. The answer was what I expected: If I thought that it would work, I should well try it. The only requirement was that I’d encrypt the disk (the same rule would apply to Linux, too, of course).

Next question: Which BSD to use? Since I’m just getting into *BSD, I’m not really familiar with all of them yet. Net and Dragonfly would certainly be interesting, but since I need that box for work, they’re not an option: I need something that I know well enough to be able to work with. Of course it would be best if I could learn something at the same time… So, what’s the best way to learn more? Probably tracking -CURRENT! But what if something breaks? I cannot afford that. And which BSD to use anyway? I work with some FreeBSD servers, so more in-depth FreeBSD knowledge would make sense. Then again I’ve really come to like Puffy and all he stands for…

That would be a hard decision! Finally I decided not to decide – and to just install both instead. This also has the advantage of having a second system if either CURRENT should ever break!

Hardware: HP EliteBook

I bought an HP EliteBook 8470p. Why didn’t I go with Lenovo even though those are known to work best with *BSD and I obviously need something that seriously works? Well, there’s one reason for me: On ThinkPads the keyboards just totally suck. I have no idea who came up with that sad idea of “Hey, let’s just put the Fn key where Ctrl belongs and vice versa!”. No idea whatsoever. But I know for sure that it drives me insane. No fun at all when you’re working on-call at four AM, barely awake, and nothing happens when you have to Ctrl-C something quickly. I could never ever get used to that!

So for that very reason it had to be some other hardware. I had this older HP laptop that a friend sold me for a few bucks a while ago. I can’t remember which model exactly and cannot look it up since I don’t have it anymore. (When my mother’s old computer died as I was over on a visit, my father thought about replacing it with a Windows box since that’s the only thing that he knows. To avoid that, I set up said old HP laptop that I had with me as a replacement and gave it to her. She’s been using it happily since.) That laptop had been a pleasant experience when I had OpenBSD on it and so I decided to give that EliteBook a try.

It works fairly well for most things. On FreeBSD there was the known problem with the Intel video driver, but since I’m running 11-CURRENT, video is working great, even when I quit X11. WiFi is detected according to dmesg, but for some reason no iwn0 shows up when I run ifconfig. I didn’t have time to look into that further, however. On OpenBSD the backlight gets turned off when I quit X, so the screen stays a bit dark afterwards. Since I usually quit X to shut down the computer afterwards anyway, that’s only a minor issue. WiFi is correctly detected and I confirmed it to work. Suspend works when I close the laptop, but on wake-up the keyboard no longer works. These are the only issues that I’ve run into so far.

What is the exact use case?

FreeBSD can use ZFS while OpenBSD cannot. I’m not sure if FreeBSD’s and OpenBSD’s UFS/FFS filesystems are compatible (I think OpenBSD’s implementation lacks quite a few of the newer features). The encryption methods used by the two systems, however, are definitely not compatible. So it doesn’t matter anyway in this case and I’m free to choose whichever filesystem I want.

Since I’ll be compiling FreeBSD-CURRENT now and then (and in general plan to do some stuff that likes to have a lot of memory available), I decided to go with UFS. Yes, there are scenarios where ZFS is simply overkill! There’s only one drive in the laptop, it’s not extremely big and it won’t hold any important data. I have no need for any particular ZFS feature on that system, so going with UFS should be fine. (That plus the fact that I’m still reading Lucas’ and Jude’s excellent book on ZFS and intend to play with that filesystem on another machine.)

Prior to version 5.9 (released after I originally wrote this), OpenBSD only really supported the MBR partitioning scheme, so going with that was an easy choice. I’ll stick to MBR for now because I need some time to play with the newer options first. I’m going to do everything again in a VM so I can take screenshots for this article.

Installing FreeBSD

The installation begins just like an ordinary FreeBSD install: Boot up the installer media and make your way through the setup questions. When the installer asks about the partitioning however, we’re going to do that by hand.

Choosing to partition by hand

The plain Bourne shell is not very comfortable for interactive use, so it generally makes sense to switch to a more advanced shell (like tcsh) for convenience features like auto-completion. Should you not know which drives your machine has, camcontrol can help you. If you want to start with a clean drive, you can zero out everything with dd (when I bought my laptop it had Windows 7 on it, which I wanted to get rid of).
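Listing the drives the kernel has found is a single command, for example:

camcontrol devlist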

Zeroing out the disk

If you’re not familiar with what partitions and slices are, you may want to have a look at an older post where I wrote up a little excursion about that topic.

First an MBR is created and then two slices are added to it. The first one gets 100 gigabytes, the other one the rest (which is also about 100 GB in my case). Both slices are aligned to the 4k sector size of the hard drive. Then a BSD disklabel is added inside the first slice. After that, boot0 (a simple boot manager) is written to the drive and the standard bootcode into the first slice. Finally the first slice is marked as active for booting.

Slicing the disk

Now three partitions are created inside the BSD label: One for boot (which will hold the kernel and cannot be encrypted), one for swap and one for the system (which will be encrypted). Glabel is used to give these partitions more meaningful names than ada0s1a and the like. Since the system partition will be encrypted, it makes sense to write some garbage all across it first so that it is impossible to see which parts hold data and which do not. This takes quite a while and you could of course skip it. But as long as your patience lives up to your paranoia, that little bit of extra security is worth the wait!

Creating and labeling BSD partitions

Next the system partition is initialized with GELI, one of FreeBSD’s two military-grade encryption methods. I only use a passphrase to unlock it but you can also use a key file (or both) if you wish. After attaching the new GELI partition, a new GEOM provider, system.eli, is available, offering the clear data for you (and your programs) to use.

Creating and attaching the GELI partition

Now it’s time to format the two data partitions (the swap partition does not need any formatting). You could also use journaled UFS for the boot partition but it’s usually not necessary.

Creating filesystems

Copy over the boot directory and add two lines to loader.conf so that you’ll get the chance to unlock your GELI partition during system startup. What remains is writing an fstab. Notice that for some reason I forgot to put swap.eli in there on my screenshot (even though that’s what I have in my script). What this does is use a one-time key for your swap on each boot, making sure that any data remaining on the swap partition is useless once the system has been powered down. You do not have to initialize GELI for this – FreeBSD knows what to do when it sees swap.eli.

Mount the decrypted system partition on /mnt as that’s where the installer expects it. And don’t forget to create the clear directory, since fstab demands it and the system would not boot up correctly if it were missing. Then exit the shell and continue with the installer.

Copying over /boot and writing loader.conf and fstab

Once the installation has finished, the installer will ask you if you wish to make any final modifications. Answer yes and it will drop you into a shell in a chroot of your new system. Delete /boot (that directory lives on the encrypted system partition and the bootloader could not find the kernel there anyway) and make it a symlink pointing to /clear/boot instead. This step is not actually required, but if you don’t do it, you won’t be able to update your system the normal way. If you prefer to mount the real /boot by hand whenever you upgrade, that’s fine, too, of course.

Choosing to make final modifications

Exit the shell and let the installer reboot the machine, removing the boot media in the process. Your boot manager (boot0) will offer you two FreeBSD systems. Hit F1 to boot up FreeBSD. Don’t hit F2 – there’s no system there, yet.

Installing OpenBSD

The OpenBSD installer is neither pretty nor does it offer any kind of menu system. However, it is simple, effective and straightforward. Choose to install OpenBSD, set your keymap, enter a hostname, configure the network and set a root password.

Hostname, network and password configuration

Choose whether to run an SSH server by default, whether to prepare the system for X11 and whether the display manager XDM should be started automatically. Create a user now or do so later. When asked for the timezone, enter a ! instead to drop into a shell.

Going to a shell

If you don’t know your disks, look at the dmesg output for the name. Now use fdisk to change the type of the second partition from A5 (FreeBSD) to A6 (OpenBSD). Then use disklabel to create a swap partition and a main partition. Make absolutely sure that the latter has the type RAID!

Partitioning for OpenBSD

Encrypt the new softraid with bioctl, then exit the shell. Now enter the correct timezone and choose the newly created softraid device for the installation! Dedicate the whole softraid disk to OpenBSD but edit the partitions to fit your needs. You do not need a swap partition on the softraid because we created a separate one on the real disk, remember? For that reason, after OpenBSD has formatted the partitions you created, the installer will ask you if you want to add any other disks before you start the actual installation. You DO, because of that swap area.

Preparing crypto softraid

Once the installer has finished, reboot the machine. Now the boot manager says “F1 – FreeBSD” and “F2 – BSD”. The second one is your OpenBSD. The manager knows only the partition type and has no clue which system is on there.

Plain text summary

Here’s what you could type in for the shell parts of both installers:

FreeBSD


In the partitioning shell:
tcsh
dd if=/dev/zero of=/dev/ada0 bs=1m
gpart create -s mbr ada0
gpart add -a 4k -t freebsd -s 98G ada0
gpart add -a 4k -t freebsd ada0
gpart create -s bsd ada0s1
gpart bootcode -b /boot/boot0 ada0
gpart bootcode -b /boot/boot ada0s1
gpart set -a active -i 1 ada0
gpart add -t freebsd-ufs -s 2G ada0s1
gpart add -t freebsd-swap -s 4G ada0s1
gpart add -t freebsd-ufs ada0s1
glabel label clear /dev/ada0s1a
glabel label swap /dev/ada0s1b
glabel label system /dev/ada0s1d
dd if=/dev/random of=/dev/label/system bs=1m
geli init -b -s 4096 -l 256 /dev/label/system
geli attach /dev/label/system
newfs /dev/label/clear
newfs -j /dev/label/system.eli
mount /dev/label/clear /media
cp -Rp /boot /media
echo 'vfs.root.mountfrom="ufs:/dev/label/system.eli"' >> /media/boot/loader.conf
echo 'geom_eli_load="YES"' >> /media/boot/loader.conf
echo '/dev/label/system.eli / ufs rw 1 1' >> /tmp/bsdinstall_etc/fstab
echo '/dev/label/swap.eli none swap sw 0 0' >> /tmp/bsdinstall_etc/fstab
echo '/dev/label/clear /clear ufs rw 1 1' >> /tmp/bsdinstall_etc/fstab
mount /dev/label/system.eli /mnt
mkdir /mnt/clear
exit
exit

In the "final modifications" chroot:

rm -r /boot
ln -s /clear/boot /boot

OpenBSD


i
de
puffy
em0
dhcp
none
done
password
no
yes
no
no
!
dmesg | grep [ws]d0
fdisk -e sd0
setpid 1
A6
quit
disklabel -E sd0
a b
ENTER
4G
swap
a a
ENTER
ENTER
RAID
w
q
bioctl -c C -l /dev/sd0a softraid0
exit
Europe/Berlin
sd1
whole
e
Your layout here
w
q
sd0
OpenBSD
w
q
done
http
none
openbsd.cs.fau.de
pub/OpenBSD/5.9/amd64
done
done

Precomp (or: How to compress already compressed data?)

It’s a kind of strange feeling: While half of the IT world seems to either already be on fire or tremble with fear, I can choose freely whatever topic I want to write about this month. I haven’t had a Windows box for almost a decade now and the people who I work or keep in contact with are also mostly *nix only. So this post is not about encryption or ransomware at all. It is about useful, respectable compression. Or more precisely: The art of re-compressing already compressed data!

In January Precomp, a precompression utility, was open-sourced! The first two sections tell a bit about how I became interested in this topic and in Precomp. Skip them if you don’t want to read that kind of stuff.

Compressing compressed data?

When I was young and new to PCs, I once tried to compress a ZIP archive with ACE (a lesser known archiver that once was comparable to the more popular RAR). I knew that ACE offered stronger compression and so I thought that this should make the file smaller. Just imagine my surprise when it turned out that I was wrong!

I guess that most of us have a story like that to tell, a story from our childhood when compression was nothing short of magic. Later I began to understand that even though it does in fact start with “m”, it’s not magic but math (a subject that I totally sucked at in school – but fortunately I grasped enough to get a rough idea of how compression works ;)). Then there was no surprise anymore: Compressed data is not well suited for any other general-purpose compression method, even if it was compressed with a weak algorithm.

How to work around that? Well, decompressing the ZIP file and creating a new ACE archive does the trick in the case mentioned above. Of course things are not always that straightforward. If they were, I wouldn’t really have much to write about right now and this post would be really, really short!

For whatever reason, compression continued to fascinate me and I loved compressing things to sizes as tiny as possible. It was fun to try out new experimental compression programs specialized in specific types of files. I did that for years – until I had to stop due to a lack of time.

Games

Let’s fast forward some years from that failed compression experiment with ACE; I had replaced DOS 6.22 with Win95, which I had replaced with Win98 (SE), which I had replaced with WinME, … One day I wanted to install Quake ]|[ Arena (yes, friends, I once was 1337 enough to spell it like that!) on my main computer to get into it again for a LAN party the next weekend. So I went looking for the darn CD. It took me a while but I finally found the CD case. I opened it up and… the CD itself was missing. Oh great! Since I didn’t feel like looking into all the other cases to find out which one I might have put it in accidentally, I decided to just copy it off an older computer which had it already installed (id were nice people. I don’t remember which version of Q3A it was, but there eventually was an official patch which also removed the CD check for the game, so there was no need for a crack or anything).

Now, different versions of Windows didn’t always play together too well on the LAN and since my Quake installation was on a computer with an older Windows (and I didn’t have another cable at hand), I decided that I’d just burn it to CD. It turned out, however, that the other machine didn’t have vanilla Q3A installed but the expansion pack as well. Together it was obviously too big to fit on one CD. There would have been easy solutions: Leave out the resource files for the expansion, burn two CDs, put the hard drive into the new computer, … Sure, easy solutions are nice and all. But sometimes they are also boring! And when you’re young and have some free time, you don’t do boring stuff. So of course I opted for the more challenging solution: Get it all on one CD!

Quake 3’s resource containers go by the file extension of .pk3 and, more importantly, are in fact ZIP files without any compression. This meant that they could be compressed well because there was no ZIP compression getting in the way. But guess what: Even after applying the most extreme compression programs, the result simply would not fit onto one CD…

Bad luck, eh? Well, not really. Unpacking the container files was in fact the solution in this case. Not because of weak compression but because it enabled me to test each of the contained files separately with all compressors and to group together all files that compressed best with one compression utility or another! I think I was able to shrink it all down to just a couple of megs over the CD limit. There were blank CDs with 800 MB capacity as well, so it would have fit onto one of those – but I didn’t have one. So I replaced the id video with an empty video file and I was set.

Since I liked doing these things, I began making backups like that for a lot of my favorite games: ripping apart (and later rebuilding) resource containers, converting between file formats, decompressing whatever could be decompressed before applying stronger compression, etc.

How Precomp works

The more I got into free and open source things, the more I wondered if some of them wouldn’t benefit from better compression. A friend and former classmate of mine invented Precomp and I of course was among the first to make use of it and provide feedback. But what is Precomp?

Precomp is what the name says: a pre-compressor. It is not directly meant to reduce the size of files. On the contrary: It can make some files even bigger than the original input. But that’s a good thing, really! How’s that? Well, it’s meant to prepare files for compression so that eventually they can be compressed to a smaller size than the original file could be – without losing data, of course!

What Precomp does is look for streams in its input file that are compressed with a compression method known to Precomp. It then decompresses and recompresses them so that the two versions can be compared. If they are identical, Precomp will write the decompressed stream (plus instructions on how to recompress it properly) to its output file.

While this sounds quite simple in theory, it is in fact a bit more complex. The reason for that lies in the flexibility of some compression algorithms. Have you ever zipped up a file? Then you know that there are a lot of parameters you can provide which affect how the file will be compressed: “fast”, “normal”, “strong” or “maximum” compression? What about the dictionary size? A lot of things like that. Any combination of compression parameters will result in a valid zip stream that can be decompressed by any zip-compatible utility. Replacing such a stream with a compatible one is fairly easy. Reproducing the exact, bit-for-bit identical stream is not.

To be truly lossless, Precomp uses trial and error on each stream. If it can figure out the combination of parameters that results in the original stream: Great! If not, that stream has to be left untouched.
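Here’s a much simplified sketch of that trial-and-error idea, using whole gzip files instead of the zlib streams Precomp actually hunts for (file name made up). Note that even this can fail for reasons as mundane as a timestamp in the gzip header – exactly the kind of detail that makes bit-identical recompression hard:

gzip -dc mystery.gz > raw                 # extract the decompressed data
for level in 1 2 3 4 5 6 7 8 9; do        # try each compression level
    gzip -c -$level raw | cmp -s - mystery.gz &&
        echo "level $level reproduces the original stream bit for bit"
done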

What Precomp can do

Early versions of Precomp were only available on Windows but there have been Linux versions for quite a while as well. I also use it on FreeBSD without any problems. The .PCF files are platform-independent: You can restore the original file on Windows from a file precompressed on Linux or BSD and vice versa.

While Precomp originally was only a pre-compressor for zlib streams (which are used in a variety of file formats like ZIP, GZIP, PNG, PDF, …), it can do more things now. It can use bzip2 to compress its input file after precompression. It can losslessly compress some JPEG pictures to smaller sizes (thanks to an external library). And in the current development version there’s even support for compressing MP3 music files further (also using an external lib)!
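Basic usage is as simple as it gets. The flags below are from my memory of the precomp-cpp README, so double-check against the program’s help output:

precomp document.pdf      # precompress: writes document.pcf
precomp -r document.pcf   # restore the bit-identical original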

Currently, Precomp relies on temporary files for all the extracted streams and thus puts heavy load on your hard drive (and is a bit slow due to that bottleneck). SSDs obviously perform better, but it totally makes sense to use a memdrive if you can spare some RAM for it. I’ve forked the project on Github and added an experimental shell script to assist with the creation of such a memdrive. It’s currently FreeBSD only (I’ve migrated all of my boxes to *BSD and currently have no Linux machine remaining but will set up one for cases like that some time in the future). Feel free to take a look at it if you’re into portable shell scripting and please do tell me if you have any suggestions!
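Creating such a memdrive by hand is simple enough on FreeBSD – roughly like this (size and mount point made up, and assuming Precomp places its temporary files in the working directory):

mkdir -p /tmp/precomp-md
mdmfs -s 4g md /tmp/precomp-md    # create and mount a 4 GB memory-backed file system
cd /tmp/precomp-md                # work from here so the temporary files land in RAM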

Precomp is not at all at the limit of its possibilities. There are a lot of things that can be tweaked, optimized or added. If you feel like that could be a fun project – go ahead and play with it, it’s on Github. Or perhaps you have an idea what this could be useful for? Please help yourself and use it. It’s free software after all (Apache licensed).

School, exams and… BSD!

Alright, January is already almost over, so there’s not much use in wishing my readers a happy new year, right? I wanted to have this month’s blog post out much earlier and in fact wanted to write about a completely different topic. But after January 27th it was pretty obvious to me what I’d have to write about – on that day I passed my final exam and I’m now a Computer Science Expert by profession. Time to take a look back at the apprenticeship and the status of *nix in German IT training today.

Spoiler: It’s Microsoft, Microsoft and again Microsoft. Only after that is there a single drop of Linux in the ocean. I had left the (overly colorful) world of Windows in 2008. When I started the apprenticeship I was determined not to eat humble pie and come crawling back to it. While it was at times a rather tough fight, it was possible to pull through. And I’m documenting it here because I want to encourage other people to take this path as well. The more people take up the challenge, the easier it will become for everyone. Besides: It is absolutely necessary to blaze the trail for better technology to actually arrive in mainstream business. This is of great importance if we do not want to fall behind completely.

Detours

I didn’t take the straight way into IT. While I had been hooked on computers since I was a little child, I also found that I had a passion for explaining things to others. I gave private lessons after school for many years and after passing the Abitur (think of the British A levels) I chose to go to university to become a teacher.

It took me a very long time of struggle to accept that I could not actually do that for a living. I am in fundamental opposition to how the German school system is being ruined and I could not spend all my work life faithfully serving an employer that I have not even the least bit of respect for.

The situation is as follows: We once had a school system in Germany that aimed at educating young people to be fit for whatever their life holds. The result was people who could stand on their own feet. Today the opposite is true: A lot of people who leave school have no idea how to find their way in life. Playing computer games is the only thing that a lot of young men (and an increasing number of women) actually do. They have not developed any character, they have no passion for anything (and thus no goals in life) and they often haven’t learned any empathy at all (and thus keep hurting other people – not even out of bad will but out of total ignorance).

At the same time the things taught in school aim purely at making people available as workers as soon as possible. Sounds contradictory? Sure thing. At university I enjoyed the benefits of the old system where there was relatively large academic freedom and you were encouraged to take your time to learn things properly, to do some research if you hit topics of interest to you, to take courses at other faculties, etc. And this is the pure insanity: All of that is largely gone. New students are forced to hasten through their studies thanks to tight requirements (which semester to take which course in – very schoolish, no freedom at all)… In the name of “comparability” we did away with our own academic degrees only to adopt the inferior “master” (as well as the even more inferior “bachelor”).

Secondary schools are lowering their standards further and further so that almost anybody can get their A levels and flood the universities. At the same time not enough people remain for the other paths of education – and those who do are far too often of no use to the companies: People who can be described as unreliable at best are of no use at all. I did not want to be part of that madness and so I finally decided to get out and do what I probably should have done right from the start.

Vocational school: Windows

The German vocational school system is a bit special: You only go to school one or two days a week (this varies between semesters). What about the other days? You spend them in a company that you apply at before you can start the apprenticeship. That way you get to know the daily work routine right from the start (which is a really good thing). School is meant to teach some general skills while at work you learn the practical things.

On the first day I went to vocational school, I kind of felt… displaced. Why? Well, going back to school after having been the one who teaches takes a moment to adjust to. I enjoyed teaching in general (even though there are always horrible classes as well ;)) but becoming a student again afterwards is really strange. At least for a while.

The subject matter was extremely easy for me. But being almost 30 years old when I started the apprenticeship of course meant that I had a lot more knowledge and experience than the typical 18 or 20 year old student. This was a good thing for me, since I also have a wife and two children and had to drive about 1.5 hours to school and the same distance back – which meant that I had far less time for homework or studying than the others. In fact I only found a few hours to study for the preliminary exam as well as for the final exam. But that was it.

We had PCs with Windows XP and were required to work with that. Most of my classmates protested because they were used to Windows 7. I simply installed Cygwin, changed the panel position to top and things were pretty much OK for me (it’s only for a few hours, right?). A while later we got new PCs with Windows 8(.1?) and new policies. The latter made it impossible for me to use Cygwin. Since I had never touched anything after Windows XP, I took my time to take a look at that system. In fact I tried to be open to new things and since a lot of time had passed since I left Windows, I no longer had any strong feelings towards it. Still, Win 8 managed to surprise me: It was even worse than I had thought possible…

The UI was just plain laughable. I have no idea how anybody could do actual work with it using the mouse. Now, I’m a console guy and I need no mouse to do stuff (if I at least have Cygwin, that is). But that must have been a joke, right?

Then I found out that Windows still was not capable of even reading an EXT2 file system. Oh my. So I decided to format one USB key to FAT32 for school. But guess what? When I attached it, Windows made some message pop up that it was installing drivers – which then failed… I removed the USB key and inserted it again. Same story. A classmate told me to try another USB connector. I thought that he was fooling me but he insisted on it so I did it (expecting him to laugh at me any second). To my big surprise this time the driver could be installed! But the story does not end here. No drive icon appeared in the explorer. I removed the USB key again and reattached it once more. Nothing. My classmate took it out yet again and plugged it into the former connector (the one from which installing the driver failed). And this time the drive appeared in the explorer! It was that moment that I realized not too much had changed since XP – despite the even uglier looks. Bluescreens, program crashes and cryptic error messages that I had not seen in years all were back.

I realized that I could not work like that and decided to bring a laptop each school day. Just about all my classmates were fine with Windows, however. But speaking of classmates: We lost five of them in the first two years. Two simply never showed up again, two more were fired by their companies (due to various kinds of misconduct) and thus could not continue their apprenticeship, and the last one had a serious problem with alcohol (being just 17 years old) and was also fired.

BYOD: Linux desktop

My laptop was running Linux Mint. When I bought it, it came with Mint pre-installed. My wife got used to that system and did not like my idea of installing a different one (I mainly use Arch Linux as a desktop at work and on other PCs at home), and so Linux Mint stayed on there.

There were a few classmates interested in Linux in general. They quickly became the ones that I spent most of my time in school with. Three already had some experience with it, but that was it. One of them decided that it was time to switch to Linux about a year ago. I introduced him to Arch and he’s been a happy Antergos (an Arch-based distro) user since then. Another classmate was also unhappy with Windows at home. I answered a few questions and helped with the usual little problems and she successfully made the switch and runs Mint now.

Some teachers couldn’t quite understand how one could be such a weirdo and not even have one single Windows PC. We were supposed to finish some project planning using some Microsoft software (I forgot the name of it). I told the teacher that the required software wouldn’t run on any of my operating systems. Anything other than Windows obviously wasn’t thinkable for him and he replied that in that case I’d really have to update! I explained to him that this was not the case, since I ran a rolling-release distro which was not just up to date but in fact bleeding edge.

When he understood that I only had Linux at home, he asked me to install Windows in that case. Now I told him that I didn’t own any current version of Windows. He rolled his eyes and replied that I could sign up for some Microsoft service (“dream spark” or something?) where each student or apprentice could get it all for free. Then I objected that this would be of no use since I could not install Windows even if I had a license because I did not agree to Microsoft’s EULA. For a moment he did not know what to say. Then he asked me to please do it at work then. “Sorry”, I replied, “we don’t use Windows in the office either.” After that he just walked away saying nothing.

We were required to learn some basics about object-oriented programming – using C#. So I got mono as well as monodevelop and initially followed the course.

Another Laptop: Puffy for fun!

I got an older laptop for a really cheap price from a classmate and put OpenBSD on it. After having played a bit with that OS in virtual machines I wanted to run it on real hardware, and this seemed to be the perfect chance to do so. OpenBSD with full disk encryption and everything worked really nicely and I even got monodevelop on there (even though it was an ancient version). So after a week I decided to use that laptop in school because it was much smaller and lighter (14″ instead of 18.3″!) – and also cheaper.😉

After upgrading to OpenBSD 5.6, however, I realized that the mono package had been updated from 2.10.9p3 to 3.4.0p1, which broke the ancient (2.4.2p3 – from 2011!) version of monodevelop. Now I had the option of bringing that big Linux laptop again or downgrading OpenBSD to 5.5. I decided to go with option 3 and complain about .NET instead. By now the programming course teacher already knew me and I received permission to do the exercises in C++ instead! He just warned me that I’d be mostly on my own in that case and that I’d of course have to write the classroom tests on C# just like everyone else. I could live with that and it worked out really well. Later, when we started little GUI programs with WinForms, I would have been out of luck even on Linux and mono anyway. So I did those with C++ and the FLTK toolkit.

Around Christmas I visited my parents for some days. My mother’s computer (a Linux machine I had set up for her) had stopped working. As my father decided that he’d replace it with a new Windows box (as that’s what he knows), I gave up my OpenBSD laptop: I installed Linux on it again and gave it to my mother as a replacement to prevent her from having to re-learn everything on a Windows computer…

Beastie’s turn

So for the last couple of weeks I was back on Linux. However, the final exam consists of two parts: a written exam and an oral one. The latter is mostly a presentation of a 35-hour project that we had to do last year. I took the chance and chose a project involving FreeBSD (comparing configuration management tools for use on that particular OS). We also had to hand in documentation of that project.

Six days before the presentation was to be held, I decided that it would suck to present a FreeBSD project using Linux. So I announced to my wife that I’d install a different OS on the laptop, did a full backup, inserted a PC-BSD 10.2 CD and rebooted. What happened then is a story of its own… With FreeBSD 10.3 just around the corner, I’ll wait until that is released and write about my experiences with PC-BSD in a future blog post. Just this much for now: I have PC-BSD installed on the laptop – and that’s what I’m using to write this post.

The presentation went more or less well, too (I had a problem with LibreOffice). But the big issue was that I had obviously chosen a topic that was too much for my examiners. My documentation was “too technical” (!) for them, and they would have liked to see “a comparison with other operating systems, like Windows (!)” – which was simply far beyond the scope of my project… I ended up with a mediocre mark for the project, which stands in complete contrast to my final grade from the vocational school (where I missed a perfect average by 0.1).

Ok, I cannot say that this came as a complete surprise. I had been warned. Just a few years earlier, another apprentice chose a Linux topic and even failed the final exam! He took action against the examiners, and the court decided in his favor. His work was reviewed by people with Linux knowledge – and all of a sudden he was no longer failing but in fact got a 1 (the German equivalent of an A)! I won’t sue anybody since I passed. Still, my conclusion is that we need more people who dare to put *nix topics on the list. I would do it again anytime. If you’re in the same situation: please consider it.

Oh, and another small success: the former classmate who runs Antergos also tried out FreeBSD on his server after I recommended it. He has come to like jails, the ports system and package audit, among other things. One new happy *BSD user may not be much, but it’s certainly a good thing! Also, all of my former classmates now at least know that *BSD exists. I’ve given presentations about it and mentioned it on many occasions. Awareness of *nix systems and what they can do may lead to someone giving them a try some time in the future.

Top things that I missed in 2015

Another year of blogging comes to an end. It has been quite full of *BSD stuff – so much so that I’d say: as far as this blog is concerned, it has been a BSD year. This was not actually planned, but it isn’t a real surprise either. I’ve not given up on Linux (which I use on a daily basis as my primary desktop OS), but it’s clear that I’m fascinated with the BSDs and will try to get deeper into them in 2016.

Busy as the year was, there were quite a few things that I would have liked to do and blog about that never happened. I hope to be able to do some of them next year.

Desktops, toolkits, live DVD

One of the most “successful” (in terms of hits) article series was the desktop comparison that I did in 2012. A lot has happened in that field since then, and I really wanted to do it again. Some desktops are no longer alive, others have become available in the meantime, and it’s a sure thing that the amount of memory needed has changed as well… 😉

Also, I’ve never been able to finish the toolkit comparison, which I stopped in the middle of writing about GTK-based applications. It was started in 2013, so it would also be about time. However, my focus has shifted away from the original intent of finding tools for a lightweight Linux desktop. I’ve become involved with the EDE project (“Equinox Desktop Environment”), which uses the FLTK toolkit, so people could argue that I’m not really unbiased anymore. Then again… I chose to become involved because EDE was the winner of my last test series – and chances are that the reasons for that are still valid.

And then there’s the “Desktop Demo DVD” subproject that never really took off. I had an Arch-based image with quite a few desktops to choose from, but there were some problems: Trinity could not be installed alongside KDE, Unity for Arch was not exactly in good shape, etc. The biggest issue, however, was the fact that I did not have webspace available to store a big ISO file.

My traffic statistics show that there has been constant interest in the article about creating an Arch Linux live CD. Unfortunately, it is completely obsolete, since the tool used to create the image has changed substantially. I’d really like to write an updated version at some point.

In fact, I wanted to start over with the desktop tests this summer and had already begun. However, VirtualBox graphics hardware acceleration was broken on Arch at the time, and since that is a real blocker, I could not continue (has this been resolved since?).

OSes

I also wrote an article about HURD in 2013 and wanted to revisit a HURD-based system to see what has happened in the meantime. ArchHURD has been in a coma for quite some time, but just recently there was a sign of life. I wish the new developer the best of luck and will surely do another blog post about it once there’s something usable to show off!

The experiments with Arch and an alternative libc (musl) were stopped due to a lack of time but could be taken further. It has been an interesting project that I’d like to continue in some form at some point. I also had reviews of some interesting but lesser-known Linux distros in mind. Not sure if I’ll find time for that, though.

There has been a whole lot going on around both FreeBSD and OpenBSD. Still, I would have liked to do more in that field (exploring jails, ZFS, etc.). But those are things I’ll do in 2016 for sure.

Hardware

I’ve played a bit with a Raspberry Pi 2 and built a little router with it using a security-oriented Linux distro. It was a fun project to do, and maybe it is of use to somebody.

One highlight that I’m looking forward to messing with is the RISC-V platform, a very promising effort to finally give us a CPU that is actually open hardware!

Other things

There are a few other things that I want to write about and hope to find time for soon. I messed with some version control tools a while back, and I think this would make a nice series of articles. I also have something about DevOps in mind and want to do a brief comparison of some configuration management tools (Puppet, Chef, SaltStack, Ansible – and perhaps some more). If there is interest in that, I might pick it up and document some examples on FreeBSD or OpenBSD (there’s more than enough material for Linux around, but *BSD is often a rather weak spot). We’ll see.

Well, and I still have an article about the GPL vs. the BSD license(s) in store that will surely happen next year. That, and a few programming topics that I’ve been meaning to write about for a while now.

So – goodbye 2015 and welcome 2016!

Happy new year, everyone! As you can see, I have not run out of ideas. :)