Running OpenBSD on SPARC64 (HTTPd, packages, patching, X11, …)

In my previous post I described the process of installing OpenBSD 6.0 on a SPARC64 machine and updating it all the way to 6.5. Now it’s time to actually do something with it and get an idea of how well OpenBSD works on this architecture!

OpenBSD’s base system

The OpenBSD team takes pride in providing an ultra-secure operating system. It’s well known that the project’s extremely high standards only apply to the base system. Every now and then critics pop up and claim that this basically defeats the whole idea, some even accusing the project of keeping the base system so small that it’s useless by itself, just to be able to uphold those standards.

There’s a grain of truth to that: the base system is kept (relatively) small if you compare it to some of the fatter operating systems out there. But that’s about it; the allegations themselves could not be further from the truth. The base system includes doas, a simpler sudo replacement. It comes with tmux. OpenBSD even maintains its own fork of X.org, called Xenocara (not even FreeBSD ships an X11 server by default), and there’s in fact a lot that you can achieve with the base system alone! Let’s look at one such possibility.

HTTPd

Since the OpenBSD developers are convinced that a webserver is something to keep around all the time, there’s one in base. Originally this was the Apache HTTPd. The problem was that at some point the Apache Foundation decided to give up their old license and replace it with version 2.0 (they had been criticized a lot for GPL incompatibility, which the new version at least partially addressed). The new license was also less simple and permissive than before, and OpenBSD did not like it. For that reason they basically stayed with the old Apache 1.3 webserver for a long time. They maintained and patched it all that time, but the software really began to show its age.

So for version 5.6, OpenBSD finally removed the old Apache webserver from base and replaced it with Nginx. One release later, they did away with that, too, because they felt it was starting to become too bloated for their needs. Instead they imported OpenBSD HTTPd into base: a home-grown, very simple webserver. It has evolved over time, gaining features and becoming a fine little webserver, but it strives to stay simple.

The developers resist the temptation to add new features just because they could, and have even published a list of things that some people might want but that will never be implemented, because they would raise complexity to an unacceptable level. OpenBSD HTTPd does not want to be a webserver for everyone. It wants to be an ultra-secure webserver that does enough to be useful to many people. If you have any needs beyond what it offers – get another one.

Simple static website configuration of OpenHTTPd

The simplicity of HTTPd adds a lot to its beauty. I’ve written some HTML for a test page (see screenshot). All of the configuration that I need to do for HTTPd is as follows:

server spaffy.local {
    listen on egress port 80
}

Yes, that’s all that is required: I basically define a vHost (“Server” in HTTPd lingo) and have the application listen on the HTTP default port 80 on egress (a keyword which means whatever interface has the default route). Let’s check if that configuration really is valid by issuing

httpd -n

And it is! Impossible? No. Remember that OpenBSD comes with sane defaults. For that reason there’s usually very little that you need to configure. You could configure more, of course, and we’ll be doing just that a little later.

Now let’s force-start httpd (we need -f since the service is not enabled yet and we want to start it manually just this once):

rcctl -f start httpd

I’ve edited the /etc/hosts file on my laptop to be able to use the spaffy.local name. So now I can just type that into the address bar of my browser and reach the test page that the SPARC64 machine hosts. OK, a static page probably doesn’t impress you much. Fortunately that’s not all we can do while relying on just what base offers!
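On the laptop side this is a single additional line in /etc/hosts (the address here is just an example; substitute whatever your server got via DHCP):

192.0.2.80    spaffy.local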

Static test page displayed in browser

CGI

OpenBSD also comes with Perl as part of the default install. I got the Llama book several years ago, read through about two thirds of it and then decided that I didn’t like Perl too much. For that reason I never really did anything with it. But here I want to do something with what OpenBSD provides me, so Perl is the logical choice and I might finally put it to use. Here’s what I came up with:

#!/usr/bin/perl
use strict;
use warnings;

my $osname = `uname -s`;
my $osver = `uname -r`;
my $osarch = `uname -m`;
chomp($osname, $osver, $osarch);

my @months = qw( Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec );
my @days = qw( Sun Mon Tue Wed Thu Fri Sat );
my ( $sec, $min, $hour, $mday, $mon, $year, $wday, $yday ) = gmtime();

print "Content-type: text/html\n\n";
print "<html><head><title>Greetings!</title></head>";
print "<body>Hello from <strong>$osname $osver</strong> on <strong>$osarch</strong>!";
print "<br><br>This page was created by Perl $^V on $days[$wday], $months[$mon] $mday";

if (length($mday) < 2) {
  if (substr($mday, -1) eq "1") {
    print "st"; }
  elsif (substr($mday, -1) eq "2") {
    print "nd"; }
  elsif (substr($mday, -1) eq "3") {
    print "rd"; }
  else {
    print "th"; }
} else {
  # Two-digit days: 11th, 12th and 13th take "th", so the first digit
  # must not be "1" for the st/nd/rd endings to apply.
  if ((substr($mday, 0, 1) ne "1") and (substr($mday, -1) eq "1")) {
    print "st"; }
  elsif ((substr($mday, 0, 1) ne "1") and (substr($mday, -1) eq "2")) {
    print "nd"; }
  elsif ((substr($mday, 0, 1) ne "1") and (substr($mday, -1) eq "3")) {
    print "rd"; }
  else {
    print "th"; }
}

print ", $hour:";
if (length($min) == 1) {
  print "0";
}
print "$min (UTC)</body></html>";

Nothing too fancy, but for a first attempt at writing Perl it’s probably OK. After making the script executable, I can run it on the system and get the expected output. Things get a little more complicated, though: HTTPd runs in a chroot for security reasons, and just copying the script into the chroot and trying to execute it there fails with “no such file or directory”.

Huh? I just copied it there, didn’t I? I sure did. The reason is that the Perl interpreter is not available in the chroot. So let’s copy that one over as well and try again. Abort trap! How does the saying go? Getting a different error can be considered progress…

Perl CGI script failing due to chroot

OK, now Perl is there, but it’s not functional: it requires some system libraries that are not present in the chroot. Using ldd on the Perl executable, I learn which libraries it needs. And after providing them, I can run the script in the chroot! There is a new problem, though: Perl complains about missing modules. The simplest solution in our case is to just remove them from the demo script as they are not strictly (haha!) necessary.
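In case you want to retrace this, the procedure looks roughly like the following. The library names are what I would expect for Perl on 6.5; trust the actual ldd output over this list:

# mkdir -p /var/www/usr/bin /var/www/usr/lib /var/www/usr/libexec
# cp /usr/bin/perl /var/www/usr/bin/
# ldd /var/www/usr/bin/perl
# cp /usr/lib/libperl.so.* /usr/lib/libm.so.* /usr/lib/libc.so.* /var/www/usr/lib/
# cp /usr/libexec/ld.so /var/www/usr/libexec/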

Providing Perl dependencies in the web chroot

On to the next step. Here’s a little addition to the HTTPd configuration:

    location "/cgi-bin/*" {
        root "/"
        fastcgi
    }

It adds different rules for the case that anything below /cgi-bin is requested: the document root changes for those requests and FastCGI is enabled. Now I only need to start the slowcgi service (OpenBSD’s shrewdly named FastCGI implementation) and restart HTTPd. My Perl program makes use of the system’s uname command, so that has to be made accessible in the chroot, too, of course.
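Putting it all together (slowcgi listens on /var/www/run/slowcgi.sock by default, which is also the socket that httpd’s fastcgi directive expects):

# cp /usr/bin/uname /var/www/usr/bin/
# rcctl -f start slowcgi
# rcctl -f restart httpd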

Finishing the dynamic webpage setup

And that’s it. The script is executed by the webserver and the expected page is generated, which is then served properly:

Dynamically created webpage displayed in browser

I think this is pretty cool. Try to do that with just the default install of other operating systems! BTW: Want to make HTTPd and slowcgi start automatically after boot? No problem, just put the following into /etc/rc.conf.local:

httpd_flags=""
slowcgi_flags=""

This makes the init system start both daemons by default (and you can of course drop the “-f” flag to rcctl if you need to interact with them).
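Alternatively, rcctl will write those lines for you:

# rcctl enable httpd
# rcctl enable slowcgi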

Binary packages

For OpenBSD 6.5, pre-built packages are offered for 8 of the 13 supported architectures – among them SPARC64. There are just a couple short of 9,500 packages available (on amd64 it’s 10,600 – so in fact most packages are there)!

Things like GCC 8.3 and even GNAT 4.9 (the Ada part of GCC which is interesting because it’s written in Ada and thus needs to be bootstrapped to every new architecture by means of cross-compiling) are among the packages, as is LLVM 7.0. When it comes to desktop environments, you can choose e.g. between recent versions of Xfce, MATE and Gnome.

Actually, SPARC64 is one of only 4 architectures (the others being the popular ones amd64, i386 and arm64) that are receiving updates to the packages via the packages-stable repository. In there you’ll find newer versions of e.g. PHP, Exim (which had some pretty bad remote exploits fixed), etc.

Basic OpenBSD package management

I chose to install the sysclean package. Remember when I said that I skipped deleting the obsolete files when updating the OS in my last post? This program helps in finding files that should be deleted. However it’s not too intelligent: it just compares a list for a fresh system to the actual system on disk. For that reason it also lists a lot of files that I wouldn’t want to delete. Still it’s helpful for finding obsolete files that you might have forgotten to remove.
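Installing and running it is as simple as it gets; sysclean just prints its findings to stdout, so a pager is your friend:

# pkg_add sysclean
# sysclean | less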

Sysclean shows a lot of possible remove candidates

Errata patches

While OpenBSD tries its very best to provide a safe-to-use operating system, in IT there really is nothing that is both useful and free of errors. If problems with some component of the system are found later, an erratum is published for it. If you are using OpenBSD in production, you are supposed to keep an eye on errata as they are released. Usually they consist of a patch or set of patches for the system source code as well as instructions on how to apply them and recompile the needed parts.

Since version 6.1, OpenBSD comes with a handy utility called syspatch(8), which can be used to fetch binary patches for all known errata that have not yet been applied to the OS on the respective machine. This is nice – but it’s only available for amd64, i386 and arm64. So on SPARC64 we still have to keep the system secure the old, manual way. Fortunately, errata patches are also applied to the -STABLE branch and we can use that to get all the fixes.

No syspatch on SPARC64 – tracking -STABLE manually as it used to be

To upgrade our installation to 6.5-STABLE, the first step is to get the operating system source of the current release (the sys tarball contains the kernel, src the rest of the base system). After extracting those, CVS is used to update the code to the latest 6.5-STABLE.
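For reference, the whole procedure looks roughly like this, assuming the two tarballs were downloaded to /tmp (any anoncvs mirror will do):

# cd /usr/src
# tar xzf /tmp/src.tar.gz
# tar xzf /tmp/sys.tar.gz
# cvs -qd anoncvs@anoncvs.ca.openbsd.org:/cvs up -rOPENBSD_6_5 -Pd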

Done getting the stable changes from CVS

Once that’s done, it’s time to build the new (non-SMP) kernel:

# cd /sys/arch/$(machine)/compile/GENERIC
# make obj
# make config
# make && make install
# reboot

Building a 6.5-STABLE kernel

On my SunFire v100 the kernel build took 1h 20m. I was curious enough to build the userland as well, just to see how long it would take… The answer is: 85h 17m! I think that LLVM alone took about three days. The rest of the system wasn’t much of a problem for this old machine, but LLVM certainly was.

BTW, I had problems with “permission denied” when trying to “make obj”. After reading the manpage for release(8), I found out that /usr/obj should be owned by build:wobj with 770 permissions, which had not been the case on my system.
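If you run into the same thing, the fix is quick, and the userland build afterwards follows the usual pattern:

# chown build:wobj /usr/obj
# chmod 770 /usr/obj
# cd /usr/src
# make obj && make build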

Kernel build complete

Having done that, I thought I might build Xenocara as well, to compare build times. So I got the sources for that, too, updated them via CVS and built it. It took 9h 26m to build and install.
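Xenocara brings its own top-level build targets; the procedure boils down to something like this (see release(8) for the authoritative steps):

# cd /usr/xenocara
# make bootstrap
# make obj
# make build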

Xenocara built from (-STABLE) source

X11 on SPARC64

I had left out all X11-related distribution sets when installing OpenBSD, but after building Xenocara from source I had it all available. So I decided to just do something with it. Since the server does not have a graphics card, I cannot run any X program on it directly, because the X server won’t start. I decided to install a graphical application that is not part of Xenocara first. After browsing through the list, I settled on Midori, a WebKitGTK-based webbrowser.

Installing the Midori browser via packages

It took a moment to install all the dependencies, but everything worked. As the next step I enabled SSH X11 forwarding and restarted SSH.
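That’s a single line in /etc/ssh/sshd_config plus a restart of the daemon:

X11Forwarding yes

# rcctl restart sshd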

Midori is installed, allowing X11 forwarding for SSH

After connecting to the SPARC64 machine via SSH and checking that the DISPLAY environment variable was set, I could just launch Midori and have it sent over to the laptop that I used to SSH into the other box. So the browser is executed on the SPARC64 server but displayed on my other machine.
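From the client side the whole thing boils down to something like this (user and host names are mine, of course):

% ssh -X user@spaffy.local
$ midori

If an application misbehaves with -X, it’s worth trying -Y, which requests trusted (less restricted) X11 forwarding.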

SSHing into the SPARC64 machine and forwarding Midori to my amd64 laptop

Everything worked well, I could even visit the OpenBSD homepage and it was rendered correctly.

The webkit-based browser works well on SPARC64!

Conclusion

OpenBSD is a fine operating system for people who value quality. Its SPARC64 port seems to be in pretty good shape: most packages and even stable-package updates are available. What is missing is syspatch support – but only three architectures have that right now. Also, the system compiler is still the ancient GCC 4.2, the last version before the GCC project switched its license to GPLv3.

OpenBSD 6.6 was released one day after I finished compiling 6.5-STABLE. On amd64 I could now use sysupgrade(8) to upgrade to the new release even more easily than before. This, too, is not supported on SPARC64. But these two little shortcomings just mean a little extra work that all OpenBSD users on any platform had to do anyway until not that long ago.

For 6.6 there are even more packages available for SPARC64; e.g. the Rust compiler has been bootstrapped on this architecture, which definitely is great news. Maybe the system compiler will change to LLVM/Clang one day, too. Right now the SPARC64 backend for Clang is incomplete upstream at the LLVM project, if I understood things right. But we’ll see; maybe it’ll become available in the future. I guess I’ll really have to get a newer SPARC64-based machine with a faster processor. Luckily OpenBSD supports quite a few of them.

OpenBSD on SPARC64 (6.0 to 6.5)

Earlier this year I came by an old SunFire v100 that I wrote about in my previous article. After taking a look at the hardware and the LOM, it’s time to actually do something with it! And that of course means installing an operating system first.

OpenBSD

OpenBSD, huh? Yes, I usually write about FreeBSD, and that’s in fact what I tried installing on the machine first. But I ran into problems with it very early on (never even reached single user mode) and put it aside for later. When I powered up the SunFire again last month, I needed an OS right away and chose OpenBSD for the simple reason that I had it available.

First I wanted to call this article simply “OpenBSD on SPARC” – but that would have been misleading since OpenBSD used to support 32-bit SPARC processors, too. The platform was just put to rest after the 5.9 release.

OpenBSD 6.0 CD set

Version 6.0 was the last release of OpenBSD that came on CD-ROM. When I bought it, I thought that I’d never use the SPARC CD. But here was the chance! While it is an obsolete release, it comes with the cryptographic keys needed to verify the next release. So the plan is to start at 6.0, since I can trust the original CDs, and then update to the latest release. This will also be an opportunity to recap some of the things that changed over the various versions.

Preparations

I had already prepared the machine for installation previously, so I only had to make a serial connection and everything was good to go. If you need to do this and don’t feel like reading the whole previous article, here are the important steps:

  1. Attach power to go to the lom prompt
  2. Issue boot forth and then poweron to go to the loader
  3. At the ok prompt use setenv boot-device cdrom disk to set the boot order
  4. Set an alias for the CD-ROM device with nvalias cdrom /pci@1f,0/ide@d/cdrom@3,0:f
  5. Reset the machine with reset-all or powerdown and then poweron again

Booting up the OpenBSD 6.0 sparc64 CD

Insert the OpenBSD installation CD for SPARC64 and after just a moment you should be in the installation program.

Installing 6.0

OpenBSD’s installation program is very simple. It’s basically an installation script that asks the user several questions and then goes ahead and does the things required for the desired options. In the Linux world e.g. Alpine Linux does the same, and I’ve always liked that approach.

OpenBSD 6.0 installer started

On a typical installation, the script would ask for the keyboard layout. But since we’re installing over serial here, that doesn’t matter; it asks for the kind of terminal instead. Since our CPU architecture is SPARC64, OpenBSD assumes we’re using a Sun terminal. Well, I’m not, so I choose xterm.

Of course we need a hostname for the new system. Since it’s Puffy (the OpenBSD mascot) on SPARC here, I settled on spaffy. 😉

Choosing the root password

Next is network configuration. DHCP is fine for this test machine. Then the root password is set.

Of course I want to access the box over SSH later, so that I don’t need the serial connection anymore and can put the machine in a different room. It’s not as loud as many x86 servers, but still quite a bit louder than you’d want for a machine sitting right next to you. Allowing root logins over SSH is very bad practice, so I create a user next and disallow remote root logins.

Selecting the partitioning

Then I choose my timezone. Next is deciding on the partitioning, and there I noticed a difference compared to i386/amd64 installations. I have a habit of creating partition b first (to put the swap space at the beginning of the drive). When I tried to do this, the installer told me that this architecture doesn’t allow it. I assume that limitation is due to Sun’s VTOC partitioning scheme that is used on SPARC machines. So I created the partitions in order.

What you can see on the screenshot is OpenBSD’s default partitioning. It’s more complex than many people may be used to, but for a good reason. Remember that you can mount filesystems with different options? That way you can e.g. have /tmp mounted noexec. OpenBSD makes good use of this, e.g. enabling or disabling W^X protection on a per-filesystem basis. This is not a production machine, though, and the drive is fairly small for today’s needs. So in the end I went with a much simpler way of dividing the drive.

Selecting the distribution sets to install

Finally I need to choose what to install. OpenBSD offers so-called “sets” for various parts of the full operating system. Since I’m only installing 6.0 as a starting point, I go with the minimum required options: The kernel (bsd) and the base system.

I have no use for the install (ramdisk) kernel (bsd.rd) or the multiprocessor (SMP) kernel (bsd.mp). I also don’t need the system compiler (comp), the manpages (man) or the small games (game). Of course I don’t need the X11-related sets, either.

Installation finished!

Then the installer goes off and prepares everything. When it has finished, the only thing that is left is rebooting the system (and removing the CD). Now we can also change the boot order in the ok prompt, to set it to booting from disk only, speeding up the boot time minimally:

ok> setenv boot-device disk

And that’s it! Now I have an old but known good version of OpenBSD on my SunFire box.

Freshly installed OpenBSD 6.0 booted up

Updating to 6.1

Alright. What’s next? Running a 3-year-old version of OpenBSD is probably not that good an idea if newer versions are available for this architecture – and they are.

So the first thing to do is fetching the ramdisk kernel of version 6.1 and the signature for it. Then I check the integrity of the kernel with signify(1). Everything is fine, so I go on and replace the standard kernel with the install kernel of the newer version. There’s probably a better way to do this, but the SPARC bootcode seems to have “bsd” hard-coded as the kernel file name and I admittedly didn’t dig very deep into figuring out a different way of booting alternate kernels.
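In concrete commands (with $MIRROR standing in for any HTTP mirror that still carries the old releases) this amounts to roughly:

# ftp $MIRROR/6.1/sparc64/bsd.rd
# ftp $MIRROR/6.1/sparc64/SHA256.sig
# signify -C -p /etc/signify/openbsd-61-base.pub -x SHA256.sig bsd.rd
# cp /bsd /obsd
# cp bsd.rd /bsd

(The copy of the old kernel to /obsd is just a precaution.)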

Getting 6.1 ramdisk kernel and verifying signature

After restarting, the system boots into the install kernel. This time I select upgrade instead of install, of course. The installer then checks the existing operating system (or at least the root partition).

I then select http for the location of the sets and point the installer to a mirror that still holds the old releases.

Installer started in upgrade mode

Next is selecting the distribution sets to be installed. Again I choose only the bare minimum, since this upgrade is just an intermediate step on the way to a current release.

In earlier versions of OpenBSD, etc was a separate set. Since the files required to check newer releases are in /etc, I’d have chosen a different installation strategy if they were still available separately. However the etc set has been included in the big base set for a while now.

Necessary sets updated

After the sets have been downloaded and extracted the upgrade is mostly complete. The remaining things are done in the live system. So it’s time to complete this step and reboot.

Configuration files get updated on first boot after the OS upgrade

OpenBSD automatically updates various configuration files for the new release. If you pay attention, you’ll see that there is one case where the changes could not be merged automatically, so I will need to see to that myself.

The system also checked whether newer firmware was available. It wasn’t, which really is no wonder on this old machine.

Merging OpenSSH config and adding installurl

After doing the manual merge of the OpenSSH configuration, it’s time to do the final tasks to complete the upgrade. OpenBSD keeps a detailed upgrade guide for each version that lists the required manual steps. In fact you should read it before doing the upgrade, since it can involve steps that need to be done prior to booting the install kernel and updating the base system! I skipped them, because they didn’t apply in my case – e.g. I hadn’t installed the manpages anyway.

I chose to only set the installurl, since that one is really convenient. Actually I should remove some obsolete files from the filesystem, too, but I decided to leave this for later as there is another method to do so.
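Setting it is a one-liner; any release mirror works, I’ll use the CDN here:

# echo "https://cdn.openbsd.org/pub/OpenBSD" > /etc/installurl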

Updating to 6.2

Getting the system updated to 6.2 means repeating what I did for the 6.1 update: Get the ramdisk kernel for the new release as well as the signature and verify it. Once that’s done, another reboot is in order.

Downloading and preparing OpenBSD 6.2 install kernel

One thing that’s different is that the installer now defaults to fetching from the web and not from CD. And thanks to setting the installurl before I rebooted, it also knows the default mirror to get the sets from, which makes the process of upgrading even more straightforward and convenient.

OpenBSD 6.2 installer: Now knows the URL to fetch from

Finishing the upgrade after the actual unpacking of the new files takes a bit longer for this version. After making all known device nodes, the installer re-links the kernel! This is due to a new feature called KARL (Kernel Address Randomized Link). The idea here is that the objects that make up the kernel are linked in random order for each reboot, essentially creating a new and unique kernel every time. This makes it much harder or even impossible to use parts of the kernel otherwise known to be in certain memory regions for sophisticated attacks.

OpenBSD 6.2 introduced Kernel re-linking (“KARL”)

Oh, and did you notice that the bsd.mp set is gone? This machine only has a single-core CPU and therefore the SMP kernel doesn’t make much sense. The installer detected the CPU and did not offer to install the SMP kernel (even though it of course is still available for machines with multiple cores).

As always, the system needs to be rebooted after the upgrade is complete. Just a moment later I’m greeted by my new OpenBSD 6.2! Again I’m skipping the manual steps to be taken afterwards.

OpenBSD 6.2 booted up

Updating to 6.3

Preparing and doing the upgrade for 6.3 is just like you’ve seen twice now, so I’m not going to repeat it. There’s one new feature in the installer that could be mentioned, though: After the upgrade is complete, the reboot option is now the default thing that the installer offers instead of just dropping you to a shell. This means you can save another 6 keystrokes when updating! Yay! 😉

OpenBSD 6.3 install kernel: Rebooting after completion is now the default choice

Updating to 6.5

The upgrade to 6.4 is simply more of the same. Of course I did that step, but I’m cutting it out here. 6.5 is the most recent release as I’m writing this (though 6.6 is already around the corner). This means I’m going to do one more upgrade, following the process that we know pretty well by now: Get and verify bsd.rd, boot it and select “Upgrade”.

Choosing all the sets except for X11-related ones for 6.5

This time I decide to install all the sets except for anything X11-related. The SunFire v100 is a server-class machine which does not even have a graphics card! For that reason there’s no VGA port to connect a monitor to, either. And while X11 could still be of some use, it’s simply not needed at all.

Upgrade to OpenBSD 6.5 complete

Again the upgrade process takes a bit longer, but that’s only due to the additional sets (as well as the base distribution growing a little with each release). After just a little while everything is done and there’s one more reboot to make.

OpenBSD 6.5 booted up and ready

All done! I now have a fine OpenBSD 6.5 system up and running on my old SPARC64 box. And even better: everything has been cryptographically verified to be the data that I want, and nobody has tampered with it. Sure, the system has not been cleaned up yet, and it’s just 6.5-RELEASE with no errata fixes applied. Still I’d say: we’re off to a good start! Aren’t we?

What’s next?

In the next post I intend to explore the system a little and find out where there are differences from a common amd64 installation of OpenBSD.

A SPARC in the night – SunFire v100 exploration

While we see a total dominance of x86_64 CPUs today, there are at least some alternatives like ARM and in the long run hopefully RISC-V. But there are other interesting architectures as well – one of them is SPARC (the Scalable Processor ARChitecture).

This article is purely historic; I’m not reviewing new hardware here. It’s more of a “20 years ago” thing (the v100 is almost that old) written for people interested in the old Sun platform. The intended audience is people who are new to the Sun world, either because they are too young, like me (while I had a strong interest in computers back in the day, I hadn’t even finished school yet and, heck, I was still using Windows!), or because they never had the chance to work with that kind of hardware in their professional career. Readers who know machines like this quite well and don’t feel like reading for nostalgic reasons might just want to skip this article.

The SPARC platform

SPARC is a Reduced Instruction Set Computing (RISC) Instruction Set Architecture (ISA) developed by Sun Microsystems and Fujitsu in 1986. Up to the Sun-3 series of computers, Sun had used m68k processors, but with Sun-4 they switched to 32-bit SPARC processors. The first implementation is known as SPARCv7. In 1992 Sun introduced machines with v8, also known as SuperSPARC, and in 1995 the first SPARCv9 processors became available. Version 9, known as UltraSPARC, is a 64-bit architecture that is still in use today.

SunFire v100: Top and front view

SPARC is a fully open ISA, taken care of by SPARC International. Architecture licenses are available for free (only an administration fee of $99 has to be paid), and thus any interested corporation can design, manufacture and market components conforming to the SPARC architecture. And Sun really meant it with OpenSPARC: they released the Verilog code for their T1 and T2 processors under the GPLv2, making them the first 64-bit processors ever to be open-sourced. And that’s not all: they also released a lot of tools along with it, like a verification suite, a simulator, hypervisor code and such!

After Sun was acquired by Oracle in 2010, the future of the platform became unclear. Initially, Oracle continued development of SPARC processors, but in 2017 completely terminated any further efforts and laid off employees from the SPARC team.

Fujitsu has made official statements that they are continuing to develop SPARC-based servers and has even spoken of a “100 percent commitment”. At the beginning of this year they even wrote about a resurgence of SPARC/Solaris on the company’s blog, and since they are the last vendor providing SPARC servers (which are still highly valued by some customers), chances are that they will continue improving SPARC. According to their roadmap, a new generation is even due in 2020.

So while SPARC is not getting a lot of attention these days, it’s not a dead platform either. But will it survive in the long run? Time will tell.

SunFire v100

I’m working for a company that offers various hosting services. We run our own data center where we also provide colocation for customers who want that. Years ago a customer ran a root server on a (now old) SunFire v100 machine. I don’t remember when it was decommissioned and removed from the rack, but that must have been quite a while ago.

SunFire v100: Back view

That customer was meant to come over and collect the old hardware, so we put the machine in the storage room. For whatever reason, he never came to get it. Since it had been sitting there for years now, I decided to mail the customer and ask if he still wanted the machine. He didn’t and would in fact prefer that we dispose of it. So I asked if he’d be OK with us shredding the hard drives and me taking the actual machine home. He didn’t have any objections, and thus I got another interesting machine to play with.

The SunFire v100 is a 1U server that was introduced in 2001 and went EOL in 2006. According to the official documentation, the machine came with 64 bit Solaris 8 pre-installed. It was available with an UltraSPARC IIe or IIi processor and had a 40 GB, 7200 RPM IDE HDD built-in. My v100 has 1GB of RAM and a 550 MHz UltraSPARC IIe. I also put a 60 GB IBM HDD into it.

It has a single PSU, two Ethernet ports as well as two USB ports. It also features two serial ports – and these are a little special. Not only are they RJ-45, they also have two different use cases: one is for the LOM (we’ll come to that a little later), the other is a regular serial port that can be used e.g. to upload data uninterrupted (i.e. not processed by the LOM). The serial connection uses 9600 baud, no parity, one stop bit and full duplex mode.

RJ-45 to DB9 cable and DB9 to USB cable

The other interesting thing is the system configuration card. It stores the host ID and MAC address of the server as well as NVRAM settings. What is NVRAM? It’s an acronym for Non-Volatile Random-Access Memory: a means of storing information that must survive power-off, which regular RAM does not. If you’re thinking “CMOS” in PC terms, you’re right – except that Sun used proper NVRAM and not an actually volatile memory made “non-volatile” by keeping the data alive with a battery. The data is stored on a dedicated chip, or in this case on a card. The advantage of the latter is that it can easily be transferred to another system, taking all the important configuration with it! Pretty neat.

Inside the v100

When I opened up the box, I was actually astonished by how much space there was inside. I know some old 1U x86 servers from around that time (or probably a little later) that really are a pain to work with. Fitting two drives into them? It’s sure possible, but certainly not fun at all. At least I hated doing anything with them. And those at least used SATA drives – I haven’t seen any IDE machines in our data center, not even with the oldest replacement stuff (it was all thrown out way before I got my job). But this old Sun machine? I must say that I immediately liked it.

SunFire v100: Inside view

Taking out the HDD and replacing it with another drive was a real joy compared to what I had feared that I’d be in for. The drive bays are fixed using a metal clamp that snaps into a small plastic part (the lavender ones in the picture). I’ve removed the empty bay and leaned it against the case so that it’s easier to see what they look like. It belongs where the ribbon cable lies – rotated 90 degrees of course.

Old x86 server for comparison – getting two drives in there is very unpleasant to do…

All the other parts are easily accessible as well: the PSU in the upper left corner of the picture, the CDROM drive in the lower right, and the RAM modules in the lower left. It’s all nicely laid out and well assembled. Hats off to Sun, they really knew what they were doing!

Lights out!

I briefly mentioned the LOM before. It’s short for Lights-Out Management. You might want to think IPMI here. While this LOM is specific to Sun, its basic idea is the same as that of the widespread x86 management system: it allows you to do things to the machine even when it’s powered off. You can turn it on, for example. Or change values stored in the NVRAM.

LOM starting up

How do we access it? Well, the machine has a RJ-45 socket for serial connections appropriately labeled “LOM”. The server came with two cables to use with it, one RJ-45 to DB26 (“parallel port”) used with e.g. a Sun Workstation, and a RJ-45 to DB9 (“serial port” a.k.a. “COM port”). Then you can use any of the various tools usually used for serial connections like cu, tip or even screen.

Just plug the cable into, say, your laptop and the other end into the A/LOM port; then you can access the serial console. If you plug in the power cable of the SunFire machine now, you will see the LOM starting up. Notice that the actual server is still off: it’s in standby mode, but the LOM is independent of that.
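If your client machine runs a BSD, too, the connection attempt could look like this; the device node depends on the serial adapter used (cuaU0 is typical for a USB-to-serial converter on OpenBSD, while Linux users would reach for something like /dev/ttyUSB0 with minicom instead):

% cu -l /dev/cuaU0 -s 9600

A lone “~.” at the beginning of a line ends the cu session again.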

LOM help text

By default, the LOM port operates in mixed mode, allowing to access both the LOM and the serial console. These two things can be separated if desired; then the A port is dedicated to the LOM only and the console can be accessed via the B port.

In case you have no idea how to work with the LOM, there’s a help command available to at least give you an idea what commands are supported. Most of these commands have names that make it pretty easy to guess what they do. Let’s try out some!

LOM monitoring overview (powered off)

Viewing the environment gives some important information about the system. Here it reveals that ALARM 3 is set. Alarm 1, 2 and 3 are software flags that don’t do anything by themselves. They can be set and used by software installed on the Solaris operating system that came with the machine.

I really have no idea why the alarm is set. It was that way when I got the server. Even though it’s harmless, let’s just clear it.

Disabling alarm, showing users and booting to the ok prompt

The LOM is pretty advanced, even supporting users and privileges. Up to four LOM users can be created, each with an individual password. There are four privileges that they can have: A for general LOM administration like setting variables, U for managing LOM users, C to allow console access, and R for power-related commands (e.g. resetting the machine). When no users are configured, the LOM prompt is not protected and has full privileges.

OpenBoot prompt

It is also possible to set the boot mode in the LOM. By doing this, the boot process can e.g. be interrupted at the OpenBoot prompt which (for obvious reasons) is also called the ok prompt. In case you wonder why the command is “boot forth” – this is because of the programming language Forth which the loader is written in (and can be programmed in).

ok prompt help

In the ok prompt you can also get help if you are lost. As you can see, it is also somewhat complex and you can get more help on the respective areas that interest you.

Resetting defaults and probing devices

OpenBoot has various variables to control the boot sequence. Since I got a used machine, it’s probably a good idea to reset everything to the defaults.

From the ok prompt it’s also possible to probe for devices built into the server. In this case, an HDD and a CDROM drive were found which is correct.

Setting NVRAM variables, escaping to LOM, returning to the ok prompt and resetting the machine

The ok prompt of course allows setting variables, too. Here I create an alias for the CDROM drive to avoid working with the long and complex device path. Don’t ask me about the details of the latter, however: I found this alias on the net and it worked. I don’t know enough about Solaris’ device naming to explain it.

Next I set the boot order to CDROM first and then HDD. Just to show it off here, I switch back to the LOM – using #. (hash sign and dot character). That is the default LOM escape sequence, however it can be reconfigured if desired. In the LOM I use the date command to display how long the LOM has been running and then switch back to the ok prompt using break.

LOM monitoring overview while the machine is running

Finally I reset the machine so that the normal startup process is initiated and an attempt at booting from the CDROM is made. I threw in a FreeBSD CD and escaped to the FreeBSD bootloader (which was also written in Forth until it was replaced with a Lua-based one recently).

Showing the monitoring overview while the machine is actually running is much more interesting, of course. Here we can see that all the devices still work fine, which is great.

LOM log and date, returning to console and powering off

Finally I wanted to show the LOM log and the return to the console. The latter shows the OK prompt now. Mind the case here! It’s OK and not ok. Why? Because this is not the OpenBoot prompt of the SunFire but the prompt of the FreeBSD loader, which is the second-stage loader in my case!

That’s it for exploring this old machine’s capabilities and special features. I just go back to the LOM again and power down the server.

Conclusion

The SunFire v100 is a very old machine now and probably not that useful anymore (can you say: IDE drive?). Still it was an interesting adventure for me to figure out what the old Sun platform would have been like.

While I’m not entirely sure whether this is useful knowledge (SPARC servers in the wild are more exotic than ever – and who knows what the platform has evolved into in almost 20 years!), I enjoy digging into Unix history. And Sun’s SPARC servers are most definitely an important piece of the big picture!

What’s next?

Reviewing this old box without installing something on there would feel very incomplete. For that reason I plan to do another article about installing a BSD and something Solaris-like on it.

The history of *nix package management

Very few people will argue against the statement that Unix-like operating systems conquered the (professional) world thanks to a whole lot of strong points – one of which is package management. Whenever you take a look at another *nix OS or even just another Linux distro, one of the first things to do (if not the first!) is to get familiar with how package management works there. You want to be able to install and uninstall programs, after all, right?

If you’re looking for another article on using jails on a custom-built OPNsense BSD router, please bear with me. We’re getting there. To make our jails useful we will use packages. And while you can safely expect any BSD or Linux user to understand that topic pretty well, products like OPNsense are also popular with people who are Windows users. So while this is not exactly a follow-up article on the BSD router series, I’m working towards it. Should you not care for how that package management stuff all came to be, just skip this post.

When there’s no package manager

There’s this myth that Slackware Linux has no package manager, which is not true. However Slackware’s package management lacks automatic dependency resolving. That’s a very different thing but probably the reason for the confusion. But what is package management and what is dependency resolving? We’ll get to that in a minute.

To be honest, it’s not very likely today to encounter a *nix system that doesn’t provide some form of package manager. If you have such a system at hand, you’re quite probably doing Linux from Scratch (a “distribution” meant to learn the nuts and bolts of Linux systems by building everything yourself) or have manually installed a Linux system and deliberately left out the package manager. Both are special cases. Well, or you have a fresh install of FreeBSD. But we’ll talk about FreeBSD’s modern package manager in detail in the next post.

Even Microsoft has included Pkgmgr.exe since Windows Vista. While it goes by the name of “package manager”, it pales in comparison to *nix package managers. It is a command-line tool that allows installing and uninstalling packages, yes. But those are limited to operating system fixes and components from Microsoft. Nice try, but what Redmond offered in late 2006 is vastly inferior to what the *nix world had more than ten years earlier.

There’s the somewhat popular Chocolatey package manager for Windows, and Microsoft said that they’d finally include a package manager called “OneGet” (apt-get anyone?) with Windows 10. I haven’t read a lot about it on major tech sites, though, and thus have no idea if people are actually using it and if it’s worth trying out (I would, but I disagree with Microsoft’s EULA and thus haven’t had a Windows PC in roughly ten years).

But how on earth are you expected to work with a *nix system when you cannot install any packages?

Before package managers: Make magic

Unix began its life as an OS by programmers for programmers. Want to use a program on your box that is not part of your OS? Go get the source, compile and link it, and then copy the executable to /usr/local/whatever. In times where you had maybe 100 MB of storage in total (or even less), this probably worked well enough. You simply couldn’t go on a rampage and install unneeded software anyway, and by sticking to the /usr/local scheme you separated optional stuff from the actual operating system.

More space became available, however, and software grew bigger and more complex. Unix got the ability to use libraries (“shared objects”), ELF executables, etc. To make building more complicated software easy, make was developed: a tool that reads a Makefile which tells it exactly what to do. Software began shipping not just with the source code but also with Makefiles. Provided that all dependencies existed on the system, it was quite simple to build software again.

Compilation process (invoked by make)

Makefiles also provide a facility called “targets”, which makes a single file support multiple actions. In addition to a simple make statement that builds the program, it became common to add a target that allowed make install to copy the program files into their assumed place in the filesystem. Doing an update meant building a newer version and simply overwriting the files in place.

Make can do a lot more, though. Faster recompiles by looking at the generated files’ timestamps (and only rebuilding what has changed and needs to be rebuilt) and other features like this are not of particular interest for our topic, but they certainly helped with the quick adoption of make by most programmers. So the outcome for us is that we use Makefiles instead of compile scripts.

Dependency and portability trouble

Being able to rely on make to build (and install) software is much better than always having to invoke compiler, linker, etc. by hand. But that didn’t mean that you could just type “make” on your system and expect it to work! You had to read the readme file first (which is still a good idea, BTW) to find out which dependencies you had to install beforehand. If those were not available, the compilation process would fail. And there was more trouble: Different implementations of core functionality in various operating systems made it next to impossible for the programmers to make their software work on multiple Unices. Introduction of the POSIX standard helped quite a bit but still operating systems had differences to take into account.

Configure script running

Two of the answers to the dependency and portability problems were autoconf and metaconf (the latter is still used for building Perl where it originated). Autoconf is a tool used to generate configure scripts. Such a script is run first after extracting the source tarball to inspect your operating system. It will check if all the needed dependencies are present and if core OS functionality meets the expectations of the software that is going to be built. This is a very complex matter – but thanks to the people who invested that tremendous effort in building those tools, actually building fairly portable software became much, much easier!

How to get rid of software?

Back to make. So we’re now in the pleasant situation that it’s quite easy to build software (at least when you compare it to the dark days of the past). But what would you do if you want to get rid of some program that you installed previously? Your best bet might be to look closely at what make install did and remove all the files that it installed. For simple programs this is probably not that bad but for bigger software it becomes quite a pain.

Some programs also came with an uninstall target for make however, which would delete all installed files again. That’s quite nice, but there’s a problem: After building and installing a program you would probably delete the source code. And having to unpack the sources again to uninstall the software is quite some effort if you didn’t keep it around. Especially since you probably need the source for exactly the same version as newer versions might install more or other files, too!

This is the point where package management comes to the rescue.

Simple package management

So how does package management work? Well, let’s look at packages first. Imagine you just built version 1.0.2 of the program foo. You probably ran ./configure and then make. The compilation process succeeded and you could now issue make install to install the program on your system. The package building process is somewhat similar – the biggest difference is that the install destination is changed! Thanks to that modification, make wouldn’t put the executable into /usr/local/bin and the manpages into /usr/local/man. Instead make would put the binaries e.g. into /usr/obj/foo-1.0.2/usr/local/bin and the manpages into /usr/obj/foo-1.0.2/usr/local/man.
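With GNU-style build systems, that redirection of the install destination is usually achieved via the DESTDIR convention. A sketch for our fictitious foo 1.0.2:

$ ./configure --prefix=/usr/local
$ make
$ make DESTDIR=/usr/obj/foo-1.0.2 install
$ tar czf foo-1.0.2.tgz -C /usr/obj/foo-1.0.2 .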

Installing tmux with installpkg (on Slackware)

Since this location is not in the system’s PATH, it’s not of much use on this machine. But we wanted to create a package and not just install the software, right? As a next step, the contents of /usr/obj/foo-1.0.2/ could be packaged up nicely into a tarball. Now if you distribute that tarball to other systems running the same OS version, you can simply untar the contents to / and achieve the same result as running make install after an unmodified build. The benefit is obvious: You don’t have to compile the program on each and every machine!

So much for primitive package usage. Advancing to actual package management, you would include a list of files and some metadata in the tarball. Then you wouldn’t extract packages by hand but leave that to the package manager. Why? Because it would not only extract all the needed files, it would also record the installation in its package database and keep the file list around in case it’s needed again.

Uninstalling tmux and extracting the package to look inside

Installing using a package manager means that you can query it for a list of installed packages on a system. This is much more convenient than ls /usr/local, especially if you want to know which version of some package is installed! And since the package manager keeps the list of files installed by a package around, it can also take care of a clean uninstall without leaving you wondering if you missed something when you deleted stuff manually. Oh, and it will be able to lend you a hand in upgrading software, too!

That’s about what Slackware’s package management does: It enables you to install, uninstall and update packages. Period.

Dependency tracking

But what about programs that require dependencies to run? If you install them from a package, you never ran configure and thus might not have the dependencies installed, right? Right. In that case the program won’t run; as simple as that. This is the time to ldd the program executable to get a list of all the libraries it is dynamically linked against. Note which ones are missing on your system, find out which other packages provide them and install those, too.

Pacman (Arch Linux) handles dependencies automatically

If you know your way around, this works OK. If not… Well, while there are a lot of libraries where you can guess from the name which package they likely belong to, there are others, too. Happy hunting! Frustrated already? Keep telling yourself that you’re learning fast, the hard way. This might ease the pain. Or go and use a package management system that provides dependency handling!

Here’s an example: you want to install BASH on a *nix system that only provides the old Bourne shell (/bin/sh). The package manager will look at the packaging information and see: BASH requires readline to be installed. Then the package manager will look at the package information for that package and find out: readline requires ncurses to be present. Finally it will look at the ncurses package and nod: no further dependencies. It will then offer to install ncurses, readline and BASH for you. Much easier, eh?
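Just to illustrate the principle (and nothing more), here’s a toy version of that resolution logic in shell. The hardcoded case statement stands in for real package metadata, and “installing” just prints a line:

#!/bin/sh
# deps_of: print the direct dependencies of a package.
# A real package manager reads this from the package's metadata.
deps_of() {
    case "$1" in
        bash)     echo "readline" ;;
        readline) echo "ncurses"  ;;
        *)        ;;    # no further dependencies
    esac
}

# resolve: recursively handle dependencies first, then the package itself.
# (No bookkeeping for already-installed packages in this toy version.)
resolve() {
    for dep in $(deps_of "$1"); do
        resolve "$dep"
    done
    echo "installing $1"
}

resolve bash    # prints: ncurses, readline, bash - in install order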

Xterm and all dependencies downloaded and installed (Arch Linux)

First package managers

A lot of people claim that the RedHat Package Manager (RPM) and Debian’s dpkg are examples of the earliest package managers. Both are indeed so old that using them directly is inconvenient enough to justify the existence of front-ends that drive them (yum/dnf and e.g. apt-get) – but the claim is not true.

PMS (short for “package management system”) is generally regarded to be the first (albeit primitive) package manager. Version 1.0 was ready in mid-1994 and used on the Bogus Linux distribution. With a few intermediate steps, this led to the first incarnation of RPM, Red Hat’s well-known package manager, which first shipped with Red Hat Linux 2.0 in late 1995.

FreeBSD 1.0 (released in late 1993) already came with what is called the ports tree: A very convenient package building framework using make. It included version 0.5 of pkg_install, the pkg_* tools that would later become part of the OS! I’ll cover the ports tree in some detail in a later article because it’s still used to build packages on FreeBSD today.

Part of a Makefile (actually for a FreeBSD port)

Version 2.0-RELEASE (late 1994) shipped the pkg_* tools. They consisted of a set of tools like pkg_add to install a package, pkg_info to show installed packages, pkg_delete to delete packages and pkg_create to create packages.

FreeBSD’s pkg_add got support for using remote repositories in version 3.1-RELEASE (early 1999). But those tools were really showing their age when they were put to rest with 10.0-RELEASE (early 2014). A replacement had been developed in the form of the much more modern solution initially called pkg-ng, or simply pkg. Again, that will be covered in another post (the next one, actually).

With the ports tree, FreeBSD undoubtedly had the most sophisticated package building framework of its time. It’s still one of the most flexible ones and a joy to work with compared to creating DEB or RPM packages… And since Bogus’s PMS was started at least a month after pkg_install, it’s even entirely possible that the first working package management tool was in fact another FreeBSD innovation.

Building a BSD home router (pt. 2): The serial console (excursion)

The previous post touched on the topic of why you might want to build your own router, discussed some hardware considerations and showed how to assemble an APU2 bundle. Now that the box is ready for use, we have to do something with it – but there’s a small problem.

People who want to play with inexpensive hardware love their Raspberry Pis. Those make a great little toy: just attach a keyboard and a monitor, insert an SD card and you’re done. Power on the device and you can start tinkering. With the APU2 it’s not that much harder actually, but it works quite differently. Why? Take a closer look at its back and you’ll see that the leftmost port is not VGA, as you might have thought when you saw it from a different angle: it’s a COM port. There are two USB ports which would allow attaching a keyboard (the board doesn’t have a keyboard controller, though, so that wouldn’t be of much use). But how do you attach a screen? The RPi offers HDMI output, the APU2 does not. No VGA, no DVI, no HDMI and no DisplayPort either!

So how to access this thing? Typing blindly? Of course not. This post is about the serial console and since they call the COM port a serial port, too, it’s not too hard to figure out what we’re going to use. If you were born after 1980 (like me), chances are that you’ve never used a serial console before. Or rather you might not know exactly what it actually is. Heck, the younger ones might not even have heard about it! So if you’re either old enough or for whatever reason have experience with the serial console you could skip a lot of the following text. If you know what the real TTYs were, what a terminal and a system console is, this whole post probably isn’t going to teach you anything new. But since there are more and more younger people around for whom this is entirely new territory, I’m going to explain not only what we have to do in case of the APU2 but also a bit about how this all came to be.

Once upon a time… in 2017

Huh, 2017? That’s this year! Exactly. But then this headline doesn’t make sense, does it? Trust me, it does. Consider this:

% uname -a
FreeBSD pc0.local 11.1-PRERELEASE FreeBSD 11.1-PRERELEASE #0 r319447: Fri Jun  2 03:47:52 CEST 2017     root@pc0.local:/usr/obj/usr/src/sys/GENERIC  amd64

% ls /dev/tty*
/dev/ttyu2      /dev/ttyv0      /dev/ttyv3      /dev/ttyv6      /dev/ttyv9
/dev/ttyu2.init /dev/ttyv1      /dev/ttyv4      /dev/ttyv7      /dev/ttyva
/dev/ttyu2.lock /dev/ttyv2      /dev/ttyv5      /dev/ttyv8      /dev/ttyvb

Here’s an up-to-date FreeBSD system, and it has quite a few device nodes that bear “tty” in their name. The abbreviation TTY means teletypewriter (or “teletype” for short), and with that we’re speaking of a technology that has its roots far back in the first half of the 19th (!) century. As I stated in my first post about FreeBSD 4.11 and legacy systems: once some technology is established and in use, it’s extremely hard to get rid of it again. And while it’s hard to believe that concepts from almost two hundred years ago still influence today’s Unix machines, there’s some truth to it.

So what is a teletypewriter? Think a classical typewriter first. One of those mechanical machines that allow to type text instead of writing it: You press a key and that makes a small metal arm swing and its end hammer against the paper. That end bears an embossed letter. Between this head with the letter (or rather: character) and the paper there’s an ink ribbon and thus your key press results in a readable imprint on the paper.

Man has long striven to be able to transfer information across a long distance. Morse code made it possible to encode text and then send a message e.g. via optical means like blinking light. The telegraph (from Greek tele + graphein = far + write) achieves the same goal, however by sending signals over a cable.

A simple teletypewriter combines both: Imagine a typewriter with all the usual keys but no paper, connected to a remote device which has the paper feed, ink ribbon and so on but no keys. This second device is called a teleprinter. The big limitation is that the information flow is one-way only.

Where computers fit into this

Sounds familiar? Sure thing: The keyboard that you use essentially does exactly that! Now consider a more advanced approach where two teletypewriters, each with both input and output capabilities, are connected to each other. Used bi-directionally like this, you can type something that the person on the other end can read, and they can write an answer which you in turn can read (and reply to again).

It’s obvious how this supports distributing information across large distances. But what does all this have to do with the COM port? Please bear with me. The first computers were big and extremely expensive mainframe machines. It makes sense that not everybody was allowed physical access to them. So you wouldn’t sit down in front of them and attach a screen and keyboard! Instead you’d use a teletype in another room to interact with them. You typed in commands and got the response in the form of printed output. More sophisticated teletypes allowed you to save input or output on punch cards, or to read them back in and send the data.

Any combination of gear that allows sending data to and receiving data from a computer is called a terminal. Bi-directional teletypewriters are sometimes referred to as “hard-copy terminals” (because they printed the output on paper). Hard to believe as it is, this procedure was state of the art at some point in time. Obviously it’s not ideal, wasting a lot of paper. Fortunately there’s a solution: Enter the electronic terminal! Those replaced printing on paper with displaying on a screen.

Now this is much more like what we’re familiar with, just in a special machine that is connected to the actual computer in a different place. The great thing about those terminals was that they allowed for adding more and more exciting new features. The bad thing was that they added more and more exciting new features. Usually there was no gain in having those features on the client side only; the server side had to know about them, too. Before long there were a lot of terminals out there, all with nice new features but all incompatible with each other…

This sucked. If your expensive new terminal was unknown to your computer, you couldn’t use its additional features. Good thing that we don’t use terminals today! That makes this an issue of the past. Right, that is a good thing. But no, it’s not a historic issue. Yes, we replaced terminals quite a while ago. But what did we replace them with? Today we use terminal emulators: Your *nix operating system comes with programs that (by default) use the keyboard for input and the screen for output and thus simulate… a terminal.

Terminal emulators

Those terminal emulators can emulate various types of physical terminals and have additional features of their own. So the problems are back (or rather: they were never really gone). How do programs know what features are available? That used to be really tricky (and often led to discarding features that were not commonly available on all popular terminals…). Let’s take a look at my shell’s environment:

% env
TMUX_PANE=%2
TMUX=/tmp/tmux-1001/default,1164,0
TERM=screen
[...]

My shell is running inside the terminal multiplexer tmux, which set the TERM variable to “screen”. GNU screen is a somewhat similar program, and since tmux is compatible with it, it announces that terminal-wise it works like screen. What does this look like if I quit tmux?

% env
OSTYPE=FreeBSD
MM_CHARSET=UTF-8
LANG=de_DE.UTF-8
DISPLAY=unix:0.0
[...]
SHELL=/bin/tcsh
WINDOWID=58720259
TERM=xterm
[...]

Here my terminal emulator claims to support the feature set that xterm is known for. I’m not actually using xterm, but my terminal emulator (terminator) is compatible with it. You probably know htop; I’ve installed it just to take two screenshots. This is what it looks like when I run it in my terminal emulator:

htop running with $TERM=xterm

Now let’s claim that we’re using another terminal type, shall we? I’m going to pretend that my terminal is an old, crufty vt100:

% setenv TERM vt100
% htop

Afterwards the application looks like this:

htop running with $TERM=vt100

The difference is pretty obvious – the vt100 did not support color output (among other things). Let’s be even more radical and simply not announce our terminal type to any applications:

% unsetenv TERM
% tmux
open terminal failed: terminal does not support clear

Tmux won’t even run (and htop would screw things up badly, since it obviously doesn’t handle that case properly)! If the terminal type (and thus its capabilities) is unknown, programs need to fall back to the absolute baseline of functionality that each and every terminal is known to support. And that’s disturbingly little… In this case tmux would be unusable because the terminal might not even be able to clear the screen! Yes, not even the seemingly most basic things were implemented in all terminals.
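
By the way, you can query individual capabilities yourself using tput(1). A quick sketch, assuming an ncurses-style tput that understands the long terminfo capability names (the tput in FreeBSD’s base system historically speaks the short termcap names instead, so your mileage may vary):

% tput -T xterm colors
8
% tput -T vt100 colors
-1

The xterm entry claims 8 colors, while the vt100 entry has no color capability at all (reported as -1).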

Imagine you’re the author of a simple *nix program. You were supposed to know what terminals existed and what capabilities they had, and to adapt your program to whatever terminal it ran on! Fortunately Bill Joy wrote termcap, a library that helped deal with the various terminals. This extremely important functionality (which originated in BSD Unix, BTW!) evolved over time. Today we have ncurses, which does just about everything to make life easy when programming console applications.
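
If you’re curious, you can inspect those terminal descriptions yourself with infocmp(1), which ships with ncurses (FreeBSD’s base system keeps the old termcap database in /etc/termcap instead). The first command dumps the vt100 entry, the second one shows where vt100 and xterm differ:

% infocmp vt100
% infocmp -d vt100 xterm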

(Serial) console

Which leads to the question: What is the console? The system console (or just console) is a device used to input and output text. A PC’s BIOS offers a console, the boot loader does, and the OS kernel does, too: It uses it to print out messages and to take commands. It’s present even when you don’t see it. A system might for example run in so-called headless mode, which means that nothing is attached to the console. Plug in a keyboard and connect a monitor and you can make use of it.
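
On FreeBSD you can even ask the kernel about its console (just a pointer; the output is a bit cryptic and its format may differ between versions):

% sysctl kern.console

It lists the currently active console device(s) as well as the available ones.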

Unix-like operating systems often configure multiple virtual consoles (also called virtual terminals). We saw those when we listed the tty* nodes in /dev at the beginning of this blog post. Together they form your computer’s system console, to which a keyboard and screen are usually attached.
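
On FreeBSD the virtual consoles are configured in /etc/ttys. The stock entries look roughly like this (quoted from memory of an 11.x system, details may vary):

ttyv0   "/usr/libexec/getty Pc"   xterm   on  secure
ttyv1   "/usr/libexec/getty Pc"   xterm   on  secure
[...]

Each line starts a getty on one of the ttyv* nodes that we listed above and announces its terminal type (xterm here).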

But you don’t have to attach those two devices to the console. You can also attach a serial device, which can then transfer commands from and messages to another computer connected to the other end of the serial cable. Then you can use that other computer’s keyboard to type commands and its screen to read the output. A serial console is simply the system console redirected over a serial connection. If the BIOS of your machine supports it, you can control everything on that machine from another one over a serial connection.
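
Just to give you a rough idea of what that looks like in practice: on FreeBSD it only takes a few lines to redirect the console to the first COM port. A sketch, where the device names and the common baud rate of 115200 are assumptions that depend on your hardware:

# /boot/loader.conf: send loader and kernel console output to COM1
console="comconsole"
comconsole_speed="115200"

# /etc/ttys: let getty answer on the serial port so you can log in
ttyu0   "/usr/libexec/getty 3wire"   vt100   onifconsole  secure

On the machine at the other end of the serial cable you’d then connect with cu(1) from the base system (type ~. to end the session):

% cu -l /dev/cuau0 -s 115200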

If you’re completely new to this, that was probably quite a bit of information. And we’ve only talked about the very basics here! If you’d like to know more about how TTYs, pseudo TTYs, processes, jobs and even signals work, I recommend Linus Åkesson’s article The TTY demystified, which is an excellent read. He explains much more in-depth how Linux does all this.

What’s next?

In the next post we’ll prepare a USB memstick and attach a serial console to the APU2 to update the firmware.