Using FreeBSD with Ports (1/2): Classic way with tools

Installing software on Unix-like systems originally meant compiling everything from source and then copying the resulting program to the desired location. This is much more complicated than people unfamiliar with the subject may think. I’ve written an article about the history of *nix package management for anyone interested in getting an idea of what it once was like and how we got what we have today.

FreeBSD pioneered the Ports framework, which quickly spread across the other BSD operating systems, which adapted it to their specific needs. Today we have FreeBSD Ports, OpenBSD Ports and NetBSD’s portable Pkgsrc. DragonFly BSD currently still uses DPorts (which is meant to be replaced by Ravenports in the future) and there are a few others, e.g. mports for MidnightBSD. Linux distributions have developed a whole lot of packaging systems, some closer to ports (like Gentoo’s Portage, where even the name gives you an idea where it comes from), most less so. Ports, however, are regarded as the classic *BSD way of bringing software to your system.

While the ports frameworks have diverged quite a bit over time, they all share a basic purpose: they are used to build binary packages that can then be installed, deinstalled and so on using a package manager. If you’re new to package management with FreeBSD’s pkg(8), I’ve written two introductory articles about pkg basics and common pkg usage that might be helpful for you.

Why use Ports?

OpenBSD’s ports, for example, are maintained only to build packages from, and regular users are advised to stay away from them and just install pre-built packages. On FreeBSD this is not the case: using Ports to install software on your machines is nothing uncommon. Still, you will read or hear pretty often that people generally recommend against using ports. They point to the fact that it’s more convenient to install packages, that it’s so much quicker to do so and that especially updating is much easier. These are all valid points, and there are more. You’re now waiting for me to claim that they are wrong after all, aren’t you? No, they are right. Use packages whenever possible.

But what if it’s not possible to use the packages provided by the FreeBSD project? Well, then it’s best to… still use packages! That statement doesn’t make sense? Trust me, it does. We’ll come back to it. But to give you more of an idea of the advantages and disadvantages of managing a FreeBSD system with ports, let’s just go ahead and do it anyway, shall we?

Just don’t do this to a production system; get a freshly installed system to play with instead (probably in a VM if you don’t have the right hardware lying around). It is still very much possible to manage your systems with ports. In fact, at work I have several dozen legacy systems, some of which began their career with something like FreeBSD 5.x and have been updated ever since. Of course ports were used to install software on them. Unfortunately, machines that old have a tendency to grow very special quirks that make it infeasible to convert them to something easier to maintain. Anyways… Be glad if you don’t have to mess with that. But it’s still a good thing to know how you’d do it.

So – why would you use Ports rather than packages? In short: when you need something that packages cannot offer you. Perhaps you need a feature compiled in that is not activated by default and thus not available in FreeBSD’s packages. Perhaps you really need the latest version in ports and cannot wait for the next package run to complete (which can take a couple of days). Maybe you require software licensed in a way that prohibits binary redistribution. Or you’ve got that 9-STABLE box over there which you know quite well you shouldn’t run anymore – but you rely on special hardware that offers drivers only for FreeBSD 9.x… Yeah, there are always things like that, and sometimes they’re hard to argue away. But when it comes to packages: since FreeBSD 9 has been EOL for quite some time, no packages are being built for it anymore. You’ll come up with any number of other reasons why you can’t use packages, I have no doubt.

Using Ports

Alright, so let’s get to it. You should have a basic understanding of the ports system already. If you don’t, I’d like to point you to another two articles written as an introduction to ports: ports basics and building from ports.

If you just read the older articles, let me point out two things that have happened since then. In FreeBSD 11.0 a handy new option was added to portsnap. If you do

# portsnap auto

it will automatically figure out whether it needs to fetch and extract the ports tree (if it’s not there) or whether fetching the latest changesets and updating is the right thing to do.
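The decision it makes can be sketched in plain sh. This is a simplified illustration only, not portsnap’s actual implementation (the real tool also handles stale snapshots, interrupted extracts and other corner cases):

```shell
#!/bin/sh
# Simplified sketch of the choice `portsnap auto` makes.
portsnap_auto_cmd() {
    portsdir="${1:-/usr/ports}"
    if [ -e "${portsdir}/Makefile" ]; then
        # A tree is already extracted: fetch the latest changesets and update.
        echo "portsnap fetch update"
    else
        # No tree yet: fetch a fresh snapshot and extract it.
        echo "portsnap fetch extract"
    fi
}

portsnap_auto_cmd "$@"
```

With no extracted tree it would print “portsnap fetch extract”, otherwise “portsnap fetch update”.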

The other thing is a pretty big change that happened to the ports tree in December 2017 not too long after I wrote the articles. The ports framework gained support for flavors and flavored ports. This is used heavily for Python ports which can often build for Python 2.7, 3.5, 3.6, … Now there’s generally only one Port for both Python2 and Python3 which can build the py27-foo, py36-foo, py37-foo and so on packages. I originally intended to cover tool-assisted ports management shortly after the last article on Ports, but after that major change it wasn’t clear if the old tools would be updated to cope with flavors at all. They were eventually, but since then I never found the time to revisit this topic.
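To pick a flavor when building by hand, you set FLAVOR on the make command line; flavor-aware tools instead take an @flavor suffix on the origin. A quick sketch (devel/py-pip is just an example origin, and the @flavor syntax assumes a portmaster version with flavor support):

```
# make -C /usr/ports/devel/py-pip FLAVOR=py36 install clean
# portmaster devel/py-pip@py36
```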

Scenario

I’ve set up a fresh FreeBSD 11.2 system on my test machine and chose to install the ports that came with this old version. Yes, I deliberately chose that version, because in a second step we’re going to update the system to FreeBSD 11.3.

Using two releases for this has the advantage that they are two fixed points in time that are easy to follow along with if you want to. The ports tree changes all the time thanks to the heroic efforts that the porters put into it. Therefore it wouldn’t make sense to just use the latest version available, because I cannot know what will happen in the future when you, the reader, might want to try this out. Also, I wanted to make sure that some specific changes happened between the two versions of the ports tree so that I can show you something important to watch out for.

Mastering ports – with Portmaster

There are two utilities that are commonly used to manage ports: portupgrade and portmaster. They do pretty much the same thing and you can choose either. I’m going to use portmaster here, which I prefer for the simple reason that it’s more lightweight (portupgrade brings in Ruby, which I try to avoid unless I need it for something else). But there’s nothing wrong with portupgrade otherwise.

Building portmaster

You can of course install portmaster as a package – or you can build it from ports. Since this post is about working with ports, we’ll do the latter. Remember how to locate ports if you don’t know their origin? Also, you can have make work on a port without changing your shell’s working directory to the port’s directory:

# whereis portmaster
portmaster: /usr/ports/ports-mgmt/portmaster
# make -C /usr/ports/ports-mgmt/portmaster install clean

Portmaster build options

As soon as pkg and dialog4ports are in place, the build options menu is displayed. Portmaster can be built with completions for bash and zsh, both are installed by default. Once portmaster is installed and available, we can use it to build ports instead of doing so manually.

Building / installing X.org with portmaster

Let’s say we want to install a minimal X11 environment on this machine. How can we do it? Actually as simple as telling portmaster the origin of the port to install:

# portmaster x11/xorg-minimal

Port options…

After running that command, you’ll find yourself in a familiar situation: Lots and lots of configuration menus for port options will be presented, one after another. Portmaster will recursively let you configure the port and all of its dependencies. In our case you’ll have 42 ports to configure before you can go on.
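If you’d rather answer all of those dialogs in one go and then leave the machine to build unattended, you can let the ports framework walk the whole dependency tree for options first (config-recursive is a standard ports target) and start portmaster afterwards:

```
# make -C /usr/ports/x11/xorg-minimal config-recursive
# portmaster x11/xorg-minimal
```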

Portmaster in action

In the background portmaster is already pretty busy. It’s gathering information about dependencies and dependencies of dependencies. It starts fetching all the missing distfiles so that the compilation can start right away when it’s the particular port’s turn. All the nice little things that make building from ports a bit faster and a little less of a hassle.

Portmaster: Summary for building xorg-minimal (1/2)

Once it has all the required information gathered and sorted out, portmaster will report to you what it is about to do. In our case it wants to build and install 152 ports to fulfill the task we gave it.

Portmaster: Summary for building xorg-minimal (2/2)

Of course it’s a good idea to take a look at the list before you acknowledge portmaster’s plan.

Portmaster: distfile deletion?

Portmaster will ask whether to keep the distfiles after building. This is fine if you’re building something small, but when you give it a bigger task, you probably don’t want to sit around waiting to answer the distfile question before each next port gets built… So let’s cancel the job right there by hitting Ctrl-C.

Portmaster: Ctrl-C report (1/2)

When you cancel portmaster, it will try to exit cleanly and will also give a report. From that report you can see what has already been done and what still needs to be done (in form of a command that would basically resume building the leftovers).

Portmaster: Ctrl-C report (2/2)

It’s also good to know that the file /tmp/portmasterfail.txt contains the resume command should you need it later (e.g. after you’ve closed your terminal). Let’s start the build process again, but this time with the -D argument. This tells portmaster to keep the distfiles. You could also specify -d to delete them if you need the space. Telling portmaster your decision up front keeps it from interrupting the build all the time.

# portmaster -D x11/xorg-minimal

Portmaster: Build error

All right, there we have an error! Things like this happen when working with older ports trees. This one is a somewhat special case. Texinfo depends on one file being fetched that changes over time but doesn’t have any version info or such. So portmaster would fetch the current file, but that has silently changed since the distfile information was put into our old ports tree. Fortunately it’s nothing too important and we can simply “fix” this by overwriting the file’s checksum with the one for the newer file:

# make makesum -C /usr/ports/print/texinfo
# portmaster -D x11/xorg-minimal

Manually fetching missing distfile

Next problem: This old version of mesa is no longer available in the places that it used to be. This can be fixed rather easily if you can find the distfile somewhere else on the net! Just put it into its place manually. Usually this is /usr/ports/distfiles, but there are ports that use subdirectories like /usr/ports/distfiles/bash or even deeper structures like e.g. /usr/ports/distfiles/xorg/xserver!
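In case you’re unsure where a port wants its distfile, you can query the port itself; DIST_SUBDIR is empty for ports that put their files directly into /usr/ports/distfiles. Sketched here for the X server (assuming the origin is x11-servers/xorg-server):

```
# make -C /usr/ports/x11-servers/xorg-server -V DIST_SUBDIR
xorg/xserver
```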

# fetch https://mesa.freedesktop.org/archive/older-versions/17.x/mesa-17.3.9.tar.xz -o /usr/ports/distfiles/mesa-17.3.9.tar.xz
# portmaster -D x11/xorg-minimal

Portmaster: Success!

Eventually the process should end successfully. Portmaster gives a final report – and our requested program has been installed!

What’s next?

Now you know how to build ports using portmaster. There’s more to that tool, though. But we’ll come to that in the next post which will also show how to use it to update ports.


Testing OmniOSce on real hardware

[edit 07-02]Added images to make the article a little prettier[/edit]

Last year I decided to take a first look at OmniOSce. It’s a community-driven continuation of the open source operating system that originated at OmniTI. It is also an Illumos distribution and as such a continuation of Sun’s former OpenSolaris project, which was shut down after Oracle acquired the company.

In my first post I installed the OS in a VM and showed what the installation procedure is like. Post two took a look at how the man pages are organized, system services with SMF, as well as user management. And post three was about doing network configuration by hand.

One reader mentioned on Reddit that he’d be more interested in an article about an installation on real hardware. I wanted to write that article, too – and here it is.

Scope of this article

While it certainly could be amusing to accompany somebody who has no clue what he’s doing as he tries to find his way around a completely unfamiliar operating system, there are no guarantees for that. And actually there’s a pretty big chance of it turning out rather boring. This is why I decided to come up with a goal instead of taking a look at random parts of the operating system.

Beautiful loader with r151030

Last time I wanted to bring up the network, make SSH start and add an unprivileged user to the system so I could connect to the OmniOS box from my FreeBSD workstation. All of that can be done directly from the installer, however, and while I went the hard way before, I’m going to use those options this time. My new goal is to take a look at a UEFI installation, the Solaris way of dealing with privileged actions, as well as a little package management and system updates. That should be enough for another article!

OmniOSce follows a simple release schedule with two stable releases per year where every fourth release is an LTS one. When I wrote my previous articles, r151022 was the current LTS release and r151026 the newest stable release (which I installed). In the meantime, another stable release has been released (r151028) and the most recent version, r151030, is the new LTS release. Since I want to do an upgrade, I’m going to install r151028 rather than r151030.

To boot or not to boot (UEFI)

My test machine is configured for UEFI with Legacy Modules enabled. Going to the boot menu I see the OmniOSce CD only under “legacy boot”. I choose it, the loader comes up, boots the system and just a little later I’m in the installer. It detects the hard drive and will let me install to it. The default option is to install using the UEFI scheme, so I accept that. After the installation is complete, I reboot – and the system cannot find any bootable drives…

Ok, let’s try the latest version. Perhaps they did some more work on UEFI? They did: This time the CD is listed in the UEFI boot sources, too and a beautiful loader greets me after I selected it. The text and color looks a bit nicer in the EFI mode console, too. I repeat the installation, reboot… And again, the hard drive is not bootable!

This machine does not support “pure” UEFI mode. I switch to legacy mode and try the older CD image again. Installing to GPT has the same effect as before: The system is not bootable. I do not really want to use MBR anymore, but fortunately the OmniOS installer has two more options to work around the quirks of some EFI implementations. Let’s try the first one, GPT+Active. Once again: The system is not bootable… But then there’s GPT+Slot1 – and that finally did the trick! The system boots from hard disk.

At this point I decided to sacrifice another machine for my tests. It doesn’t even recognize the newer r151030 ISO as a UEFI boot source – neither in mixed mode nor in pure UEFI mode. But things get even weirder: I install OmniOS with the UEFI scheme again and the system does recognize the drive and is able to boot it – however only in legacy mode!

UEFI is a topic in itself – and actually a pretty strange one. It has meant headaches for me before, so I wouldn’t overrate OmniOS not working with the UEFI on those particular two machines. My primary laptop will try to PXE-boot if a LAN cable is attached – even though PXE-booting is disabled in the EFI. Another machine is completely unable to even detect (!) hard drives partitioned with GPT when running in legacy mode… To me it looks like most EFI implementations have their very own quirks and troubles. Again: don’t overrate this. The OmniOS community is rather small and it’s completely impossible for them to make things work on all kinds of crappy hardware.

Chances are that it just works on your machine. While I’d like to test more machines, I don’t have the time for this right now. So let’s move on with GPT and Legacy/BIOS boot.

Installation

I like Kayak, the installer used by OmniOS. It’s simple and efficient – and it does its thing without being over-engineered. Being an alternative OS enthusiast, I’ve seen quite a bunch of installation methods, some more to my liking, some less. When it comes to this one, I didn’t really have any problems with it and am pretty satisfied. If I had to give any recommendation, I’d suggest adding a function to generate the “whatis” database (man -w) for the user (and probably making that the default option). I’ve come to expect the apropos command to just work when I try out a new system. And other newcomers might benefit from that, too.

Creating a test user with the installer

When I installed OmniOS last year, I ran into a problem with the text installer. I reported it and it was fixed really, really quickly. Of course I could not resist the temptation to try the text installer with this newer release. With r151028 it works well. However, it doesn’t offer any options over the new dialog-based one (on the contrary), so I’d recommend using the new one.

Shell selection for the new user

As mentioned above, this time I decided to let the installer do the network setup and create a user for me (which made /home/kraileth my home directory and not /exports/home/kraileth!). When creating a user, I’m given the choice to use either ksh93, bash or csh. The latter is just plain old csh, and while I prefer tcsh over bash anytime, this is not such a tempting choice. But the default (ksh) is actually fine for me.

Selecting privileges for the new user

More interesting however is the installer’s ability to enable privileged access for the new user: I can choose to give it the “Primary Administrator” profile and / or to enable sudo (optionally without password).

Several new configuration options in the r151030 installer

Also the installer for r151030 features a few new options like enabling the Extra repository or the Serial Console. Certainly nice to see how this OS evolves!

Profiles

The installer allowed me to give my user the privilege to use sudo. Just like on *BSD and Linux, this gives me the ability to run commands as root or even become root using sudo -i. But this is not the only way to handle privileged actions. In fact there is a much better one on Solaris systems! It works by using profiles.

What are profiles? They are one of the tools present in Solaris (and Solaris-derived systems) that allow for really fine-grained access control. While traditional Unix access control is primitive to say the least, the same is not true for Solaris.

Taking a peek at user’s profiles

Let’s see what profiles my user has:

$ profiles
Primary Administrator
Basic Solaris User
All

My guess here is that every user has the profiles “All” and “Basic Solaris User” – while the “Primary Administrator” was added by the installer. The root user has some more (see screenshot).

The profiles of a user are assigned in the file /etc/user_attr and the actual profiles are defined in /etc/security/prof_attr. While all of this is probably not rocket science, it definitely looks complex and pretty powerful. Take a look at the screenshot to get a first impression and do some reading on your own if you’re interested.
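To give a rough idea of the format, entries in those two files look like the following. These lines are a sketch based on Solaris documentation, not copied from the test system, so field contents and help file names may differ on your machine:

```
# /etc/user_attr - assigning profiles to a user
kraileth::::type=normal;profiles=Primary Administrator

# /etc/security/prof_attr - defining a profile
Primary Administrator:::Can perform all administrative tasks:auths=solaris.*;help=RtPriAdmin.html
```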

Some profile definitions

As a newbie I didn’t know much about it, yet. The profiles mention help files however, and so I thought it might be worth the effort to go looking for them. Eventually I located them in /usr/lib/help/profiles. There is an HTML help for people like me who are new to the various profiles.

Help file for the “Primary Administrator” profile

Privileged Actions

Alright! But how do you make use of the cumulative privileges of your profiles? There are two ways: Running a single privileged command (much like with sudo) or executing a shell with elevated privileges. The first is accomplished using pfexec like this:

$ pfexec cat /root/.bashrc

Playing with privileged access

The system provides wrappers for some popular shells. However not all of the associated shells are installed by default! So on a fresh installation you should only count on the system shells.
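The wrappers follow a simple naming scheme: a “pf” prefix in front of the shell’s name. The first two below wrap system shells and should always be present; pfbash is an example of a wrapper whose shell first has to be installed from a package:

```
$ pfsh     # profile-aware Bourne shell
$ pfksh    # profile-aware ksh
$ pfbash   # only usable once bash is installed
```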

Wrappers for the various profile shells

Basic package management

In the Solaris world there are two means of package management, known as SVR4 and IPS. The former is the old one, used up to Solaris 10. I did a little reading on it and it looks like the SVR4 tools are quite similar to the traditional *BSD pkg_tools. Basically it’s a set of programs like pkginfo, pkgadd, pkgrm and so on (where it’s really not hard to tell from their names what they are used for).

The newer one uses the pkg(1) client command of the Image Packaging System (often referred to as pkg(5), after the manual page describing the system). While the name of the binary suggests a relation to e.g. FreeBSD’s pkg(8), this is absolutely not the case.

Package management is a topic that deserves its own article (that I consider writing at some point). But basic operation is as simple as this:

$ pfexec pkg install tmux

It’s not too hard to anticipate that after a short while, Tmux should be available on your system (provided the command doesn’t error out).
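For orientation, here are a few more everyday IPS operations (subcommand names as documented in the pkg(1) manual page):

```
$ pkg search -r tmux         # search the configured package repositories
$ pkg info tmux              # show details about an installed package
$ pkg list                   # list installed packages
$ pfexec pkg uninstall tmux  # remove the package again
```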

System updates

Updating the operating system to a new release is a pretty straight-forward process. It’s recommended (but optional) to create a Boot Environment. The first required step is setting the pkg publisher to a new URI (pointing to a repository containing the data for the new release). Then you update the system (preferably to a new Boot Environment that is made the standard one) and reboot. Yes, that’s all there is to it.
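Sketched as commands for the r151028 → r151030 jump it would look roughly like this. Note that the publisher URI is my assumption of what the release notes prescribe – double-check it against the official OmniOSce upgrade instructions before running anything:

```
# beadm create pre-upgrade        # optional: a manual safety BE
# pkg set-publisher -O https://pkg.omniosce.org/r151030/core omnios
# pkg update --be-name=r151030
# reboot
```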

System upgrade to version r151030

After the reboot you’ll see the new boot menu and better looking system fonts – even without using UEFI. Obviously r151030 implements a new frame buffer. I’ve also noticed that boot and especially shutdown times have decreased notably. Very nice!

Upgrade finished

Conclusion

If you’ve never worked with a Solaris-like system this article might have provided you with some new insights. However we’ve barely scratched the surface. The profiles are looking like a great means of access control to me, but usually you’d want to use OmniOSce for different reasons. Why? Because it has some really cool features that make it the OS of choice for some people who are doing impressive things.

What are those features? So far we didn’t talk about ZFS and the miracles of this great filesystem/volume manager (I’ve mentioned Boot Environments, but if you don’t know what BEs are, you of course also don’t know why you totally want them – trust me, you do). We didn’t talk about zones (think FreeBSD’s jails, but not quite – or Linux containers, but totally on steroids). I also didn’t mention the great networking capabilities of the OS, its debuggability and things like that.

As you can see, I probably wouldn’t run out of topics to write about, even if I decided to switch gears on my blog entirely to Illumos instead of just touching on them among more BSD and Linux articles. If you don’t want to wait for me to write about it again – why don’t you give it a try yourself?

ARM’d and dangerous pt. 2: FreeBSD on the Pinebook (aarch64)

In February I wrote about real aarch64 server hardware. My interest in the ARM platform has not decreased – and this month I finally got my Pinebook shipped.

[update] I’ve updated the post from 05-24 to the 05-31 version.[/update]

Aarch64

I’ve always been interested in alternatives to common solutions – not just with open source operating systems, but also with architectures. Sure, ARM is pretty much ubiquitous when it comes to mobile devices. But when it comes to servers or PCs, there’s a total dominance of the amd64 (“x86_64”) architecture. However, in times of Meltdown, Spectre, ZombieLoad and such it may well be worth taking a closer look at alternative platforms, even if they’re not your primary interest.

Arm Holdings is a UK-based company that designs processors and licenses those designs. There have been multiple revisions of the architecture, like ARMv6 and ARMv7. Both are 32-bit architectures, used e.g. in the popular Raspberry Pi and Raspberry Pi 2. ARMv8 is the first 64-bit version and is available in the form of the Raspberry Pi 3, among others. The incredible success of those small single-board computers led to a whole lot of spin-offs.

Pine64

While a lot of the competitors specialize in extremely cheap Pi clones with a few improvements (and too often with their own problems), one of the better alternatives comes from Pine64. They don’t just sell hardware that works with one custom-compiled Linux kernel which rarely (or never) gets any upgrades. On the contrary: They are trying to build a community around the hardware dedicated to do cool things with it and eventually blaze a trail for high-quality ARM-based alternative hardware.

One really compelling offer is the Pinebook. While it’s nothing special to use one of the little single-board computers for a media station or things like that, a real laptop is something entirely different. Especially as the company behind Pine64 decided to sell it at cost to people from the Linux and *BSD communities! It really is a $99 (plus shipping) 11.6″ laptop. Definitely a nice way to grow the community. Especially since they announced a Pinebook Pro, a tablet, a phone and so on during this year’s FOSDEM. I’m not sure when those will be available, though.

There’s only one “problem” with the Pinebook: You cannot buy it regularly. If you are interested in purchasing one, you need to register on the site and when a new batch is to be made, you get a coupon code that can be used to actually place an order. Sometime last year I decided that this sounded pretty interesting and requested a coupon. I got notified in February or so and eventually the laptop was shipped in May.

Pinebook

Here’s some info on the specs (see here for the full specifications):

  • Allwinner A64 Quad Core SOC with Mali 400 MP2 GPU
  • 2GB LPDDR3 RAM
  • 16GB of eMMC (upgradable)
  • 2x USB 2.0 Host
  • Lithium Polymer Battery (10,000 mAh)
  • Stereo Speakers
  • WiFi 802.11bgn + Bluetooth 4.0

They also offered a 1366×768 display. When my Pinebook arrived, I was in for a surprise, though: they had upgraded this model to a 1920×1080 IPS panel, which is really nice! The low resolution of the original model was one of the things that made me think twice before buying. Glad that I chose to go ahead.

A Pinebook 1080p

The Pinebook comes with KDE Neon preinstalled. It’s apparently a Debian-based distro with the latest Plasma Workspaces desktop. I opened Firefox and was able to browse the net – but I didn’t buy this laptop to use boring Linux. 😉 Let’s try something more exciting where not all the devices work, yet!

Preparation

The Pinebook is new enough that it is not supported by any FreeBSD release at this time. So you have to use the development branch known as -CURRENT. Right now that’s 13-CURRENT, the exciting branch where all the latest features, fixes and improvements go. The FreeBSD project provides snapshots for developers and people who are willing to test bleeding-edge code. Needless to say, by doing this you install an OS that is not at all meant for daily usage. It should not hold important data or anything. This is tinkering with some cool stuff – no more, no less.

You need a computer that can make use of micro SD cards (or SD cards by using an adapter that usually comes with micro SDs). A 4 GB card suffices but anything bigger is also fine.

Get a snapshot and the checksum file from here (e.g. CHECKSUM.SHA512-FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447 and FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img.xz)

Since there are differences with the various ARM-based platforms, make sure you get the right one for the Pinebook. Each file contains a date (2019-05-31 in this case) and a revision number (r348447) so you know when the snapshot was created and what version of the code in Subversion it was built from.

Once you downloaded both files, verify the checksum and decompress the archive:

% shasum -c CHECKSUM.SHA512-FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447
% xz -dvv FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img.xz

Now dd the image onto a micro SD card. Be absolutely sure that you dd over the right device – if you don’t, you can easily lose data that you wanted to keep! In my case the right device is mmcsd0, substitute it for yours. Get the image written with the following command (FreeBSD versions before 12.0 do not support the “progress” option – leave out the “status=progress” in this case):

# dd if=FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img of=/dev/mmcsd0 bs=1m status=progress

FreeBSD

Now insert the micro SD card into your Pinebook and turn it on. It should boot off the card and right into FreeBSD. You might see some scary-looking messages like lock order reversals and things like that which you are probably not used to. Welcome to -CURRENT! Since you are testing a development snapshot, this version of FreeBSD has some diagnostic features enabled that are helpful in pinning down bugs (and eventually fixing them). If you’re just an advanced user (like me) and not a developer, ignore them.

Log in as root with the password root. Take a look around and if everything seems to work, power the device off again:

# shutdown -p now

On first boot, the partition holding the primary filesystem is grown to the maximum available space and the filesystem is, too. Let’s put the card back into the other computer and have a look at the partitions:

# gpart show mmcsd0
=>      63  15493057  mmcsd0  MBR  (7.4G)
        63      2016          - free -  (1.0M)
      2079    110502       1  fat32lba  [active]  (54M)
    112581  15378491       2  freebsd  (7.3G)
  15491072      2048          - free -  (1.0M)

# gpart show mmcsd0s2
=>       0  15378491  mmcsd0s2  BSD  (7.3G)
         0        59            - free -  (30K)
        59  15378432         1  freebsd-ufs  (7.3G)

As you can see, the FreeBSD slice is now > 7 GB in size, even though the original image was a lot smaller to fit onto a 4 GB card. Now that we have enough usable space, let’s mount the filesystem (the only one on the second slice) and copy the image over:

# mount /dev/mmcsd0s2a /mnt
# cp FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img /mnt
# umount /mnt

Putting FreeBSD on the Pinebook

Put the micro SD card back into the Pinebook and boot off of it. Let’s see how many storage devices there are:

# geom disk list
Geom name: mmcsd0
Providers:
1. Name: mmcsd0
   Mediasize: 7932477440 (7.4G)
   Sectorsize: 512
   Stripesize: 4194304
   Stripeoffset: 0

[...]

Geom name: mmcsd1
Providers:
1. Name: mmcsd1
   Mediasize: 15518924800 (14G)
   Sectorsize: 512
   Stripesize: 512
   Stripeoffset: 0

[...]

Alright, mmcsd0 is the SD card and mmcsd1 is the internal eMMC, which is a bit bigger. To “install” FreeBSD on the device (and erase Linux) just dd the image onto the internal storage, shut down the system and eject the card:

# cd /
# dd if=FreeBSD-13.0-CURRENT-arm64-aarch64-PINEBOOK-20190531-r348447.img of=/dev/mmcsd1 status=progress
# shutdown -p now

Now the depenguinization of your device is complete! Power it on again and it’ll boot FreeBSD off the eMMC.

If you get tired of this system, try another. Go to the pine64 website and browse through the “Partner Projects” tab. You’ll certainly find other interesting operating systems or distributions to install.

Status of FreeBSD on the Pinebook

So what works on FreeBSD currently and what doesn’t? I’ve read about screen flickering on the console and things like that. But from what I can say, those issues are gone. The console works well even on my 1080p model. X11 works as well and I’ve tested various desktop environments. Building packages from ports works, but of course this is not the kind of hardware that’s best suited for that.

What did NOT work is e.g. Firefox – it crashes. Also, sound is not working yet.

I didn’t test WLAN or Bluetooth since I’m using a USB to LAN adapter to access the net. For the dmesg output see the end of this post.

What’s next?

I’ll stick to FreeBSD on the Pinebook and if that is your special interest, too, feel free to contact me. My current plan is to write about configuring a fresh system, about packages and about a project that I started for the Pinebook (lite packages and the corresponding ports option “light”).

dmesg.boot

------
KDB: debugger backends: ddb
KDB: current backend: ddb
Copyright (c) 1992-2019 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
	The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 13.0-CURRENT r348447 GENERIC arm64
FreeBSD clang version 8.0.0 (tags/RELEASE_800/final 356365) (based on LLVM 8.0.0)
WARNING: WITNESS option enabled, expect reduced performance.
VT(efifb): resolution 1920x1080
KLD file umodem.ko is missing dependencies
Starting CPU 1 (1)
Starting CPU 2 (2)
Starting CPU 3 (3)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
arc4random: WARNING: initial seeding bypassed the cryptographic random device because it was not yet seeded and the knob 'bypass_before_seeding' was enabled.
random: entropy device external interface
MAP 47ef4000 mode 2 pages 24
MAP b8f2f000 mode 2 pages 1
MAP b8f31000 mode 2 pages 1
MAP bdf50000 mode 2 pages 16
kbd0 at kbdmux0
ofwbus0: 
clk_fixed0:  on ofwbus0
clk_fixed1:  on ofwbus0
simplebus0:  on ofwbus0
rtc0:  mem 0x1f00000-0x1f003ff irq 52,53 on simplebus0
rtc0: registered as a time-of-day clock, resolution 1.000000s
regfix0:  on ofwbus0
regfix1:  on ofwbus0
ccu_a64ng0:  mem 0x1c20000-0x1c203ff on simplebus0
ccu_sun8i_r0:  mem 0x1f01400-0x1f014ff on simplebus0
psci0:  on ofwbus0
aw_sid0:  mem 0x1c14000-0x1c143ff on simplebus0
iichb0:  mem 0x1f03400-0x1f037ff irq 57 on simplebus0
iicbus0:  on iichb0
gic0:  mem 0x1c81000-0x1c81fff,0x1c82000-0x1c83fff,0x1c84000-0x1c85fff,0x1c86000-0x1c87fff irq 49 on simplebus0
gic0: pn 0x2, arch 0x2, rev 0x1, implementer 0x43b irqs 224
gpio0:  mem 0x1c20800-0x1c20bff irq 23,24,25 on simplebus0
gpiobus0:  on gpio0
aw_nmi0:  mem 0x1f00c00-0x1f00fff irq 54 on simplebus0
iichb1:  mem 0x1f02400-0x1f027ff irq 55 on simplebus0
iicbus1:  on iichb1
gpio1:  mem 0x1f02c00-0x1f02fff irq 56 on simplebus0
gpiobus1:  on gpio1
axp8xx_pmu0:  at addr 0x746 irq 59 on iicbus0
gpiobus2:  on axp8xx_pmu0
generic_timer0:  irq 4,5,6,7 on ofwbus0
Timecounter "ARM MPCore Timecounter" frequency 24000000 Hz quality 1000
Event timer "ARM MPCore Eventtimer" frequency 24000000 Hz quality 1000
a10_timer0:  mem 0x1c20c00-0x1c20c2b irq 8,9 on simplebus0
Timecounter "a10_timer timer0" frequency 24000000 Hz quality 2000
aw_syscon0:  mem 0x1c00000-0x1c00fff on simplebus0
awusbphy0:  mem 0x1c19400-0x1c19413,0x1c1a800-0x1c1a803,0x1c1b800-0x1c1b803 on simplebus0
cpulist0:  on ofwbus0
cpu0:  on cpulist0
cpufreq_dt0:  on cpu0
cpu1:  on cpulist0
cpu2:  on cpulist0
cpu3:  on cpulist0
aw_thermal0:  mem 0x1c25000-0x1c250ff irq 10 on simplebus0
a31dmac0:  mem 0x1c02000-0x1c02fff irq 11 on simplebus0
aw_mmc0:  mem 0x1c0f000-0x1c0ffff irq 15 on simplebus0
mmc0:  on aw_mmc0
aw_mmc1:  mem 0x1c10000-0x1c10fff irq 16 on simplebus0
mmc1:  on aw_mmc1
aw_mmc2:  mem 0x1c11000-0x1c11fff irq 17 on simplebus0
mmc2:  on aw_mmc2
ehci0:  mem 0x1c1a000-0x1c1a0ff irq 19 on simplebus0
usbus0: EHCI version 1.0
usbus0 on ehci0
ohci0:  mem 0x1c1a400-0x1c1a4ff irq 20 on simplebus0
usbus1 on ohci0
ehci1:  mem 0x1c1b000-0x1c1b0ff irq 21 on simplebus0
usbus2: EHCI version 1.0
usbus2 on ehci1
ohci1:  mem 0x1c1b400-0x1c1b4ff irq 22 on simplebus0
usbus3 on ohci1
gpioc0:  on gpio0
uart0:  mem 0x1c28000-0x1c283ff irq 31 on simplebus0
uart0: console (115384,n,8,1)
pwm0:  mem 0x1c21400-0x1c217ff on simplebus0
pwmbus0:  on pwm0
pwmc0:  on pwm0
iic0:  on iicbus1
gpioc1:  on gpio1
gpioc2:  on axp8xx_pmu0
iic1:  on iicbus0
aw_wdog0:  mem 0x1c20ca0-0x1c20cbf irq 58 on simplebus0
cryptosoft0: 
Timecounters tick every 1.000 msec
usbus0: 480Mbps High Speed USB v2.0
usbus1: 12Mbps Full Speed USB v1.0
ugen0.1:  at usbus0
ugen1.1:  at usbus1
uhub0:  on usbus1
uhub1:  on usbus0
usbus2: 480Mbps High Speed USB v2.0
usbus3: 12Mbps Full Speed USB v1.0
ugen2.1:  at usbus2
uhub2:  on usbus2
ugen3.1:  at usbus3
uhub3:  on usbus3
AW_MMC_INT_RESP_TIMEOUT 
uhub0: 1 port with 1 removable, self powered
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
uhub3: 1 port with 1 removable, self powered
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
mmc0: No compatible cards found on bus
aw_mmc0: Spurious interrupt - no active request, rint: 0x00000004

aw_mmc1: Cannot set vqmmc to 33000003300000
uhub1: 1 port with 1 removable, self powered
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
uhub2: 1 port with 1 removable, self powered
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
mmc1: No compatible cards found on bus
aw_mmc1: Spurious interrupt - no active request, rint: 0x00000004

aw_mmc2: Cannot set vqmmc to 33000003300000
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_DATA_END_BIT_ERR
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
AW_MMC_INT_RESP_TIMEOUT 
mmcsd0: 16GB  at mmc2 52.0MHz/8bit/4096-block
mmcsd0boot0: 4MB partion 1 at mmcsd0
mmcsd0boot1: 4MB partion 2 at mmcsd0
mmcsd0rpmb: 4MB partion 3 at mmcsd0
Release APs...done
CPU  0: ARM Cortex-A53 r0p4 affinity:  0
Trying to mount root from ufs:/dev/ufs/rootfs [rw]...
 Instruction Set Attributes 0 = 
 Instruction Set Attributes 1 = 
         Processor Features 0 = 
         Processor Features 1 = 
      Memory Model Features 0 = 
      Memory Model Features 1 = 
      Memory Model Features 2 = 
             Debug Features 0 = 
             Debug Features 1 = 
         Auxiliary Features 0 = 
         Auxiliary Features 1 = 
CPU  1: ARM Cortex-A53 r0p4 affinity:  1
CPU  2: ARM Cortex-A53 r0p4 affinity:  2
CPU  3: ARM Cortex-A53 r0p4 affinity:  3
WARNING: WITNESS option enabled, expect reduced performance.
ugen2.2:  at usbus2
uhub4:  on usbus2
random: randomdev_wait_until_seeded unblock wait
uhub4: 4 ports with 1 removable, self powered
ugen2.3:  at usbus2
ukbd0 on uhub4
ukbd0:  on usbus2
kbd1 at ukbd0
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
ums0 on uhub4
ums0:  on usbus2
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
ums0: 5 buttons and [XYZT] coordinates ID=1
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
hid_get_item: Number of items(1039) truncated to 1024
ugen2.4:  at usbus2
random: randomdev_wait_until_seeded unblock wait
random: unblocking device.
GEOM_PART: mmcsd0s2 was automatically resized.
  Use `gpart commit mmcsd0s2` to save changes or `gpart undo mmcsd0s2` to revert them.
lock order reversal:
 1st 0xffff000040a26ff8 bufwait (bufwait) @ /usr/src/sys/kern/vfs_bio.c:3904
 2nd 0xfffffd00011a1400 dirhash (dirhash) @ /usr/src/sys/ufs/ufs/ufs_dirhash.c:289
stack backtrace:
#0 0xffff0000004538a0 at witness_debugger+0x64
#1 0xffff0000003f7f9c at _sx_xlock+0x7c
#2 0xffff00000068879c at ufsdirhash_add+0x38
#3 0xffff00000068b0d8 at ufs_direnter+0x3c4
#4 0xffff000000691b14 at ufs_rename+0xb7c
#5 0xffff000000755b60 at VOP_RENAME_APV+0x90
#6 0xffff0000004c1678 at kern_renameat+0x304
#7 0xffff000000718448 at do_el0_sync+0x4fc
#8 0xffff0000006ff200 at handle_el0_sync+0x84
lo0: link state changed to UP

Exploring OmniOS in a VM (2/2)

This is the second part of my post about “exploring OmniOS in a VM”. The first post showed my adventures with service and user management on a fresh installation. My initial goal was to make the system let me SSH into it: the SSH daemon is now listening and I’ve created an unprivileged user. So the only thing still missing is bringing up the network to connect to the system from outside.

Network interfaces

Networking can be complicated, but I have rather modest requirements here (make use of DHCP) – so that should not be too much of a problem, right? The basics are pretty much the same on all Unices that I’ve come across so far (even if Linux is moving away from ifconfig with their strange ip utility). I was curious to see how the Solaris-derived systems call the NIC – but it probably couldn’t be any worse than enp2s0 or something that is common with Linux these days…

# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
          inet 127.0.0.1 netmask ff000000
lo0: flags=200200849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
          inet6 ::1/128

Huh? Only two entries for lo0 (IPv4 and v6)? Strange! First thought: Could it be that the type of NIC emulated by VirtualBox is not supported? I looked for that info on the net – and the opposite is true: The default VirtualBox NIC (Intel PRO/1000) is supported, the older models that VBox offers aren’t!

Ifconfig output and fragment of the corresponding man page

Obviously it’s time again to dig into manpages. Fortunately there’s a long SEE ALSO section again with ifconfig(1M). And I’ve already learned something so far: *adm commands are a good candidate to be read first. Cfgadm doesn’t seem to be what I’m looking for, but dladm looks promising – the short description reads: “Administer data links”.

I must say that I like the way the parameters are named. No cryptic and hard to remember stuff here that makes you wonder what it actually means. Parameters like “show-phys” and “show-link” immediately give you an idea of what they do. And I’m pretty sure: With a little bit of practice it will work out well to guess parameters that you’ve not come across, yet. Nice!

# dladm show-link
LINK         CLASS     MTU    STATE     BRIDGE     OVER
e1000g0      phys      1500   unknown   --         --

Ok, there we have the interface name: e1000g0. I don’t want to create bridges, bond together NICs or anything, so I’m done with this tool. But what to do with that information?

The ifconfig manpage mentions the command ipadm – but it’s kind of hidden in the long text. For whatever reason it’s missing in SEE ALSO! I’d definitely suggest that it’d be added, too. More often than not people don’t have the time to really read through a long manpage (or are not as curious about a new system as I am and thus don’t feel like reading more than they have to). But anyways.

Ipadm manpage

# ipadm create-if e1000g0

This should attach the driver and create the actual interface. And yes, now ifconfig can show it:

Interface e1000g0 created

Network connection?

Almost there! Now the interface needs an IP address. Looks like ipadm is the right tool for this job as well. I had some trouble finding out what an interface-id is, though. The manpage obviously assumes the reader is familiar with that term, which was not the case with me. I tried to find out more about it, but was unable to locate any useful info in other manpages. So I resorted to the OmniOS wiki again and that helped (it seems that you can actually choose almost anything as an interface ID, but there are certain conventions). Ok, let’s try it out and see if it works:

# ipadm create-addr -T dhcp e1000g0/v4

No error or anything, so now the system should have acquired an IPv4 address.

IPv4 address assigned via DHCP

10.0.2.15, great! Let’s see if we can reach the internet:

# ping elderlinux.org
ping: unknown host elderlinux.org

DNS pt. 1

Ok, looks like we don’t have name resolution, yet. Is the right nameserver configured?

# cat /etc/resolv.conf
cat: cannot open /etc/resolv.conf: No such file or directory

Oops. There’s something not right here! When I configure my FreeBSD box to use DHCP, it makes sure that resolv.conf is populated properly. The interface e1000g0 got an IP – so the DHCP request must have been successful. But did something break before the network was configured completely? Is the DHCP client daemon even running?

DHCP

# ps aux | grep -i [d]hcp
root       551  0.0  0.1 2836 1640 ?        S 18:52:22  0:00 /sbin/dhcpagent

Hm! I’m used to dhclient, but dhcpagent is unknown to me. According to the manpage, it’s the DHCP client daemon, so that must be what OmniOS uses to initiate and renew DHCP requests. And obviously it’s running. However the manpage solves the mystery:

Aside from the IP address, and for IPv4 alone, the netmask, broadcast address, and the default router, the agent does not directly configure the workstation, but instead acts as a database which may be interrogated by other programs, and in particular by dhcpinfo(1).

Ah-ha! So I have to configure things like DNS myself. Nevertheless the system should have received the correct nameserver IP. Let’s see if we can actually get it via the dhcpinfo command that I’ve just learned about. One look at the manpage as well as at the DHCP inittab later, I know how to ask for that information:

# dhcpinfo -i e1000g0 DNSserv
192.168.2.1

Right, that’s my nameserver.

Some lines from /etc/dhcp/inittab
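Since the agent leaves DNS configuration to the administrator, one way to deal with the missing resolv.conf is to feed it exactly this value. A sketch, assuming the interface and option name from above:

```shell
# Write the DHCP-provided nameserver into a fresh resolv.conf
echo "nameserver $(dhcpinfo -i e1000g0 DNSserv)" > /etc/resolv.conf
```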

Route?

Is the routing information correct? Let’s check.

# netstat -r -f inet
[...]
default      10.0.2.2  [...]
10.0.2.0     10.0.2.15 [...]
localhost    localhost [...]

Looks good and things should work. One more test:

# route get 192.168.2.1
   route to: fw.local
destination: default
       mask: default
    gateway: 10.0.2.2
  interface: e1000g0
[...]

Alright, then it’s probably not a network issue… But what could it be?

DNS pt. 2

Eventually I found another hint at the wiki. Looks like OmniOS has a very… well, old-school default setting when it comes to sources for name resolution: only /etc/hosts is used! I haven’t messed with nsswitch.conf for quite a while, but in this case it’s the solution to this little mystery.

# fgrep "hosts:" /etc/nsswitch.conf
hosts:      files

There are a couple of example configurations that can be used, though:

# ls -1 /etc/nsswitch.*
/etc/nsswitch.ad
/etc/nsswitch.conf
/etc/nsswitch.dns
/etc/nsswitch.files
/etc/nsswitch.ldap
/etc/nsswitch.nis

Copying nsswitch.dns over nsswitch.conf should fix that problem.
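Spelled out as a command, that’s simply:

```shell
# Overwrite the files-only configuration with the DNS-enabled template
cp /etc/nsswitch.dns /etc/nsswitch.conf
```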

And really: Instead of getting a ping: unknown host elderlinux.org now I get no answer from elderlinux.org – which is OK, since my nameserver doesn’t answer ping requests.

Remote access

Now with the network set up correctly, it’s finally time to connect to the VM remotely. To be able to do so, I configure VirtualBox to forward port 22 of the VM to port 10022 on the host machine. And then it’s time to try to connect – and yes, it works!

SSHing into the OmniOS VM… finally!
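For reference, the same forwarding can also be set up from the host’s command line with VBoxManage instead of clicking through the GUI. A sketch – the VM name “omnios” is an assumption, and the rule applies to a NAT-mode first adapter:

```shell
# Add a NAT port-forwarding rule named "ssh" on the first adapter:
# host port 10022 -> guest port 22 (works while the VM is powered off)
VBoxManage modifyvm "omnios" --natpf1 "ssh,tcp,,10022,,22"

# Then connect from the host:
ssh -p 10022 kraileth@localhost
```

If the VM is already running, VBoxManage controlvm "omnios" natpf1 "ssh,tcp,,10022,,22" adds the rule on the fly.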

Conclusion

So much for my first adventure with OmniOS. I had quite a few difficulties to overcome even in this very simple scenario of just SSHing into the VM. But what could have been a trivial task proved to be rather educational. And being a curious person, I actually enjoyed it.

I’ll take a break from the Illumos universe for now, but I definitely plan to visit OmniOS again. Then I’ll do an installation on real hardware and plan to take a look at package management and other areas. Hope you enjoyed reading these articles, too!

Exploring OmniOS in a VM (1/2)

While I’ve been using Unix-like systems for quite a while and heavily for about a decade, I’m completely new to operating systems of the Solaris/Illumos family. So this post might be of interest for other *BSD or Linux users who want to take a peek at an Illumos distribution – or to the Illumos people interested in how accessible their system is for users coming from other *nix operating systems.

In my previous post I wrote about why I wanted to take a look at OmniOS in the first place. I also detailed the installation process. This post is about the first steps that I took on my newly installed OmniOS system. There’s a lot to discover – more actually than I first thought. For that reason I decided to split what was meant to be one post into two. The first part covers service management and creating a user.

Virtualize or not?

According to a comment, a Reddit user would be more interested in an installation on real hardware. There are two reasons why I like to try things out using a hypervisor: 1) it’s a quick thing to do and no spare hardware is required, and 2) it’s pretty easy to create screenshots for an article like this.

OmniOS booted up and root logged in

However I see the point in trying things out on real hardware and I’m always open to suggestions. For that reason I’ll be writing a third post about OmniOS after this one – and will install it on real hardware. I’ve already set a somewhat modern system aside so that I can test if things work with UEFI and so on.

Fresh system: Where to go?

In the default installation I can simply login as root with a blank password. So here we are on a new OmniOS system. What shall we explore first?

The installer asked me which keymap to use and I chose German. While I’m familiar with both the DE and US keymaps enough that I can work with them, I’ve also learned an ergonomic variant called Neo² (think dvorak or colemak on steroids) and strongly prefer that.

It’s supported by X11 and after a simple setxkbmap de neo -option everything is fine. On the console however… For FreeBSD there’s a keymap available, but for Illumos-based systems it seems I’m out of luck.

So here’s the plan: Configure the system to let me SSH into it! Then I can use whatever I want on my client PC. Also this scenario touches upon a nice set of areas: System services, users, network… That should make a nice start.

Manpages (pt. 1)

During the first startup, smf(5) was mentioned so it might be a good idea to look that up. So that’s the service management facility. But hey, what’s that? The manpage clearly does not describe any type of configuration file. And actually the category headline is “Standards, Environments, and Macros”! What’s happening here?

smf(5) manpage

First discovery: The manpage sections are different. Way different, actually! Sections like man1, man1b, man1c, man3, man3bsm, man3contract, man3pam, etc… Just take a look. Very unfamiliar but obviously clearly arranged.

The smf manpage is also pretty informative and comprehensive including the valuable see also section. The same is true for other pages that I’ve looked at so far. On the whole this left a good impression on me.

System services

Solaris has replaced the traditional init system with smf and Illumos inherited it. It does more than the old init did, though: Services are now supervised, too. If a service is meant to be kept running, smf can take care of restarting it, should it die. It could be compared to systemd in some regards, but smf was created earlier (and doesn’t have the same … controversial reputation).

Services are described by a Fault Management Resource Identifier or FMRI which looks like svc:/network/loopback:default but can be shortened if it is unambiguous. I had no idea how to work with all this but the first “see also” reference was already helpful: svcs can be used to view service states (and thus get an idea about what the system is doing anyway).

svcs: service status

Another command caught my attention right away, too: svcadm sounded like it might be useful. And indeed this was just what I was searching for! The manpage revealed a really straightforward syntax, so here we go:

# svcadm enable sshd
svcadm: Pattern 'sshd' doesn't match any instances
# svcadm enable ssh

The latter command did not produce any output. And since we’re in Unix territory here, that’s just what you want: it’s the normal case that something worked as intended, no need for output. Issuing svcs and grepping for ssh shows it now, so it should be running. But is SSH really ready now? Let’s check:

# sockstat -4 -l
-bash: sockstat: command not found

Yes, right, this is not FreeBSD. One netstat -tulpen later I know that it’s not exactly like Linux, either. Once more: man to the rescue!

# netstat -af inet -P tcp

The output (see image below) looks good. Let’s do one last check and simply connect to the default SSH port on the VM: Success. SSHd is definitely running and accepting connections.

Testing if the SSH daemon is running
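By the way, instead of grepping through the full svcs listing, the service can also be queried directly; a quick sketch:

```shell
# Query the service by (abbreviated) FMRI; an enabled, running
# instance should be reported with the state "online"
svcs ssh
```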

Alright, so much for very basic service management! It’s just a test box, but I don’t really like sshing into it as root. Time to add a user to the system. How hard could that be?

User management

Ok, there’s no user database like on FreeBSD or anything like that. It’s just plain old /etc/passwd and /etc/shadow. How boring. So let’s just create a user and go on with the next topic, right?

# useradd -m kraileth
UX: useradd: ERROR: Unable to create home directory: Operation not applicable.

Uh oh… What is this? Maybe it’s not going to be so simple after all. And really: Reading through the manpage for useradd, I get a feeling that this topic is everything – but certainly not boring!

There’s a third file, /etc/user_attr, involved in user management. Users and roles can be assigned extended attributes there. Users can have authorizations, roles, profiles and be part of a project. I won’t go into any detail here (it makes sense to actually read the fine manpages yourself if you’re interested). Now if you’re thinking that some of this might be related to what FreeBSD does with login.conf, you’re on the right track. I cannot claim that I understood everything after reading about it just once. But it is sufficient to get an idea of what sort of complex and fine-grained user management Illumos can do!

Content of /etc/user_attr

Manpages (pt. 2)

Ok, while this has been an interesting read and is certainly good to know, it didn’t solve the problem that I had. The useradd manpage even has a section called DIAGNOSTICS where several possible errors are explained – however the one that I’m having isn’t. And that’s a pity, since some of the ones listed here are pretty self-explanatory while “Operation not applicable” isn’t (at least for me).

I read a bit more but didn’t find any clue to what’s going on here. When my man skills failed me I turned to documentation on the net. And what better place to start with than the OmniOSce Wiki?

When it comes to local users, the first sentence (ignoring the missing word) reads: On OmniOS, home is under automounter control, so the directory is not writable.

Ah-ha! That sounds quite a bit like the reason for the mysterious “not applicable”! That should be hint enough so I can go on with manpages, right?

# apropos automounter
/usr/share/man/whatis: No such file or directory
# man -k automounter
/usr/share/man/whatis: No such file or directory

Hm… Looks like the whatis database does not exist! Let’s create it and try again:

# man -w
# apropos automounter
# apropos automount
autofs(4)      - automount configuration properties
automount(1m)  - install automatic mount points
automountd(1m) - autofs mount/unmount daemon

There we go, more pages to read and understand.

Users’ home directories

The automounter is another slightly complex topic (and the fact that the wiki mentions /export/home while useradd seems to default to /home doesn’t help to prevent confusion, either). So I’m going to sum up what I found out so far:

It seems that people at Sun had the idea that it would be nice to be able to work not only on your own workstation but at any other one, too. Managing users locally on each machine would be a nightmare (with people coming and going). Therefore they created the Yellow Pages, later renamed to NIS (Network Information Service). If you have never heard of it, think LDAP (as that has more or less completely replaced NIS today). Thus it was possible to get user information over the net instead of from local passwd and friends.

The logical next step was shared home directories so employees could have fully networked user logins on multiple machines. Sun already had NFS (Network File System) which could be used for the job. But it made sense to accompany it with the automounter. So this is the very short story of why home directories are typically located in /export/home on Solaris-derived operating systems: They were meant to be shared via NFS!

So we have to add a line to /etc/auto_home to let the automounter know to handle our new home directory:

* localhost:/export/home/&

Configuring the automounter for the home directory

Most of the two automounter configuration files are made up of the CDDL (pronounced “cuddle”) license – I’ve left it out here by using tail (see picture). After adding the needed rule (following the /export/home standard even though I don’t plan on using shared home directories), the autofs daemon needs to be restarted, then the user can finally be created:

# mkdir /export/home
# useradd -m -b /export/home kraileth
# passwd kraileth

Creating a new user
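The autofs restart mentioned above is done with svcadm; a sketch (the FMRI may be abbreviated as long as it stays unambiguous):

```shell
# Restart the automounter so it re-reads the new /etc/auto_home entry
svcadm restart system/filesystem/autofs
```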

So with the user present on the system, we should be able to SSH into the local machine with that user:

ssh kraileth@localhost

SSHing into the system locally

Success! Now all that remains is bringing the net up.

What’s next?

The next post will be mostly network related and feature a conclusion of my first impressions. I hope to finish it next week.

A look beyond the BSD teacup: OmniOS installation

Five years ago I wrote a post about taking a look beyond the Linux teacup. I was an Arch Linux user back then and since there were projects like ArchBSD (called PacBSD today) and Arch Hurd, I decided to take a look at and write about them.

Things have changed. Today I’m a happy FreeBSD user, but it’s time again to take a look beyond the teacup of operating systems that I’m familiar with.

Why Illumos / OmniOS?

There are a couple of reasons. The Solaris derivatives are the other big community in the *nix family besides Linux and the BSDs, and we hadn’t met so far. Working with ZFS on FreeBSD, I now and then read messages that contain a reference to Illumos, which certainly helps to keep up the awareness. Of course there has also been a bit of curiosity – what might the OS be like that grew ZFS?

Also the Ravenports project that I participate in planned to support Solaris/Illumos right from the beginning. I wanted to at least be somewhat “prepared” when support for that platform would finally land. So I did a little research on the various derivatives available and settled on the one that I had heard a talk about at last year’s conference of the German Unix Users Group: “OmniOS – Solaris for the Rest of Us”. I would have chosen SmartOS as I admire what Bryan Cantrill does but for getting to know Illumos I prefer a traditional installation over a run-from-RAM system.

There was also a meme about FreeBSD that got me thinking:

Internet Meme: Making fun of FreeBSD

Of course FreeBSD is not run by corporations, especially when compared to the state of Linux. And when it comes to sponsoring, OpenBSD also takes the money… When it comes to FreeBSD developers, there’s probably some truth to the claim that some of them are using macOS as their desktop systems while OpenBSD devs are more likely to develop on their OS of choice. But then there’s the statement that “every innovation in the past decade comes from Solaris”. Bhyve alone proves this wrong. But let’s be honest: Two of the major technologies that make FreeBSD a great platform today – ZFS and DTrace – actually do come from Solaris. PAM originates there and a more modern way of managing services as well. Also you hear good things about their zones and a lot of small utilities in general.

In the end it was a lack of time that made me cheat and go down the easiest road: create a Vagrantfile and just pull a VM image off the net that someone else had prepared… This worked to just make sure that the Raven packages work on OmniOS. I was determined to return, though – someday. You know how things go: “someday” is a pretty common alias for “probably never, actually.”

But then I heard about a forum post on the BSDNow! podcast. The title “Initial OmniOS impressions by a BSD user” caught my attention. I read that it was written by somebody who had used FreeBSD for years but loathed the new Code of Conduct enough to leave. I also oppose the Conduct and have made that pretty clear in my February post [ ! -z ${COC} ] && exit 1. As stated there, I have stayed with my favorite OS and continue to advocate it. I decided to stop reading the post and try things out on my own instead. Now I’ve finally found the time to do so.

First attempt at installing the OS

OmniOS offers images for three branches: Stable, LTS and Bloody. Stable releases are made available twice a year with every fourth release being supported for three years (LTS) instead of one. Bloody images are more or less development snapshots meant for advanced users who want to test the newest features.

I downloaded the latest stable ISO and spun up a VM in Virtual Box. This is how things went:

Familiar Boot Loader

Ah, the good old beastie menu – with some nice ASCII artwork! OmniOS used GRUB before but not too long ago, the FreeBSD loader was ported over to Illumos. A good choice!

Two installers available

It looks like the team has created a new installer. I’m a curious person and want to know what it was like before – so I went with the old text-based installer.

Text installer: Keymap selection

Not much of a surprise: The first thing to do is selecting the right keymap.

ZFS pool creation options

Ok, next it’s time to create the ZFS pool for the operating system to install on. It seems like the Illumos term is rpool (root pool). Since I’m just exploring the OS for the first time, I picked option 1 and… nothing happened! Well, that’s not exactly true, since a message appears for a fraction of a second. If I press 1 again, it blinks up briefly again. Hm!

I kept the key pressed and tried my best to read what it was saying:
/kayak/installer/kayak-menu[254]: /kayak/installer/find-and-install: not found [No such file or directory]

Oops! Looks like there’s something broken on the current install media… So this was a dead-end pretty early on. However since we’re all friends in Open Source, I filed an issue with OmniOS’s kayak installer. A developer responded the next day and the issue was solved. This left a very good impression on me. Quality in development doesn’t show in that you never introduce bugs (which is nearly impossible even for really lame programs) but in how you react to bugs being found. Two thumbs up for OmniOS here (my latest PRs with FreeBSD have been rotting for about a year now)!

Dialog-based installer

What a great opportunity to test the new installer as well! Will it work?

Dialog-based installer: Keymap selection

Back on track with the dialog-based installer. Keymap selection is now done via a simple menu.

ZFS pool creation options

Ok, here we are again! Pool creation time. In the new installer it just does its job…

Disk selection

… and finds the drives, giving me a choice on where to install to. Of course it’s a pretty easy decision to make in case of my VM with just one virtual drive!

ZFS Root Pool Configuration

Next the installer allows for setting a few options regarding the pool. It’s nice to see that UEFI seems to be already supported. For this VM I went with BIOS GPT, though.

Hostname selection

Then the hostname is set. For the impatient (and uncreative) it suggests omniosce. There’s not too much any installer could do to set itself apart (and no need to).

Time zone selection 1

Another important system setting is the time zone. Since there are a lot of time zones, it makes sense to group them by continent instead of providing one large list. This looks pretty familiar from other OS installations.

Time zone selection 2

The next menu allows for selecting the actual time zone.

Time zone confirmation

Ok, a confirmation screen. A chance to review your settings probably doesn’t hurt.

Actual copying of OS data

Alright! Now the actual installation of the files to the pool starts.

Installer: Success!

Just a moment later the installation is finished. Cool, it even created a boot environment on its own! Good to see that they are so tightly integrated into the system.
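Boot environments on Illumos are managed with beadm(1M). A quick sketch of how they are typically used (the BE name below is made up):

```shell
# List the boot environments on the system
beadm list

# Create a new BE (a clone of the current one) before a risky change
beadm create pre-upgrade

# If the change goes wrong: activate the old BE and reboot into it
beadm activate pre-upgrade
```

This is what makes the tight integration so valuable: a failed upgrade is just one activation and reboot away from being undone.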

Last step

Finally it’s time to reboot. The installer is offering to do some basic configuration of the system in case you want to do that.

Basic configuration options

I decided not to do it as you probably learn most when you force yourself to figure out how to configure stuff yourself. Of course I was curious, though, and took a peek at it. If you choose to create a user (just did this in another VM, so I can actually write about the installer), you’ll get to decide if you want to make it an administrative user, whether to give it sudo privileges and if you want to allow passwordless sudo. Nice!

First start: Preparing services

After rebooting the string “Loading unix…” made me smile and I was very curious about what’s to come. On the first boot it takes a bit longer since the service descriptions need to be parsed once. It’s not a terribly long delay, though.

First login on the new system

And there we have it, my first login into an OmniOS system installed myself.

What’s next?

That’s it for part one. In part two I’ll try to make the system useful. So far I have run into a problem that I haven’t been able to solve. But I have some time now to figure things out for the next post. Let’s see if I manage to get it working or if I have to report failure!

Modern-day package requirements

A little rant first: Many thanks to the EU (and all the people who decide on tech-related topics without having any idea of how tech actually works). Their GDPR is the reason I’ve been really occupied with work this month! Email is a topic that I’m teaching myself while writing the series of posts about it, so I’ll have to get back to it as time permits. This means that for May I’m going to write about a topic that I’m more familiar with.

Benefits of package management

I’ve written about package management before, telling a bit about its history and then focusing on how package management is done on FreeBSD. The benefits of package management are so obvious that I’ll content myself with just touching on them briefly:

Package management makes work put into building software re-usable. It helps you to install software and to keep it up to date. It makes it very easy to remove things in a clean manner. And package management provides a trusted source for your software needs. Think about it for just a moment and you’ll come up with more benefits.

Common package management requirements

But let’s take a look at the same topic from a different angle. What do we actually require our package systems to do? What features are necessary? While this may sound like a rather similar question, I assure you that it’s much less boring. Why? Because we’re looking at what we need – and it’s very much possible that the outcome actually is: No, we’re not using the right tool!

Yes, we need package management, obviously. While there’s this strange, overly colorful OS that cannot even get the slashes in directories right, we can easily dismiss that. We’re talking *nix here, anyway!

Ok, ok, there’s OmniOS with its KYSTY policy. That stands for “keep your software to yourself” and is how the users of said OS describe the fact that there are no official packages available for it. While it’s probably safe to assume that the common script kiddies on the web don’t know their way around Solaris, I’m still not entirely convinced that this is an approach to recommend.

Going down that road is a pretty bold move, though. Of course it’s possible to manage your software stack properly. With a lot of machines and a lot of needed programs this will, however, turn into an enormous amount of work (maybe there are companies out there who enjoy paying highly qualified staff to carefully maintain software while others rarely spend more than a couple of minutes per day to keep their stuff up to date).

Also if you’re a genius who uses the method that’s called “It’s all in my head!” in the Linux from Scratch book, I’m not going to argue against it (except that this is eventually going to fail when you have to hand things over to a mere mortal when you’re leaving).

But enough of those really special corner cases. Let’s discuss what we actually require our package systems to provide! And let’s do so not from the perspective of a hobby admin but from a business-oriented one. There are three things that are essential and covered by just about any package system.

Ease of use

One of the major requirements we have today is that package management needs to be easy to use. Yes, building and installing software from source is usually easy enough on *nix today. However, figuring out which configure options to use isn’t. Build one package without some feature and you might notice much later that it’s actually needed after all. Or even find that you compiled something in that gets in the way of something else later! Avoiding this means having to do some planning.

Reading (and understanding!) the output of ./configure --help probably isn’t something you’re going to entrust the newly employed junior admin with. Asking that person to just install MySQL on the new server will probably be ok, though. Especially since package managers will usually handle dependencies, too.

Making use of package management means that somebody else (the package maintainer) has already thought about how the software will be used in most cases. For you this means not having to hire and pay senior admins for work that can be done by a junior in your organization, too.
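To illustrate the difference, here is roughly what the two approaches look like on FreeBSD (the package name is an example; the configure flags are made up):

```shell
# From source: you have to know (and get right) all of this yourself
./configure --help | less          # first figure out the right options
./configure --with-ssl --prefix=/usr/local
make && make install               # plus every dependency, by hand

# With a package manager: one command, dependencies resolved automatically
pkg install mysql57-server
```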

Fast operations

Time is money and while “compiling!” is a perfectly acceptable excuse for a dev, it shouldn’t be for an admin who is asked why the web server still wasn’t deployed on the new system.

Compiling takes time and uses resources. Even if your staff uses terminal multiplexers (which they should), thus being able to compile stuff on various systems at the same time, customers usually want software available when they call – and not two hours later (because the admin was a bit confused by the twenty-something tmux sessions and got stuck on one task while a lot of the other compile jobs had finished ages ago).

Don’t make your customers wait longer than necessary. Most requests can be satisfied with a standard package. No need to delay things where it doesn’t make any sense.

Regular (security) updates

It’s 2018 and you probably want that new browser version that mitigates some of the Spectre vulnerabilities on your staff’s workstations ASAP. And maybe you even have customers that are using Drupal, in which case… Well, you get the point.

While it does make sense to subscribe to security newsletters and keep an eye on new CVEs, it takes a specialist to maintain your own software stack. When you get word of a new CVE for a program that you’re using, that doesn’t necessarily mean the way you built the software makes it vulnerable. And perhaps you have a special use-case where it does, but the vulnerability is not exploitable.

Again this important task is one that others have already done for you if you use packaged software from a popular repository. Of course those people are not perfect either and you may very well decide that you do not trust them. Doing everything yourself because you think you can do better is a perfectly legitimate way of handling things. Chances are however that your company cannot afford a specialist for this task. And in that case you’re definitely better off trusting the package maintainers than carelessly doing things yourself that you don’t have the knowledge for.

Special package management requirements

Some package managers offer special features not found in others. If your organization needs such a feature, this can even mean that a new OS or distribution is chosen for some job because of it. Also, repositories vary greatly in the number of packages they offer, in the software versions they hold and in the frequency of updates.

“Stability” vs. “freshness”

A lot of organizations prefer “stable”, well-tested software versions. In many cases I think of “stable” as a marketing word for “really old”. For certain use-cases I agree that it makes sense to choose a system where not much will change within the next decade. But IMO this is far less often the case than some decision makers may think.

The other extreme is rolling-release systems, which generally adopt the newest software versions after minimal testing. And yes, at one point there was even the “Arch server project” (if I remember the name correctly), which was all about running Arch Linux on a server. In fact this is not as bad an idea as it may seem. There are people who really live Arch and they’ll be able to maintain an Arch server for you. But I think this makes the most sense as a box for your developers who want to play with new versions of the software that you’re using way before it hits your actual dev or even prod servers.

Where possible I definitely favor the “deliver current versions” model. Not even due to the security aspect (patches are backported in the case of “stable” repositories) but because of the newer features. It’s rather annoying if you want to make use of the jumphost ability of OpenSSH (for which a nice new syntax was introduced not too long ago) and then notice you can’t use it because there’s that stupid CentOS box with its old SSH involved!
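The jumphost feature in question is ProxyJump, which arrived with OpenSSH 7.3. A quick sketch (the hostnames are made up):

```shell
# One-off jump through a bastion host from the command line
ssh -J jump.example.com target.internal.example.com

# Or persistently, in ~/.ssh/config:
#   Host target.internal.example.com
#       ProxyJump jump.example.com
```

On clients too old for -J you are stuck with the more verbose ProxyCommand workaround – which is exactly the kind of annoyance the “stable equals old” model brings.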

Number of packages

If you need one or a couple of packages that are not available (or too old) in the package repository of your OS or distribution, chances are that external repos exist or that the upstream project provides packages. That may be ok. However if you find that a lot of the software that you require is not available this may very well be a good reason to think about using a different OS or distribution.

A large number of packages in the repository increases the chance that you’ll get what you need. Still, it can very well happen that certain packages you require (and which are rather costly to maintain yourself) are only available in another repo.

Package auditing

Some package systems allow you to audit the installed packages. If security is very important for your organization, you’ll be happy to have your package tool recommend to “upgrade or deinstall” the installed version of some application because it’s known to be vulnerable.
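FreeBSD’s pkg(8) is one example of this: its audit subcommand checks all installed packages against a vulnerability database:

```shell
# Fetch a fresh copy of the vulnerability database, then
# report every installed package with known vulnerabilities
pkg audit -F
```

Run regularly (e.g. from periodic(8)), this gives you exactly the “upgrade or deinstall” advice described above.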

Flexibility

What if you have special needs on some servers and require support for rarely needed functionality to be compiled into some software? With most package systems you’re out of luck. The best thing that you can do is roll your own customized package using a different name.

The ports tree on *BSD or portage on Gentoo Linux really show their power in this case, allowing you to just build the software easily and with the options that you choose.
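On FreeBSD that workflow looks roughly like this (the port www/nginx is just an example):

```shell
# Enter the port's directory in the ports tree
cd /usr/ports/www/nginx

# Select the compile-time options in a dialog (saved for future rebuilds)
make config

# Build with the chosen options, install, and clean the work directory
make install clean
```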

Heterogeneous environments

So most of the time it makes perfect sense to stick to the standard repository for your OS or distribution. If you have special needs you’d probably consider another one and use the standard repo for that one. But what about heterogeneous environments?

Perhaps your database product only runs on, say, CentOS. You don’t have much choice here. However, a lot of customers want their stuff hosted on Linux but demand newer program versions. So a colleague installed several Ubuntu boxes. And another colleague, a really strange guy, slipped in some FreeBSD storage servers! When the others found out that this was not even Linux and started protesting (because “BSD is dying”), those servers were already running too damn well to be replaced with something that does not have as good ZFS support.

A scenario like that is not too uncommon. If you don’t do anything about it, this might lead to “camps” among the employees; some of them are sure that CentOS is so truly enterprise that it’s the way to go. And of course yum is better than apt-get (and whatever that BSD thing offers – if anything). Some others laugh at that because Ubuntu is clearly superior and using apt-get feels a lot more natural than having to use yum (which is still better than that BSD thing which they refuse to even touch). And then there’s the BSD guy who is happy to have a real OS at his hand rather than “kernel + distro-chosen packages”.

In general, if you are working for a small organization, every admin will have to be able to work with each system that is being used. Proper training for all package systems is probably expensive and thus managers will quite possibly be reluctant to accept more than two of them.

Portability

There’s a little known (in the Linux community) solution to this: Pkgsrc (“package source”). It’s NetBSD’s package management system. But with probably the most important goal of the NetBSD project being portability, it’s portable, too!

Pkgsrc is available for many different platforms. It runs on NetBSD, of course. But it runs on Linux as well as on the other BSDs and on Solaris. It’s even available for commercial UNIX platforms and various exotic platforms.

Due to this very nature, Pkgsrc may be one answer for your packaging needs in heterogeneous environments. It can provide a unified means of package management across multiple platforms. It rids you of the version-jungle headache that comes with using different repositories for different platforms. And it’s free and open source, too!
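Getting started with pkgsrc on a foreign platform means bootstrapping it once; the sketch below follows the defaults from the pkgsrc guide (the branch name and example package are illustrative):

```shell
# Check out a stable quarterly branch of the pkgsrc tree
cd /usr
cvs -q -z2 -d anoncvs@anoncvs.NetBSD.org:/cvsroot checkout -r pkgsrc-2018Q1 -P pkgsrc

# Bootstrap the infrastructure (as root; installs under /usr/pkg by default)
cd /usr/pkgsrc/bootstrap
./bootstrap

# From then on, packages build the same way on every platform
cd /usr/pkgsrc/misc/figlet
bmake install
```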

Is it the only solution out there? No. Is it the best one? That certainly depends on what you are looking for specifically. But it’s definitely something that you should be aware of.

What’s next?

The next post will be about a relatively new alternative to traditional package management systems that tries to deliver all the strong points in one system while avoiding their weaknesses!