Exploring OmniOS in a VM (2/2)

This is the second part of my post about “exploring OmniOS in a VM”. The first post showed my adventures with service and user management on a fresh installation. My initial goal was to be able to SSH into the system: the SSH daemon is now listening and I’ve created an unprivileged user. So the only thing still missing is bringing up the network so that I can connect to the system from outside.

Network interfaces

Networking can be complicated, but I have rather modest requirements here (make use of DHCP) – so that should not be too much of a problem, right? The basics are pretty much the same on all Unices that I’ve come across so far (even if Linux is moving away from ifconfig with its strange ip utility). I was curious to see what the Solaris-derived systems call the NIC – but it probably couldn’t be any worse than enp2s0 or whatever is common with Linux these days…

# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
          inet 127.0.0.1 netmask ff000000
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
          inet6 ::1/128

Huh? Only two entries for lo0 (IPv4 and v6)? Strange! First thought: Could it be that the type of NIC emulated by VirtualBox is not supported? I looked for that info on the net – and the opposite is true: The default VirtualBox NIC (Intel PRO/1000) is supported, the older models that VBox offers aren’t!

Ifconfig output and fragment of the corresponding man page

Obviously it’s time again to dig into manpages. Fortunately there’s a long SEE ALSO section again with ifconfig(1M). And I’ve already learned something so far: *adm commands are a good candidate to be read first. Cfgadm doesn’t seem to be what I’m looking for, but dladm looks promising – the short description reads: “Administer data links”.

I must say that I like the way the parameters are named. No cryptic and hard to remember stuff here that makes you wonder what it actually means. Parameters like “show-phys” and “show-link” immediately give you an idea of what they do. And I’m pretty sure that with a little bit of practice it will work out well to guess parameters that you’ve not come across yet. Nice!

# dladm show-link
LINK         CLASS     MTU    STATE     BRIDGE     OVER
e1000g0      phys      1500   unknown   --         --

Ok, there we have the interface name: e1000g0. I don’t want to create bridges, bond together NICs or anything, so I’m done with this tool. But what to do with that information?

The ifconfig manpage mentions the command ipadm – but it’s kind of hidden in the long text. For whatever reason it’s missing from SEE ALSO! I’d definitely suggest adding it there. More often than not people don’t have the time to really read through a long manpage (or are not as curious about a new system as I am and thus don’t feel like reading more than they have to). But anyway.

Ipadm manpage

# ipadm create-if e1000g0

This should attach the driver and create the actual interface. And yes, now ifconfig can show it:

Interface e1000g0 created

Network connection?

Almost there! Now the interface needs an IP address. Looks like ipadm is the right tool for this job as well. I had some trouble finding out what an interface-id is, though. The manpage obviously assumes the reader is familiar with that term, which I was not. I tried to find out more about it, but was unable to locate any useful info in other manpages. So I resorted to the OmniOS wiki again and that helped (it seems that you can choose almost anything as an interface ID, but there are certain conventions). Ok, let’s try it out and see if it works:

# ipadm create-addr -T dhcp e1000g0/v4

No error or anything, so now the system should have acquired an IPv4 address.
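For the record, the address can also be queried with ipadm itself – show-addr is documented in the same manpage (just the command here, the exact output columns are best looked up yourself):

```
# ipadm show-addr e1000g0/v4
```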

IPv4 address assigned via DHCP

10.0.2.15, great! Let’s see if we can reach the internet:

# ping elderlinux.org
ping: unknown host elderlinux.org

DNS pt. 1

Ok, looks like we don’t have name resolution, yet. Is the right nameserver configured?

# cat /etc/resolv.conf
cat: cannot open /etc/resolv.conf: No such file or directory

Oops. Something is not right here! When I configure my FreeBSD box to use DHCP, it makes sure that resolv.conf is populated properly. The interface e1000g0 got an IP – so the DHCP request must have been successful. But did something break before the network was configured completely? Is the DHCP client daemon even running?

DHCP

# ps aux | grep -i [d]hcp
root       551  0.0  0.1 2836 1640 ?        S 18:52:22  0:00 /sbin/dhcpagent

Hm! I’m used to dhclient, but dhcpagent is unknown to me. According to the manpage it’s the DHCP client daemon, so that must be what OmniOS uses to initiate and renew DHCP leases. And obviously it’s running. However, the manpage also solves the mystery:

Aside from the IP address, and for IPv4 alone, the netmask, broadcast address, and the default router, the agent does not directly configure the workstation, but instead acts as a database which may be interrogated by other programs, and in particular by dhcpinfo(1).

Ah-ha! So I have to configure things like DNS myself. Nevertheless the system should have received the correct nameserver IP. Let’s see if we can actually get it via the dhcpinfo command that I’ve just learned about. One look at the manpage as well as at the DHCP inittab later, I know how to ask for that information:

# dhcpinfo -i e1000g0 DNSserv
192.168.2.1

Right, that’s my nameserver.
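The same mechanism works for the rest of the lease data – the parameter names are exactly the ones listed in /etc/dhcp/inittab, for example Router and Subnet:

```
# dhcpinfo -i e1000g0 Router
# dhcpinfo -i e1000g0 Subnet
```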

Some lines from /etc/dhcp/inittab

Route?

Is the routing information correct? Let’s check.

# netstat -r -f inet
[...]
default      10.0.2.2  [...]
10.0.2.0     10.0.2.15 [...]
localhost    localhost [...]

Looks good and things should work. One more test:

# route get 192.168.2.1
   route to: fw.local
destination: default
       mask: default
    gateway: 10.0.2.2
  interface: e1000g0
[...]

Alright, then it’s probably not a network issue… But what could it be?

DNS pt. 2

Eventually I found another hint in the wiki. Looks like by default OmniOS has a very… well, old-school setting when it comes to sources for name resolution: Only /etc/hosts is used! I haven’t messed with nsswitch.conf for quite a while, but in this case it’s the solution to this little mystery.

# fgrep "hosts:" /etc/nsswitch.conf
hosts:      files

There are a couple of example configurations that can be used, though:

# ls -1 /etc/nsswitch.*
/etc/nsswitch.ad
/etc/nsswitch.conf
/etc/nsswitch.dns
/etc/nsswitch.files
/etc/nsswitch.ldap
/etc/nsswitch.nis

Copying nsswitch.dns over nsswitch.conf should fix that problem.
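Spelled out (keeping a backup copy of the original file is my own addition, not strictly necessary):

```
# cp /etc/nsswitch.conf /etc/nsswitch.conf.orig
# cp /etc/nsswitch.dns /etc/nsswitch.conf
```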

And really: Instead of ping: unknown host elderlinux.org I now get no answer from elderlinux.org – which is OK, since my nameserver doesn’t answer ping requests.

Remote access

Now with the network set up correctly, it’s finally time to connect to the VM remotely. To be able to do so, I configure VirtualBox to forward port 22 of the VM to port 10022 on the host machine. And then it’s time to try to connect – and yes, it works!
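For those who prefer the command line over the VirtualBox GUI: the same forwarding rule can be created with VBoxManage while the VM is powered off (“OmniOS” stands in for whatever your VM is actually called):

```
$ VBoxManage modifyvm "OmniOS" --natpf1 "ssh,tcp,,10022,,22"
$ ssh -p 10022 kraileth@localhost
```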

SSHing into the OmniOS VM… finally!

Conclusion

So much for my first adventure with OmniOS. I had quite a few difficulties to overcome even in this very simple scenario of just SSHing into the VM. But what could have been a trivial task proved to be rather educational. And being a curious person, I actually enjoyed it.

I’ll take a break from the Illumos universe for now, but I definitely plan to visit OmniOS again. Then I’ll do an installation on real hardware and plan to take a look at package management and other areas. Hope you enjoyed reading these articles, too!


Exploring OmniOS in a VM (1/2)

While I’ve been using Unix-like systems for quite a while and heavily for about a decade, I’m completely new to operating systems of the Solaris/Illumos family. So this post might be of interest for other *BSD or Linux users who want to take a peek at an Illumos distribution – or to the Illumos people interested in how accessible their system is for users coming from other *nix operating systems.

In my previous post I wrote about why I wanted to take a look at OmniOS in the first place. I also detailed the installation process. This post is about the first steps that I took on my newly installed OmniOS system. There’s a lot to discover – more actually than I first thought. For that reason I decided to split what was meant to be one post into two. The first part covers service management and creating a user.

Virtualize or not?

According to a comment, a Reddit user would be more interested in an installation on real hardware. There are two reasons why I like to try things out using a hypervisor: 1) it’s a quick thing to do and no spare hardware is required, and 2) it’s pretty easy to create screenshots for an article like this.

OmniOS booted up and root logged in

However I see the point in trying things out on real hardware and I’m always open to suggestions. For that reason I’ll be writing a third post about OmniOS after this one – and will install it on real hardware. I’ve already set a somewhat modern system aside so that I can test if things work with UEFI and so on.

Fresh system: Where to go?

In the default installation I can simply login as root with a blank password. So here we are on a new OmniOS system. What shall we explore first?

The installer asked me which keymap to use and I chose German. While I’m familiar with both the DE and US keymaps enough that I can work with them, I’ve also learned an ergonomic variant called Neo² (think dvorak or colemak on steroids) and strongly prefer that.

It’s supported by X11 and after a simple setxkbmap de neo -option everything is fine. On the console however… For FreeBSD there’s a keymap available, but for Illumos-based systems it seems I’m out of luck.

So here’s the plan: Configure the system to let me SSH into it! Then I can use whatever I want on my client PC. Also this scenario touches upon a nice set of areas: System services, users, network… That should make a nice start.

Manpages (pt. 1)

During the first startup, smf(5) was mentioned so it might be a good idea to look that up. So that’s the service management facility. But hey, what’s that? The manpage clearly does not describe any type of configuration file. And actually the category headline is “Standards, Environments, and Macros”! What’s happening here?

smf(5) manpage

First discovery: The manpage sections are different. Way different, actually! Sections like man1, man1b, man1c, man3, man3bsm, man3contract, man3pam, etc… Just take a look. Very unfamiliar but obviously clearly arranged.

The smf manpage is also pretty informative and comprehensive including the valuable see also section. The same is true for other pages that I’ve looked at so far. On the whole this left a good impression on me.

System services

Solaris has replaced the traditional init system with smf and Illumos inherited it. It does more than the old init did, though: Services are now supervised, too. If a service is meant to be kept running, smf can take care of restarting it, should it die. It could be compared to systemd in some regards, but smf was created earlier (and doesn’t have the same … controversial reputation).

Services are described by a Fault Management Resource Identifier or FMRI which looks like svc:/network/loopback:default but can be shortened if the result is unambiguous. I had no idea how to work with all this but the first “see also” reference was already helpful: svcs can be used to view service states (and thus get an idea of what the system is doing anyway).

svcs: service status

Another command caught my attention right away, too: svcadm sounded like it might be useful. And indeed this was just what I was searching for! The manpage revealed a really straight-forward syntax, so here we go:

# svcadm enable sshd
svcadm: Pattern 'sshd' doesn't match any instances
# svcadm enable ssh

The latter command did not produce any output. And since we’re in Unix territory here, that’s just what you want: no output is the normal case and means that something worked as intended. Issuing svcs and grepping for ssh shows it now, so it should be running. But is SSH really ready? Let’s check:

# sockstat -4 -l
-bash: sockstat: command not found

Yes, right, this is not FreeBSD. One netstat -tulpen later I know that it’s not exactly like Linux, either. Once more: man to the rescue!

# netstat -af inet -P tcp

The output (see image below) looks good. Let’s do one last check and simply connect to the default SSH port on the VM: Success. SSHd is definitely running and accepting connections.

Testing if the SSH daemon is running

Alright, so much for very basic service management! It’s just a test box, but I don’t really like sshing into it as root. Time to add a user to the system. How hard could that be?

User management

Ok, there’s no user database like on FreeBSD or anything like that. It’s just plain old /etc/passwd and /etc/shadow. How boring. So let’s just create a user and go on with the next topic, right?

# useradd -m kraileth
UX: useradd: ERROR: Unable to create home directory: Operation not applicable.

Uh oh… What is this? Maybe it’s not going to be so simple after all. And really: Reading through the manpage for useradd, I get a feeling that this topic is everything – but certainly not boring!

There’s a third file, /etc/user_attr, involved in user management. Users and roles can be assigned extended attributes there. Users can have authorizations, roles, profiles and be part of a project. I won’t go into any detail here (it makes sense to actually read the fine manpages yourself if you’re interested). Now if you’re thinking that some of this might be related to what FreeBSD does with login.conf, you’re on the right track. I cannot claim that I understood everything after reading about it just once. But it is sufficient to get an idea of what sort of complex and fine-grained user management Illumos can do!

Content of /etc/user_attr

Manpages (pt. 2)

Ok, while this has been an interesting read and is certainly good to know, it didn’t solve the problem that I had. The useradd manpage even has a section called DIAGNOSTICS where several possible errors are explained – however, the one that I’m getting isn’t among them. And that’s a pity, since some of the errors listed there are pretty self-explanatory while “Operation not applicable” isn’t (at least not to me).

I read a bit more but didn’t find any clue to what’s going on here. When my man skills failed me I turned to documentation on the net. And what better place to start with than the OmniOSce Wiki?

When it comes to local users, the first sentence (ignoring the missing word) reads: On OmniOS, home is under automounter control, so the directory is not writable.

Ah-ha! That sounds quite a bit like the reason for the mysterious “not applicable”! That should be hint enough so I can go on with manpages, right?

# apropos automounter
/usr/share/man/whatis: No such file or directory
# man -k automounter
/usr/share/man/whatis: No such file or directory

Hm… Looks like the whatis database does not exist! Let’s create it and try again:

# man -w
# apropos automounter
# apropos automount
autofs(4)      - automount configuration properties
automount(1m)  - install automatic mount points
automountd(1m) - autofs mount/unmount daemon

There we go, more pages to read and understand.

Users’ home directories

The automounter is another slightly complex topic (and the fact that the wiki mentions /export/home while useradd seems to default to /home doesn’t help to prevent confusion, either). So I’m going to sum up what I found out so far:

It seems that the people at Sun had the idea that it would be nice to be able to work not only on your own workstation but at any other one, too. Managing users locally on each machine would be a nightmare (with people coming and going). Therefore they created the Yellow Pages, later renamed to NIS (Network Information Service). If you have never heard of it, think LDAP (which has more or less completely replaced NIS today). Thus it was possible to get user information over the net instead of from local passwd and friends.

The logical next step was shared home directories so employees could have fully networked user logins on multiple machines. Sun already had NFS (Network File System) which could be used for the job. But it made sense to accompany it with the automounter. So this is the very short story of why home directories are typically located in /export/home on Solaris-derived operating systems: They were meant to be shared via NFS!

So we have to add a line to /etc/auto_home to let the automounter know to handle our new home directory:

* localhost:/export/home/&

Configuring the automounter for the home directory
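In practice that can be as simple as appending the line to the map (the echo approach is just my shortcut – any editor will do; the single quotes keep the shell away from * and &):

```
# echo '* localhost:/export/home/&' >> /etc/auto_home
```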

Most of each of the two automounter configuration files consists of the CDDL (pronounced “cuddle”) license header – I’ve left it out here by using tail (see picture). After adding the needed rule (following the /export/home convention even though I don’t plan on using shared home directories), the autofs daemon needs to be restarted; then the user can finally be created:

# mkdir /export/home
# svcadm restart autofs
# useradd -m -b /export/home kraileth
# passwd kraileth

Creating a new user

So with the user present on the system, we should be able to SSH into the local machine with that user:

# ssh kraileth@localhost

SSHing into the system locally

Success! Now all that remains is bringing the net up.

What’s next?

The next post will be mostly network related and feature a conclusion of my first impressions. I hope to finish it next week.

A look beyond the BSD teacup: OmniOS installation

Five years ago I wrote a post about taking a look beyond the Linux teacup. I was an Arch Linux user back then and since there were projects like ArchBSD (called PacBSD today) and Arch Hurd, I decided to take a look at and write about them.

Things have changed. Today I’m a happy FreeBSD user, but it’s time again to take a look beyond the teacup of operating systems that I’m familiar with.

Why Illumos / OmniOS?

There are a couple of reasons. The Solaris derivatives are the other big community in the *nix family besides Linux and the BSDs, and we hadn’t met so far. Working with ZFS on FreeBSD, now and then I read messages that contain a reference to Illumos, which certainly helps to keep up the awareness. Of course there has also been a bit of curiosity – what might the OS be like that grew ZFS?

Also the Ravenports project that I participate in planned to support Solaris/Illumos right from the beginning. I wanted to at least be somewhat “prepared” when support for that platform would finally land. So I did a little research on the various derivatives available and settled on the one that I had heard a talk about at last year’s conference of the German Unix Users Group: “OmniOS – Solaris for the Rest of Us”. I would have chosen SmartOS as I admire what Bryan Cantrill does but for getting to know Illumos I prefer a traditional installation over a run-from-RAM system.

There was also a meme about FreeBSD that got me thinking:

Internet Meme: Making fun of FreeBSD

Of course FreeBSD is not run by corporations, especially when compared to the state of Linux. And when it comes to sponsoring, OpenBSD also takes the money… When it comes to FreeBSD developers, there’s probably some truth to the claim that some of them are using macOS as their desktop systems while OpenBSD devs are more likely to develop on their OS of choice. But then there’s the statement that “every innovation in the past decade comes from Solaris”. Bhyve alone proves this wrong. But let’s be honest: Two of the major technologies that make FreeBSD a great platform today – ZFS and DTrace – actually do come from Solaris. PAM originates there and a more modern way of managing services as well. Also you hear good things about their zones and a lot of small utilities in general.

In the end it was a lack of time that made me cheat and go down the easiest road: Create a Vagrantfile and just pull a VM image off the net that someone else had prepared… This worked well enough to make sure that the Raven packages work on OmniOS. I was determined to return, though – someday. You know how things go: “someday” is a pretty common alias for “probably never, actually.”

But then I heard about a forum post on the BSDNow! podcast. The title “Initial OmniOS impressions by a BSD user” caught my attention. I read that it was written by somebody who had used FreeBSD for years but loathed the new Code of Conduct enough to leave. I also oppose the Conduct and have made that pretty clear in my February post [ ! -z ${COC} ] && exit 1. As stated there, I have stayed with my favorite OS and continue to advocate it. I decided to stop reading the post and try things out on my own instead. Now I’ve finally found the time to do so.

First attempt at installing the OS

OmniOS offers images for three branches: Stable, LTS and Bloody. Stable releases are made available twice a year with every fourth release being supported for three years (LTS) instead of one. Bloody images are more or less development snapshots meant for advanced users who want to test the newest features.

I downloaded the latest stable ISO and spun up a VM in VirtualBox. This is how things went:

Familiar Boot Loader

Ah, the good old beastie menu – with some nice ASCII artwork! OmniOS used GRUB before but not too long ago, the FreeBSD loader was ported over to Illumos. A good choice!

Two installers available

It looks like the team has created a new installer. I’m a curious person and want to know what it was like before – so I went with the old text-based installer.

Text installer: Keymap selection

Not much of a surprise: The first thing to do is selecting the right keymap.

ZFS pool creation options

Ok, next it’s time to create the ZFS pool for the operating system to install on. The Illumos term is rpool – that’s “root pool” (not “resource pool” as I first guessed). Since I’m just exploring the OS for the first time, I picked option 1 and… Nothing happened! Well, that’s not exactly true, since a message appears for a fraction of a second. If I press 1 again, it blinks up briefly again. Hm!

I kept the key pressed and tried my best to read what it was saying:
/kayak/installer/kayak-menu[254]: /kayak/installer/find-and-install: not found [No such file or directory]

Oops! Looks like there’s something broken on the current install media… So this was a dead-end pretty early on. However since we’re all friends in Open Source, I filed an issue with OmniOS’s kayak installer. A developer responded the next day and the issue was solved. This left a very good impression on me. Quality in development doesn’t show in that you never introduce bugs (which is nearly impossible even for really lame programs) but in how you react to bugs being found. Two thumbs up for OmniOS here (my latest PRs with FreeBSD have been rotting for about a year now)!

Dialog-based installer

What a great opportunity to test the new installer as well! Will it work?

Dialog-based installer: Keymap selection

Back on track with the dialog-based installer. Keymap selection is now done via a simple menu.

ZFS pool creation options

Ok, here we are again! Pool creation time. In the new installer it just does its job…

Disk selection

… and finds the drives, giving me a choice on where to install to. Of course it’s a pretty easy decision to make in case of my VM with just one virtual drive!

ZFS Root Pool Configuration

Next the installer allows for setting a few options regarding the pool. It’s nice to see that UEFI seems to be already supported. For this VM I went with BIOS GPT, though.

Hostname selection

Then the hostname is set. For the impatient (and uncreative) it suggests omniosce. There’s not too much any installer could do to set itself apart (and no need to).

Time zone selection 1

Another important system configuration is time zone settings. Since there’s a lot of time zones, it makes sense to group them together by continent instead of providing one large list. This looks pretty familiar from other OS installations.

Time zone selection 2

The next menu allows for selecting the actual time zone.

Time zone confirmation

Ok, a confirmation screen. A chance to review your settings probably doesn’t hurt.

Actual copying of OS data

Alright! Now the actual installation of the files to the pool starts.

Installer: Success!

Just a moment later the installation is finished. Cool, it even created a boot environment on its own! Good to see that they are so tightly integrated into the system.

Last step

Finally it’s time to reboot. The installer is offering to do some basic configuration of the system in case you want to do that.

Basic configuration options

I decided not to do it as you probably learn most when you force yourself to figure out how to configure stuff yourself. Of course I was curious, though, and took a peek at it. If you choose to create a user (just did this in another VM, so I can actually write about the installer), you’ll get to decide if you want to make it an administrative user, whether to give it sudo privileges and if you want to allow passwordless sudo. Nice!

First start: Preparing services

After rebooting the string “Loading unix…” made me smile and I was very curious about what’s to come. On the first boot it takes a bit longer since the service descriptions need to be parsed once. It’s not a terribly long delay, though.

First login on the new system

And there we have it, my first login into an OmniOS system installed myself.

What’s next?

That’s it for part one. In part two I’ll try to make the system useful. So far I have run into a problem that I haven’t been able to solve. But I have some time now to figure things out for the next post. Let’s see if I manage to get it working or if I have to report failure!

Ravenports: A modern, cross-platform package solution

This post is about Ravenports, a universal package system and building framework for *nix systems (DragonFly BSD, FreeBSD, Linux and Solaris at the time of this writing). It’s a relatively young project that began in late February 2017 after a longer period of careful planning. The idea is to provide a unified, convenient experience in a cross-platform way while putting focus on performance, scalability and modern tooling.

What exactly is it and why should you care? If you’ve read my previous post, you know that I consider the old package systems lacking in several ways. For me Raven already does a great job at solving some problems existing with other systems – and it’s still far from tapping its full potential.

Rationale

A lot of people will think now: “We already have quite capable package systems. What’s the point in doing it again?” Yes, in many regards it’s “re-inventing the wheel”… And rightfully so! Most of the known package systems are pretty old now and while new features were of course added, this is sometimes problematic. There is a point where it’s an advantage to start fresh and incorporate modern ideas right from the start. Being able to benefit from the experience and knowledge gained by using the other systems for two decades when designing a new system is invaluable.

Ravenadm running on FreeBSD, OmniOS, Ubuntu Linux and DragonflyBSD

Ravenports was designed, implemented and is primarily maintained by a veteran in packaging software: John Marino, who at one time maintained literally thousands of ports for FreeBSD and DragonFly BSD. In addition to that, he wrote an alternative build tool called Synth. Aiming for higher portability, he modified Synth to work with Pkgsrc (which is available for many platforms) and also ported the modern Pkg package manager from FreeBSD to work with it.

In the end he had too many ideas about what could be improved in package building that would not fit into any existing project. Eventually Ravenports was born when he decided to give it a try and create a new framework with the powerful capabilities that he wanted to have and without the known weaknesses of the existing ones.

How does it compare to xyz?

It probably makes sense to get to know Ravenports by comparison to others. Let’s take a look at some of them first:

1) FreeBSD’s ports system is the oldest such framework. It’s quite easy to use today, very flexible and since the introduction of Pkg (or “pkg-ng”) it also has a really nice package manager.
2) NetBSD adopted the ports system and developed it according to their own needs. It’s missing some of the newer features that FreeBSD added later but has gained support for an incredible amount of operating systems. Unfortunately it still uses the old pkg_* tools that really show their age now.
3) OpenBSD also adopted the early FreeBSD ports system. They took a different path and added other features. OpenBSD put the focus on avoiding users having to compile their own packages. To do so, they added so-called package flavors. This allows for building packages multiple times with different compile-time options set. Their package tools were re-written in Perl and do what they are meant to. But IMO they don’t compare well to a modern package manager.
4) Gentoo Linux with its portage system has taken flexibility to the extreme. It gives you fine-grained control over exactly how to build your software and really shines in that. The logical consequence is that, while it supports binary packages, this support is rudimentary in comparison.

EDE desktop, pekwm with Menda theme and brand-new LibreOffice

FreeBSD gained support for flavors in December 2017 and NetBSD did some work to support subpackages in a GSoC project in the same year. It’s hard to retrofit major new features into an existing framework, though. When Ravenports started in the beginning of 2017, it already had those two features: variant packages (Raven’s name for flavors) and subpackages. As a result they feel completely natural and fit well into the whole framework (which is why they are used extensively).

Ravenports knows port options that can be set before building a package. Like with NetBSD or OpenBSD, there are generally fewer options available compared to FreeBSD. This is because Raven is geared more towards building binary packages than towards being a ports framework to build on the target machine (which would defeat the goal of always providing a clean build environment). For that reason the options mostly exist to support the variants for the packages. Compared to NetBSD’s Pkgsrc, Ravenports supports far fewer operating systems right now but has a much easier (binary!) bootstrap process for all supported platforms. It also offers a much superior package manager. Compared to FreeBSD, OpenBSD and Gentoo, Ravenports is much more portable and supports multiple operating systems and – with the exception of FreeBSD – comes with a more modern package manager for binary packages.

Strong points

As Ravenports is not tied to a single operating system, it didn’t have to take into account specific needs that are for one OS only. In general there are no second-class citizens among the supported platforms. Also it was made to be agnostic of the package manager used. Right now it’s using Pkg only but other formats could be supported and thus binary packages be installed via pacman, rpm, dpkg, you-name-it.

Repology: Raven’s package freshness in percent (06/25/2018)

It allows for different versions of some software to be concurrently installed. If you e.g. want PHP 7.2 while some of your projects are stuck with 5.6 this is not a problem. It’s also possible to define a default version for databases like MySQL and Postgres as well as languages like Perl, Python and Ruby. Speaking of MySQL: Raven knows about Oracle MySQL, MariaDB, Percona and Galera. Only the first one is currently available (the ports for the others are missing) but the selection of which product to install is already present and the others can be easily added as needed.

If you build packages yourself you’ll notice that the whole tooling is fully integrated. Everything was planned right from the beginning to interact well and thus plays together just great. Performance is another area where Raven shines: thanks to being programmed for high concurrency, operations like port scans are amazingly fast (especially if you’re used to other frameworks).

Repology: Raven’s outdated package count (06/25/2018)

Raven follows a rolling-release model with extremely current package versions. In Repology, a fine tool for package maintainers and people interested in package statistics, Ravenports is the clear leader when it comes to freshness of the package repository: It rarely falls below 98% of freshness (while no other repo has managed to even reach 90% – and Repology lists almost 200 repositories!). If it does, it’s usually for less than a day until updates get pushed.

This is only possible because much of the ports maintenance is properly automated. This saves a lot of work and allows for keeping software versions current without the need for dozens of maintainers. Custom port collections are supported if you have special needs like sticking to specific program versions. This way Raven can e.g. support legacy versions that should not be part of the main tree. It might also be interesting for companies that want to package their product for multiple platforms but need to keep the source closed. Ravenports supports private GitHub repositories for cases like this. All components of the project itself are completely open-source, though, and permissively licensed.

Also Raven is not the jealous kind of application. Packages are installed into /raven by default (you can choose to build your packages with a different prefix if you wish) and thus probably separate from the default system location for software. This makes it possible to use raven in addition to your operating system’s / distribution’s package manager instead of being forced to replace it.

Shortcomings

If you ask me about permanent problems with Raven: I don’t really see any. However, there’s definitely a couple of areas where it’s currently way behind other package systems. Considering how young the project is, that’s hardly surprising.

It’s a “needs more everything” situation. In fact it has the usual chicken-and-egg problem: More available ports would be nice and potentially attract more users. With more users probably more people would become porters. And with more porters there’d surely be more ports available… But every new project faces problems like this and with resolve, dedication and perseverance as well as a fair amount of work, it’s possible to achieve the goal of making a project both useful and appealing enough for others to join in. Once that happens things get easier and easier.

KeePassXC, Geany and the EDE application menu

The Ravenports catalog has over 3,000 entries right now. It’s extremely hard to compare things like package counts, though. John provided an example: FreeBSD has 8 ports for each PostgreSQL version. With 5 supported versions that’s 40 ports. Ravenports has 5 ports with 8 subpackages each. In this case the package count is comparable, but not the port count. Taking flavors and multiversions into account, all repositories look much bigger than they actually are in terms of available software. And how do you measure the quality of packages? What about ports that are used by less than a handful of people? What about those that are extremely outdated? Do you think they should count? It’s probably best to take a look and see if the software that you need is available. It is true, though, that many important packages are of course still missing. IMO the most important one is Rust – which is not only needed for current versions of Firefox but increasingly important for building other software, too.
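John’s PostgreSQL example can be put into quick arithmetic (the counts of 5 versions and 8 components come straight from the example above):

```python
versions = 5    # supported PostgreSQL versions
components = 8  # server, client, docs, contrib, ...

# FreeBSD: one port (and thus one package) per version/component pair.
freebsd_ports = versions * components
freebsd_packages = freebsd_ports

# Ravenports: one port per version, split into subpackages at build time.
raven_ports = versions
raven_packages = raven_ports * components

print(freebsd_ports, freebsd_packages)  # 40 40
print(raven_ports, raven_packages)      # 5 40
```

Same number of installable packages, but an eight-fold difference in port count – which is why raw port counts say little about how much software a repository really offers.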

Also Linux support is not perfect, yet, and Solaris support even less so. On Solaris systems Raven is currently mostly binary-only because the Solaris kernel is unable to work with system libraries other than the ones matching exactly in version. Packages built on older releases of the OS work fine on newer ones, but for each OS release, a specific build environment would need to be created before building packages is possible. This is an issue that needs to be resolved in the future (I guess some help from the Illumos/Solaris community wouldn’t hurt). Also there are packages that don’t build on Solaris without patches which are not currently available. In case of important packages this leads to blockers since all other ports which depend on one such package also cannot be built: On FreeBSD there are 3,559 packages (including variants and metapackages) available from the repository at the present time. In the Solaris repo it’s only 2,851 packages. That’s certainly a nice start – but don’t expect to run a full-fledged desktop (or even X11 at all) there, yet!

In Linux land, distributions that come with glibc version 2.23 or newer work best. On distributions with older glibc versions (e.g. CentOS 7), software will not run as the standard C library is missing some required symbols. Raven will need to be bootstrapped again to support those distros. This is likely to happen before too long, but we’re not there, yet.

Current Firefox ESR version (+ sakura and pcmanfm in the panel)

MacOS (which might be supported soon), OpenBSD and NetBSD are not currently supported, nor is Linux with musl-libc or μclibc. Also, Raven is currently amd64-only. ARM64 support is planned and i386 might or might not happen, but neither is available now.

Current status

At this time Raven is probably most interesting for people who love tech and enjoy tinkering on *nix systems as well as those who like the features and are ok with being early adopters. Yes, in general it’s ready for the latter. At least two people (including me) use Raven’s packages exclusively on one of their machines. I’d say it is ready as a daily driver (if you can live with the limited set of software available – or consider adding more ports). In fact I set up a laptop with it that I use e.g. for on-call duty. Since that one is critical, it probably needs to be considered “in production use”.

It’s possible to install various text mode applications with Raven, but X11 is also available. You can choose from multiple window managers or from at least two desktop environments (Lumina and the ultra-light EDE). Xfce4 is partially available (i.e. the panel is already ported). If you’re looking for web browsers, a current version of Firefox ESR (called “rustless-firefox”) can be installed as well as Surf, a simple webkit-based browser. The LibreOffice suite is available in its latest version, too. The same is true for the just released Perl 5.28 and Python 3.7.

Running Chocolate DooM and Chocolate Heretic

Oh, and if you’re into gaming… It’s not all just serious stuff. Yes, you can install and play DooM!

Conclusion

Ravenports is a fascinating project with lots and lots of possibilities. I had wanted to get into porting with FreeBSD for quite a while but hesitated as I’m not a programmer. Then again I had been interested in package building for a long time and had played around with it on Arch Linux quite a bit. After my submissions to FreeBSD had been rotting in the bug tracker for months (and still are after almost a year), I chose to give Raven a try in the meantime.

I was already familiar with Pkg and had used Synth before, too. Bootstrapping Raven’s pkg and then installing stuff was as easy as expected. The same was true for building the ports myself. Then I did quite a bit of reading and wrote my first port. It didn’t take more than 5 minutes after I opened my pull request on GitHub before John responded – and the port was committed not much later. This was such a huge contrast that I decided to do more with Raven.

There was a learning curve, yes, but I received lots of help in getting started. I obviously liked the project enough to become a regular contributor and even got commit access to the ravensource repo later. Currently I’m maintaining just over 80 ports and I hope to write many more in the future. There have been some hard ports along the way (where I learned a lot about *nix), but lots of things are actually pretty easy once you get the hang of it.

Tongue-in-cheek: Make chaos or “make sense”!

If this post got you interested, just give it a try. Feel free to comment here and if you run into problems I’ll try to help. After this general overview of Raven the next post I plan to write will be on actually using it.

Modern-day package requirements

A little rant first: Many thanks to the EU (and all the people who decide on topics related to tech without having any idea of how tech stuff actually works). Their GDPR is the reason I’ve been swamped with work this month! Since email is a topic that I’m teaching myself while writing the series of posts about it, I’ll have to get back to it as time permits. This means that for May I’m going to write about a topic that I’m more familiar with.

Benefits of package management

I’ve written about package management before, telling a bit about the history of it and then focusing on how package management is done on FreeBSD. The benefits of package management are so obvious that I don’t see any reason not to content myself with just touching on them:

Package management makes work put into building software re-usable. It helps you to install software and to keep it up to date. It makes it very easy to remove things in a clean manner. And package management provides a trusted source for your software needs. Think about it for just a moment and you’ll come up with more benefits.

Common package management requirements

But let’s take a look at the same topic from a different angle. What do we actually require our package systems to do? What features are necessary? While this may sound like a rather similar question, I assure you that it’s much less boring. Why? Because we’re looking at what we need – and it’s very much possible that the outcome actually is: No, we’re not using the right tool!

Yes, we need package management, obviously. While there’s this strange, overly colorful OS that cannot even get the slashes in directories right, we can easily dismiss that. We’re talking *nix here, anyway!

Ok, ok, there’s OmniOS with its KYSTY policy. That stands for “keep your software to yourself” and is how the users of said OS describe the fact that there are no official packages available for it. While it’s probably safe to assume that the common kiddies on the web don’t know their way around on Solaris, I’m still not entirely convinced that this is an approach to recommend.

Going down that road is a pretty bold move, though. Of course it’s possible to manage your software stack properly. With a lot of machines and a lot of needed programs this will however turn into an abundance of work (maybe there are companies out there who enjoy paying highly qualified staff to carefully maintain software while others rarely spend more than a couple of minutes per day to keep their stuff up-to-date).

Also if you’re a genius who uses the method that’s called “It’s all in my head!” in the Linux from Scratch book, I’m not going to argue against it (except that this is eventually going to fail when you have to hand things over to a mere mortal when you’re leaving).

But enough of those really special corner cases. Let’s discuss what we actually require our package systems to provide! And let’s do so from the perspective not of a hobby admin but from a business-orientated one. There are three things that are essential and covered by just about any package system.

Ease of use

One of the major requirements we have today is that package management needs to be easy to use. Yes, building and installing software from source is usually easy enough on *nix today. However figuring out which configure options to use isn’t. Build one package without some feature and you might notice much later that it’s actually needed after all. Or even find that you compiled something in that’s getting in the way of something else later! Avoiding this means having to do some planning.

Reading (and understanding!) the output of ./configure --help probably isn’t something you’re going to entrust the newly employed junior admin with. Asking that person to just install mysql on the new server will probably be ok, though. Especially since package managers will usually handle dependencies, too.

Making use of package management means that somebody else (the package maintainer) has already thought about how the software will be used in most cases. For you this means not having to hire and pay senior admins for work that can be done by a junior in your organization, too.

Fast operations

Time is money and while “compiling!” is a perfectly acceptable excuse for a dev, it shouldn’t be for an admin who is asked why the web server still wasn’t deployed on the new system.

Compiling takes time and uses resources. Even if your staff uses terminal multiplexers (which they should), thus being able to compile stuff on various systems at the same time, customers usually want software available when they call – and not two hours later (because the admin was a bit confused with the twenty-something tmux sessions and got stuck with one task while a lot of the other compile jobs have been finished ages ago).

Don’t make your customers wait longer than necessary. Most requests can be satisfied with a standard package. No need to delay things where it doesn’t make any sense.

Regular (security) updates

It’s 2018 and you probably want that new browser version that mitigates some of the Spectre vulnerabilities on your staff’s workstations ASAP. And maybe you even have customers that are using Drupal, in which case… Well, you get the point.

While it does make sense to subscribe to security newsletters and keep an eye on new CVEs, it takes a specialist to maintain your own software stack. When you get word of a new CVE for a program that you’re using, that doesn’t necessarily mean the way you built the software makes it vulnerable. And perhaps you have a special use-case where it does, but the vulnerability is not exploitable.

Again this important task is one that others have already done for you if you use packaged software from a popular repository. Of course those people are not perfect either and you may very well decide that you do not trust them. Doing everything yourself because you think you can do better is a perfectly legitimate way of handling things. Chances are however that your company cannot afford a specialist for this task. And in that case you’re definitely better off trusting the package maintainers than carelessly doing things yourself that you don’t have the knowledge for.

Special package management requirements

Some package managers offer special features not found in other ones. If your organization needs such a feature this can even mean that a new OS or distribution is chosen for some job because of it. Also, repositories vary greatly in the amount of software they offer, in the software versions that they hold and in the frequency of updates taking place.

“Stability” vs. “freshness”

A lot of organizations prefer “stable”, well-tested software versions. In many cases I think of “stable” as a marketing word for “really old”. For certain use-cases I agree that it makes sense to choose a system where not much will change within the next decade. But IMO this is far less often the case than some decision makers may think.

The other extreme is rolling-release systems which generally adopt the newest software versions after minimal testing. And yes, at one point there was even the “Arch server project” (if I remember the name correctly), which was all about running Arch Linux on a server. In fact this is not as bad an idea as it may seem. There are people who really live Arch and they’ll be able to maintain an Arch server for you. But I think this makes most sense as a box for your developers who want to play with new versions of the software that you’re using way before it hits your actual dev or even prod servers.

Where possible I definitely favor the “deliver current versions” model. Not even due to the security aspect (patches are being backported in the case of “stable” repositories) but because of the newer features. It’s rather annoying if you want to make use of the jumphost ability of OpenSSH (for which a nice new way was introduced not too long ago) and then notice you can’t use it because there’s that stupid CentOS box with its old SSH involved!
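For reference, the jumphost feature in question is the ProxyJump option (and its -J command-line flag), available since OpenSSH 7.3. A minimal client-side config could look like this – the hostnames are of course placeholders:

```
# ~/.ssh/config (requires OpenSSH 7.3 or newer on the client)
Host internal-db
    HostName 10.0.0.5
    ProxyJump admin@jumphost.example.com
```

With that in place, a plain `ssh internal-db` transparently hops over the jumphost – unless, as lamented above, an old SSH version is involved somewhere along the way.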

Number of packages

If you need one or a couple of packages that are not available (or too old) in the package repository of your OS or distribution, chances are that external repos exist or that the upstream project provides packages. That may be ok. However if you find that a lot of the software that you require is not available this may very well be a good reason to think about using a different OS or distribution.

A large number of packages in the repository increases the chance that you may get what you need. Still it can very well be the case that certain packages you require (and which are rather costly to maintain yourself) are only available from another repo.

Package auditing

Some package systems allow you to audit the installed packages. If security is very important for your organization, you’ll be happy to have your package tool recommend to “upgrade or deinstall” the installed version of some application because it’s known to be vulnerable.

Flexibility

What if you have special needs on some servers and require support for rarely needed functionality to be compiled into some software? With most package systems you’re out of luck. The best thing that you can do is roll your own customized package using a different name.

The ports tree on *BSD or portage on Gentoo Linux really show their power in this case, allowing you to just build the software easily and with the options that you choose.

Heterogeneous environments

So most of the time it makes perfect sense to stick to the standard repository for your OS or distribution. If you have special needs you’d probably consider another one and use the standard repo for that one. But what about heterogeneous environments?

Perhaps your database product only runs on, say, CentOS. You don’t have much choice here. However a lot of customers want their stuff hosted on Linux but they demand newer program versions. So a colleague installed several Ubuntu boxes. And another colleague, a really strange guy, slipped in some FreeBSD storage servers! When the others found out that this was not even Linux and started protesting (because “BSD is dying”), the machines were already running too damn well to be replaced with something that does not have as good ZFS support.

A scenario like that is not too uncommon. If you don’t do anything about it, this might lead to “camps” among the employees; some of them are sure that CentOS is so truly enterprise that it’s the way to go. And of course yum is better than apt-get (and whatever that BSD thing offers – if anything). Some others laugh at that because Ubuntu is clearly superior and using apt-get feels a lot more natural than having to use yum (which is still better than that BSD thing which they refuse to even touch). And then there’s the BSD guy who is happy to have a real OS at his hand rather than “kernel + distro-chosen packages”.

In general, if you are working for a small organization, every admin will have to be able to work with each system that is being used. Proper training for all package systems is probably expensive and thus managers will quite possibly be reluctant to accept more than two package systems.

Portability

There’s a little known (in the Linux community) solution to this: Pkgsrc (“package source”). It’s NetBSD’s package management system. But with probably the most important goal of the NetBSD project being portability, it’s portable, too!

Pkgsrc is available for many different platforms. It runs on NetBSD, of course. But it runs on Linux as well as on the other BSDs and on Solaris. It’s even available for commercial UNIX platforms and various exotic platforms.

Because of this very nature, Pkgsrc may be one answer for your packaging needs in heterogeneous environments. It can provide a unified means of package management across multiple platforms. It rids you of the headache of the version jungle if you use different repositories for different platforms. And it’s free and open source, too!

Is it the only solution out there? No. Is it the best one? That certainly depends on what you are looking for specifically. But it’s definitely something that you should be aware of.

What’s next?

The next post will be about a relatively new alternative to traditional package management systems that tries to deliver all the strong points in one system while avoiding their weaknesses!

Introduction to email (pt. 2): Mail dialog / the “mail” command

The first post of the series discussed some fundamental general knowledge about email (daemons involved, protocols, etc.). It also covered building a test system for actively trying out mail-related things. (I had to update it, however, since I discovered some problems with building the VM.)

I assume that you’ve built the test system with Vagrant to follow along. If you haven’t, please refer back to the previous post to learn how to do this. You’re free to use any FreeBSD system, of course. Using Vagrant has a few advantages, though. The most important one is that it allows you to save the state so you can continue to play with your VM while still being able to return to the clean state anytime to follow the next parts of this series. This post will demonstrate how to send mail from the console and point out a few important things involved.

Sending mail with… “mail”

Change to the appropriate directory where you keep your Vagrant file for the mail-vm. If you’ve tinkered with the VM after the first post, reset it to the last state (if it hasn’t changed, you may issue vagrant up instead of restoring the snapshot) and enter the VM:

% cd ~/vagrant/mail-vm
% vagrant snapshot restore mail1
% vagrant ssh

Thanks to the changes we made to the config in the previous post, you should be directly logged in as root. Let’s see if root has any mail. FreeBSD’s base system includes the mail utility, a very basic MUA able to compose and view messages. Execute it without any parameters for now:

# mail

No mail for root

Ok, so root does not currently have any messages. On an average FreeBSD system there’d probably be mail there, as by default the system reports in with e.g. the daily security run output. But on this system there’s nothing so far. So let’s send a message now! We can also use mail for that. With the -s parameter we can specify the subject, and of course we need to tell it who we’re sending the message to! Once we’ve done that, the program will let us type in the actual message. To indicate that we’re done, we place a single period (.) on a line all by itself and hit the return key:

# mail -s "Test mail 1" root
This is a test message!
.

EOT

Mail acknowledged the action by printing EOT (end of text). Congratulations, you’ve just sent an email from root to root! And yes, this is the same kind of email that you know from writing to other people, only done locally in this case.
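What mail actually handed over is just plain text: a few header lines, a blank line, then the body. A quick sketch with Python’s standard email module builds the equivalent message (the address mirrors the local one used above):

```python
from email.message import EmailMessage

# Compose the equivalent of: mail -s "Test mail 1" root
msg = EmailMessage()
msg["From"] = "root@mail-vm.local"
msg["To"] = "root@mail-vm.local"
msg["Subject"] = "Test mail 1"
msg.set_content("This is a test message!")

# Printing the message shows its raw form: headers, a blank line, the body.
print(msg)
```

Run it and you’ll see that an email message really is nothing more mysterious than structured text.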

Hard to believe? Let’s do it again and make use of something else that you know from email: Sending a carbon copy (cc) to another user:

# mail -s "Test mail 2" root -c vagrant
This is another test message!
.

EOT

All done! But did it actually do what we wanted it to? Let’s become the vagrant user and check our mail real quick:

# su -l vagrant
% mail

>N 1 root@mail-vm.local Fri Apr 20 20:56 19/719 “Test mail 2”

Nice: There it is! Mail obviously found a message with the subject “Test mail 2” that was sent by root@mail-vm.local. Looks good so far. Let’s quit the mail utility by pressing CTRL-D or issuing the command x.

Sending and checking mail with – mail

The mail dialog

What’s next? How about sending an email message back to root – and this time tell mail to be verbose?

% mail -v -s "Test mail 3" root
And yet another!
.

EOT

This time our mail utility shows what’s usually happening in the background:

The mail dialog! This is basically the conversation between the MUA and the MTA that makes mail delivery happen.

Beginning of a mail dialog

Let’s take a closer look at some snippets:

[…]
root… Connecting to [127.0.0.1] via relay…
220 mail-vm ESMTP Sendmail 8.15.2/8.15.2; Fri, 20 Apr 2018 20:59:15 +0200 (CEST)
>>> EHLO mail-vm.local
250-mail-vm.local Hello localhost [127.0.0.1], pleased to meet you

[…]

Here we can see the beginning of the dialog: our MUA (mail) connected to the MTA (Sendmail in this case), said “hello” (or rather EHLO, according to the protocol rules) and was greeted by the MTA, too. We’ll skip the next bits; client and server agree on various parameters to upgrade their connection to use encryption instead of plain text. While encryption is definitely an important topic when it comes to mail, it also makes things a fair bit more complicated, so we’ll ignore it for now.

>>> MAIL From:<vagrant@mail-vm.local> SIZE=48
250 2.1.0 <vagrant@mail-vm.local>... Sender ok
>>> RCPT To:<root@mail-vm.local>
>>> DATA
250 2.1.5 <root@mail-vm.local>... Recipient ok
354 Enter mail, end with "." on a line by itself
>>> .

The MUA announces the sender and the MTA acknowledges it. Then the MUA tells the MTA the recipient as well as the actual message and the latter acknowledges it again.

250 2.0.0 w3KIxFbt000881 Message accepted for delivery
root… Sent (w3KIxFbt000881 Message accepted for delivery)
Closing connection to [127.0.0.1]
>>> QUIT
221 2.0.0 mail-vm.local closing connection

Finally the MTA tells the MUA that it accepted the message and will take care of delivering it. And that concludes the mail sending action from the perspective of the MUA. The MTA has taken over and will do something with the message.

What you’ve been reading here is an example of what an SMTP dialog looks like. If the MTA figures that the message cannot be delivered locally, it will try to connect to another MTA and pass it on using the same protocol. And if it cannot deliver the message at all (e.g. because the remote MTA rejected it, perhaps since the recipient user does not exist), the MTA is usually configured to send a bounce message to the original sender, letting them know that the message could not be delivered.
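To see that same dialog from the programmatic side, here’s a self-contained sketch: a toy “MTA” that speaks just enough SMTP for one message, with Python’s smtplib playing the MUA. This is not a real mail server – every name and response code below is made up for illustration, merely imitating the Sendmail output above:

```python
import smtplib
import socketserver
import threading

received = []  # message lines the toy server collects during DATA

class ToySMTP(socketserver.StreamRequestHandler):
    """A fake MTA speaking just enough SMTP for a single message."""

    def say(self, text):
        self.wfile.write((text + "\r\n").encode("ascii"))

    def handle(self):
        self.say("220 mail-vm ESMTP toy server")
        in_data = False
        for raw in self.rfile:
            line = raw.decode("ascii").rstrip("\r\n")
            if in_data:
                if line == ".":  # end of message, like in the dialog above
                    self.say("250 2.0.0 Message accepted for delivery")
                    in_data = False
                else:
                    received.append(line)
                continue
            verb = line.split(":")[0].split(" ")[0].upper()
            if verb in ("HELO", "EHLO"):
                self.say("250 mail-vm.local Hello, pleased to meet you")
            elif verb in ("MAIL", "RCPT"):
                self.say("250 2.1.0 ok")
            elif verb == "DATA":
                self.say('354 Enter mail, end with "." on a line by itself')
                in_data = True
            elif verb == "QUIT":
                self.say("221 2.0.0 mail-vm.local closing connection")
                break

# Serve exactly one connection in the background.
server = socketserver.TCPServer(("127.0.0.1", 0), ToySMTP)
threading.Thread(target=server.handle_request, daemon=True).start()

# The "MUA" side: smtplib performs the same EHLO/MAIL/RCPT/DATA dance.
client = smtplib.SMTP("127.0.0.1", server.server_address[1])
client.set_debuglevel(1)  # print the whole dialog, much like mail -v
client.sendmail("vagrant@mail-vm.local", ["root@mail-vm.local"],
                "Subject: Test mail 3\n\nAnd yet another!\n")
client.quit()
server.server_close()
print("server stored:", received)
```

With debugging enabled, smtplib prints each command and response, so you can watch the very same protocol steps as in the Sendmail transcript.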

Using mail to view messages

We’re done with the vagrant user for now so let’s exit back to root:

% logout

Root should have received three mails. We can use the mail command again to look at our mailbox:

# mail

The result will look like on the bottom of this picture:

Second part of the SMTP dialog & checking root’s unread mail

Right, all three were received and are there. Sendmail obviously did its job after taking over the messages from our MUA! Mail also tells us that we have three messages in total of which three are new. And it mentions /var/mail/root. What’s that? Well, it’s a file. But let’s quit the MUA again and take a closer look:

# less /var/mail/root

Messages in the inbox file

What we’ve stumbled across here is the root user’s mailbox for incoming mail (“inbox”). It’s just a file holding the text and headers of all unread messages. Alright, all the messages are there and can be accessed by all means that you typically access text files with. But what about mail? Can you use it to view the messages, too?

You bet that’s possible. Let’s run mail again:

# mail

Do you see a difference? No? Look more closely! Last time all three messages had an uppercase “N” in front of them, meaning new. Now there’s a “U”: Those messages are still unread, but they were already in the inbox last time we checked our mail.

The greater-than sign hints that mail 1 is selected. To read it, issue the command “print” (or use the abbreviation “p”). The plus character selects the next message, while minus does the opposite. If you’d like to play around with the mail MUA a little, you should know that there are many more commands like e.g. “f” to print the current message header. Should you want to know more, the manpage is your friend.

Viewing messages with mail

If we exit now, this is what mail tells us:

Saved 2 messages in mbox
Held 1 message in /var/mail/root

What does that mean?

Inbox and mbox

If you like analogies, think of /var/mail/root as the mailbox outside of your house. If you get new mail, it’ll be put in there. Let’s say you got three letters. The analogy to what we did a minute ago is going to the mailbox and taking only two of the letters out to read them. After reading them, we put them wherever we usually stash our letters for as long as we think we might need them again. The same thing happened here: there’s one message “held” in /var/mail/root because we didn’t bother to touch it yet. The other two were moved to the “mbox”.

Ok, what’s the mbox? It’s another file that holds email messages. Actually there’s not much difference between it and the inbox. It’s just used differently: to locally store your mail, whereas your inbox is typically on a remote system. In our case both are on the same system, so it’s just a matter of removing a message from one file and putting it into another.

# head -21 ~/mbox

Here you can see the first message and the beginning of the second one (in line 21):

Contents of our mbox
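Since inbox and mbox share the same plain-text format, any mbox-aware tool can read and write them. Here’s a small sketch using Python’s standard mailbox module – the temporary path merely stands in for a real file like /var/mail/root or ~/mbox:

```python
import mailbox
import os
import tempfile
from email.message import EmailMessage

# A scratch file standing in for an inbox like /var/mail/root.
path = os.path.join(tempfile.mkdtemp(), "mbox")

box = mailbox.mbox(path)
for i, body in enumerate(["This is a test message!",
                          "This is another test message!"], start=1):
    msg = EmailMessage()
    msg["From"] = "root@mail-vm.local"
    msg["To"] = "root@mail-vm.local"
    msg["Subject"] = f"Test mail {i}"
    msg.set_content(body)
    box.add(msg)
box.flush()

# The mbox is plain text: each message begins with a "From " separator line.
with open(path) as f:
    raw = f.read()
print(raw)
```

Dumping the file shows exactly the structure we saw with head: a “From ” separator line, then headers, a blank line and the body – repeated for each stored message.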

Replying to and deleting messages

If we start mail again we know how to view the one remaining message. What else could we do with it? Well, we could reply to it (“r”) and delete (“d”) it then:

Replying to and deleting a message

That wasn’t too hard, was it? Asking mail for the headers now returns “No applicable messages”. Root’s inbox should be empty now. Let’s run mail again. What? A new message? Checking back at what we just did, it looks like we sent the reply to both vagrant and root. We didn’t mean to receive this message, so let’s delete it, too.

Alright. Our inbox should now really be clean. Is it? Let’s put it to the test:

# mail

No mail for root

It is!

Emptying the inbox again

Excellent. But… How do we access the mails that we didn’t delete which were moved to the mbox? As I said before, the mbox really is only functionally different from the inbox. In fact the inbox is merely a special mbox. On our system it’s special in being the default that mail works with unless told otherwise!

Of course we can tell mail to operate on root’s mbox instead. This is done by using “-f”:

# mail -f /root/mbox

And there are our messages. No magic here.

Accessing root’s mbox

Intermission

That’s it for this article on mail. You should now have a much better understanding of what is happening when a message is being sent – and what a message actually is. Also you’ve met an old Unix tool that probably isn’t going to become your favorite MUA but still gets the job done after several decades. And while it’s not very intuitive, it just helped you get started in better understanding email. It may also still suffice well for some simple tasks. In fact we’ve only scratched the surface of the mail utility. It can do much, much more. But that’s too special a topic and way beyond the goal of this series on email.

You’ve come to the end of part two. If you’ve been following along with your Vagrant VM, stop it now, make sure that it’s powered off and create a second snapshot:

# shutdown -p now
% vagrant status
% vagrant snapshot save mail2

Until next time!

Introduction to email (pt. 1): Email basics

[update: Seems like the maildir patch for the Alpine mail client does not currently work with the newest version… So we need to get an older version of the ports tree – this is not ideal, but fortunately this is only a test VM where we can do bad things]
[update2: More changes… A necessary patch is no longer available from its previous location]

Email – short for electronic mail – is one of the things of modern life that we all are familiar with… Or are we? We know how to use it and probably have a rough idea of what’s going on when we press the send button. But email is actually a surprisingly complicated topic. You probably won’t notice that until you think about setting up your own mail server for the first time. Once you do however, you’re in for quite some reading.

I first thought about writing this mail mini-series in late 2016, when I had to debug an issue with a customer’s mail system and figured that I really, really could use a bit more knowledge when it comes to email. I quickly moved on to other tasks, but I’m still interested in the topic – and maybe I’ll find some time for it now. This series of posts is intended as a summary of things that I found noteworthy about email.

Daemons & Terminology

When it comes to email, various daemons are involved. Daemons? Why of course! Email is native to Unix; it had precursors, yes, but it was on Unix that things really took off. That’s why for understanding email it really helps to understand Unix.

The part of the mail system that’s visible to the user is called Mail User Agent (MUA or just UA) in mail terminology. That’s what most people just call the Email client: Programs like Thunderbird, Sylpheed or Outlook. It can be used to compose messages (that’s the correct term – you’re actually sending messages not emails) and hand them over to an email server or to retrieve messages from there.
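What a composed message actually looks like is easy to see for yourself: it’s just headers followed by a body, all plain text. A minimal sketch with Python’s standard-library email module (the addresses are made up):

```python
from email.message import EmailMessage

# Compose a message the way an MUA would before handing it to an MTA
msg = EmailMessage()
msg["From"] = "vagrant@mail-vm.local"
msg["To"] = "root@mail-vm.local"
msg["Subject"] = "Hello"
msg.set_content("Just testing.\n")

text = msg.as_string()
print(text.splitlines()[0])   # From: vagrant@mail-vm.local
```

The serialized form – header lines, a blank line, then the body – is exactly what travels between the daemons described below.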

The server side of the mail system consists of multiple daemons. First there’s the Mail Transfer Agent (MTA), which is responsible for accepting incoming messages (e.g. from the MUA), processing them and either sending them on to an MTA on a foreign server or saving them locally (if the message’s recipient has an account on this server). To store a message locally, the MTA can pass it to a Mail Delivery Agent (MDA).

It’s common for an MTA to accept email that’s destined for another server and hand it over to that server’s MTA. This is called relaying. To know where it needs to send the message, the MTA looks at the domain part of the recipient’s address (xyz@example.com) and then looks up the MX (Mail eXchange) records for that zone (DNS terminology; think “domain” for now) in the DNS (Domain Name System).
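The key an MTA uses for that DNS lookup is simply everything after the last “@” in the address. A trivial sketch (the function name is mine, just for illustration):

```python
def mx_lookup_key(recipient: str) -> str:
    """Return the domain part of an email address -- the name an MTA
    queries MX records for when deciding where to relay a message."""
    _local_part, _sep, domain = recipient.rpartition("@")
    return domain

print(mx_lookup_key("xyz@example.com"))  # example.com
```

On FreeBSD you can inspect a zone’s MX records yourself with the drill utility from the base system, e.g. `drill example.com MX`.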

Protocols

To enable the various daemons to interact with each other, they need to follow standards when communicating. These standards come in the form of so-called RFCs – if you don’t know what those are, do some quick research right now. Whatever the topic, if it has anything to do with the Internet at all, RFCs define the standards that every implementation is expected to follow. There are several protocols which allow for various actions:

The MUA needs to speak SMTP, the Simple Mail Transfer Protocol, if it wants to pass a message to an MTA, because that’s what this daemon expects. The MTA can make use of LMTP, the Local Mail Transfer Protocol, to talk to an MDA. And if the MUA is expected to retrieve messages from the mailbox, it needs to implement POP (the Post Office Protocol) or IMAP (the Internet Message Access Protocol) so it can communicate with a POP or IMAP daemon, which will then read messages stored on the server and send them to the MUA.
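The SMTP exchange itself is surprisingly simple: a handful of plain-text commands. Here’s a sketch that merely builds the classic command sequence a client sends (no actual connection is made; the helper function and hostname are my own invention for illustration):

```python
def smtp_dialog(sender, recipient, body, helo="mail-vm.local"):
    """Build the plain-text command sequence an SMTP client sends to an MTA."""
    return [
        f"HELO {helo}",           # introduce ourselves to the server
        f"MAIL FROM:<{sender}>",  # envelope sender
        f"RCPT TO:<{recipient}>", # envelope recipient
        "DATA",                   # the message (headers + body) follows
        body + "\r\n.",           # a line with a lone dot ends the message
        "QUIT",                   # close the session
    ]

for line in smtp_dialog("vagrant@mail-vm.local", "root@mail-vm.local",
                        "Subject: test\r\n\r\nHello!"):
    print(line)
```

In practice a library such as Python’s smtplib (or your MUA) performs this conversation for you, but you can type it by hand over a telnet session to port 25 – which is a great way to see how little magic is involved.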

Other components

And there’s more when it comes to mail. Today’s internet is well known as a hostile place. When email was developed, however, people on the net were few and shared a common passion: tech. Nobody meant to do the others any harm. That innocent period is far from what we experience today, but the basic principles of email haven’t changed and in fact cannot be changed easily. So the problem arises: how do you retroactively secure something that was designed in a carefree way and has proved to be inherently insecure?

When you think email today, you automatically have to think encryption, too – otherwise somebody might eavesdrop on you. When you think email, you have to think authorisation: if you don’t protect your mail account, somebody may break into it, steal your secrets or abuse it. You have to think spam: how do you avoid being flooded with all those useless messages that want you to buy blue pills or the like? And worse: how do you prevent spammers from using your mail server, so you don’t get blacklisted? You have to think viruses, phishing, trojans, etc. – and about security holes in your applications, newer protocol versions being established, and so on.

You can already see that mail is a rather complex topic that requires a fair bit of knowledge of other areas like DNS, security and more. Fortunately all the knowledge that you need is out there somewhere. You just need some determination, a lot of free time and a bit of luck to find the relevant pieces. I cannot help you with the former two, but I’ll try to provide a source that could help you learn some important things about mail without having to hunt around for tutorials (even though I of course cannot cover everything).

So much for a tiny bit of theory. It’s merely meant to be enough so we can start doing something and cover more of our topic along the way.

Building a test system

So let’s build some kind of test system to play around with, shall we? Sure thing. I suggest using FreeBSD – it is an excellent choice of OS for getting into the topic. No, not just because I’ve come to like it quite a bit and base a lot of what I post on my blog on that system. For many things Linux would be a more or less equally good choice. When it comes to getting into mail, however, it isn’t.

If you already know the whole topic quite well, you can install all the needed programs on Linux without any problems. If you’re just starting out, however, FreeBSD makes the first steps so much easier, as it already comes with a working mail solution by default! Setting up mail server software is a complex and complicated task, and FreeBSD really makes your life much easier in this regard by removing that obstacle for you.

In a previous tutorial on backups with Bacula I used VirtualBox together with Vagrant, and it proved to be a very convenient solution, so I’m going to use it here as well. If you haven’t used VirtualBox or don’t even know what Vagrant is, I’ve written this post for you, which explains things in detail and with pictures. For creating the base box that we need, you can refer to this other post that explains everything step by step. Just make sure that you read this section before you build your base box, because here’s the customization we need for this VM:

This post assumes we’re using FreeBSD 11.1 – any version of 11 should be fine, though. If 12.x or even newer is out at the time you’re reading this, you may want to check whether fundamental things (like finally putting Sendmail to rest and importing a different MTA) have changed between those versions. Starting with version 11.0, FreeBSD has a new installation dialog that lets you choose some system hardening options. While this is a great idea, and I recommended disabling Sendmail in the post about building the base box for Vagrant, that would ruin the mail functionality that we’re going to use here. So make sure not to disable Sendmail for this installation!

Selecting hardening options

Ports work

Also don’t install Bacula as described in the customization section for the Vagrant base box – we’re not going to use it here. Instead do the following (if you’re not using a c-shell variant, leave out the “env” for the second command):

# svnlite co svn://svn.freebsd.org/ports/tags/RELEASE_11_0_0 /usr/ports
# env ASSUME_ALWAYS_YES=1 pkg bootstrap
# sed -i.bak -e 's|pkg$|pkg register|' /usr/share/mk/bsd.own.mk 

This downloads an old version of the ports tree which holds a version of Alpine that we’re going to use. It also bootstraps the package manager and makes a change so that it will work with the old tree. Now we need to prepare Alpine; unfortunately one required file is no longer available from the original site. I chose to mirror it since it took me some time to find the right one…

# mkdir -p /usr/ports/distfiles/alpine-2.20
# cd /usr/ports/distfiles/alpine-2.20
# fetch http://elderlinux.org/files/maildir.patch.gz
# cd /usr/ports/mail/alpine
# make makesum

Now we should be good to configure the port:

# make config-recursive

Be sure to check MAILDIR here as we’re going to need it later. Unfortunately this option is not available if we install Alpine as a package, so we have to resort to ports. While we’re at it, we can disable IPv6 (which we certainly don’t need), mouse support and NLS, and check NOSPELL since we don’t require spelling correction in this tutorial either. Configure pico-alpine accordingly. For all the other packages you can generally turn off NLS and DOCS. Then fetch the source for all packages recursively:

# make fetch-recursive

When it’s done change into OpenSMTPD’s port directory and configure it:

# cd /usr/ports/mail/opensmtpd
# make config-recursive

Here you want to check the MAILERCONF option. The rest is fine. You don’t need to build the EXAMPLES for m4 and again can generally deselect NLS. Then fetch the sources:

# make fetch-recursive

Then configure dovecot2 (you can keep the default options this time) and fetch the distfiles for it:

# cd /usr/ports/mail/dovecot2
# make config fetch

Then follow the rest of the base box build process.

Preparation

You have your base box built and imported? Good. Create your Vagrant directories if you haven’t and a Vagrantfile in there:

% mkdir -p ~/vagrant/mail-vm
% cd ~/vagrant/mail-vm
% vi Vagrantfile

Put the following lines into it (assuming that your box has the name fbsd11.1-template):

Vagrant.configure("2") do |config|
  config.vm.box = "fbsd11.1-template"
  config.vm.network "private_network", ip: "192.168.0.10"
  #config.ssh.username = "root"
  config.vm.synced_folder ".", "/vagrant", disabled: true
end

Now fire up the VM, ssh into it and switch to root. Then edit rc.conf:

% vagrant up
% vagrant ssh
% sudo su -
# vi /etc/rc.conf

Change the line that defines the hostname to:

hostname="mail-vm.local"

Save the file and exit the editor. Let’s do a bad thing next: let’s configure the VM so that we can SSH into it directly as root. If this just sent a shiver down your spine, you’re having the right feeling: you’re not supposed to do this, like… ever! But now and then you feel like doing forbidden things, right? And now’s the perfect opportunity, since this is just a private, non-internet-facing VM. Just remember never to do this on a production machine! OK, ready? First we need to copy Vagrant’s public key and allow it for root:

# mkdir .ssh
# touch .ssh/authorized_keys
# chmod 0700 .ssh
# chmod 0600 .ssh/authorized_keys
# cp /home/vagrant/.ssh/authorized_keys /root/.ssh/authorized_keys

Now edit the SSH daemon’s config:

# vi /etc/ssh/sshd_config

Look for the line #PermitRootLogin no, remove the comment sign and change it to yes. Save and exit the editor. Then power down the VM:

# shutdown -p now

When you’re back at your host machine, edit the Vagrantfile:

% vi Vagrantfile

Add

config.ssh.username = "root"

to the configuration. Save and exit.

Test the change you just made by starting the VM and ssh’ing into it:

% vagrant up
% vagrant ssh

If everything is correct, you should have entered the VM as root now. Power down the VM again:

# shutdown -p now

Intermission

We’ve now prepared our test environment for trying out all things mail. Use Vagrant to create a snapshot now to save your progress so that we can pick up right there next time. But before we do, we need to make sure that the VM is actually powered down:

% vagrant status

If it says running (virtualbox), wait a moment and issue the same command again. It’s important that the status is poweroff (virtualbox). If you snapshot the VM while it’s still powering down, restoring the snapshot will always give you a machine that immediately powers down again! As soon as it’s off, snapshot it:

% vagrant snapshot save mail1

You now have an idea of what the whole topic email is all about. And you have a test system to try out things and get familiar with mail from the very fundamentals to – perhaps – a fully working mail server. I hope that this was interesting for you and that you’ll follow the next part, too, where we’ll explore sending mail from the console and discuss using a text mode MUA.