Dystopian Open Source

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/dystopian_open_source.gmi

Happy New Year dear reader! The other day I watched a video on YouTube that had only 6 views since last October. It covers a very important topic, though, and I wish it had a larger impact and got more people alarmed and thinking about the current trends in Open Source. This is not an “OMG we’re all doomed!!1” post, but I want to talk about what I feel are grave dangers that deserve some really serious consideration.

“Pay to Play”

For the readers who would like to watch the video (about 7 minutes), it’s here. Some background info: It’s by Lucas Holt. He is the lead developer of MidnightBSD, a project that began as a fork of FreeBSD 6.1 and aimed for better usability on the desktop. There were a couple of people who contributed to the project over time, but it never really took off. Therefore it has continued as a project almost entirely done by one man.

It’s not hard to imagine just how much work it is to keep an entire operating system going; much larger teams have failed to deliver something useful after all. So while it’s no wonder that MidnightBSD is not in a state where anybody would recommend it to put to everyday usage, I cannot deny that I admire all the work that has been done.

Holt has merged changes back from FreeBSD several times, eventually updating the system to basically 11.4 plus the MidnightBSD additions and changes. He maintains almost 5,000 ports for his platform (not all of them in perfect shape, of course). And he has kept the project going since about 2006 – despite all the taunting and acid-tongued comments about “the most useless OS ever” and things like that. Even though I never found a really serious use for MidnightBSD (and I tried a couple of times!), considering all of that he has earned my deepest respect.

To sum up the video: He talks about a trend in Open Source that some very important projects started to raise the bar on contributing to them. Sometimes you’re required to employ two full-time (!) developers to be considered even worth hearing. Others require you to provide them with e.g. a paid Amazon EC2 instance to run their CI on. And even where that’s not the case, some decision makers will just turn you down if you dare to hand in patches for a platform that’s not a huge player itself.

Quite a few people do not even try to hide that they only ever care about Linux, and Holt has made the observation that some of the worst-behaving, most arrogant of these are – Red Hat employees. There are people on various developer teams who choose to deliberately ruin things for smaller projects, which is certainly not good and shouldn’t be what Open Source is about.

What does Open Source mean to us?

At a bare minimum, Open Source only means that the source for some application, collection of software or even entire operating system is available to look at. I could write some program, put the code under an extremely restrictive license and still call this thing “Open Source” as long as I make the code available by some means. One could argue that in the truest sense of the two words that make up the term, that would be a valid way to do things. But that’s not what Open Source is or ever was about!

There are various licenses out there that are closely related to Open Source. Taking a closer look at them is one great way to find the very essence of what Open Source actually is. There are two important families of such licenses: The so-called Copyleft licenses and the permissive licenses. One could say that downright religious wars have been waged about which side holds the one real truth…

People who have been reading my blog for a while know that I do have a preference and made quite clear which camp I belong to, even though I reject the insane hostility that some zealots preach. But while the long-standing… err… let’s say: controversy, is an important part of Open Source culture, the details are less relevant to our topic here. They basically disagree on the question of what requirements to put in the license. Should there be any at all? Is it sufficient to ask for giving credit to the original authors? Or should users be forced to keep the source open for example?

Both license families however do not dispute the fundamental rights given to users: They want you to be able to study the code, to build it yourself, to make changes and to put the resulting programs to good use. While it’s usually not explicit, the very idea behind all of Open Source is to allow for collaboration.

Forkability of Open Source projects

Over the years we’ve seen a lot of uproar in the community when the leaders of some project made decisions that go against these core values of Open Source. While some even committed the ultimate sin of closing down formerly open code, most of the time it’s been slightly less harsh. Still we have seen XFree86 basically falling into oblivion after Xorg was forked from it. The reason this happened was a license change: One individual felt that it was time for a little bit of extra fame – and eventually he ended up blowing his work to pieces. Other examples are pfSense and OPNsense, Owncloud and Nextcloud or Bacula and Bareos. When greed strikes, some previously sane people begin to think that it’s a good idea to implement restrictions, rip off the community and go “premium”.

One of the great virtues of Open Source is that a continuation of the software in the old way of the project is possible. With OPNsense we still have a great, permissively licensed firewall OS based on FreeBSD and Pf despite NetGate’s efforts to mess with pfSense. Bareos still has the features that Bacula cut out (!) of the Open Source version and moved to the commercial one. And so on. The very nature of Open Source also allows for people to pick up and continue some software when the original project shuts down for whatever reason.

There are a lot of benefits to Open Source over Closed Source models. But is it really immune to each and every attack you can aim at it?

Three dangers to Open Source!

There is always the pretty obvious danger of closing down source code if the license does not prohibit that. Though I make the claim that this is in fact mostly a non-issue. There are a lot of voices out there going hysteric about this. But despite how they try to make things look, it is impossible to close down source code that is under an Open Source license! A project can stop releasing the source for newer versions, effectively stopping to distribute current code. But then the Open Source community can always stop using that stuff and continue on with a fork that stays open.

But we haven’t talked about three other immanent dangers: narrow-mindedness, non-portability and leadership driven by monetary interest.

Narrow-mindedness

One could say that today Open Source is a victim of its own overwhelming success. A lot of companies and individual developers jumped on the bandwagon because it’s very much beneficial for them. “Let’s put the source on GitHub and people might report issues or even open pull-requests, actively improving our code – all for free!” While this is a pretty smart thing to do from a commercial point of view, in this case the software code was not opened up because somebody really believes in the ideas of Open Source. It was merely done to benefit from some of the most obvious advantages.

Depending on how far-sighted such an actor is, he might understand the indirect advantages to the project when keeping things as open as possible – or maybe not. For example a developer might decide that he’ll only ever use Ubuntu. Somebody reports a problem with Arch Linux: Close (“not supported!”). Another person opens a PR adding NetBSD support: Close (“Get lost, freak!”).

Such behavior is about as stupid as it gets – and when it comes to the values, about as anti-Open-Source as it gets, too. Witnessing something like this makes people who actually care about Open Source cringe. How can anybody be too blind to see that they are hurting themselves in the long run? But it happens time and time again. By turning down the Arch guy, the project has probably lost a future contributor – and maybe the issue reported was due to incompatibilities with the newer GCC in Arch that will eventually land in Ubuntu, too, and could have been fixed ahead of time…

Open Source is about being open-minded. Just publishing the source and fishing for free contributions while living the ways of a closed-source spirit is in fact a real threat to Open Source. I wish more people would just say no to projects that regularly say “no” to others (without a good reason). It’s perfectly fine that some project cannot guarantee their software to even compile on illumos all the time. But the illumos people will take care of that and probably submit patches if needed. But refusing to even talk about possible support for that platform is very bad style and does not fit well with the ideals of Open Source.

If I witness an arrogant developer insulting, say, a Haiku person, I’ll go looking for more welcoming alternatives (and am perfectly willing to accept something that is technically less ideal for now). Not because I’ve ever used Haiku or plan to do so. But simply because I believe in Open Source and in fact have a heart for the cool smaller projects that are doing interesting things aside from the often somewhat boring mainstream.

Non-portability

Somewhat related to the point above is (deliberate) non-portability. A great example of this is Systemd. Yes, there have been many, many hateful comments about it and there are people who have stated that they really hope the main developer will keep the promise to never make it portable “so that *BSD is never going to be infected”.

But whatever your stance on this particular case is – there is an important fact: As soon as any such non-portable Open Source project gains a certain popularity, it will begin to poison other projects, too. Some developers will add dependencies to such non-portable software and thus make their own software unusable on other platforms even though that very software alone would work perfectly fine! Sometimes this happens because developers make the false assumption that “everybody uses Systemd today, anyway”, sometimes because they use it themselves and don’t realize the implication of making it a mandatory requirement.

If this happens to a project that basically has three users world-wide, it’s a pity but does not have a major impact. If however the software is a critical component in various downstream projects, it can potentially affect millions of users. The right thing here is not to break solidarity with other platforms. Even if the primary platform for your project is Linux, never ever go as far as adding a hard dependency on Systemd or other such software! If you can, it’s much better to make support optional so that people who want to use it benefit from existing support. But don’t ruin the day for everybody else!
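To make that concrete, here is a minimal sketch (not taken from any real project) of what optional support usually looks like in C: the systemd call is only compiled in when the build system has found libsystemd and defined something like HAVE_SYSTEMD, so the very same code still builds and runs fine everywhere else.

#include <stdio.h>

#ifdef HAVE_SYSTEMD                  /* only defined if the build system found libsystemd */
#include <systemd/sd-daemon.h>
#endif

static void notify_ready(void)
{
#ifdef HAVE_SYSTEMD
	/* tell systemd that startup is complete */
	sd_notify(0, "READY=1");
#endif
	/* without systemd there is simply nothing to do here */
}

int main(void)
{
	/* ... normal daemon initialization would happen here ... */
	notify_ready();
	puts("daemon is up and running");
	return 0;
}

The point is that the platform-specific part is an optional extra, not a precondition for the program to exist at all.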

And think again about the exemplary NetBSD pull-request mentioned above: Assume that the developer had shown less hostility and accepted the PR (with no promises to ever test if things actually work properly or at all). The software would have landed in Pkgsrc and somebody else would soon have hit a problem due to a corner case on NetBSD/SPARC64. A closer inspection of that would have revealed a serious bug that remained undetected and unfixed. After a new feature was added not much later, the bug became exploitable. Eventually the project gained a “nice” new CVE of severity 9.2 – which could well have been avoided in an alternate reality where the project leader had had a more friendly and open-minded personality…

Taking portability very seriously is exceptionally hard work. But remember: Nobody is asking you to support all the hardware you probably don’t even have or all the operating systems that you don’t know your way around on. But just be open to enthusiasts who care for such platforms and let them at least contribute.

Leadership with commercial interests

This one is a no-brainer – but unfortunately one that we can see happening more and more often. Over the last few years people started to complain about e.g. Linux being “hi-jacked by corporations”. And there is some truth to it: There is a lot of paid work being done on various Open Source projects. Some of the companies that pay developers do so because they have an interest in improving Open Source software they use. A couple even fund such projects because they feel giving back something after receiving for free is the right thing to do. But then there’s the other type, too: Corporations that have their very own agenda and leverage the fact that decision makers on some projects are their employees to influence development.

Be it the person responsible for a certain kernel subsystem turning down good patches that would be beneficial for a lot of people, for seemingly no good reason – but in fact because they were handed in by a competitor and his employer is secretly working on something similar and has an interest in getting that in instead. Be it because the employer thinks that the developer is not paid to do anything for platforms that are not of interest to its own commercial plans and is expected to simply turn those down to “save time” for “important work”. Things like that actually happen and have been happening for a while now.

Limiting the influence of commercial companies is a topic on its own. IMO more projects should think about governance models much more deeply and consider the possible impacts of what can happen if a malicious actor buys in.

Towards a more far-sighted, “vrij” Open Source?

As noted above, I feel that some actors in Open Source are far too focused on their own use-case and completely ignorant of what other people might be interested in. But as this post’s topic was a very negative one, I’d like to end it more positively. Despite the relatively rare but very unfortunate misbehavior of some representatives of important projects, the overwhelming majority of people in Open Source are happy to accept contributions from more “exotic” projects.

But what’s that funny looking word doing there in the heading? Let me explain. We already have FOSS, an acronym for “Free and Open Source Software”. There’s a group of people arguing that we should rather focus on what they call FLOSS, “Free and Libre Open Source Software”. The “libre” in there is meant to put focus on some copyleft ideas of freedom – “free” was already taken and has the problem that the English word doesn’t distinguish between free “as in freedom” and free of charge. I feel that a term that emphasizes the community aspect of Open Source, the invitation to just about anybody to collaborate and Open Source solidarity with systems other than what I use, could be helpful. How about VOSS? I think it’s better than fitting in another letter there.

Vrij is the Dutch word for free. Why Dutch? For one part to honor the work that has been done at the Vrije Universiteit of Amsterdam (for readers who noticed the additional “e”: That’s due to inflection). Just think of the nowadays often overlooked work of Professor Tanenbaum e.g. with Minix (which inspired Linux among other things). The other thing is that it’s relatively easy to pronounce for people who speak English. It’s not completely similar but relatively close to the English “fray”. And if you’re looking for the noun, there’s both vrijheid and vrijdom. I think the latter is less common, but again: It’s much closer to English “freedom” and thus probably much more practical.

So… I really care for vrij(e) Open Source Software! Do you?

CentOS killed by IBM – a chance to go new ways?

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2020/centos_killed.gmi

On December 8 2020 a Red Hat employee on the CentOS Governing Board announced that CentOS would continue only as CentOS Stream. For classical CentOS 8 the curtain will fall at the end of 2021 already – instead of 2029 as communicated before! But thanks to “Stream” the brand will not simply go away but remain and add to the confusion. CentOS Stream is a rolling-release distribution taking a middle ground between the cutting-edge development happening in Fedora and the stability of RHEL.

Many users feel betrayed by this action and companies who have deployed CentOS 8 in production trusting in the 10 years of support are facing hard times. Even worse for those companies who have a product based on CentOS (which is a pretty popular choice e.g. for Business Telephone Systems and many other things) or who offer a product targeted only at RHEL or CentOS. For those the new situation is nothing but a nightmare come true.

What do we do now?

IBM-controlled Red Hat obviously hopes that many users will now go and buy RHEL. While this is certainly technically an option I would not suggest going down that road. Throwing money at a company that just killed a community-driven distribution 8 years early? Sorry Red Hat but nope. And Oracle is never an option, either!

The reaction from the community has been overwhelmingly negative. Many people announced that they will migrate to Debian or Ubuntu LTS, some will consider SLES. Few said that they will consider FreeBSD now – being a happy FreeBSD user and advocate this is of course something I like to read. And I’d like to help people who want to go in that direction. A couple of weeks ago I announced that I’d write a free ebook called The Penguin’s Guide to Daemonland: An introduction to FreeBSD for Linux users. It is meant as a resource for somewhat experienced Linux users who are new to FreeBSD (get the very early draft here – if anybody is interested in this feedback is welcome).

But this post is not about BSD, because there simply are cases where you want (or need) a Linux system. And when it comes to stability, CentOS was simply a very good choice that’s very hard to replace with something else. Fortunately Rocky Linux was announced – an effort by the original founder of CentOS who wants to basically repeat what he once did. I wish the project good luck. However I’d also like to take the chance as an admin and hobby distro tinkerer to discuss what CentOS actually stood for and if we could even accomplish something better! Heresy? Not so much. There’s always room for improvement. Let’s at least talk about it.

Enterprise software

The name CentOS (which stood for Community Enterprise Operating System) basically states that it’s possible to provide an enterprise OS that is community-built. We all have an idea what a community is and while a lot could be written about how communities work and such I’d rather focus on the other term for now. What exactly is enterprise? It helps to make a distinction of the various grades of software. Here’s my take on several levels of how software can be graded:

  • Hobbyist
  • Semi-professional
  • Professional
  • Enterprise
  • Mission critical and above

Hobbyist software is something that is written by one person or a couple of people basically for fun. It may or may not work, collaboration may or may not be desired and it can vanish from the net any day when the mood strikes the decision maker. While this sounds pretty bad that is not necessarily the case. Using a nifty new window manager on your desktop is probably ok. If the project is cancelled tomorrow it just means that you won’t get any more bug fixes or new features and you can easily return to your previous WM. But you certainly don’t want to use such software in your product(s).

Semi-professional software is developed by one person or a team that is rather serious about the project and aiming for professionalism (but commonly falling short due to limitations of time and resources). Usually the software will at least have releases that follow a versioning scheme, and source tarballs will not be re-rolled (re-using the same version number). There will be at least some tests and documentation. Patches as well as feedback and issue reports are almost certainly welcome. The software will be properly licensed and come with things like change logs and release notes. If the project ends, you won’t find out because the repo on GitHub was suddenly deleted but because an announcement was made.

Professionally developed software does what the former paragraph talked about and more. It has a more complex structure with multiple people having commit rights so that a single person e.g. on holiday when a severe bug is found doesn’t mean nobody can fix things for days. The software has good test coverage and uses CI. There are several branches with at least the previous one still receiving bug fixes while the main development takes place in a newer branch. Useful documentation is available and there is a table with dates that show when support ends for which version of the software. There’s some form of support available. Such software is most often developed or at least sponsored by companies.

Enterprise products take the whole thing to another level. Enterprise software means that high-quality support options (most likely in various languages) are available. It means that the software is tested extensively and has a long life cycle (long-term support, LTS). There is probably a list of hardware on which the software is guaranteed to work well. And it’s usually not exactly cheap.

“Mission critical” software has very special requirements. For example it could be that it has to be written in SPARK (a very strict Ada dialect) which means that formal verification is possible. Most of us don’t work in the medical or aerospace industries where lives and very expensive equipment may be at stake, and fortunately we don’t have to give such hard guarantees. But without going deeper into the topic I wanted to at least mention that there’s something beyond enterprise.

Community enterprise?

Having a community-run project that falls into the professional category is quite an accomplishment even with some corporate backing. Enterprise-grade projects seem to be very much without reach of what a community is able to do. Under the right circumstances however it is doable. CentOS was such a project: They didn’t need to pay highly skilled professionals to patch the kernel or to backport fixes into applications and such. Thanks to the GPL Red Hat is forced to keep the source to the tools they ship open. The project can build upon the paid work of others and create a community-built distribution.

Whenever this is not the case the only option is probably to start as professional as possible and create something that becomes so useful to companies that they decide to fund the effort. Then you go enterprise. Other ways a project can go are aiming to become “Freemium” with a free core and a paid premium product or asking for donations from the community. Neither way is easy and even the most thoughtful planning cannot guarantee success as there’s quite a bit of luck required, too. Still, good planning is essential as it can certainly shift the odds in favor of success.

Another interesting problem is: How to create a community of both skilled and dedicated supporters of your project? Or really: Any community at all? How do you let people who might be interested know about your project in the first place? It requires a passionate person to start something, a person that can convince others that not only the idea is worthwhile but also that the goal is in fact achievable and thus worth the time and effort that needs to be invested.

A stable Linux OS base

As mentioned above, I’m a FreeBSD user. One of the things that I’ve really come to appreciate on *BSD and miss on Linux is the concept of a base system (Gentoo being FreeBSD-inspired kind of has “system” and “world”, though). It’s the components of the actual operating system (kernel and basic userland) developed together in one repository and shaped in a way to be a perfect match. Better yet, it allows for a clean separation of the OS and software from third party packages whereas on Linux the OS components are simply packages, too. On FreeBSD third party software has its own location in the filesystem: The operating system configuration is in /etc, the binaries are in /{s}bin and /usr/{s}bin, the libraries in /lib and /usr/lib, but anything installed from a package lives in /usr/local. There’s /usr/local/etc, /usr/local/bin, /usr/local/lib and so on.

Coming from a Linux background this is a bit strange initially but soon feels completely natural. And this separation brings benefits with it like a much, much more solid upgrade process! I maintain servers that were set up with FreeBSD 5.x in the mid-2000s and still happily serve their purpose today with version 12.x. The OS (and hardware) has been upgraded multiple times but the systems were never reinstalled. This is an important aspect of an enterprise OS if you ask me. I wonder if we could achieve it on Linux, too?

Doing things the BSD way would mean to download the source packages for the kernel, glibc and all the other libraries and tools that would form a rather minimal OS, extract them and put the code in a common repository. Then a Makefile would be written and added that can build all the OS components in order with a single command line (e.g. “make buildworld buildkernel”). Things like ALFS (Automated Linux From Scratch) already exist in the Linux world, so building a complete Linux system isn’t something completely new or revolutionary.
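Purely to illustrate the idea (BaSyL does not exist, so all names and paths here are made up), the top-level Makefile of such a tree would not have to be much more than a wrapper that walks the component directories in the right order:

# hypothetical top-level Makefile for a "BaSyL" source tree – a sketch, not a real build system
WORLD_COMPONENTS = glibc coreutils util-linux bash openssh
KERNEL_SRC       = src/linux

buildworld:
	set -e; for c in $(WORLD_COMPONENTS); do \
		$(MAKE) -C src/$$c all; \
	done

buildkernel:
	$(MAKE) -C $(KERNEL_SRC) bzImage modules

installworld:
	set -e; for c in $(WORLD_COMPONENTS); do \
		$(MAKE) -C src/$$c DESTDIR=$(DESTDIR) install; \
	done

installkernel:
	$(MAKE) -C $(KERNEL_SRC) INSTALL_PATH=$(DESTDIR)/boot INSTALL_MOD_PATH=$(DESTDIR) install modules_install

With something like that in place, “make buildworld buildkernel” on a BaSyL tree would behave much like its FreeBSD counterpart.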

As vulnerabilities are found, fixes are committed to the source repo. It can then be cloned and re-built. Ideally we’d have our own tool “os-update” which will do differential updates to bring the OS to the newest patch level. My suggestion would be to combine the components of a RHEL release – e.g. kernel 4.18, glibc 2.28, etc. following RHEL 8. So more or less like CentOS but more minimal and focused on the base system. That system would NOT include any package manager! This is because it is intended as a building block for a complete distribution which adds a package manager (be it rpm/dnf, dpkg/apt, pacman or something else) on top and consumes a stable OS with a known set of components and versions, allowing the distributors to focus on a good selection of packages (and giving them the freedom to select the means of package management).

Just to give it a working title, I’m going to call this BaSyL (Base System Linux). For the OS version I would prefer something like the year of the corresponding RHEL release, e.g. BaSyL 2019-p0 for RHEL 8. The patch level increases whenever security updates need to be applied.

Enterprise packages

The other part of creating an enterprise-like distribution is the packages. Let’s call it SUP-R (Stable Unix-like Package Repo) for now. So you’ve installed BaSyL 2019 and need packages. There’s the SUP-R 2019 repo that you can use: It contains the software that RHEL ships. Ideally a team forms that will create and maintain a SUP-R 2024 repo after five years, allowing users to optionally switch to newer package versions. The special thing here would be that this SUP-R 2024 package set would be available for both BaSyL 2019 and BaSyL 2025 (provided that’s when the next version is released). That way BaSyL 2019 users that already made the switch to SUP-R 2024 can stay on that package set when updating the OS and don’t have to do both at the same time! A package repo change requires consulting the update documentation for your programs: e.g. Apache HTTPd, the PostgreSQL database, etc. Most likely there are migration steps like changing configuration and such.

Maintaining LTS package sets is not an easy task, though. However there are great tools available to us today which make this somewhat easier. Still it would require either a really enthusiastic community or some corporate backing. Especially the task of selecting software versions that work well together is a pretty big thing. It would probably make sense to keep the package count low to start with (quality over quantity) and think of test cases that we can build to ensure common workloads can be simulated beforehand.

In addition to experienced admins for software evaluation, package maintainers and programmers for some patch backporting, also people writing docs (both in general and for version migration) would be needed. Yes, this thing is potentially huge – much, much more involved than doing BaSyL alone.

Conclusion

I’d love to see a continuation of a community-built enterprise Linux distro worth the title. But at the same time getting to know BSD in addition to Linux made me think a little different about certain things. De-coupling the OS and the packaging efforts could open up interesting possibilities. At the same time it could even help both projects succeed: Other operating systems might also like (optional) enterprise package sets. And since they’d be installed into a different prefix (e.g. /usr/supr) they would not even conflict with native packages in e.g. Debian or any other glibc-based distribution.

If a portable means of packaging was chosen it would potentially also be interesting to the BSDs and Open-Solaris derivatives. And the more potential consumers the larger the group of people who could have the motivation to make something like this come true.

This article is meant to be giving food for thought. Interested in talking about going new ways? Please share your thoughts!

My first FreeBSD port: Castor

Summary

My first port has been committed to the FreeBSD ports collection. It’s Castor, a graphical browser for text-based Internet protocols (currently: Gemini, Gopher, Finger). I write about the porting and about the actual application.

Article

[New to Gemini? Have a look at my Gemini FAQ.]

The actual post can be found here (if you are using a native Gemini client):

gemini://gemini.circumlunar.space/users/kraileth/neunix/2020/new_port_castor.gmi

If you want to use a webproxy to read it from your web browser, follow this Link.

Neunix: Eerie transitioning to Geminispace

Regarding my previous post (https://eerielinux.wordpress.com/2020/10/08/retreat-and-reboot-is-there-life-outside-of-the-web/) some witty people may have figured that this was a joke: The last post before that one was the long delayed February one and I wrote that I simply couldn’t post the next one (as I cannot stand the new WP editor). That could make said previous post the April one. Maybe it was meant for April 1st?

No, it wasn’t. I’m serious with my experiment of giving it a try to continue writing BSD and IT material in general in Geminispace. The plan is to continue to announce new posts on this platform but to provide the actual content in the new place.

Making a new home

So what’s the status of this? Well, it’s ready! I’ve contacted one of the people behind Gemini and was given access to a server that would host my new Gemlog for now (thanks solderpunk!). A first version of my capsule (in Web-speech this would be “homepage”) is up and available. It currently looks like this:

[Screenshot: Capsule of kraileth – my new home in Geminispace]

I got the nice color scheme for my browser from samsai and generally recommend his introduction article to people who are interested in Gemini: https://samsai.eu/post/introduction-to-gemini/

It took me a day or so to get used to the reduced markup that is called gemtext. But now it’s working out well for the most part. There are some downsides that I’ll probably write about sometime in the future, but in general it works and its simplicity makes for quite a refreshing experience.
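For readers who have never seen it: gemtext is strictly line-based and knows only a handful of line types. A made-up snippet (not from my actual capsule) looks like this:

# A first-level heading
## A second-level heading

An ordinary line is simply a paragraph – there is no inline markup at all.

=> gemini://gemini.circumlunar.space/users/kraileth/neunix/ A link line: URL first, then an optional label
* An unordered list item
> A quoted line

The only other element is a three-backtick line that toggles preformatted text on and off. That’s the whole format, which explains why writing for Gemini feels so different from writing HTML.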

Eerielinux blog -> Neunix Gemlog

I’ve picked a new name for the reboot of my previous blogging project. What was the eerielinux blog before is now the Neunix Gemlog. A few things changed or will change, but in general it’s a continuation of what I did with Eerie: I’ll publish articles that I hope are useful or at least interesting to some people.

Focus will remain *BSD-related material but like before I’ll write about other things as they arise. Of course I have some ideas for article series as well. I’ve also picked some old posts from Eerie and converted them (work to transfer more is ongoing).

This is what it looks like right now:

[Screenshot: The Neunix index page – continuation of the eerielinux blog in Geminispace]

How to access it?

All nice and pretty, but how do you access this thing? Well, Gemini is a protocol wildly different from HTTP and is not currently supported by any mainstream web browser. HTTP is a superset of what Gemini does, though, and HTML is a superset of what gemtext does, so it’s very much possible to convert one to the other – at least in the direction from simpler to more complex! Proxies that make Gemini content accessible via the Web already exist.

So you have two options:

1) Get a real Gemini browser. There are multiple projects out there. I might write about this topic at a later point in time. For now I can only point you to doing a little research and find one that suits you. I use Castor (https://git.sr.ht/~julienxx/castor) and find it to be pretty nice.

Then go to: gemini://gemini.circumlunar.space/users/kraileth/neunix/

2) Use a web proxy. Here you are: https://portal.mozz.us/gemini/gemini.circumlunar.space/users/kraileth/neunix/

What’s next?

I’ve already posted one new article on Neunix. In about a week or so when people had time to read this one, I’ll link to it directly with a new post on this platform.

And then? Well, we’ll see. The journey has begun and there’s vast, untamed land before me. The possibilities are next to endless. I’ll just pick one thing as the mood strikes me.

Retreat… and reboot! Is there life outside of the Web?

Dear readers,

the time has come for some fundamental re-organization of my Eerielinux blog. The impact of this year’s pandemic as well as family matters (a new family member that demands quite a bit of care!) have forced me to postpone many of the posts that I usually published at least monthly. Right when I was getting back on track and stated that I intended to publish the missed articles (I’ve written them after all but didn’t find the time for final editing), something else happened.

No, to be honest, it has been happening for quite a while. I didn’t like it but had to cope with it as there was nothing I could do about that, anyway. What am I talking about? Well, the sad fact that I would call the “degradation of the Web”.

Eerielinux has always been about some experiments that I did and wanted to share in case anybody was interested. Now I’m in for a new experiment, a somewhat radical one (and what could be considered radical after going Linux-only and later FreeBSD-only for all of my machines?). I would appreciate feedback on the path that I choose to give a try. What do you think about it? Are you interested in that, too? Would you continue to read some of my material or does this mean that our ways will part? But let’s detail first what has made the time ripe for what is less of an elitist move than an act of mere desperation.

What’s wrong with the Web?

So, what has happened? And what’s the gripe that I have with the Web? Well… It’s actually not just the Web (read: WWW), it’s more things coming together. There seems to be a general direction that the IT world is heading which I don’t approve of. This blog is powered by WordPress for the simple reason that when I started in 2012 I had some material that I wanted to share with the world and going with a free plan on a blog hosting platform seemed like a nice way to get started quickly. And it was. Mind the past tense here.

WordPress evolved – which is a good thing in general. It evolved in a way that I loathe, however. There have been all these “new” and “easier” modes for XYZ which were admittedly more shiny and more “modern” than before. But they broke my workflow time and time again – and unfortunately not for the better. I actually like to re-evaluate my workflows, like to challenge my habits. When I was still using Ubuntu Linux I didn’t only give Canonical’s Unity DE a chance, I actually tried to use it before putting it aside because in the end it hindered me more than it helped. I forced myself to use the (then) new graphical TrueOS for half a year and suffered through incompleteness and breakage. I have a fairly high tolerance for things that are… sub-optimal. I’m also rather patient, I think, and I’m surely not a rage-quit type of person. But inevitably there’s this point where I have to admit: “Ok, that’s it, that’s simply too much”.

The same thing happened with WordPress, when they came up with various iterations of “new editors” – which is why I opted to go back to the “old editor” for writing and editing my blog posts. As I returned to finish the next old post however, I found out that the old editor was gone. They had “finally” removed it. Alright, so I had to try and get accustomed to the new one again. Perhaps it had matured since I tried it last? It had. But it proved to be even more annoying for me than before! I don’t know what the designers are thinking – certainly not what I do. For me it is a pain to work with the new tool: Things that worked perfectly fine before are gone and the new options are so much inferior to what we had before… And this is not even the worst thing that made me give up when I was more than halfway through with polishing the next post.

Technically the new editors work well for small posts. I seem to have a bit more to say than the average blogger, though; and here’s a very frustrating issue: When editing longer posts, the editor gets unresponsive and laggy. Yuck! Trading a working one for a “new” one that does less but eats up so much more resources? Are you freaking kidding me? Oh, right, that’s the way things go, I forgot for a second. There’s something within my very self that refuses to accept the lack of sanity around me, sorry. I should know better by now.

And even though WordPress makes up a quite large portion of today’s Web, that’s just the tip of the iceberg. All this JS nonsense, megs and megs of useless graphics, animations, tracking and spying cookies, etc. pp. make the Web such an obnoxious place! And don’t even get me started on the situation with web browsers…

What’s wrong with hardware?

After my little rant I can hear people say: “See, pal… If the WordPress editor makes your system struggle enough that you complain on the net, you reaaally want to get a new PC! There’s machines with more than 4 gigs of memory today, you know.”

The irony here is: I did get a newer laptop – and went back to primarily using the old one! Why? Because the new one bugs me just too much whereas the older model works well for me. The newer one follows that stupid trend of being extremely thin. “Thanks” to that design no DVD drive fits in there. While this is incredibly dumb, I can live with it. Carrying an external drive with me is not so much of a problem, but still the fact that the machine is missing the drive annoys me. The bigger problem: The battery of the old model can easily be removed and I can carry a second one with me e.g. on longer train rides. With the new one I’d need to open the case to access it! WTF HP? And finally: Some idiot decided that the “cool” flat design was so important that it’s thinner than the jack for an Ethernet connector… Yes, really: There’s some kind of… hatch that I need to pull down to make the opening big enough to insert the plug! If I remove the plug the hatch snaps back up. It’s not hard to see that this brain-dead construction is prone to defects. And really, it didn’t take too long until it stopped working reliably and I’m randomly losing connectivity every other day… Sorry, this is garbage. Plain as that.

I have another older laptop – that one has 24 GB of RAM and two hard drives. Nice, eh? Not so much because it also has some of those gross EFI quirks. When one of the drives has a GPT partition scheme applied to it, that drive won’t even be detected by the POST anymore! So on that machine I’m stuck with MBR which is not so much fun if you’re using ZFS and want to encrypt your pool with GELI – it can be done, but the workarounds have the side-effects of not working with Boot Environments! True garbage again.

“Come on, why don’t you simply get a different model then?” Yeah, right. As I said, I type a lot, so I have some demands concerning the keyboard. Unfortunately some manufacturers seem to take pride in coming up with new ways of f*cking up the keyboard layout. There are models that simply dropped the CAPS LOCK key, because nobody uses that, right? Wrong! There are people who remap their keys, and being a NEO² typist (a keymap that allows for 6 layers) I need that key as an additional modifier! Take it away and that keyboard is utterly useless to me. But don’t current models like the X1 Carbon still have CAPS LOCK? They do, but there’s another oddity: They swap the CTRL and FN keys! Having CTRL as the leftmost key on the keyboard that I use as my daily driver and switching around in my head at 3 AM when there’s an alarm during my on-call duty is something I fail to do properly. I used a Thinkpad for about a month for on-call and it drove me completely mad. Sorry Lenovo. Your laptops may be excellent otherwise, but they are not for me.

Oh, and it’s not so much of a secret that the situation with the x86 and x86_64 architectures is abysmal (need I say more than Meltdown / Spectre?). Modern hardware design is fundamentally broken and I cannot say that I completely trust the fixes and what unknown side-effects they might bring with them either. But that’s not even the point. We’re stuck with market-leading technology that has been criticized as crappy right from the start. It has come a long way since then, but it’s not a great technology in any regard. I’ve been playing with my Pinebook and it might have the potential to become something different. Maybe the Open Source PowerPC notebook initiative will succeed. Probably we’ll have something RISC-V-based in the long run. But right now we have to make use of what is available to us.

Can’t the Web be saved?

I’m a somewhat optimistic person, but in this regard not so much. While I have thought about just self-hosting my blog and using a nice static-site generator or something, that would only solve my issues regarding the blog. A huge portion of the Web will still be a terrible place. Browsers and software will still suck hard. Especially in terms of browsers I’ve found even the suckless offering far from being a good one (I like their work and agree with most of the suckless principles, but in my conclusion it’s not enough to really get me out of this whole mess).

Sure, I can install e.g. the Dillo browser. But its development has been extremely slow for years and the subset of HTML that it supports cannot render most of today’s websites properly. And going with just the pages of people who deliberately keep their sites simple? I like the idea, but there’s the problem that no real community (that I know of) exists which agrees on a single standard of which HTML features are ok and which ones should be avoided.

I’m old enough to remember the browser wars well enough. I also confess being guilty of having used “marquee” on my homepage for a while back in the day. Maybe all of the gibberish we’re facing today is a late punishment for sins committed when we were younger? I don’t know.

So – are we doomed? Should we just pull the plug, go to real libraries again, buy books when we need information and spend our free time in the garden? That thought is tempting only for a moment. I love tech. I love the net and the possibilities it brings with it. Plus I’m kind of a liberal today who believes that mankind will eventually make something good from it all. We’re on this planet to play, to learn and to grow. Some insights can only be gained the hard way. It’s ok that commercial interests ruined the Web for people like me. There are other people who have (monetary or idealistic) interests in the Web being like it is. That’s fine. And in fact: Today’s Web “works” for a lot of people who either have no clue why it’s problematic in the first place or who simply don’t care. Which is a legitimate stance. The tragedy is simply that the rest of us have kind of lost our home for a while and we haven’t been able to find a new one, yet.

The “Big Internet” vs. the “Small Internet”

While the Web makes up a very large and definitely the most visible part of the Internet, there are niches, lesser known parts of it which are certainly not less interesting. Some people speak of the Web as the “Big Internet” and call the various special parts the “Small Internet”.

One such niche is the so-called Gopherspace. Gopher is a simple communication protocol for documents which predates the Web. Not just by today’s standards it’s primitive – but it works. And while the major browsers either never supported it or removed support long ago (e.g. Firefox dropped Gopher support for release 4.0), it’s still kept alive by enthusiasts. Some sources even suggest that it has seen moderate growth over the last years with more people fed up with the Web trying out something different. If you’re interested, start here with the web proxy or do some research on your own and install a Gopher client.

I’ve dug into various Gopher holes and my experience with it is a mixed bag. On the one hand it’s really cool to see people putting in the effort of creating a place that’s worthwhile to visit. I also liked the experience in general: It’s not bells and whistles everywhere (because that’s in fact impossible due to the protocols limitations) but rather a focus on the actual content. People have found ways to access git repositories over Gopher and there’s a community called Bitreich (via Gopher) that declared “suckless failed” and propose an even more radical approach. While some of it is probably parody, it’s an interesting one.

On the other hand some of the restrictions of the protocol make no sense at all today, like having a hard-coded limit of 80 characters per line. And there are more shortcomings. When Gopher was conceived, nobody thought of internationalization or of indicating the charset that a document uses. Also, Gopher is all plain-text and does not provide any means to help protect your privacy – which does not exactly make it overly appealing to me.

But there’s something else, an even smaller – but newer – niche, called Project Gemini. It is a newly designed protocol (2019!) that wants to provide kind of a middle ground between Gopher and the Web while clearly leaning towards the former. It remedies the privacy issues by making TLS mandatory and deliberately not supporting “user agents” or other ways to collect data about the user not required to deliver the content. And it wants to simply be an additional option for people to use not to replace either Gopher or the Web.
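To give an impression of how small the protocol really is: a client opens a TLS connection to port 1965, sends the requested URL terminated by CRLF, and receives a short status header followed by the document. You can even try that by hand with nothing but OpenSSL (a sketch – it assumes the circumlunar.space server is still reachable):

printf 'gemini://gemini.circumlunar.space/\r\n' | \
    openssl s_client -quiet -servername gemini.circumlunar.space -connect gemini.circumlunar.space:1965

The first line that comes back is a status header such as “20 text/gemini”, followed by the gemtext page itself.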

While I kind of like Gopher, I really admire Gemini. Having the balls to even try to actually do something like this in our time is simply (no pun intended) astonishing. In my opinion it’s high time to re-invent the wheel. Gemini is certainly not a panacea for all the problems that we have today. But it’s a much welcome (for me at least) chance to start fresh without that huge, huge bag of problems that we’re carrying on with the Web but with the experience that we gained from using it for several decades now.

What’s next?

My experiment is to move Eerielinux to Geminispace and post new content there as I find the time. The plan is to keep this WordPress blog and to create new posts here that simply reference the real content, providing a convenient link using a Web proxy. That should help keep things accessible for people who prefer to stick with their known browsers instead of getting another one for such a tiny little world as Geminispace is right now.

And who knows? Perhaps something nice comes from Gemini in the long run. I’m looking forward to find out. And if you care — see you in Geminispace for some more articles on *BSD, computers and tech in general!

FreeBSD on SPARC64 (is dead)

This was originally my January article that I decided to postpone for another one. So it’s really the February one – except for myself not getting around to publishing it before August due to… well, the world going crazy. So this is the January state of affairs for the most part.

On the plus side of all the havoc: I went over the article again and improved it. I intend to publish the other articles that I missed out one by one.

After getting my hands on an old SPARC64-based SUN machine, I’ve written a couple of posts about the actual hardware, installing and running OpenBSD as well as an illumos distro on it. I’ve also hinted that I wanted to try out my favorite OS on it, too: FreeBSD.

FreeBSD on SPARC64?

I’m coming pretty late to the party, because SPARC64 support in FreeBSD is apparently doomed: After the POWER platform made the switch to an LLVM/Clang-based toolchain, SPARC64 is one of the last ones that still uses the ancient GCC 4.2-based toolchain that the project wants to finally get rid of (it has already happened as I was writing this – looks like the firm plan was not so firm after all, since they killed it off early). And compared to the other platforms it has not seen too much love in recent times… SPARC64 being a great platform, I’d be quite sad to see it go. But before that happens let’s see what the current status is and what would need to be done if it were to survive, shall we?

The FreeBSD project provides ISO images for various platforms. At the time of my first attempt I downloaded a 12.0-RELEASE SPARC64 iso. Since my machine cannot boot from USB I can’t use the virtual CD drive that I typically use. So I went old-school and burned a CD. Then I prepared everything and turned the machine on. It started booting but before even reaching the installer or anything, I witness something completely unexpected – here’s the last few lines from the serial console:

nexus0:  type unknown (no driver attached)
rtc0:  at port 0x70-0x71 pnpid PNP0b00 on isa0
rtc0: registered as a time-of-day clock, resolution 1.000000s
uart0: console (9600,n,8,1)> at port 0x3f8-0x3ff irq 43 pnpid PNP0501 on isa0
uart1:  at port 0x2e8-0x2ef irq 43 pnpid PNP0501 on isa0
Timecounter "tick" frequency 548000000 Hz quality 1000
Event timer "tick" frequency 548000000 Hz quality 1000
Timecounters tick every 1.000 msec
Power Failure Detected: Shutting down NOW.

Wow, what’s that? FreeBSD thinks that the machine has a damaged PSU and immediately shuts down, giving me no chance to actually do anything. Now that was less than impressive… This was when I decided to try out OpenBSD instead. That worked well and the machine ran for a couple of days, compiling OpenBSD. So the power supply is definitely fine.

I did a little research and it seems that some models fire an interrupt which on other models means “PSU is damaged”. OpenBSD seems to know this for my machine and ignores it, but FreeBSD doesn’t.

Rolling my own ISO

Ok, so what to do now? We have a defunct OS but we also have a string to look for. It’s not hard to find it in /sys/sparc64/pci/psycho.c. I don’t know C and thus cannot implement new features or do real debugging of any sort. But I can comment stuff out. And since this is a false alarm that should be sufficient. So I get rid of the check that ruins the boot for me and save.

Now, one of the things that I love about FreeBSD is its build system. It’s so easy to build the OS – or to even cross-build it! It’s as simple as doing this:

cd /usr/src
make TARGET=sparc64 TARGET_ARCH=sparc64 buildworld buildkernel

This cross-builds FreeBSD for SPARC64. But how to get it onto my SUN machine? Fortunately there is a pretty interesting directory in /usr/src/release. The Makefile and scripts there allow for extremely easy creation of installation media. So it should just be two more commands and I’d end up with an ISO image that contains my modified FreeBSD. Let’s give it a try:

cd /usr/src/release
make cdrom NOPKG=1 NOPORTS=1 NOSRC=1 NODOC=1 TARGET=sparc64 TARGET_ARCH=sparc64

This means: Build a minimal install CDROM image, leaving out any packages, the ports tree as well as the source and documentation distribution sets. But what is this?

gpart: scheme 'VTOC8': Invalid argument
*** Error code 1

The manpage for gpart reveals the cause of the problem: By default VTOC8 is not supported on amd64 kernels (which is a perfectly reasonable choice). However solving the problem is as easy as adding GEOM_PART_VTOC8 to the kernel options and building a custom kernel on my amd64 build machine.

If you’ve never worked with a custom kernel (as it’s far less often required than it was in the past), have a look in /usr/src/sys/amd64/conf. You’ll find a file called GENERIC there. It holds all the definitions for the standard kernel. While you could just edit this file it’s bad practice to do so. At least copy GENERIC to a new file:

cp /usr/src/sys/amd64/conf/GENERIC /usr/src/sys/amd64/conf/GENERIC-VTOC

Now let’s change the ident line to GENERIC-VTOC, too, and add the option GEOM_PART_VTOC8 as the gpart manpage suggests.
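In the copied file the whole edit boils down to two lines looking roughly like this (a sketch; everything else stays as it is in GENERIC):

ident           GENERIC-VTOC
options         GEOM_PART_VTOC8

That’s all, now I can rebuild the kernel with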

make -C /usr/src kernel KERNCONF=GENERIC-VTOC

This command line builds and installs the new kernel for me, I just have to reboot afterwards. And not too much later I have an ISO to burn that would finally let me install FreeBSD on the machine!

Building software

The first surprise for me: The installer is missing the option to install on ZFS as I have become used to. Looks like ZFS can be used, but the booting facility for sparc64 does not support booting from ZFS.

There are no pre-built packages available for sparc64. There used to be up until about a year ago, but now it’s building everything from ports.

After building tmux I decided to upgrade the system. Of course there’s no way to use freebsd-update, so it’s also compiling from source. I give -CURRENT a try, but the old GCC 4.2 in base cannot build that. Eventually I settle on 12-STABLE. Compiling it takes many hours, but it works without problems.
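For readers who have never done a source upgrade on FreeBSD, the procedure boils down to something like the following (simplified and from memory – the handbook has the authoritative version):

cd /usr/src                    # with the 12-STABLE sources checked out
make buildworld buildkernel    # this is the part that takes many hours here
make installkernel
shutdown -r now                # reboot onto the new kernel first
cd /usr/src
make installworld
etcupdate                      # merge updated configuration files in /etc
shutdown -r now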

After rebooting, however, my system doesn’t come back up. Uh-oh. What’s happening here? I connect via the serial console – and see that I forgot to patch psycho.c… Fortunately I can just boot kernel.old and be good again. While editing the file again, I notice that I did unnecessary work before: There’s a sysctl that can turn the automatic shutdown off! So, yes, it’s just a simple hw.psycho.powerfail=0 in loader.conf and the unmodified stable kernel boots. If I had entered set hw.psycho.powerfail=0 at the OK prompt in the loader, I wouldn’t even have had to roll my own ISO. But I like exploring how FreeBSD works, so that extra step was not so bad after all.
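Spelled out, the two variants are:

# permanently, in /boot/loader.conf:
hw.psycho.powerfail="0"

# or just for one boot, at the loader’s OK prompt:
set hw.psycho.powerfail=0
boot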

Now having a pretty fresh system, I go back to exploring ports again – and run into problems. The default version of GCC on FreeBSD is currently 9.2, but that version is broken on sparc64. GCC8, GCC7 and GCC48 work and can be used to compile software (the latter needs the port edited, though, as sparc64 is not listed as a supported platform even though the port is fine). GCC6 is also broken after the latest minor release.

This is pretty unfortunate since it means that most software that requires a newer GCC to compile cannot be built easily. Sure, I could mess with the ports infrastructure to make it use GCC8 instead, but I’d rather investigate something else.
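(For completeness: “messing with the ports infrastructure” would probably amount to little more than a line in /etc/make.conf along these lines – untested on sparc64, so take it as a rough sketch:)

# /etc/make.conf – ask the ports framework to use lang/gcc8 wherever a port needs GCC
DEFAULT_VERSIONS+=gcc=8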

External GCC

While the PPC platform recently moved to the LLVM/Clang-based toolchain, SPARC64 is still stuck with the ancient GCC in base as mentioned before. For such projects, FreeBSD has created a path to move forward: Using an external toolchain as the base compiler.

Have a look in /usr/ports/base. You’ll find the two ports binutils and gcc6 there. This wiki page gives an overview of how many steps the various platforms have already made.

So I built the binutils and gcc packages, installed them – and broke my system. As I found out, the base ports actually replace said component in base. So after installing, it’s no longer binutils 2.17.50, but the much more recent 2.33.1 and gcc 6.5.0 instead of 4.2.1. The latter however is broken… So time to make installworld from my -STABLE source again to fix my system.

What to do next? Well, -CURRENT is where the exciting stuff happens, so I’ve got to get that onto my machine somehow! The native way is blocked, as I have no idea how to fix the compiler. But what about cross-compiling for now?

Cross-building -CURRENT

There’s a package with the name sparc64-xtoolchain-gcc available for my amd64 machine, so installing the cross-toolchain is about as easy as it gets. Let’s try to build FreeBSD with it!
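
In case you want to follow along, the whole procedure boils down to something like this (a sketch – the exact targets and the CROSS_TOOLCHAIN name may vary with your source tree):

pkg install sparc64-xtoolchain-gcc
make -C /usr/src buildworld buildkernel TARGET=sparc64 TARGET_ARCH=sparc64 CROSS_TOOLCHAIN=sparc64-gcc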

It takes a while but of course finishes much more quickly than a native build on the old Sun hardware would. Ok, the next step is preparing a release from it so that I can try it out. Unfortunately, trying to do so leads to this:

--- libc.a ---
/usr/local/sparc64-unknown-freebsd13.0/bin/ranlib -D libc.a
--- libc.so.7.full ---   
collect2: fatal error: ld terminated with signal 11 [Segmentation fault], core dumped
compilation terminated.
/usr/local/bin/sparc64-unknown-freebsd13.0-ld: BFD (GNU Binutils) 2.31.1 assertion fail elfxx-sparc.c:765
*** [libc.so.7.full] Error code 1

Ouch! What is this? It looks like the linker is broken! Time to fix it, eh? Except – what do you do when you’re not a C programmer? After a moment of consideration I decided that this might be well beyond my skills – but sometimes you’re lucky. What if the change that broke it was rather simple to understand? You don’t know until you try, so here we go…

Linker games

There must have been a time when things worked, so I start by testing a build with an older version of Binutils: Using 2.28 yields a successful build with GCC 9 (gtest disabled). Same thing for 2.30. Using 2.31, however, leads to the known linker problem. Aha, so the error was introduced during the development of Binutils 2.31. That’s good to know.

So now it was time to clone the git repo (I don’t quite understand why they put binutils and gdb into the same one, but never mind) and dig into it. It took me a while to figure out exactly which commit broke things, but I was eventually able to find the culprit. It was not the simple commit I had hoped for: cleanup of old cruft had been done (like phasing out a.out and COFF support) and that somehow also broke ELF. I couldn’t see why, so I started taking the commit apart.
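
For the record: Narrowing “somewhere between 2.30 and 2.31” down to a single commit is the classic use case for git bisect. Roughly like this (the tag names follow the Binutils repository’s conventions, so treat them as an approximation):

git clone https://sourceware.org/git/binutils-gdb.git
cd binutils-gdb
git bisect start
git bisect bad binutils-2_31
git bisect good binutils-2_30
# build ld/gas at each step git checks out, re-run the failing link,
# then report the result with "git bisect good" or "git bisect bad"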

Finally I traced the problem to a change in gas (the GNU assembler) where the cleanup led to an oversimplification of the target selection that made gas on FreeBSD behave as if it were NetBSD! I provided the patch on the FreeBSD bugtracker (here) and it was accepted. I posted my patch to the upstream Binutils mailing list, too (here), and a couple of days later Alan Modra, who had made the change back then, got back to me after merging a fix.

Once the patch hit the ports tree I was able to cross-compile a -CURRENT SPARC64 kernel on AMD64 out of the box without any need for patching. Yay!

Playing with cross-built kernels

Trying to boot up the kernel resulted in a panic quite early in the boot, but after getting this far I didn’t want to just give up now. So I filed another bug report and the problem was in fact solved really quickly. With that fix the kernel was able to boot further – only to hit another panic. Fortunately the second one was sorted out almost instantly as well on the same bug report.

Having a working -CURRENT kernel booted up felt just great. But what about the userland? It was also broken in multiple ways and did not compile. It took me days to figure out all the breaking commits and to revert or otherwise fix them for testing purposes, but I ended up with a patched userland that did successfully cross-compile. The downside was that every single resulting binary that I tried out would just crash… At that point I hit a dead end. I submitted one final bug report (here), but as nobody had any suggestions for me, this was the game-over point.

Goodbye SPARC64!

When I learned that FreeBSD was about to drop SPARC64 support I hoped that I could step in and help keep it alive. This proved to be an illusion, as obviously nobody with a deeper technical understanding than mine had any real interest in the platform. I admit being deeply disappointed in January when SPARC64 was dropped. Not because it was dropped, but mainly because it was dropped early. Maybe a bit more time would have helped to keep it in, maybe it wouldn’t. We’ll never know.

But it’s probably a good thing that I didn’t get around to publishing this article until months after the events. I’d like to especially thank Ed Maste, Mark Johnston and Baptiste Daroussin, who merged fixes for a most obviously doomed platform, as well as other people from the team who were helpful about it until the end. FreeBSD on SPARC64 failed to survive until 13.0-RELEASE, but the efforts that were made show how much of a professionally developed OS FreeBSD is.

So let’s put SPARC64 to rest in dignity. The team has stated that support over the lifespan of 11-STABLE and 12-STABLE will be done as “best effort” work since it’s gone in HEAD. Especially 12 will live for some time and I’ll try to continue doing things with my SunFire until the curtain falls some time after 12.x.

Should you abandon Linux and switch to *BSD?

The popularity of Linux skyrockets these days: More and more companies adopt it, and even the just-above-average user who doesn’t accept the imposition that is called “Windows 10” is often open to trying it out. At the same time, however, the popularity of said system seems to be fading among some of the more technical people, operating system enthusiasts and even followers of the FLOSS ideology.

Just recently an article called Why you should migrate everything from Linux to BSD was published; it has caught some attention and even drawn replies like this one (which makes false claims like ZFS not being available on NetBSD and such, though).

There have always been posts like this; that is nothing too special. What I think is new, however, is the frequency with which you can find such discussions. Also, the general tone has changed. Just a couple of years ago most Linux users wouldn’t even have bothered to comment on such a thing. Today they seem to be much more open to learning about alternatives or in fact looking for something better than what they have. So what’s wrong with Linux?

It’s not about Systemd (alone)…

There are a lot of perfectly valid technical reasons not to want to use Systemd on your Linux system. But none of those could ever be an excuse for the hatred that this project has attracted. However, there is a pretty simple explanation for that phenomenon: It’s how somebody acts all high and mighty, simply dismissing valid criticism and being a prime example of a person with an arrogant attitude.

We have a lot of… let’s say… difficult persons in tech. Sometimes you think they have no manners, are being jerks or greatly overestimate their knowledge in certain fields. That’s ok, and in fact they often are brilliant in some area. Most of us have learned to live with that.

And then there are people who think that they can dictate the one way to go. Well, there are project leaders who actually can do that due to being widely respected. But sometimes it’s different. And when somebody wants to take something away from you (or he really doesn’t, but you get that impression), you are likely to go on the defensive.

Now when all of that unites in one person, you have the perfect boogeyman. Then all the technical aspects lose weight and feeling takes over. Which is not to say that feeling is not an important factor: If you don’t feel comfortable with Linux anymore, it might be time to move on.

The GNOME factor

The GNOME desktop is well known among *nix desktop users. It suits the needs of some people but not others. That’s fine, and in itself there would not be any problem at all. However, GNOME has a certain reputation which is not that nice… Why? Not because they didn’t accept some feature requests. Also not because they act like ignoramuses when it comes to systems that are not so mainstream. No. They are notorious for cutting features that already existed! This is what makes a lot of people mad.

I was a GNOME user myself and remember pretty well how much I liked the file manager. To my dismay they removed so many features which I needed that the application became useless for me. I didn’t want to go looking for something else, but I was eventually forced to as the situation became unbearable. As I’m more of a calm guy, I didn’t go off and insult anybody, but other people did. And things got worse…

Today I’m a former GNOME user. This is *nix and not Windows. Nobody can force tiles upon us against our will. Yes, some projects think they know better than their users and that leads to the latter becoming upset. But as long as there are alternatives, we can move on.

Clumsy leaders

Recently Linus Torvalds spoke out against ZFS. Being the Linux “inventor” he has earned a lot of respect among the *nix community. I also used to hold him in high esteem despite his often ignoble behavior. However over the course of the last few years, I’ve lost a lot of respect for him.

The ZFS statement was the final nail in the coffin for me. He says that he always found ZFS more of a “buzzword” than anything else and that “benchmarks” didn’t make it look that good anyway. This is so far off the mark that I’m ashamed to ever have considered him a technical genius. He obviously has no idea at all what problem ZFS actually solves! Speed benchmarks are all nice and well, but they are not why people want ZFS! And of course it’s far from being a buzzword – if you have valuable data today, you almost certainly want to bet on ZFS.

But not only is he ok with judging what he has not even bothered to look at from closer than several hundred meters – he’s also making completely stupid claims that make him look like a terribly ridiculous figure. According to Mr. Torvalds, ZFS had “no real maintenance behind it either any more”.

Ouch! He hasn’t even heard about OpenZFS, I’d guess. If you’re not in a closed-Solaris environment, this is what people are referring to when they say “ZFS”. Nobody outside the small, isolated isle of Oracle has any interest in ClosedZFS. Yes, Oracle laid off most of their Solaris staff and nobody knows if there is any noteworthy future for that OS. But not too long ago Solaris 11.4 was released – so even if Linus was referring to the situation at Oracle, he’s not exactly right. In the case of OpenZFS, however, he could not be more wrong.

OpenZFS is as alive as it could be. There are regular leadership meetings, many new features are being developed – and just recently a common repository for both Linux and FreeBSD was created, with other operating systems expected to join in! This is a moment of glory for collaboration in Open Source, but Linus didn’t hear a thing – or did he not want to hear anything? The fact that the second-in-command, GKH, also attacked ZFS about a year ago in a pretty questionable way does not bode well.

Why do people leave Linux?

There have always been compelling technical reasons why you would choose *BSD over Linux (e.g. the complete operating system approach, much more consistent design, etc.). But I’d say that lately the feeling part of it has become much more important.

I left Linux because I was so sick and tired of the stupid fights between the hardcore fans of one distro or another and the unbearable arrogance of many. Yes, I also had the feeling that Linux was heading down the wrong path. I simply was no longer really happy with it and was ready to try something new. There was a learning curve for sure, but the FreeBSD community is extremely friendly, and while there are of course also people getting into disputes, I got the feeling that I described as “BSD is for grown-ups”. That is not to say that there aren’t any really bright people in the Linux community, but on average I feel that the BSD users are more technical.

Others have stated similar reasons. The primary developer of ClonOS (which strives to be for FreeBSD what Proxmox is for Linux) wrote this:

According to the authors of the project, Linux is no longer a member of the common people, it is fully controlled by big commercial organization. while FreeBSD is developed mostly by enthusiasts. Today, Linux – it is a commercial machine for making money – is that it was Microsoft Windows in 90 years. While many Linux users have struggled against the Windows monopoly (CBSD author of one of them).

Yes, FreeBSD very far behind in their characteristics in comparing to Linux. Just look at the abundance of such powerfull decisions as the OpenVZ, Docker, Rancher, Kubernetis, LXD, Ceph, GlusterFS, OpenNebula, OpenStack, Proxmox, ISPPanel and a dozen others. All this is created by commercial companies for Linux and this is done very well. However, Linux is oversaturated with similar solutions. Therefore, it’s much more interesting to create it on FreeBSD, where nothing like that exists. This is an excellent challenge to improve and fix in FreeBSD.

We all love independence and freedom and FreeBSD today – an independent and free operating system, which is in the hands of ordinary people.

They are not alone. Even convinced followers of the FSF ideas have come to the conclusion that Linux may not be the right platform for them anymore. The people behind Hyperbola GNU/Linux have announced this:

Due to the Linux kernel rapidly proceeding down an unstable path, we are planning on implementing a completely new OS derived from several BSD implementations. This was not an easy decision to make, but we wish to use our time and resources to create a viable alternative to the current operating system trends which are actively seeking to undermine user choice and freedom.

This will not be a “distro”, but a hard fork of the OpenBSD kernel and userspace including new code written under GPLv3 and LGPLv3 to replace GPL-incompatible parts and non-free ones.

Reasons for this include:

Linux kernel forcing adaption of DRM, including HDCP.
Linux kernel proposed usage of Rust (which contains freedom flaws and a centralized code repository that is more prone to cyber attack and generally requires internet access to use.)
Linux kernel being written without security and in mind. (KSPP is basically a dead project and Grsec is no longer free software)
Many GNU userspace and core utils are all forcing adaption of features without build time options to disable them. E.g. (PulseAudio / SystemD / Rust / Java as forced dependencies)

As such, we will continue to support the Milky Way branch until 2022 when our legacy Linux-libre kernel reaches End of Life.

Will *BSD be a better OS for you?

So the big question is: If you are a Linux user, should you make the switch, too? I won’t unconditionally say yes. It really depends.

Are you happy with the overall situation in Linux? In that case there’s no need to migrate anything over. However you might still want to give a BSD of your choice a try. Perhaps you find something that you like even better? If you spend a bit of time exploring a BSD, you will find that several problems can be solved in other ways than those you are familiar with. And that will likely make you a better Linux admin, even if you decide to stick with it. Or maybe you’ll want to use the best tool for the job which could sometimes be Linux and sometimes a BSD. Getting to know a somewhat similar but also at times quite different *nix system will enable you to make an informed choice.

Not happy with Linux anymore in the recent years? Try out a BSD. If you need help to decide which one might be for you, I’ve written an article about that topic, too. Do a bit of reading, then install that BSD in a VM and explore. If you go with FreeBSD, make sure you take a look at the handbook (probably also available in your language) as that is a great source of information and one thing that sets FreeBSD apart from almost all Linux distros.

If you find that you like what you found, make a list of your requirements and find out if your BSD would indeed fulfill your needs. If it doesn’t, consider alternatives. Once the path is clear, I recommend taking a look at the community, too. For example, there’s the weekly BSDNow! podcast, which is very informative. A lot of people have already written in confessing that they are still Linux-only users, but the topics of the show got them hooked anyway.

Do not rush things. Did you start with Linux or have you migrated e.g. from Windows? If you did come from a different OS, remember that there have been frustrating moments when you were all new to Linux and had certain misconceptions. You will be going through that again, but looking at the final outcome it will likely be a pretty rewarding journey.

Also don’t be shy and ask others if you don’t have the time or will to figure out everything yourself. The BSD people are usually pretty approachable and helpful. Feel free to ask me questions here, I might be able to give some answers.

It has been a couple of years now since I replaced the last machine that ran Linux at home. Would I choose to make the switch after all the experience that I gained since then? Oh yes! Anytime.

A glimpse into 2020

When you read this, the old year will be over (well, depending on the time zone you live in). If we’re lucky, this might be the year to get our hands on the first affordable RISC-V hardware that can actually run a Unix-like operating system. It should definitely be the year to get interesting devices like the ARM64-based PinePhone. And it also means that Python 2 is finally dead.

Speaking about that: For me 2019 has been a pretty busy year. On this blog I wrote about quite a few different topics, among them my first attempt at writing something programming-related as I tried to teach myself a little bit of Python. If I had to name an overall theme, I’d say that the past year was the year of hardware architectures. I didn’t plan this, but that’s what happened. But I don’t actually want to look back in this post. On the contrary! Then again, speaking of dead things kind of fits into the next topic (more than I like…)!

FreeBSD on SPARC64

The next post that I plan to write will be about FreeBSD on the SPARC64 architecture. What I did not know when I decided on that is that it is more or less a doomed architecture when it comes to FreeBSD. SPARC64 is in grave danger – people expect support for it to be dropped before FreeBSD 13.0 is released!

The reason is that it is one of the architectures that still need the old GCC 4.2 (yes, from 2007!) toolchain – and that old cruft finally has to go. And while everybody agrees that this is a completely sensible thing to do, SPARC64 doesn’t seem to have enough friends among the FreeBSD developers to make the transition to something newer. A few people are trying to get something done (I’m also tinkering and trying to help), but it’s far from a safe bet that it’ll succeed.

IMHO it would be a real shame to see FreeBSD on SPARC64 die. If it does survive, I’ll definitely try to help with QA. Can you help? If so: Please do! I’ll give more details on what the current status is and where the problems are in my January post.

FreeBSD on ARM64

I plan on writing another post about the current status of FreeBSD on ARM64. The topic of making it a Tier 1 architecture has recently been brought up again and I’d like to join the discussion about that sooner rather than later. If it wasn’t for the very unpleasant situation with SPARC64, this would actually be my next post.

Progress has been made on the networking issues on the Cavium ThunderX servers and I’ll also take a look again at the PineBook. Most likely I’ll also buy a PinePhone and/or one of their Tablets. If I do, you will find a review here.

Orchestration and Configuration Management with SaltStack

I wanted to write about this topic for a while, but now I’ve at last started to set aside the hardware that I need for the project. Several years ago (gosh…) I wrote a little comparison of Puppet, Chef, Salt and Ansible. After a general introduction I might update and publish this as well.

But that will only be the start of a series of posts introducing Salt. There will be a slight focus on FreeBSD, but in general it will show off how to work with various operating systems and distributions. We’ll start with Salt-SSH using Remote Execution Modules, talk about targeting and get to know grains. Then we’ll progress to the state system, pillar data and so on before switching to the master-minion model.

I’m looking forward to this one. If anybody has any ideas – just tell me, I’m open to suggestions on what to cover.

HardenedBSD

While I ran this derivative of FreeBSD for a couple of weeks on spare hardware (until I needed that for something else), I just didn’t find the time to write about my experience with it, yet. I liked it, though, and plan on re-visiting the OS. And when I do, you’ll read about it here for sure.

The project is currently being re-structured. So I’ll wait with this topic for a while longer. Might happen in the second half of the year (time flies by much too fast, anyway).

E-Mail

For years now I’ve been putting this one off. In fact I’ve started digging a bit into the topic twice. Two times I got distracted with other topics. Maybe third time is the charm?

To help make this more likely, I registered a domain (with a pretty bad pun in German) in December. Maybe now that I’m paying money, I’ll actually live up to the plan to “do moar with mail”. I’ll be using BSD technology where possible. So expect that the mailing stuff will involve OpenSMTPd.

illumos

This one is really a time issue. I’m still very much interested in the heritage of OpenSolaris and would like to do some more things with it. However I have no idea when I’ll find the time to do a dedicated illumos project. But there will definitely be some illumos involved with one of the other topics. You guess which one! 😉

Ravenports

The Ravenports project is still as fascinating to me as it was when I discovered it. I really wish I could dedicate more time to porting and helping to bring things forward (there’s still quite a lot of ports missing that I’d like to have and use).

While things are going well in general and ports are being updated really, really fast most of the time, big changes are rare right now. But big changes are what it makes sense to write about. And while there are some noteworthy things that I can think of, I’m still waiting for something else to land. Once that happens I will dedicate another post to Raven.

Linux vs. FreeBSD

I recently needed to setup a new Linux machine for a customer. Usually my co-workers do that, since I volunteered to take care of our BSD machines. That installation left me totally puzzled. Has the Debian installer become worse – or does my memory fail me and it has always been so bad (and I didn’t notice when I was into Linux only)?

Since then I thought about a few things. The conclusion is that I really love FreeBSD. It’s not perfect (well, nothing is), but there are so many areas where it’s much, much more comfortable to work with (can you say iptables or mdadm? Yuck!). And there is a lot more beauty and even technical genius if you take a closer look and compare things.

Yes, Linux is much more advanced in many areas. But that’s not much of a surprise given how much more manpower goes into that system. It is a little miracle, though, how the BSDs with their much lower manpower continue to deliver excellent operating systems on par with or even superior to Linux when it comes to sanity of use. Thank you, *BSD!

Happy new year!

So that’s what I have on my mind right now (I’m not out of ideas, but these are the topics that are on the top of my “would like to write about” list currently). Which of these topics will I be able to deliver and which will I miss? Time will tell. Feel free to comment and tell me what interests you the most.

Happy new year to all of my readers!

Writing a daemon using FreeBSD and Python pt.3

Part 1 of this series covered Python fundamentals, signal handling and logging. We wrote an init script as well as a program that can be daemonized by daemon(8).

In the previous part we modified the program as well as the init script so that it can daemonize itself using the Python daemon module. I also covered a few topics that people totally new to programming (or Python) might want to know to better understand what’s happening.

Part 3 is about exploring a simple means of IPC (inter-process communication) by using named pipes.

Creating a named pipe

What is a named pipe – also known as a fifo (first in, first out)? It is a way of connecting two processes together, where one can sequentially send data and the other receives it in exactly the same order. It’s basically what we Unix lovers use on our command lines all the time when we pipe the output of one program into another. E.g.:

ls | wc -l

In this case the output of ls is piped to wc, which will then print the number of lines to stdout (which could in turn be used as input for another program with another pipe). This kind of pipe between two programs is usually short-lived. When the first program is done sending output and the second one has received all the data, the pipe goes away with the two processes. It also only exists between the two processes involved.

A named pipe, in contrast, is something a bit more permanent and more flexible. It has a representation in the filesystem (which is why it’s a named pipe). One program creates a named pipe (usually in /var/run) and attaches to the receiving end of the pipe. Another process can then attach to the sending end and start putting data into it, which will then be received by the former. Named pipes have their own file type character (p) showing that a file is a named pipe; it looks like this when you ls -l:

prw-rw-r--
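
By the way, you can play with a named pipe right on the command line before touching any Python (the path here is just an example):

mkfifo /tmp/demo.pipe
ls -l /tmp/demo.pipe
echo "Hello, pipe!" > /tmp/demo.pipe &
cat /tmp/demo.pipe
rm /tmp/demo.pipe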

Here’s what the next version of the code looks like:

#!/usr/local/bin/python3.6
 
 # Imports #
import daemon, daemon.pidfile, logging, os, signal, time
 
 # Globals #
IN_PIPE = '/var/run/bd_in.pipe'
 
 # Functions #
def handler_sigterm(signum, frame):
    logging.debug("Caught SIGTERM! Cleaning up...")
    if os.path.exists(IN_PIPE):
        try:
            os.unlink(IN_PIPE)
        except:
            raise
    logging.info("All done, terminating now.")
    exit(0)

def start_logging():
    try:
        logging.basicConfig(filename='/var/log/bdaemon.log', format='%(levelname)s:%(message)s', level=logging.DEBUG)
    except:
        print("Error: Could not create log file! Exiting...")
        exit(1)

def assert_no_pipe_exists():
    if os.path.exists(IN_PIPE):
        logging.critical("Cannot start: Pipe file \"" + IN_PIPE + "\" already exists!")
        exit(1)

def make_pipe():
    try:
        os.mkfifo(IN_PIPE)
    except:
        logging.critical("Cannot start: Creating pipe file \"" + IN_PIPE + "\" failed!")
        exit(1)
    logging.debug("Created pipe \"" + IN_PIPE)

 # Main #
with daemon.DaemonContext(pidfile=daemon.pidfile.TimeoutPIDLockFile("/var/run/bdaemon.pid"), umask=0o002):
    signal.signal(signal.SIGTERM, handler_sigterm)
    start_logging()
    assert_no_pipe_exists()
    make_pipe()

    logging.info("Baby Daemon started up and ready!")
    while True:
        time.sleep(1)

We’re using a new import here: os. It gives the programmer access to various OS-dependent functions (like pipes, which don’t exist in this form on Windows, for example). I’ve also added a global definition for the location of the named pipe.

The next thing that you’ll notice is that the signal handler function got some new code. Before the daemon terminates it tries to clean up. If the named pipe exists the program will attempt to delete it. I’m not handling what could possibly go wrong here as this is just an example. That’s why in this case I just re-raise the exception and let the program error out.

Then we have a new “start_logging()” function that I put the logging stuff into to unclutter main. Except for that changed structure, there’s really nothing new here.

The next new function, “assert_no_pipe_exists()” should be fairly easy to read: It checks if a file by the name it wants to use is already present in the filesystem (be it as a leftover from an unclean exit or by chance from some other program). If it is found, the daemon aborts because it cannot really continue. If the filename is not taken, however, “make_pipe()” will attempt to create the named pipe.

The other thing that I did was to move the main part out of its own function and back into the program directly. And since we’re doing small incremental steps, that’s it for today’s step 1. Fire up the daemon using the init script and you should see that the named pipe was created in /var/run. Stop the process and the pipe should be gone.
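
Assuming the init script from the earlier parts is installed as /usr/local/etc/rc.d/bdaemon (adjust the name if yours differs), the quick test could look like this:

service bdaemon onestart
ls -l /var/run/bd_in.pipe
service bdaemon onestop
ls -l /var/run/bd_in.pipe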

Using the named pipe

Creating and removing the named pipe is a good first step, but now let’s use it! To do so we must first modify the daemon to attach to the receiving end of the pipe:

#!/usr/local/bin/python3.6
 
 # Imports #
import daemon, daemon.pidfile, errno, logging, os, signal, time
 
 # Globals #
IN_PIPE = '/var/run/bd_in.pipe'
 
 # Functions #
def handler_sigterm(signum, frame):
    try:
        os.close(inpipe)
    except:
        pass

    logging.debug("Caught SIGTERM! Cleaning up...")
    if os.path.exists(IN_PIPE):
        try:
            os.unlink(IN_PIPE)
        except:
            raise
    logging.info("All done, terminating now.")
    exit(0)

def start_logging():
    try:
        logging.basicConfig(filename='/var/log/bdaemon.log', format='%(levelname)s:%(message)s', level=logging.DEBUG)
    except:
        print("Error: Could not create log file! Exiting...")
        exit(1)

def assert_no_pipe_exists():
    if os.path.exists(IN_PIPE):
        logging.critical("Cannot start: Pipe file \"" + IN_PIPE + "\" already exists!")
        exit(1)

def make_pipe():
    try:
        os.mkfifo(IN_PIPE)
    except:
        logging.critical("Cannot start: Creating pipe file \"" + IN_PIPE + "\" failed!")
        exit(1)
    logging.debug("Created pipe \"" + IN_PIPE)

def read_from_pipe():
    try:
        buffer = os.read(inpipe, 255)
    except OSError as err:
        if err.errno == errno.EAGAIN or err.errno == errno.EWOULDBLOCK:
            buffer = None
        else:
            raise
 
    if buffer is None or len(buffer) == 0:
        logging.debug("Inpipe not ready.")
    else:
        logging.debug("Got data from the pipe: " + buffer.decode())
    
 # Main #
with daemon.DaemonContext(pidfile=daemon.pidfile.TimeoutPIDLockFile("/var/run/bdaemon.pid"), umask=0o002):
    signal.signal(signal.SIGTERM, handler_sigterm)
    start_logging()
    assert_no_pipe_exists()
    make_pipe()
    inpipe = os.open(IN_PIPE, os.O_RDONLY | os.O_NONBLOCK)
    logging.info("Baby Daemon started up and ready!")

    while True:
        time.sleep(5)
        read_from_pipe()

Apart from one more import, errno, we have three important changes here: First, the cleanup has been extended; second, there is a new function called “read_from_pipe()”; and finally, main has been modified as well. We’ll take a look at the latter first.

There’s a ton of examples on named pipes on the net, but they usually use just one program that forks off a child process and then communicates over the pipe with that. That’s pretty simple to do and works nicely by just copying and pasting the example code in a file. But adapting it for our little daemon does not work: The daemon just seems to “hang” after first trying to read something from the pipe. What’s happening there?

By default, reads from the pipe are in blocking mode, which means that on the attempt to read, the system just waits for data if there is none! The solution is to use non-blocking mode, which however means to use the raw os.open function (that supports flags to be passed to the operating system) instead of the nice Python open function with its convenient file object.

So what does the line starting with “inpipe” do? It calls the function os.open and tells it to open IN_PIPE where we defined the location of our pipe. Then it gives the flags, so that the operating system knows how to open the file, in this case in read-only and in non-blocking mode. We need to open it read-only, because the daemon should be at the receiving side of the pipe. And, yes, we want non-blocking, so that the program continues on if there is no data in the pipe without waiting for it all the time!

What might look a little strange to you is the | character between the two flags – especially since on the terminal it’s known as the pipe character, and we’re talking about pipes here, right? In this case it’s something completely unrelated, however. That symbol just happens to be Python’s choice for representing the bit-wise OR operator. Let’s leave it at that (I’ll explain a bit more of it in a future “Python pieces” section, but this article will be long enough without it).

However that’s still not all that the line we’re just discussing does. The os.open() function returns a file descriptor that we’re then assigning to the inpipe variable to keep it around.

What’s left is a new infinite loop that calls read_from_pipe() every 5 seconds.

Speaking of that function, let’s take a closer look at what it does. It tries to use the os.read function to read up to 255 bytes from the pipe into the variable named buffer. We’re doing so in a try/except block, because the read is somewhat likely to fail (e.g. if the pipe is empty). When there’s an exception, the code checks for the exact error that happened, and if it’s EAGAIN or EWOULDBLOCK, we deliberately set buffer to None. If some other error occurred, it’s something that we didn’t expect, so we’d better take the straight way out by raising the exception again and crashing the program.

On FreeBSD the error numbers are defined in /usr/include/errno.h. If you take a look at it, you see that EAGAIN and EWOULDBLOCK are the same thing, so checking for one of them would be enough. But it makes sense to know that on some systems these are separate errors and that it’s good practice to check for both.

If the buffer is either None or has a length of 0, we assume that there was no data to read. Otherwise we put the data into the log. To make it readable we have to use decode(), because we will be receiving raw bytes.

All that’s left is the cleanup function. I’ve added another try/except block that simply tries to close the pipe file before trying to delete it. This is example code, so to make things not even more complex, I just silently ignore if the attempt fails.

Control script

Ok, great! That was quite a bit to cover, but now we have a daemon that creates a pipe and tries to read data from it. There’s just one problem: How can we test it? By creating another, separate program that puts data into the pipe, of course! For that, let’s create another file with the name bdaemonctl.py:

#!/usr/local/bin/python3.6

 # Imports #
import os, time

 # Globals #
OUT_PIPE = '/var/run/bd_in.pipe'

 # Main #
try:
    outpipe = os.open(OUT_PIPE, os.O_WRONLY)
except:
    raise

for i in range(0, 21):
    print(i)
    try:
        os.write(outpipe, bytes(str(i).encode('utf-8')))
    except BrokenPipeError:
        print("Pipe has disappeared, exiting!")
        os.close(outpipe)
        exit(1)
    time.sleep(3)
os.close(outpipe)

Fortunately this one is fairly simple. We do our imports and define a variable for the pipe. We could skip the latter, because we’re using it on only one occasion but in general it’s a good idea to keep it as it is. Why? Because hiding things deep in the code may not be such a smart move. Defining things like this at the top of the file increases the maintainability of your code a lot. And since we want to send data this time, of course we name our variable OUT_PIPE appropriately.

In the main section we just try to open the pipe file and crash if that doesn’t work. It’s pretty obvious that such a case (e.g. the pipe is not there because the daemon is not running) should be better handled. But I wanted to keep things simple here because it’s just an example after all.

Then we have a loop that counts from 0 to 20, outputs the current number to stdout and tries to also send the data down the pipe. If that works, the program waits three seconds and then continues the loop.

To be able to write to the pipe we need a byte stream, but we only have numbers. So we first convert each number to a string and then encode it (as UTF-8) into bytes that can be sent over the pipe.

When the loop is over, we close the pipe file properly because we as the sender are done with it. I added a little bit of code to handle the case when the daemon exits while the control script runs and still tries to send data over the pipe. This results in a “broken pipe” error. If that happens, we just print an error message, close the file (to not leak the file descriptor) and exit with an error code of 1.

So for today we’re done! We can now send data from a control program to the daemon and thus have achieved uni-directional communication between two processes.

What’s next?

I’ll take a break from these programming-related posts and write about something else next.

However I plan to continue with a 4th part later which will cover argument parsing. With that we could e.g. modify our control program to send arbitrary data to the daemon from the command line – which would of course be much more useful than the simple test case that we have right now.

Writing a daemon using FreeBSD and Python pt.2

The previous part of this series left off with a running “baby daemon” example. It covered Python fundamentals, signal handling, logging as well as an init script to start the daemon.

Daemonization with Python

The outcome of part 1 was a program that needed external help to actually be daemonized. I used FreeBSD’s handy daemon(8) utility to put the program into the background, to handle the pidfile, etc. Now we’re taking one step forward and trying to achieve the same thing using just Python.

To do that, we need a module that is not part of Python’s standard library. So you might need to first install the package py36-daemon if you don’t already have it on your system. Here’s a small piece of code for you – but don’t get fooled by the line count, there’s actually a lot going on in there (and a lot of concepts to grasp):

#!/usr/local/bin/python3.6
 
 # Imports #
import daemon, daemon.pidfile
import logging
import signal
import time

 # Functions #
def handler_sigterm(signum, frame):
    logging.debug("Exiting on SIGTERM")
    exit(0)

def main_program():
    signal.signal(signal.SIGTERM, handler_sigterm)
    try:
        logging.basicConfig(filename='/var/log/bdaemon.log', format='%(levelname)s:%(message)s', level=logging.DEBUG)
    except:
        print("Error: Could not create log file! Exiting...")
        exit(1)
 
    logging.info("Started!")
    while True:
        time.sleep(1)

 # Main #
with daemon.DaemonContext(pidfile=daemon.pidfile.TimeoutPIDLockFile("/var/run/bdaemon.pid"), umask=0o002):
    main_program()

I dropped some ballast from the previous version; e.g. overriding SIGINT was a nice thing to try out once, but it’s not useful as we move on. Also that countdown is gone. Now the daemon continues running until it’s signaled to terminate (thanks to what is called an “infinite loop”).

We have two new imports here that we need for the daemonization. As you can see, it is possible to import multiple modules in one line. For readability reasons I wouldn’t recommend it in general. I only do it when I import multiple modules that kind of belong together anyway. However in the coming examples I might just put everything together to save some lines.

The first more interesting thing with this version is that the main program was moved to a function called “main_program”. We could have done that before if we really wanted to, but I did it now so the code doesn’t take attention away from the primary beast of this example. Take a look at the line that starts with the with keyword. Now that’s a mouthful, isn’t it? Let’s break this one up into a couple of pieces so that it’s easier to chew, shall we?

The value for umask looks a bit strange. It contains an “o” among the numbers, so it has to be a string, doesn’t it? But why is it written without quotes then? Well, it is a number. Python uses the “0o” prefix to denote octal (base-8) numbers, and 0x would mean hexadecimal (base-16) ones.
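
A quick way to convince yourself (just a demonstration, not part of the daemon):

print(0o002)  # prints 2
print(0o777)  # prints 511 - the decimal value of octal 777
print(0x10)   # prints 16 - hexadecimal works the same way with the 0x prefix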

Remember that we talked about try/except before (for the logging)? You can expand on that. A try block can not only have except blocks, it can also have a finally block. Statements in such a block are meant to be executed no matter the outcome of the try block. The classical example is that when you open a file, you definitely want to close it again (everything else is a total mess and would make your program an exceptionally bad one).

Closing it when you are done is simple. But what if an exception is raised? Then the code path that properly closes the file might never be reached! You could close the file in every thinkable error path – but that would be both tedious and error-prone. For that reason there’s another way to handle those cases: Close the file in the finally block and you can be sure that it will be closed regardless of what happens in the try or in any except block.
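
A minimal sketch of that pattern (the file name is made up for the example):

f = open("/tmp/example.txt", "w")
try:
    f.write("Hello!\n")
finally:
    # this runs whether or not the write raised an exception
    f.close()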

Ok, but what does this have to do with our little daemon? Actually a lot. That case of try/finally has been so common that Python provides a shortcut with so-called context managers. They are objects that manage a resource for you like this: You request it, it is valid only inside one block (the with one!) and when the block ends, the context manager takes care of properly cleaning up for you without having you add any extra code (or even without you knowing, if you just copy/paste code from the net without reading explanations like this).
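
The file object is itself such a context manager, so the try/finally dance from above collapses into this (again just a sketch with a made-up file name):

with open("/tmp/example.txt", "w") as f:
    f.write("Hello!\n")
# at this point the file has already been closed for us - even if the write raised an exception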

So the with statement in our code above lets Python handle the daemonization process while the main_program function is running. When it ends on the signal, Python cleans up everything and the process terminates – which is great for us. Accept that for now and live with the fact that you might not know just how it does that. We’ll come back to things like that.

Updated init script

Ok, the one thing left to do here is making the required changes to the init script. We are no longer using the daemon(8) utility, so we need to adjust it. Here is the new one:

#!/bin/sh

. /etc/rc.subr

name=bdaemon
rcvar=bdaemon_enable

command="/root/bdaemon.py"
command_interpreter=/usr/local/bin/python3.6
pidfile="/var/run/${name}.pid"

load_rc_config $name
run_rc_command "$1"

Not too much changed here, but let’s still go over what has. The command definition is pretty obvious: The program can now daemonize itself, so we call it directly. It doesn’t take any arguments, which means we can drop command_args.

However, we need to add command_interpreter instead (one important thing that I had overlooked at first), because the program will look like this in the process list:

/usr/local/bin/python3.6 /root/bdaemon.py

Without defining the interpreter, the init system would not recognize this process as being the correct one. Then we also need to point it to the pidfile, because in theory there could be multiple matching processes otherwise.

And that’s it! Now we have a daemon process running on FreeBSD, written in pure Python.

Python pieces

This next part is a completely optional excursion for people who are pretty new to programming. We’ll take a step back and discuss concepts like functions and arguments, modules, as well as namespaces. This should help you better understand what’s happening here, if you like to know more. Feel free to save some time and skip the excursion if you are familiar with those things.

Functions and arguments

As you’ve seen, functions are defined in Python by using the def keyword, the function name and – at the very least – an empty pair of parentheses. Inside the parentheses you could put one or more arguments if needed:

def greet(name):
    print("Hi, " + name + "!")

greet("Alice")
greet("Bob")

Here we’re passing a string to the function that it uses to greet that person. We can add a second argument like this:

def greet(name, phrase):
    print("Hi, " + name + "! " + phrase)

greet("Alice", "Great to see you again!")
greet("Bob", "How are you doing?")

The arguments used here are called positional arguments, because their position decides what goes where. Swap them when calling the function and the output will obviously be garbage, as the strings are assigned to the wrong function variables. However, it’s also possible to refer to the variables by name, so that the order no longer matters:

def greet(name, phrase):
    print("Hi, " + name + "! " + phrase)

greet(phrase="Great to see you again!", name="Alice")
greet("Bob", "How are you doing?")

This is what is used to assign the values for the daemon context. Technically it’s possible to mix the ways of calling (as done here), but that’s a bit ugly.

We’re not using it, yet, but it’s good to know that it exists: There are also default values. Those mean that you can leave out some arguments when calling a function – if you are ok with the default value.

def greet(name, phrase = "Pleased to meet you."):
    print("Hi, " + name + "! " + phrase)

greet(phrase="Great to see you again!", name="Alice")
greet("Bob", "How are you doing?")
greet("Carol")

And then there’s something known as function overloading in other languages: multiple functions with the same name but a different number (or type) of arguments, so that it’s still possible to precisely identify which one needs to be called. We’re not going into the details here, but be aware that Python does not support this directly – a later def with the same name simply replaces the earlier one. Default values (and, for fancier cases, tools like functools.singledispatch) can achieve a similar effect, though.

Modules

When reading about Python it usually won’t take too long before you come across the word module. But what’s a module? Luckily that’s rather easy to explain: It’s a file with the .py extension and with Python code in it. So if you’ve been following this daemon tutorial, you’ve been creating Python modules all the way!

Usually modules are what you might refer to as libraries in other languages. You can import them and they provide you with additional functions. You can either use modules that come with Python by default (that collection of modules is known as the standard library, so don’t get confused by the terminology there), additional third-party modules (there are probably millions) or modules that you wrote yourself.

It’s fairly easy to do the latter. Let’s pick up the previous example and put the following into a file called “greeter.py”:

forgot_name = "Sorry, what was your name again?"

def greet(name, phrase = "Pleased to meet you."):
    print("Hi, " + name + "! " + phrase)

Now you can do this in another Python program:

import greeter

greeter.greet("Carol")
print(greeter.forgot_name)

This shows that after importing we can use the “greet()” function in this program, even though it’s defined elsewhere. We can also access variables used in the imported module (greeter.forgot_name in this case).

Namespaces

Ever wondered what that dot means (when it’s not used in a filename)? You can think of it as a hierarchical separator. The standard Python functions (e.g. print) are available in the global namespace and can thus be used directly. Others live in a different namespace, and to use them it’s necessary to give that namespace as well as the function name so that Python understands what you want and finds the function. One example that we’ve used is time.sleep().

Where does this additional namespace come from? Well, remember that we did import time at the top of the program? That created the “time” namespace (and made the functions from the time module available there).

There’s another way of importing; we could import either everything (using an asterisk (*) character, but that’s considered poor coding) or just specific functions from one module into the global namespace:

from time import sleep
sleep(2)
exit(0)

This code will work because the “from MODULE import FUNCTION” statement in this example imported the sleep function so that it becomes available in the global namespace.

So why do we go through all the hassle of having multiple namespaces in the first place? Can’t we just put everything in the global one? Sure, we could – and for simpler programs that’s in fact an option. But consider the following case: Python provides the built-in open function. It’s used to open a file and get a nice object back that makes accessing or manipulating data really easy. But then there’s also os.open, which is not as friendly but lets you use more advanced things since it uses the raw operating system functionality. See the problem?

If you import the functions from os into the global namespace, you have a name clash in the case of open. This is not an error, mind you. You can actually do that, but you should know what happens: The function imported later will override the one that previously went by that name, effectively making the original one inaccessible under that name. This is called “shadowing” the original function.
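
A tiny demonstration of that effect (nothing you would want in real code):

from os import open  # shadows the built-in open in this module

# open("/tmp/test.txt", "w")  # would now fail: os.open() expects numeric flags, not "w"
import builtins
f = builtins.open("/tmp/test.txt", "w")  # the original is still reachable via builtins
f.close()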

To avoid problems like this it’s often better to have your own separate namespace where you can be sure that no clashes happen.

What’s next?

In the next part we’ll take a look at implementing IPC (inter-process communication) using named pipes (a.k.a “fifos”).