Dystopian Open Source

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/dystopian_open_source.gmi

Happy New Year, dear reader! The other day I watched a video on YouTube that has had only 6 views since last October. It is about a very important topic, though, and I wish it had a larger reach and got more people alarmed and thinking about current trends in Open Source. This is not an “OMG we’re all doomed!!1” post, but I want to talk about what I feel are grave dangers that we should really, really give some serious consideration to.

“Pay to Play”

For readers who would like to watch the video (about 7 minutes), it’s here. Some background info: It’s by Lucas Holt. He is the lead developer of MidnightBSD, a project that began as a fork of FreeBSD 6.1 and aimed for better usability on the desktop. A couple of people contributed to the project over time, but it never really took off. Therefore it has continued as a project almost entirely done by one man.

It’s not hard to imagine just how much work it is to keep an entire operating system going; much larger teams have failed to deliver something useful, after all. So while it’s no wonder that MidnightBSD is not in a state where anybody would recommend putting it to everyday use, I cannot deny that I admire all the work that has been done.

Holt has merged changes back from FreeBSD several times, eventually updating the system to basically FreeBSD 11.4 plus the MidnightBSD additions and changes. He maintains almost 5,000 ports for his platform (not all of them in perfect shape, of course). And he has kept the project going since about 2006 – despite all the taunting and acid-tongued comments about “the most useless OS ever” and the like. Even though I never found a really serious use for MidnightBSD (and I tried a couple of times!), considering all of that he has earned my deepest respect.

To sum up the video: He talks about a trend in Open Source whereby some very important projects have started to raise the bar for contributing to them. Sometimes you’re required to employ two full-time (!) developers to even be considered worth listening to. Others require you to provide them with e.g. a paid Amazon EC2 instance to run their CI on. And even where that’s not the case, some decision makers will just turn you down if you dare to hand in patches for a platform that’s not a huge player itself.

Quite a few people do not even try to hide that they only ever care about Linux, and Holt has observed that some of the worst-behaving, most arrogant of these are – Red Hat employees. There are people on various developer teams who deliberately ruin things for smaller projects, which is certainly not good and not what Open Source should be about.

What does Open Source mean to us?

At a bare minimum, Open Source only means that the source for some application, collection of software or even entire operating system is available to look at. I could write some program, put the code under an extremely restrictive license and still call this thing “Open Source” as long as I make the code available by some means. One could argue that in the truest sense of the two words that make up the term, that would be a valid way to do things. But that’s not what Open Source is or ever was about!

There are various licenses out there that are closely related to Open Source. Taking a closer look at them is one great way to find the very essence of what Open Source actually is. There are two important families of such licenses: The so-called Copyleft licenses and the permissive licenses. One could say that downright religious wars have been waged about which side holds the one real truth…

People who have been reading my blog for a while know that I do have a preference and have made quite clear which camp I belong to, even though I reject the insane hostility that some zealots preach. But while the long-standing… err… let’s say: controversy is an important part of Open Source culture, the details are less relevant to our topic here. The two camps basically disagree on the question of what requirements to put in the license. Should there be any at all? Is it sufficient to ask for credit to be given to the original authors? Or should users be forced to keep the source open, for example?

Neither license family, however, disputes the fundamental rights given to users: They want you to be able to study the code, to build it yourself, to make changes and to put the resulting programs to good use. While it’s usually not stated explicitly, the very idea behind all of Open Source is to allow for collaboration.

Forkability of Open Source projects

Over the years we’ve seen a lot of uproar in the community when the leaders of some project made decisions that go against these core values of Open Source. While some even committed the ultimate sin of closing down formerly open code, most of the time it’s been slightly less harsh. Still, we have seen XFree86 basically fall into oblivion after Xorg was forked from it. The reason this happened was a license change: One individual felt that it was time for a little bit of extra fame – and eventually he ended up blowing his work to pieces. Other examples are pfSense and OPNsense, ownCloud and Nextcloud, or Bacula and Bareos. When greed strikes, some previously sane people begin to think that it’s a good idea to implement restrictions, rip off the community and go “premium”.

One of the great virtues of Open Source is that the software can be continued in the spirit of the original project. With OPNsense we still have a great, permissively licensed firewall OS based on FreeBSD and pf despite Netgate’s efforts to mess with pfSense. Bareos still has the features that Bacula cut out (!) of the Open Source version and moved to the commercial one. And so on. The very nature of Open Source also allows people to pick up and continue some software when the original project shuts down for whatever reason.

There are a lot of benefits to Open Source over Closed Source models. But is it really immune to each and every attack you can aim at it?

Three dangers to Open Source!

There is always the pretty obvious danger of source code being closed down if the license does not prohibit that. I claim, though, that this is in fact mostly a non-issue. There are a lot of voices out there going hysterical about it. But despite how they try to make things look, it is impossible to close down source code that is already under an Open Source license! A project can stop releasing the source for newer versions, effectively ceasing to distribute current code. But then the Open Source community can always stop using that stuff and continue on with a fork that stays open.

But we haven’t talked about three other imminent dangers: narrow-mindedness, non-portability and leadership driven by monetary interest.

Narrow-mindedness

One could say that today Open Source is a victim of its own overwhelming success. A lot of companies and individual developers have jumped on the bandwagon because it’s very beneficial for them. “Let’s put the source on GitHub and people might report issues or even open pull requests, actively improving our code – all for free!” While this is a pretty smart thing to do from a commercial point of view, in such a case the code was not opened up because somebody really believes in the ideas of Open Source. It was merely done to benefit from some of the most obvious advantages.

Depending on how far-sighted such an actor is, he might understand the indirect advantages of keeping things as open as possible – or maybe not. For example, a developer might decide that he’ll only ever use Ubuntu. Somebody reports a problem with Arch Linux: Close (“not supported!”). Another person opens a PR adding NetBSD support: Close (“Get lost, freak!”).

Such behavior is about as stupid as it gets, and as far as the values go, it is about as anti-Open-Source as it gets, too. Witnessing something like this makes people who actually care about Open Source cringe. How can anybody be too blind to see that they are hurting themselves in the long run? But it happens time and time again. By turning down the Arch guy, the project has probably lost a future contributor – and maybe the issue reported was due to incompatibilities with the newer GCC in Arch that will eventually land in Ubuntu, too, and could have been fixed ahead of time…

Open Source is about being open-minded. Just publishing the source and fishing for free contributions while keeping a closed-source mindset is in fact a real threat to Open Source. I wish more people would just say no to projects that regularly say “no” to others (without a good reason). It’s perfectly fine that some project cannot guarantee that its software will even compile on illumos all the time. But the illumos people will take care of that and probably submit patches if needed. Refusing to even talk about possible support for that platform, however, is very bad style and does not fit well with the ideals of Open Source.

If I witness an arrogant developer insulting, say, a Haiku person, I’ll go looking for more welcoming alternatives (and am perfectly willing to accept something that is technically less ideal for now). Not because I’ve ever used Haiku or plan to do so. But simply because I believe in Open Source and in fact have a heart for the cool smaller projects that are doing interesting things aside from the often somewhat boring mainstream.

Non-portability

Somewhat related to the point above is (deliberate) non-portability. A great example of this is Systemd. Yes, there have been many, many hateful comments about it and there are people who have stated that they really hope the main developer will keep the promise to never make it portable “so that *BSD is never going to be infected”.

But whatever your stance on this particular case is – there is an important fact: As soon as any such non-portable Open Source project gains a certain popularity, it will begin to poison other projects, too. Some developers will add dependencies on such non-portable software and thus make their own software unusable on other platforms, even though that software alone would work perfectly fine there! Sometimes this happens because developers make the false assumption that “everybody uses Systemd today, anyway”, sometimes because they use it themselves and don’t realize the implications of making it a mandatory requirement.

If this happens to a project that basically has three users world-wide, it’s a pity but does not have a major impact. If, however, it happens to software that is a critical component in various downstream projects, it can potentially affect millions of users. The right thing here is not to break solidarity with other platforms. Even if the primary platform for your project is Linux, never ever go as far as adding a hard dependency on Systemd or other such software! If you can, it’s much better to make support optional (as sketched below) so that people who want to use it benefit from the existing support. But don’t ruin the day for everybody else!
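To give a rough idea of what “optional support” can look like in practice, here is a minimal C sketch. It assumes a build system that defines a macro (called HAVE_SYSTEMD here, a name made up for this example) only when libsystemd is actually available; sd_notify() is the real libsystemd readiness call, everything else is hypothetical glue.

```c
/* Minimal sketch of optional Systemd support. HAVE_SYSTEMD is assumed to
 * be defined by the build system only when libsystemd was found; the
 * macro name is made up for this example. */
#include <stdio.h>

#ifdef HAVE_SYSTEMD
#include <systemd/sd-daemon.h>   /* provides sd_notify() */
#endif

/* Tell the init system we are ready -- a no-op where Systemd is absent. */
static void notify_ready(void)
{
#ifdef HAVE_SYSTEMD
    sd_notify(0, "READY=1");
#endif
}

int main(void)
{
    /* ... the daemon's normal startup work would happen here ... */
    notify_ready();
    puts("Service is up; it works the same on the BSDs, illumos or Linux.");
    return 0;
}
```

The point is simply that the dependency stays behind a switch: platforms that have it get the integration, and everybody else still gets a working program.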

And think again about the NetBSD pull request example mentioned above: Assume that the developer had shown less hostility and accepted the PR (with no promises to ever test whether things actually work properly or at all). The software would have landed in pkgsrc and somebody else would soon have hit a problem due to a corner case on NetBSD/SPARC64. A closer inspection of that would have revealed a serious bug – one that, in reality, remained undetected and unfixed. After a new feature was added not much later, the bug became exploitable. Eventually the project gained a “nice” new CVE of severity 9.2 – which could well have been avoided in the alternate reality where the project leader had a more friendly and open-minded personality…

Taking portability seriously is exceptionally hard work. But remember: Nobody is asking you to support all the hardware you probably don’t even have or all the operating systems you don’t know your way around. Just be open to enthusiasts who care about such platforms and at least let them contribute.

Leadership with commercial interests

This one is a no-brainer – but unfortunately one that we can see happening more and more often. Over the last few years people started to complain about e.g. Linux being “hi-jacked by corporations”. And there is some truth to it: There is a lot of paid work being done on various Open Source projects. Some of the companies that pay developers do so because they have an interest in improving Open Source software they use. A couple even fund such projects because they feel giving back something after receiving for free is the right thing to do. But then there’s the other type, too: Corporations that have their very own agenda and leverage the fact that decision makers on some projects are their employees to influence development.

Be it the person responsible for a certain kernel subsystem turning down good patches that would benefit a lot of people for seemingly no good reason – when in fact they were rejected because they were handed in by a competitor and his employer is secretly working on something similar that it wants to get in instead. Be it the employer who thinks that the developer is not paid to do anything for platforms that are of no interest to its own commercial plans and expects him to simply turn those down to “save time” for “important work”. Things like that actually happen and have been happening for a while now.

Limiting the influence of commercial companies is a topic of its own. IMO more projects should think much more deeply about governance models and consider what can happen if a malicious actor buys in.

Towards a more far-sighted, “vrij” Open Source?

As noted above, I feel that some actors in Open Source are far too focused on their own use case and completely ignorant of what other people might be interested in. But as this post’s topic was a very negative one, I’d like to end it on a more positive note. Despite the relatively rare but very unfortunate misbehavior of some representatives of important projects, the overwhelming majority of people in Open Source are happy to accept contributions from more “exotic” projects.

But what’s that funny-looking word doing there in the heading? Let me explain. We already have FOSS, an acronym for “Free and Open Source Software”. There’s a group of people arguing that we should rather focus on what they call FLOSS, “Free and Libre Open Source Software”. The “libre” in there is meant to emphasize the copyleft idea of freedom – “free” was already taken and has the problem that the English word doesn’t distinguish between free “as in freedom” and free of charge. I feel that a term that emphasizes the community aspect of Open Source, the invitation to just about anybody to collaborate, and solidarity with systems other than the ones I use myself, could be helpful. How about VOSS? I think that’s better than fitting yet another letter in there.

Vrij is the Dutch word for free. Why Dutch? Partly to honor the work that has been done at the Vrije Universiteit in Amsterdam (for readers who noticed the additional “e”: that’s due to inflection). Just think of the nowadays often overlooked work of Professor Tanenbaum, e.g. on Minix (which inspired Linux, among other things). The other reason is that it’s relatively easy to pronounce for people who speak English. It’s not identical but reasonably close to the English “fray”. And if you’re looking for the noun, there’s both vrijheid and vrijdom. I think the latter is less common, but again: It’s much closer to English “freedom” and thus probably more practical.

So… I really care for vrij(e) Open Source Software! Do you?

RISC-V – Open Hardware coming to us?

There is no question: Open Source Software is a success. In the beginning of computing it went without saying that software was distributed along with its source code. That was before commercialism came up with the idea of closing the source and hiding it away. But even though there are still many companies that make millions of dollars with their closed software solutions, OSS has a strong standing.

There are various operating systems which are developed with an open source model, and there are open source applications for just about every use you could imagine. Today it’s not hard at all to run your computer with just OSS for all of your daily tasks (unless you have very special needs). I do – and I am happier with it than I was back in the over-colored world of put-me-under-tutelage operating systems and the software typically running on them.

Open Source Hardware?

While – as I just said – OSS has largely succeeded, Open Source Hardware is really just in its very early stages. But what is OSH after all? Unlike software it does not have any “source code” in the common meaning of the word. And hardware certainly can’t simply be copied and thus easily duplicated!

The latter is completely true, of course. And it’s also the reason that makes open hardware something very different from free software. While anybody can learn to code, write a program and give it away for free without having any expenses, this is simply not possible with hardware. You cannot do something once and be done with it (except for maintaining or perhaps improving it). Every single piece of hardware takes time to assemble. You cannot create hardware ex nihilo (“from nothing”) – you need material and machinery for it.

So while we often come across the term Free and Open Source Software, there can’t be Free Hardware (unless it’s donated – which still means that there’s only a limited amount available). So we are left with the next best thing: Open Source. In the case of hardware this means that the blueprints for the hardware are available. But what is the benefit of this?

Research, Innovation, Progress

Of course OSH doesn’t enable you to have a look at the source, change it and recompile. But while modifying it is a lot more complicated than that, it is at least possible at all! Interested in how some piece of hardware works? If it is conventional hardware – tough luck (unless you work as a specialist analysing hardware from your competitors). If it’s OSH, however, just get the circuit diagram and the other papers, and provided that your knowledge of electronics suffices, you won’t have too many problems.

Usually hardware vendors try to keep their products as opaque as possible. The idea is to keep the gap between company A and its competitors as large as possible by making it hard for them to make use of any results of company A’s research. From a purely commercial point of view this is the most natural thing to do. From a technical one, however, it is not good practice at all. Any new vendor is doomed to repeat much of the research that was already conducted by others.

With OSH, not only the results of the research (the hardware pieces) are made available, but the complete documentation is also published. This means that others can have a look at how that hardware is being built. They can copy the design and concentrate on improving it instead of having to do fundamental research beforehand, maybe only to find a solution to essential problems that have been solved several hundred times before… This means that while individual companies profit a lot from closed sources, this practice is a major waste of resources and time. Just imagine how much better hardware we might already have today if all these wasted resources had been put to good use!

Trust and Security

Another important issue is security. We know that quite a few of the routers on the market contain backdoors, intended mostly for intelligence services. But even if you don’t care about that: Keep in mind that if there are holes in your hardware, ordinary criminals could exploit them as well. Do you want this? Is it a pleasant thought that your hardware, which you bought for a lot of money, keeps spying on you? Think about it for a moment. The bitter conclusion will be that there is not much you can do.

Some people recommend not buying electronics from the US anymore. A good idea at first glance. Still, let’s be realistic: The USA might have the biggest spying apparatus in the world, and thanks to Mr. Snowden we are alarmed about it. But that doesn’t mean other nations don’t do the same – that they don’t is rather unlikely, if you ask me. And in the end, the current choice is merely whether your hardware tells everything about you to the Americans or to the Chinese…

This is a very unsatisfactory situation. Open Source Hardware could help a lot here: If you’re really a serious target of an intelligence service, you probably even deserve it. In that case they have the means to intercept your hardware and make subtle changes you probably won’t ever notice. But OSH could make today’s mass surveillance much harder or even impossible. Got your new piece of OSH? Thanks to the detailed specifications you could even replace its firmware with a custom one, thwarting the purpose of whatever was installed by whichever person or group put it there.

Open Source CPUs

One of the most important pieces of OSH would surely be a computer’s CPU. What matters most in this regard is – besides the actual design – the instruction set. It is this instruction set which determines a processor’s capabilities. Each family of CPUs (i.e. those which have the same or a compatible instruction set) is its own platform. Programs have to be compiled to run on it and won’t run on another (unless emulated).
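As a small illustration of what “its own platform” means, the sketch below is a C program that merely reports which instruction set it was compiled for, using macros that GCC and Clang commonly predefine (__riscv, __riscv_xlen, __x86_64__, __aarch64__). The same source has to be compiled separately for each CPU family, and each resulting binary runs only on that family (or under emulation).

```c
/* Sketch: the same C source compiled for different instruction sets
 * yields binaries that are not interchangeable. The macros checked
 * below are the ones commonly predefined by GCC and Clang. */
#include <stdio.h>

int main(void)
{
#if defined(__riscv)
    printf("This binary was built for RISC-V (XLEN = %d bits)\n",
           (int)__riscv_xlen);
#elif defined(__x86_64__)
    printf("This binary was built for x86-64\n");
#elif defined(__aarch64__)
    printf("This binary was built for 64-bit ARM\n");
#else
    printf("This binary was built for some other instruction set\n");
#endif
    return 0;
}
```

Cross-compile it with a riscv64 toolchain and the result will not start on an x86-64 machine without an emulator, which is exactly the point.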

There have been various attempts to create OSH CPUs or at least come somewhere close to it. Since the beginning of the 1990s, MIPS CPU designs have been licensable. While this does not mean the diagrams are available to the public, it is at least possible to acquire them if you are willing to pay. If you are, you can produce your own CPUs based on their design. Sun attempted to go the same way with its SPARC architecture but was less successful.

In recent years the ARM platform has gained a lot of attention – thanks to basically following the same strategy and licensing its CPU designs to its customers. This development is a good step in the right direction and certainly commendable: One company specializes in conducting research and designing CPUs, and others license those designs and build their own chips based on them.

But then there are projects which really qualify as OSH. Yes, they are underdogs compared to the other hardware, which is no wonder since they lack the financial means that back the others. But we are getting there. Slowly but steadily.

RISC-V

This month the University of California, Berkeley released what originally started just for learning purposes but grew into a very promising project: RISC-V. RISC stands for “reduced instruction set computing”, which basically means the decision to build a CPU that uses a simpler instruction set to reach high performance.

There’s also another OSH project known as OpenRISC, but it never gained enough traction. Its developers managed to design a 32-bit CPU; the architecture has been implemented in emulators such as QEMU and can run Linux. A 64-bit variant has been in development for quite some time, and while the project collected money via donations for years, it has not succeeded in actually producing its hardware. So OpenRISC exists merely as a soft CPU, implemented on FPGAs (field-programmable gate arrays). OpenRISC is licensed under the (L)GPL.

RISC-V is made available under the proven BSD license. In contrast to the GPL it is a permissive license. While I kept my fingers crossed for OpenRISC for years, I am now really excited about this news! Especially with such a reputable entity as Berkeley behind the project – and the fact that things look like they are really moving forward this time!

One of the most promising efforts is lowRISC: This British endeavor is a not-for-profit organisation claiming to work closely with the University of Cambridge and some people of Raspberry Pi fame. A dream come true! The idea is to implement a 64-bit RISC processor on an FPGA during the next months and have a test chip ready by the end of 2015. They estimate that about a year later, at the end of 2016, the first chips could be produced. Sounds like a plan, doesn’t it?

I will definitely keep an eye on this. And even if the release has to be postponed, it will surely be worth the wait. Open Source Hardware is coming to us. It will most likely be “expensive” (since it can only be produced in much lower numbers than conventional hardware for the mass market) and quite a bit “weaker” in terms of performance. Nevertheless it will blaze a trail for OSH, so drawbacks like these are very much acceptable.

No RISC no fun!