Top things that I missed in 2015

Another year of blogging comes to an end. It has been so full of *BSD stuff that I’d even say: as far as this blog is concerned, it has been a BSD year. This was not actually planned, but it isn’t a real surprise, either. I’ve not given up on Linux (which I use on a daily basis as my primary desktop OS) but it’s clear that I’m fascinated by the BSDs and will try to get into them further in 2016.

Despite it being a busy year, there were quite a few things that I would have liked to do and blog about but that never happened. I hope to be able to do some of them next year.

Desktops, toolkits, live DVD

One of the most “successful” (in terms of hits) article series was the desktop comparison that I did in 2012. A lot has happened in that field since then and I really wanted to do it again. Some desktops are no longer alive, others have become available in the meantime, and it is a sure thing that the amount of memory needed has changed as well… 😉

Also, I’ve never been able to finish the toolkit comparison, which I stopped in the middle of writing about GTK-based applications. It was started in 2013, so it would also be about time. However, my focus has shifted away from the original intent of finding tools for a light-weight Linux desktop. I’ve become involved with the EDE project (“Equinox Desktop Environment”), which uses the FLTK toolkit, and so people could argue that I’m not really unbiased anymore. Then again… I chose to become involved because EDE was the winner of my last test series – and chances are that the reasons for that are still valid.

And then there’s the “Desktop Demo DVD” subproject that never really took off. I had an Arch-based image with quite a few desktops to choose from, but there were several problems: Trinity could not be installed alongside KDE, Unity for Arch was not exactly in good shape, etc. The biggest issue, however, was that I did not have webspace available to store a big ISO file.

My traffic statistics show that there has been constant interest in the article about creating an Arch Linux live CD. Unfortunately it is completely obsolete, since the tool that creates the image has changed substantially. I’d really like to write an updated version at some point.

In fact I wanted to start over with the desktop tests this summer and had already begun. However, VirtualBox hardware acceleration for graphics was broken on Arch, and since this is a real blocker I could not continue (has this been resolved since?).

OSes

I wrote an article about HURD in 2013, too, and wanted to revisit a HURD-based system to see what has happened in the meantime. ArchHURD has been in a coma for quite some time. Just recently there was a vital sign, however. I wish the new developer the best of luck and will surely do another blog post about it once there’s something usable to show off!

The experiments with Arch and an alternative libc (musl) were stopped due to a lack of time and could be taken further. It has been an interesting project that I’d like to continue at some time, in some form. I also had some reviews of interesting but lesser-known Linux distros in mind. Not sure if I’ll find time for that, though.

There has been a whole lot going on around both FreeBSD and OpenBSD. Still, I would have liked to do more in that field (exploring jails, ZFS, etc.). But those are things I’ll do in 2016 for sure.

Hardware

I’ve played a bit with a Raspberry Pi 2 and built a little router with it using a security-oriented Linux distro. It was a fun project to do and maybe it is of use to somebody.

One highlight that I’m looking forward to messing with is the RISC-V platform, a very promising effort to finally give us a CPU that is actually open hardware!

Other things

There are a few other things that I want to write about and hope to find time for soon. I messed with some version control tools a while back and this would make a nice series of articles, I think. I also have something about DevOps in mind and want to do a brief comparison of some configuration management tools (Puppet, Chef, SaltStack, Ansible – and perhaps some more). If there is interest in that I might pick it up and document some examples on FreeBSD or OpenBSD (there’s more than enough material for Linux around, but *BSD is often a rather weak spot). We’ll see.

Well, and I still have one article about GPL vs. BSD license(s) in store that will surely happen next year. That, and a few topics about programming that I’ve been thinking about writing up for a while now.

So – goodbye 2015 and welcome 2016!

Happy new year everyone! As you can see, I have not run out of ideas. 🙂

Thea: The gain of giving away for free

This post is inspired by the game Thea: The Awakening. No, Eerie Linux has not mutated into a games blog. Yes, I will give a short description of the game. But what this post is really about are some thoughts on software development in the past and present – and on what a more open future could look like.

Why Thea? Because the developers did something very uncommon: They decided to give the game away for free – if you’re a Linux user, that is!

Thea: The Awakening

The game in question is a turn-based strategy game with a strong focus on survival. There’s a nice background story: The world had turned to darkness (playing the game you will discover why) and is haunted by creatures and spirits of the dark. Now the sun is rising again and the gods have returned, but both are very weak, and darkness will not give up without a fierce fight. Slavic mythology makes for a very nice and rather uncommon setting.

In case you want to give it a try, you can find a download link here. And yes, it is really completely free. You don’t need to buy the Windows version first or something.

I’ve successfully run the game on the Mint laptop that I share with my wife and can confirm that it works well. No luck on a 32-bit machine that I installed Arch on to give the 32-bit version of the game a try: It won’t start, and the console messages give no clue as to why. So if you’re still stuck with 32-bit-only systems, you’re probably out of luck. 😉

The developers stated that they have not even tested the Linux version themselves! So what works and what doesn’t? Most things work surprisingly well, in fact. Sound, graphics, even the intro video. I’ve experienced graphical glitches, with some white pixels appearing for a second (nope, no AMD video card – it’s Intel!). But this happens only rarely and is a fairly minor issue. Far more annoying is the fact that you cannot really use the keyboard: A key press registers, but the release event doesn’t… This is a known issue with the version of the Unity engine that Thea uses. It may or may not be addressed in a future release. You can, however, get the keys released by ALT-TABbing out of the game and back in. That way you can at least always access the menu.

You choose one of the gods when starting a game. I’ve played scenarios for multiple gods now. The main story (“Cosmic Tree”) gets repetitive pretty soon, since it’s always the same. This is also true for a lot of the other quests. However, the game has options to skip a lot of the text in case you already know it, which certainly was a good idea. Some of the quests differ depending on which god you chose, which keeps things interesting story-wise. Maps, resources, encounters, etc. are randomly generated for each game. Together with the challenging survival aspect, plenty of combinations to try when crafting items and the interesting gameplay, this means Thea can still motivate you to replay it often.

Software development models

I’d like to distinguish several development approaches here and sum each one up by giving the model, as I see it, a name. These are not official models (I’m not a game developer) but an attempt to capture each approach in one heading.

The shareware model

There was once a time when software was developed in a purely closed manner. It was developed internally, and when it was ready, a release was made and advertised. The good thing was that games were often cut into “episodes” and the first one was given away as shareware, so people could try out the game for free and might decide to buy the full product.

The public relations model

Advertising grew bigger and bigger as well as more and more aggressive. Top titles were often announced as soon as development began, and some material was released along the way to keep people hooked. This worked in some cases and failed in others (say, Duke Nukem Forever, announced in 1996).

It was a reasonable move to try to build up an audience interested in a certain title early. There are mainly two problems with that: You cannot keep people hooked for an arbitrary amount of time, and such a continuous advertising campaign costs a whole lot of money well before you start earning anything from sales.

These problems lead to a new one, however: very high pressure on the developers to meet deadlines and stay on schedule. And sometimes the people in charge may even decide to release a half-baked product, which almost always is a very bad idea… (What was the latest example? That Batman game, perhaps?)

The community-aware model

It’s not a new insight that a large community is rather helpful for any title. Some studios provide forums in an attempt to make building up a community easier. And it’s also common knowledge today that feedback from that community is extremely valuable: Knowing your audience better helps a lot in providing the perfect product, after all!

The most important point of this model is that interaction with the players is now bidirectional: There’s advertising targeting them, but you certainly want to have (and honor) the feedback they provide. It also makes sense to design the game – and/or provide the tools – so that creating mods for it is as easy as possible. This can be a huge plus when it leads to a bigger, more active and longer-living community!

Independent of any single title, there is a possibility for a studio to earn itself a good name by opening the source code of older games. This may require some clean-up work first, but some studios have also released code as-is (which can be rather terrible). Usually the community figures out what to do with it, and before long the game is ported to new platforms and receives technical updates and enhancements. This has made some titles effectively immortal: There are still new episodes, mods and total conversions for Wolfenstein being released. Yes, for a game from 1992 with graphics (320×200, 8-bit) that are extremely “poor” by today’s standards! And there’s not one week without new maps for the mighty DooM (1993).

The community-supported model

There’s an interesting trend of “early access” games: Players are given the opportunity to playtest games before they are ready for release. People know they have to expect bugs, but they can try out a game they are interested in early, and if they are very committed to it, they can report bugs as they encounter them.

This is a classic win-win situation: The developers get broad testing done for free and the players get an early peek into the game. Oh, and any form of interaction is of course always a good thing.

The community-backed model

That’s a rather new thing and basically means that some developers try to get their game crowdfunded. This can succeed and it can fail; there are examples of both. But while this is clearly a development model – it has a lot of impact on development, after all – I’d say that it’s more of a special case than a general model.

The future?

MuHa Games have made one clever step ahead with Thea, as the gain of giving the title away for free on Linux is really considerable. How’s that? Well, if there were no Linux version, Linux people wouldn’t have bought the game either. So giving it away is no actual loss: People of the “hey, I would have bought it for Windows, but why should I since I can play it for free on Linux!” kind are most likely extremely rare – if they exist at all.

No loss is fine, but where’s the actual gain? Well, there’s the “Just bought the Windows version. Besides: I don’t run Windows at all” type of guy. These people alone should suffice to cover the costs of the additional effort to package a Linux release and upload it somewhere. But that’s not the main point at all: Can you say “free advertising”? People talk about the game and people write about the game – many of whom would not have done so if it had just been an ordinary game! With the free Linux release, MuHa managed to make the game stand out (and that is not too easy today).

For these reasons, giving it away proves to be a very sensible PR move! It doesn’t matter to me whether that was intended or not; it doesn’t change the facts.

Community-assisted model?

So what could the future hold? I can imagine that engaging the community even more would be a big benefit. From a studio’s perspective, fans do unpaid work because they love the product. And from the fans’ perspective, it’s just cool to be part of one of your favorite games and help improve it.

What could this look like? My vision is to blend closed-source development with some of what we have learned from open-source development. It’s cool that people playtesting a game can report bugs via forum or email. But when will the first project set up a public bug tracker, along with a tutorial on how to use it for bug reports and maybe (sensible) feature requests?

Then: What about translation? Open source has achieved very, very good results using translation frameworks like Transifex. Right now Thea is only available in English. My native language is German and I would not have minded at all dedicating some time to translating a few strings (I got a nice game for free, after all!). There’s a lot of potential in this.

And along with that, it would totally make sense to avoid using proprietary containers for files. I did not bother to try to extract text from whatever format it is that MuHa uses for Thea. In 1999, id Software did a clever thing for Quake III Arena: They used container files called “.pk3” – which were simply renamed Zip files. The benefit is obvious: Everybody can extract the resources, modify them and put things back together. Great! I noticed a lot of spelling mistakes in Thea. If I had had access to the game text, you’d have received a series of patches from me (and by applying them you’d instantly see which ones are still valid while fixing the mistakes). Wouldn’t that be a great way to improve the game?
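
To illustrate the idea (a sketch using standard Zip tools; the path and file names inside the archive are made up for the example), anyone can unpack such a container, fix a string and repack it:

    # .pk3 archives are ordinary Zip files, so standard tools handle them
    unzip pak0.pk3 -d pak0/                  # unpack all resources
    vi pak0/scripts/sometext.txt             # fix a typo in the game text
    (cd pak0 && zip -r ../pak0-fixed.pk3 .)  # repack into a new .pk3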

Licensed Open Source model?

Can open source work for a commercial game? Well, why not? Open source alone means just that: The source is open. It does not say under which license, and it does not say that it’s free of charge. Now I generally support as much freedom as possible – but that last word is important. A more open development would be a nice improvement IMO. There’s no reason to demand more than that.

In this model the customers pay for the game data, without which you obviously cannot play the game, but the program source is open (or perhaps semi-open, where it is included with the copy of the game you buy and you’re free to distribute a series of patches but not the source itself). I’m pretty sure that this can work. One potential problem here may be deadlines. Often the code in commercial games must be horrible – not because the programmers suck but because unrealistic deadlines blow. A lot of studios may hesitate to open up their code for that very reason…

Addressing that problem could be easy, however: You sell games in early access? Buyers get the code and know that it’s early and may not be in perfect shape (and they can actually help improve it). Again, both sides win: The studio gets code review and maybe some patches, plus some people may even attempt to port the game to platforms unsupported by the studio. The players get better games they can help to improve, can take modding to the next level and even get a chance to see what coding is like – and some reference work if they intend to work in that industry!

There’s one other issue, though. In many cases studios will want to hide some things from competitors. That may be old (and hopefully at some point obsolete) thinking, but we have to accept it as a present fact. So what about this? Well, those things could be put into libraries… It’s far better to have the program code open and make it use closed libraries than to have nothing open at all!

Time for change

Who’s stepping forward to take the next step in game development? I’m really curious whether something in the direction of what I wrote here happens any time in the future. For each step there’s good press to catch for free again, you know? 😉 Perhaps some small studio dares to make the move.

Update: I wrote this in a hurry on 11/30 to rush out my November post. And then I once again forgot to make it public. But now it is…

An interview with the Nanolinux developer

2014 is nearly over and for the last post of the year I have something special for you again. Last year I posted an interview with the EDE developer and I thought that another interview would conclude this year of blogging quite nicely.

In the previous post I reviewed Nanolinux (and two years ago, XFDOS). Since I was in email contact with the author about another project as well, it suggested itself to ask him whether he’d agree to an interview. He did!

So here’s an interview with Georg Potthast (known for a variety of projects: DOSUSB, Nanolinux and NetRider – to name just a few) about his projects, the FLTK toolkit, DOS and developing open source software in general. Enjoy!

Interview with Georg Potthast

This interview was conducted via email.

Please introduce yourself first: How old are you and where are you from?

I am 61 years old and live in Ahlen, Germany. This is about a 30-minute drive from Dortmund, where they used to brew beer and where the BVB Dortmund soccer team is currently struggling.

Do you have any hobbies which have nothing to do with the IT sector?

Not really. I did some genealogy, which has a lot to do with IT these days. But now I have several IT projects I am working on.

DOS

You’re involved in the FreeDOS community and have put a lot of effort into XFDOS. A lot of people shake their heads and mumble something like “It’s 2014 now, not 1994…” – you know the score. What is your motivation to keep DOS alive?

I have been using DOS for a long time and wish it would not go away completely. So I developed these DOS applications, hoping to get more people to use DOS. But I have to agree that I have not been successful with that.

Potential software developers find only very few users for their applications, which is demotivating. Also, there is simply no hardware available today that is so limited that you would be better off using DOS on it. Everything is 32/64-bit, has at least 4 GB of memory and terabytes of disk space. And even the desktop PC market is suffering from people moving to tablets and smartphones.

People are still buying my DOSUSB driver frequently. They are using it mostly for embedded applications which shall not be ported to a different operating system for one reason or another.

Do you have any current plans regarding DOS?

I usually port my FLTK applications to DOS if it is not too much effort to do so. So they are available for Linux, Windows and DOS – such as my FlDev IDE (Link here).

Recently I made a QEMU/FreeDOS bundle named DOS4WIN64 (Link here) that you can run as an application on any Windows 7/8 machine. This includes XFDOS. I see this as a path to running 16-bit applications on 64-bit Windows.

How complicated and time consuming is porting FLTK applications from Linux to DOS or vice versa?

It depends on the size and on the dependencies on external libraries. I usually run ./configure on Linux and then copy the makefile to DOS, where I replace -lXlib with -lNXlib plus -lnano-X. Then, provided the required external libraries can be downloaded from the djgpp site, it will compile if the makefile is not too complicated (recursive). Sometimes I also compile needed libraries for DOS, which is usually not difficult if they have a command line interface.

You then have to test whether all the features of the application work on DOS and make some adjustments here and there. Often you can use the Windows branch, if available, for the path definitions.

Porting DOS applications to Linux can be more complicated than vice versa.
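
[Editor’s note: To give an idea of how small that change can be, here is a sketch of the linker tweak Georg describes – the library names are the ones he mentions; the makefile variable is an assumption, as real makefiles differ:]

    # linker line in the makefile as generated on Linux:
    #   LIBS = -lXlib
    # changed for the DOS build to the NXlib wrapper plus Nano-X:
    #   LIBS = -lNXlib -lnano-X
    sed 's/-lXlib/-lNXlib -lnano-X/' Makefile > Makefile.dos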

Linux

For how long have you been using Linux?

I have been using Linux on and off. I began with SCO Unix. However, I did not like setting things up with (case-sensitive) configuration files scattered over many directories. It took me over a week to get serial communications working to connect a modem. When I asked Linux developers for help, they recommended recompiling the kernel first – which means they did not know how to do it either. So I returned to DOS at that time. But I have been using Linux a lot for several years now.

What is your distribution of choice and why?

I mainly use SUSE, but I think Ubuntu may work just as well. This may sound dull, but you do not have to spend time on adding drivers to the operating system or on porting the libraries you need. The mainstream Linux distributions are well tested and documented, and you do not have to spend the time to tailor the distro to your needs. They do much more than you need, so you are all set to start right away.

My own distro, Nanolinux, is a specialized distro which is meant to show how small a working Linux distro can be. It can be used on a flash disk, as an embedded system, a download on demand system or to quickly switch to Linux from within Windows.

However, if you have a 2 Terabyte hard disk available I would not use Nanolinux as the main distribution.

FLTK

Which programming languages do you prefer?

I like Assembler. To be able to use X11 and FLTK I learned C and C++ which I currently use. I have not done any assembler in a while though.

You seem to like the idea of minimalism. Do you use those minimalist applications on a daily basis or are they more of a nice hobby?

Having a DOS and assembler background, I always try not to use more disk space than necessary. Programming is just my hobby.

Many of your projects use the FLTK toolkit. Why did you choose this one and not e.g. FOX?

I had ported Nano-X to DOS to provide an Xlib alternative for DOS developers. In addition I ported FLTK to DOS as well since FLTK can be used on the basis of Nano-X. So I am now used to FLTK.

Compared to the more common toolkits, FLTK suffers from a lack of applications. Which three FLTK applications that don’t exist (yet) do you miss the most?

I think FLTK is a GUI toolkit for developers, so it is not so important what applications are available based on FLTK.

If you look at my Nanolinux – given that I add the NetRider browser and my FlMail program to the distro – it comes with all the main office applications done in FLTK. However, the quality of these applications is not as good as LibreOffice, Firefox or GIMP. I do not expect anyone to write LibreOffice with an FLTK GUI.

When you awake at night, a strange light surrounds you. The good FOSS fairy floats in the air before you! She can do absolutely everything FOSS-related, whether it’s FLTK 3 being completed and released this year, a packaging standard that all Linux distros agree on, or something a bit less unlikely. 😉 She grants you three wishes!

As for FLTK 3, I wish it would change its name and that development would concentrate on FLTK 1.3.

Regarding the floating fairy, I would wish that the internet were used by nice and friendly people only. Currently I see it endangered by massive spam, viruses, criminals and even cyber war – as North Korea apparently demonstrated with the movie its ruling dictator wanted to stop from being shown.

Back to being serious. What do you think: Why is FLTK such a little-known toolkit? And what could be done about that?

I do not think it is little known; it’s just that most people use GTK, so that is the “market leader”. If you work in a professional team, the team will usually decide to go for GTK since most members will be familiar with it.

What could be done about that? If KDE and GNOME were based on FLTK, I think the situation would change.

From your perspective of a developer: What do you miss about FLTK that the toolkit really should provide?

Frankly speaking, for a DOS developer the alternative would be to write your own GUI. And FLTK provides more features than you could ever develop on your own.

What I do not like is the lack of support for third-party schemes. Dimitrj, a Russian FLTK developer who frequently posts as “kdiman” on the FLTK forums, created a very nice Oxy scheme. But it has not been added to FLTK, since the developers do not have the time to test all the changes he made to make FLTK look that good.

What do you think about the unfortunate FLTK 2 and the direction of FLTK 3?

I think these branches have been very unfortunate for FLTK. Many developers expected FLTK 2 to supersede FLTK 1.1 and waited for FLTK 2 to be finished before developing an FLTK application. But FLTK 2 never got into a state where it could replace FLTK 1.1. Now the same seems to be happening with FLTK 3.

So they should have named FLTK 2/3 the XYZ toolkit and not FLTK 2, to avoid stopping people from choosing FLTK 1.1.

Currently there is no development on FLTK 2/3 that I am aware of, and I think the developers should concentrate on one version only. FLTK 1.3 works very well and does all that you need as a software developer, as far as I can say.

Somebody with a bit of programming experience and some free time would like to get into FLTK. Any tips for him/her?

I wrote a tutorial which should allow even beginners in C++ programming to use FLTK successfully (Link here).
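
[Editor’s note: To give an impression of how little code a first FLTK program needs, here is a minimal sketch – it assumes FLTK 1.3 and its fltk-config helper script are installed:]

    # write a tiny FLTK program and build it with the fltk-config helper
    cat > hello.cxx <<'EOF'
    #include <FL/Fl.H>
    #include <FL/Fl_Window.H>
    int main(int argc, char **argv) {
      Fl_Window win(320, 180, "Hello FLTK");  // a small top-level window
      win.show(argc, argv);
      return Fl::run();                       // enter the event loop
    }
    EOF
    fltk-config --compile hello.cxx && ./hello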

Nanolinux

You’ve written quite a number of such applications yourself. Which of your projects is the most popular one (in terms of downloads or feedback)?

This is the Nanolinux distro. It has been downloaded 30,000 times this year.

Nanolinux… Can you describe what it is?

Let me cite DistroWatch, I cannot describe it better: Nanolinux is an open-source, free and very lightweight Linux distribution that requires only 14 MB of disk space. It includes tiny versions of the most common desktop applications and several games. It is based on the “MicroCore” edition of the Tiny Core Linux distribution. Nanolinux uses BusyBox, Nano-X instead of X.Org, FLTK 1.3.x as the default GUI toolkit, and the super-lightweight SLWM window manager. The included applications are mainly based on FLTK.

After compiling the XFDOS distro I thought I would gain more users if I ported it to Linux. The size makes Nanolinux quite different from the others, and I got a lot of downloads and reviews for it.

The project is based on TinyCore, which makes use of FLTK itself. Is that the reason you chose this distro?

TinyCore was made by the former main developer of Damn Small Linux. So he had a lot of experience and set up a very stable distro. Since I wanted to make a very small distro, this was a good choice to use as a base. And I did not have to start from scratch and test that part of the distro forever.

Nanolinux uses an alternative windowing system. What can you tell us about the differences between Nano-X and X.Org’s X11?

Nano-X is simply a tiny Xlib compatible library which has been used in a number of embedded Linux projects. Development started about 15 years ago as far as I recall. At that time many Linux application developers used X11 directly and therefore were willing to use an alternative like nano-X for their projects.

Since Nano-X is not fully compatible with X11, a wrapper called NXlib was developed, which provides this compatibility and allows FLTK and other X11 applications to be based on Nano-X without code changes. The compatibility is not 100% of course, but it is sufficient for FLTK and many X11 applications.

Since nano-X supported DOS in the early days I took this library and ported the current version to DOS again.

Netrider

The project you are currently working on is NetRider, a browser based on WebKit and FLTK. Please tell us how you came up with the idea for it.

Over the years I looked at other browser applications and thought about how I could build my own browser, just out of interest. Finally Laura, another developer from the US, and I discussed it together. She came up with additional ideas and thoughts. That made me have a go at WebKit with FLTK.

What are your aims for NetRider?

I wanted to add a better browser to my Nanolinux distro, replacing the Dillo browser. Also, as an FLTK user I wanted to provide an FLTK GUI for the WebKit package as an alternative to GTK and Qt.

There’s also the project Fifth which has quite similar aims at first sight. Why don’t you work together?

Lauri, the author of Fifth, and I started out about the same time with our FLTK browser projects, not knowing of each other’s plans. Now our projects run in parallel. Even though we both use FLTK, the projects are quite different.

We have not discussed working together yet and our objectives are different. He wants to write an Opera compatible browser and competes with the Otter browser while I am satisfied to come up with something better than Dillo.

I did not ask Lauri whether he thinks we should combine the projects. I am also not sure if this would help us both, because we implemented different WebKit APIs for our browsers, so we would have to make a WebKit library featuring two APIs. This could be done, though. Also, he is not interested in supporting Windows, which Laura and I want to support.

Would you say that NetRider is your biggest project so far? And what plans do you have for it?

Setting up Nanolinux and developing/porting all the applications for it was a big project too, and I plan to make a new release at the beginning of next year.

As for NetRider, it depends on whether people like to use it or are interested in developing for it or porting it. Depending on the feedback I will make my plans. Recently I incorporated some of the observations I got from beta testers, added support for additional languages, initial printing support etc.

The last one is yours: Which question would you have liked me to ask in addition to those and what is the answer to it?

I think you already asked more questions than I would have been able to come up with. Thank you for the interesting questions.

Thanks a lot Georg, for answering these questions! Best wishes for your current and future projects!

What’s next?

I have a few things in mind… But I don’t know yet which one I’ll write about next. A happy new year to all my readers!

Tiny to the extreme: Nanolinux

It has been more than two years since I wrote about XFDOS, a graphical FreeDOS distribution with the FLTK toolkit and some applications for it (the project’s home is here).

Mr. Potthast didn’t stop after this achievement, however. Soon afterwards he published Nanolinux. And now I’ve finally found the time to revisit the world of tiny FLTK applications – this time on a genuine Linux system! And while Nanolinux shows that it is closely related to XFDOS (starting with the wallpaper), it does not at all follow the usual way according to which newer things are “bigger, badder and better”. It is rather “even smaller, more sophisticated and simple to use”!

I needed three attempts to catch the startup process properly because Nanolinux starts up very fast. Probably the most important difference from the DOS version is that Nanolinux can run multiple applications at the same time (which is something that goes without saying today). But there’s of course more to it. If there weren’t, this review wouldn’t make much sense, would it?

The startup process of Nanolinux

TinyCore + Nano-X + FLTK apps = Nanolinux?

Yes, that is basically what Nanolinux is. But that’s in fact more than you might expect. The first noteworthy thing is the size of Nanolinux: Just like the name suggests, it’s very small. It runs on systems with as little as 64 MB of RAM – and the whole ISO is only 14 MB in size.
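
If you want to see this for yourself without touching real hardware, the image can be test-driven in an emulator (a sketch; the exact file name of the downloaded ISO may differ):

    # boot the 14 MB image in QEMU with just 64 MB of RAM
    qemu-system-i386 -m 64 -cdrom Nanolinux.iso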

The Nanolinux desktop (second cursor is from the host machine)

While many people will be impressed by this fact, I can hear some of you yawn. Don’t dismiss the project just yet! It’s true that people have stuffed some Linux 2.2 kernel onto a single floppy and still had enough space remaining to pull together a somewhat usable system. But Nanolinux can hardly be compared to one of those. You have a Linux 3.0 kernel here – and it features a graphical desktop together with a surprisingly high number of useful applications!

Applications

Speaking of applications: Most of those which are part of XFDOS can be found in Nanolinux, too, e.g. FlWriter, FlView and Dillo. There are a few exceptions as well: the DOS media player, PDF viewer etc. However, there are also a few programs on board which you don’t know from the graphical DOS distribution. I’m going to concentrate on these.

Showing off the Nanolinux menu

A nice one is the system stats program: As you would expect, it gives you an overview of system resources like CPU and RAM usage. But it does a lot more than that! It also lists running processes, shows your mounts, can display the dmesg – and more. A pretty useful little tool!

Then we have Fluff from TinyCore, a minimalist file manager. Don’t start looking for icons or things like that: It follows a text-based approach you may know from console file managers. It’s small but functional and works pretty well once you get used to it.

System stats and the Fluff file manager

Want to communicate with others on the net? Not a problem for Nanolinux. While it comes with Dillo, this browser is not really capable of displaying today’s websites correctly. But Nanolinux also has FlChat – a complete IRC client! So it allows you to talk to people all over the world without much trouble.

FlChat – a FLTK IRC client!

Or perhaps you want to listen to music? In this case you’ve got the choice between two FLTK applications: FlMusic and FlRadio. The former is a CD player and the latter lets you listen to web radio stations. Since Nanolinux runs from RAM after it has started, it is no problem to eject the CD and put in an audio CD of your choice instead.

FlMusic and FlRadio for your ears

Extensions

Even though that’s a pretty formidable collection of programs, there’s of course always the point where you need something Nanolinux does not provide. Like its mother TinyCore, Nanolinux supports extensions for this case. These are binary packages which add pre-built applications to your system.

Let’s imagine you want to burn a CD. Nanolinux has an extension for FlBurn available. After clicking on it in the extension list, the system downloads and installs the extension. Once this is finished, FlBurn is available on the system.

FlBurn installed from the extensions

There are a few extensions available. And what do you do if you need a program that has not been packaged for Nanolinux? Well, you can always try to build it yourself. If you feel like it, there’s the compile_nl package, which provides what you need.

Don’t be too ambitious however! Nanolinux comes with Nano-X, remember? That means any program which depends on some Xorg library won’t compile on your system. You’ll just end up with an error message like the one shown in the screenshot below!

Compiling your own packages with “compile_nl”

Summary

Nanolinux builds upon the core of the TinyCore Linux distribution – and while it remains below ordinary TinyCore in size, it comes with many useful applications by default. It can run on a system with as little as 64 MB of RAM and is extensible if you need any programs which did not fit into the 14 MB ISO image.

This little distribution can do all that thanks to the use of Nano-X (think X11’s little brother) and a special version of the FLTK toolkit modified to cope with that slim windowing system. It is definitely worth a try if you’re at all into the world of minimalism. And even if you’re not – it can be nice to play around with just to see what is possible.

What’s next?

While I do have something in mind which would be fitting after this post, I’m not completely sure that I’ll manage to get it done within the remaining time of this year. Just wait and see!

The concepts of complexity and simplicity

Life in general is a very complex thing. Society is a complex matter, too. The IT world is a complex one as well. And so are many of today’s programs – for better or worse.

In many fields complexity is a necessity. You cannot build a simple microprocessor that satisfies today’s needs. And there is no way to have a very simple kernel that can do everything we need. I agree with that and I do not want to condemn complexity as a whole. But – and I cannot stress this enough – while more and more sophisticated programs are being developed, projects have a tendency to become overly complex. And this is what I criticize.

A bit of history

Most of my readers are probably happy users of some Unix-like operating system. Some are old enough to have witnessed how these systems changed over time. Many of us younger ones are not, and so we only know what we have read about those times (or probably not even that).

Thinking about the heritage of Unix, another OS called Multics comes to mind. This system was jointly developed by AT&T, GE and MIT. It was a sophisticated operating system which had many remarkable or even truly revolutionary features for its time. A lot of effort and money was put into it, and high expectations were placed on Multics. And then eventually – it failed.

AT&T had pulled out of the project when they realized that it was rather slow and overly complex. They learned from it and attempted to create a system which followed the opposite approach: Aim for simplicity. This system led to an incredible success: Unix.

So it is important to know that enthusiasm for technology and the urge to develop more and more complex programs is not a new phenomenon at all. In fact I’d claim that it is the logical consequence of how man thinks. While all things begin in relatively simple forms, complexity as a concept does not follow after the concept of simplicity. On the contrary: Simplicity is the lesson learned after realizing the downsides of complexity.

Universalism and particularism

Some people seem to be fascinated by the idea of having one tool that does nearly everything. Let’s assume we had that tool available today. The result would be an extremely complex application with an overwhelming number of features. There would hardly be a single person who knows all these features (let alone puts all of them to use).

Now each feature you don’t use wastes space on your drive. While this is true, it is certainly the smallest problem as long as you’re not working in the embedded field. A bigger one is that such a tool will surely be of low quality: While it can do a hell of a lot of things, it is very unlikely that all of its features will be comprehensive. The program is likely to be rather slow, because optimizing a very complex program is extremely difficult. The worst thing, however, is that it is bound to contain a high number of bugs, too!

It is a well-known fact that program code whose functions are longer than what fits on one screen contains far more bugs. For some reason a lot of programmers seem not to be interested in writing good code: They either just want to get something done or aim at overly ambitious goals which make the project excessively complex.

On the other hand there are projects which specialize in a single, narrow field. If you suggest a new feature, it may very well happen that it will be rejected. The people who work on such a project do not care for stuff just because it’s currently ultra-hip. Instead they often refer to features which are not really needed as unnecessary bloat. These programs cannot do a lot of things by themselves but excel at what they can do.

Following the latter idea is the Unix way of doing things. The true power comes from the combination of specialized tools, which can yield mind-blowing results when used by an experienced user.
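
A classic example of this composability (the input file name is arbitrary): listing the ten most frequent words in a text by chaining small tools, none of which knows anything about the others:

    # split into one word per line, lowercase, count, sort, take the top ten
    tr -cs '[:alpha:]' '\n' < input.txt | tr '[:upper:]' '[:lower:]' |
        sort | uniq -c | sort -rn | head -n 10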

Featuritis?

There are quite a few programs which suffer from a strange illness that could be called “featuritis”. It often makes the host look handsome and appealing to many people. This illness is usually not deadly and often remains invisible for quite some time. But it bears a very destructive aspect, too…

Two of the programs recently found to be infected are OpenSSL and BASH. The former kept so much legacy code in the project and even re-implemented things done better by others, so that it was impossible to keep a good overview of the whole project’s code. The latter implements a lot of features which are hardly ever used by anybody and also uses some functions of its own which are arguably wasted code, since there are better alternatives out there.

Both projects succeeded in being widely distributed, but they were read by few and understood by even fewer. And those few didn’t look at all the obscure parts of that unclear and confusing code. This is why severe bugs could exist for a very long time before anybody noticed.

Probably the most important project in which I diagnose a particularly intense form of featuritis is Systemd. It acts like an init system but has absorbed the functionality of so many other programs by now that I’m getting dizzy thinking of it. Worse: A lot of people who have looked at it more than just a bit claim that it is badly designed and the code is rather unclean. Even worse: The developers of Systemd have had a conflict with Linus Torvalds because they broke things with their code and even refused to fix them, insisting that it was not their problem! And the true tragedy is that it has spread to a great many Linux distros. Once a really bad bug is found in Systemd, this will probably take suffering for admins and users to a whole new level.

An exit strategy for the brave

My respect for the OpenBSD guys continues to grow the more I read about their work. They claim to have a very secure OS, and from what they do I can only say that they mean it. The LibreSSL fork and the SystemBSD project are just two examples that show how dead serious they are. A lot of people seem to ridicule them because there are not too many OpenBSD users out there compared to Linux. That’s true, of course. Their OS may also not feel very familiar from a Linux user’s point of view, and the OpenBSD guys may not be too friendly towards newbies. But they are nice enough to make their projects portable so that everybody can profit from them!

And in case you want to stick with Linux, there’s a great source for this platform as well. The guys over at suckless aim at creating programs “that suck less”. Go ahead and read a bit – especially the sucks and rocks pages! On the first one you’ll be flabbergasted at how bad the situation really is with a lot of programs. Yes, they are fundamentally broken – and their developers don’t care about that. Code correctness doesn’t pay off if you just want to target the masses. But if you want to do things right, it does.

Are there really people out there who care? You bet there are. Think about this topic again and try out a few alternatives. You might well find a real gem here and there – if you are able to look past some of the shortcomings compared to the well-known, featureful and bloated defaults.

Shocked by the shell

The title of this post really suggested itself. I’m not writing about shell shock’s technical details; people who care have surely read about it more than enough by now.

The funny thing is that I had in fact already decided to write this month’s blog post about shells before shell shock happened! For quite a while I’ve been under the impression that BASH, while widely available and convenient to use, is fat, slow and ugly. Two weeks ago I began playing with a variant of the Korn shell called mksh, and realized that I might finally have found the alternative shell I had been looking for.

Laziness (learn to use a whole new shell properly? Is that really worth so much effort?) and the usual lack of time soon left me in two minds about the topic. But I guess I just received the final bit of motivation… So I’ll likely write about it soon.


The “shell shock” BASH bug hit us all

Shocked!

Back in the days when Linux was “just a hobby”, I began to follow the big incidents in the *nix world. “Just for fun” (and because it was interesting). Now that I work for a hosting provider, it is more important for me to catch these things and react to them.

While most of our customers have no preference when it comes to the server OS, some do insist on a specific distribution. And since the company I work for bought a competitor some years ago, that company’s infrastructure was taken over as well, adding even more operating systems. This is the reason for our quite diverse server landscape. So before long I had the opportunity to learn the differences between FreeBSD, OpenBSD, Ubuntu, Debian, Gentoo, Fedora, CentOS, …

Doing the normal updates is something that takes quite a bit of time. But it is something entirely different if you have to do updates in a hurry. It was a lot of work when the infamous OpenSSL bug suddenly made our hearts bleed not even half a year ago. Now the second catastrophic bug has hit us – and this one is even more off the scale than the previous one.


The “heartbleed” bug logo

Vulnerable? / Still vulnerable?

In the case of the OpenSSL bug there were a lot of systems which didn’t have the hole: Many distributions shipped older OpenSSL versions which weren’t affected. This time things are far worse: Just about every single Linux server has a BASH shell – and the hole has existed for more than two decades…

The only exception is some embedded Linux systems, which often use Busybox because it is much smaller and doesn’t need as many system resources as BASH does. That – and *BSD. The BSDs don’t use BASH by default: FreeBSD uses the tcsh and OpenBSD comes with a ksh in the base system. Still, BASH is a dependency of some packages, so chances are that some version of BASH is installed on many BSD systems as well.

As one would expect, the distributions reacted to the problem in different ways and at different times. When I turned on my computer at work and did the usual update, I noticed that BASH received an upgrade. A while later I read about shell shock and pasted the test line into my terminal emulator – receiving an error. Obviously Arch Linux had already closed that hole with the update. My colleagues, running different distributions on their workstations (we are allowed to choose the operating system and distribution ourselves), did updates as well. Most of them were left with a BASH that was vulnerable.
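
For reference, the widely circulated test for the original hole (CVE-2014-6271) was a one-liner; a patched BASH prints only the echoed text, while an unpatched one prints “vulnerable” first:

    # exports a function definition with trailing code, then starts bash
    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"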

The next day, BASH received another update on my system. Soon I heard that the first “fix” didn’t close the hole completely. Again my system wasn’t affected anymore, while more or less all servers that we had updated showed the string “still vulnerable! :(” using the latest test. So they had to be updated again – including the ones that had been problematic the day before. Not fun to do…
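
One widely circulated test for the follow-up hole (CVE-2014-7169) looked as follows (the “still vulnerable! :(” message above came from a similar check); on an unpatched BASH it executes date and leaves the output in a file named echo:

    env X='() { (a)=>\' bash -c "echo date"; cat echo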

Problem solved?

After a long day at work I updated my machines at home, too. On a FreeBSD system I noticed that a new config option had been added to restore the old behavior without giving the --import-functions parameter. Well, perhaps some people really need it. At least it is a much better idea to disable that functionality by default than to mess with some prefixes for the functions…

This morning I found some time to look at the issue again. It seems it’s not over yet… A lot of people are taking a closer look at BASH right now – which is a good thing without any question. But the big mess has already happened, and of course we’re the target of mockery and scorn from the advocates of closed source software. While I don’t think that we deserve it (the BASH bug was communicated and fixed rather quickly after all, and now people do look at the source code – which they couldn’t if it weren’t available), it will not exactly be very helpful in building and maintaining a good reputation.

So what is the real problem? IMHO it is that the idea of simplicity is traded for complexity far too often. Right, in some cases complex software is needed. But there’s no excuse for making things more complex than necessary. The idea of having as many features as possible in a single program is certainly not Unix!

Where do we go?

We are currently witnessing what looks like the victory of complexity. Systemd conquers all major distributions, and DEs like GNOME are already beginning to count on it being present. Monsters like CMake are called “modern solutions” and spread like cancer. Proprietary blobs are ubiquitous. All these things won’t make life easier for us. They might seem to at first sight. But in the long run we will continue to run into trouble.

Fortunately there are people who have understood the problems of over-complexity and try to react to it. I’ll write about this topic next time.

Eerie’s second birthday!

Today is the first day of my third blogging year. It’s hard to believe that it has been two years already, but I checked the date of the first post: 06/24/2012. So it is really true. Two years are a long time, and since the last birthday post a lot has happened again. For that reason I’m going to try to sum it up for you. And don’t worry: I’ll try to keep this birthday post shorter than the last one. 😉

Origins and goals

I started the EERIE project because I wanted to really “get into Linux” instead of only using it. I didn’t know at all where that journey would take me when I set the initial goal of comparing Linux desktops. In the first year it was mostly desktop-centered posts that I wrote. Besides that, I evaluated which Linux distribution would fit my needs, how a live CD is created, etc. In the last few months my interests shifted and I wrote about other Unix-like systems as I began exploring Linux’s “neighbors”.

What has happened in the last year?

In the second year I tried to get back on track and continue with the toolkit tests. But I was soon distracted from it and drawn towards different fields. The new focus clearly was package management, package building and the creation of a Linux distribution.

Since I was pretty confident that I would succeed in putting together an experimental light-weight Linux distribution, I registered a website for it. It has been severely neglected and has not received any updates since October… Not having done anything with HTML for about 10 years, using a free template had been an obvious choice to begin with. In the meantime I have invested a few hours in learning proper HTML 4.01 and some CSS, but I have no idea when I’ll get around to redesigning the website. These things are moving forward very slowly.

A short series of posts dealt with updating an old Linux distribution whose active maintenance had stopped several years ago. It was interesting to do and has been of interest to others, too, since I got a bit more feedback on that topic.

The most important thing was the two distributions that I created: an Arch-like distribution for i586 and an experimental one where I tried to build as many packages as possible using clang, an alternative init system, etc. Both worked quite well, and while I never uploaded the i586 work, the other distribution was published as Arch:E5.

In addition to that, I got in contact with some nice people and interesting projects, which is something I value greatly!

Blog & statistics

The blog’s monthly visitors

As you can see in the picture, the monthly visitor numbers increased in the second year compared to the first. In most months I had between 650 and 700 visitors. Exceptions were September with fewer than 600 hits and October with over 750. The blog has clearly exceeded 10,000 total visitors and features over 30 comments now.

The wordpress Trophy Case

While the WordPress “Trophy Case” is basically just play, I actually like it because it also shows the date on which each “medal” was “earned”. This turns the whole thing into graphically polished statistics with some actual value.

Hits by country

I’ve had visitors from 114 countries around the world, and thus the white parts on the map are getting fewer and fewer!

Future

I have far less time for my computer projects compared to when I started the blog – and I think that really shows. June has seen the lowest monthly hit count in more than a year (at least right now; the month isn’t over yet).

The reason is that I’m no longer studying at the university (which gave me enough free time). During the last year I moved to another federal state, sought a job and moved again when I found one. And as if my job (as well as the hours that I spend each day getting there and back home) didn’t mean enough work and lost time, I’ve got even less spare time for another reason. Fortunately a positive one: the birth of my second child!

So what does that mean? Currently it’s a bit hard to publish at least one new post each month, but I’m not willing to retire yet! I just can’t make any promises on exactly what I’ll be able to write about in the foreseeable future. Will it be toolkits again? The musl-based Arch-like distribution? Some BSD things? Maybe a bit of everything, or maybe something entirely different. Who knows? (I don’t!)

Software licenses (pt. 1): A general introduction

This is obviously not about the EERIE distro or Arch:E5. The reason for that is simply that I didn’t succeed in getting everything working. And to be honest, I didn’t have much time to attempt it in the first place. My second child was born this month and I guess everybody would agree that family comes first. So here’s the first post of a series that I have had in mind since well before Christmas. Time just goes by so damn fast!

I’ve been thinking about software licenses a bit and decided to write about them. It’s a rather special topic, for sure. Many programmers like dealing with licenses just as much as they like writing documentation: not at all. For that reason a lot of people who support open source software decide to take the easy way and simply GPL their code.

However, licensing is a very important thing and should be taken seriously. This article is meant to give a quick introduction – while avoiding the major problem of the whole license issue: being boring for most people!

What is a license?

Ever read a typical Microsoft EULA (“End User License Agreement”)? No? Well, it’s not just you. Most people haven’t. Still, you probably should. Or at least read a bit of it. Even if you didn’t read the license, you’ve agreed to it if you’re using Microsoft software. And that means that you are bound by its content – no matter whether you’re in fact ignorant of it.

But what is a license? In fact you can think of a license which comes with a program as a blueprint: a draft of what the author proposes to you. Simply put: If you accept it, it becomes a valid legal contract. Yes – accepting a license isn’t some negligible action. In doing so, you’re signing a contract which legally binds you.

In general, licenses contain various clauses which permit and prohibit certain actions. For example, you may be granted the right to install one copy of a program on one computer and use it. You’re usually forbidden to analyze the program by means of reverse engineering. However, there may be additional requirements imposed on you. Perhaps you have to pay a monthly fee to continue using some service, or you’re required to supply a valid address and keep it up to date.

Once you understand that by accepting a license you’re signing a contract, you might no longer do so carelessly. After all, you’re legally accountable if you knowingly or unknowingly violate it.

There has been a steady development in proprietary licenses over the past decade. Many of them are getting stricter and more intrusive all the time. There are probably some quite nasty passages in the next license you’re going to “agree” to. So you should probably care about it.

(Why) do licenses matter?

Software – like any other artificial thing – doesn’t emerge out of nowhere. It has one or more author(s) who wrote the program and dedicated time and effort to that project. Now it is absolutely understandable that the author gets to decide what he or she would like to do with it. The creator of a software project may decide to keep it private, try to sell it or give it away for free. And of course it’s entirely the author’s decision whether to open-source the project or to keep it closed.

Depending on the nature of the program, either path may seem like a good choice. The really important thing, however, is that each possible decision is perfectly legitimate. Whoever creates something gets to decide. So if a coder makes the decision to give the source of his program away to the public – then this project is open source and we can do all the good things with it, right? Wrong, unfortunately.

Our coder may give the source of his or her program away for free but still remains the sole copyright holder. If the code is made available to you, that implies that you may take a look at it. Certainly a nice move by the author. But other than that you’re not actually allowed to do anything with it. It is the intellectual property of another person. So without permission you may not redistribute it, you are not allowed to modify it, and in fact you may not even put it to use! Yes: in each of these cases you’d have to ask for permission first.

Let’s assume you cannot reach the author since he or she got a new email address and didn’t update the project page. Or the author simply doesn’t have the desire (or time) to answer mail asking for permissions. Well, if you’re a coder yourself, the source code may help you get an idea of how to solve certain problems. But that’s pretty much it. If you need some functionality, you’ll have to repeat the work already done and reimplement it, because you’re not allowed to reuse the code despite it being publicly available.

So for these reasons the answer to the question in the headline can only be: Yes. Licensing does matter!

What’s next?

I currently have three things in the making: 1) The basic repo of a musl-based Arch-like Linux distro 2) Quite a few i686 packages for ArchBSD 3) An article about “Optimistic and pessimistic licenses”. And 4) I have no idea how much time I can spend in front of my screen in the coming weeks. 😉

Eerie Linux and Musl libc

Happy Easter, everyone! Guess it’s about time I admit that my previous post was just an April Fools’ joke, right? It’s been 20 days already, after all! Well, there are reasons why I couldn’t do so earlier. Two, actually, if you disregard obstacles due to real life!

Keyboard issues

The first one is that I’m currently trying to get used to an alternate keyboard layout called Neo². You’ve never heard of it? Well, that’s actually quite likely, as it is neither well-known nor in widespread use – even in my country.

It’s basically an ergonomic keyboard layout optimized primarily for the German language (but also well suited to English, as many of us frequently use that language, too). It uses six(!) layers and quite a few dead keys, thus allowing for many, many more characters to be typed with it. Have you heard of Dvorak (Wikipedia article), for example? Then you know the basic idea of rearranging keys (and the advantage of having more characters easily accessible, like e.g. the whole Greek alphabet, is pretty obvious).

Neo² promises even more typing speed once I’ve learned it properly and gained some experience with it. I can already see that this claim makes sense, as the keys are rearranged very reasonably. For the time being, however, it makes me type damn slowly! Do I hear anybody laughing? Well, chances are that there’s an alternate keyboard layout for your language, too. Why don’t you try it out yourself and share some of the pain of having to relearn something you’ve been used to for well over two decades? 😉
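
Trying out an alternate layout under X is a one-liner, by the way, so there’s little excuse not to experiment. A minimal sketch, assuming your xkeyboard-config ships the layout in question (Neo² is included as a variant of the German layout in recent versions):

    # Switch the running X session to the Neo 2 layout
    setxkbmap de neo

    # Chickening out: back to plain German (or whatever your usual layout is)
    setxkbmap de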

Not really all an April Fools’ joke

And the second reason is… that most of what I wrote was not actually an April Fools’ joke! Ok, some of it was. While the size of the TinyCore kernel is true (I really built it), I do not intend to use it as the default kernel. Also, E5 won’t switch to musl – because I consider E5 a project on hold. What IS true, however, is the fact that I’ve been experimenting with musl for quite a while now. In fact I actually plan to build another experimental Arch-like system which will be based on musl.

This is clearly the most challenging experiment I’ve attempted so far. Many standard packages are tested only with glibc and may require extensive patching to play together with musl nicely. Fortunately one of my favorite Linux distributions, Alpine Linux, decided to switch to musl! Thanks to their experience with µclibc – another alternative libc – it’s beyond question that they have the technical knowledge to make this happen. I have been very excited since I read their announcement and had been following the “edge-musl” branch closely. Then, only ten days ago, they dropped the “edge-musl” branch. At first I was shocked. But then I realized that musl is now the standard libc for “edge”!

Alpine has been a great resource for me while I was trying to build an Arch system on musl. Musl is also available on Arch thanks to the AUR, but there it’s only a secondary libc installed with a different prefix. It’s meant more or less for static linking only. The great news: There seem to be quite a few people around who are interested in both Arch and musl. So why not combine the two?
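
To make the “secondary libc for static linking” point concrete: musl ships a musl-gcc wrapper script, so even on a regular glibc-based Arch box you can build small static binaries against it. A quick sketch, assuming the AUR package put musl-gcc into your PATH:

    # Create a trivial test program
    printf '%s\n' '#include <stdio.h>' \
                  'int main(void) { puts("Hello from musl!"); return 0; }' > hello.c

    # musl-gcc wraps gcc so that it picks up musl's headers and
    # libraries instead of glibc's
    musl-gcc -static -o hello hello.c
    ./hello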

Project status

It didn’t really take me long to get a musl-based mini-system working in a chroot. Adding more packages and making the beast boot was actually quite painful. In some regards I had to resort to what I’d consider “cheating”: So far mkinitcpio is completely broken and I just copied an initramfs image from my previous Arch:E5 project. I was really happy, though, when I finally got the whole thing booting…
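
For anyone who wants to poke at a half-built system like this: the chroot part really is the easy bit. A rough sketch of entering such a system (assuming it lives under /mnt/eerie; which bind mounts you actually need depends on what you want to test):

    # Make the kernel interfaces available inside the chroot
    mount -t proc proc /mnt/eerie/proc
    mount --rbind /sys /mnt/eerie/sys
    mount --rbind /dev /mnt/eerie/dev

    # Enter the musl-based mini-system
    chroot /mnt/eerie /bin/sh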

Then it took a few weeks to get PAM and logins working and to make runit (the init replacement I’m using) spawn some TTYs… Welcome to a bare system that won’t provide much more functionality than just coreutils and the like! Next I added a text editor so that messing with configuration files became easier, and prepared all the dependencies for pacman. There were quite some struggles along the way but in the end I succeeded: Eerie has pacman now!
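
If you’ve never used runit: a “service” is just a directory containing an executable run script that starts the daemon in the foreground. Getting a getty on tty1 is therefore only a few lines; a sketch of what that roughly looks like (the service directory location and the exact agetty arguments vary between setups):

    #!/bin/sh
    # e.g. /etc/service/getty-tty1/run
    # runit supervises this script and restarts it whenever
    # the session on tty1 ends
    exec agetty tty1 38400 linux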

The logical next step was adding the basic development packages. Most of them even built without any special measures necessary. GCC, however, proved really troublesome to build. I was stuck on that one for weeks (and even though those were weeks with little time on my hands, it proved to be a real blocker). In the end I gave in and used what Alpine provides (I did the same thing for gcc-libs before, anyway!). So I at least have a working GCC.
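
“Using what Alpine provides” is less arcane than it may sound: Alpine’s .apk packages are essentially gzip’ed tarballs, so their contents can be pulled out with standard tools. Something along these lines (the mirror URL and package version here are just placeholders, and tar may grumble about apk’s extra metadata streams but extracts the files fine):

    # Fetch a gcc package from an Alpine mirror (placeholder URL/version)
    wget http://mirror.example.org/alpine/edge/main/x86/gcc-4.8.2-r0.apk

    # .apk files are (concatenated) gzip'ed tarballs; unpack to a staging dir
    mkdir gcc-stage
    tar -xzf gcc-4.8.2-r0.apk -C gcc-stage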

Getting the net to work has been giving me the creeps. I have failed to get xinetd to compile against musl, and I’ve also failed to find an alternative to it that would do the job instead. Now I know why many distros use busybox together with a dhclient script… I definitely have to take a look at how Alpine does it, but I’m not really knowledgeable about OpenRC and would like to look into that one first. Maybe I’ll find a little time for that soon. Who knows. The most important thing is that a network connection with EERIE is possible.
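
The busybox-plus-DHCP-client approach that many distros take really is refreshingly small, by the way. A minimal sketch of bringing up networking that way (eth0 is assumed here, and busybox’s udhcpc needs its default lease script installed to actually configure the interface):

    # Bring the interface up
    ip link set eth0 up

    # Request a lease with busybox's tiny DHCP client
    busybox udhcpc -i eth0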

I’ve built a few packages natively on my EERIE system but most of them were built externally. My goal is of course to be able to build all of them on EERIE and thus make the project self-hosted.

Open development

So far I’ve done all my projects in a semi-open way. I came up with an idea and tried to cook something up behind closed doors. When I thought that it was ready, I made it public and that was it. However, these projects were more or less personal experiments that I shared with anybody who might care. Now I’d like to take the next step and set up a project that’s actually useful (while still being experimental for the foreseeable future).

For that reason I set up a repository to store all PKGBUILD files for EERIE, using a DVCS (distributed version control system) called Fossil. It’s way less known than e.g. Git or Mercurial, but it provides some nice extras. Look here for a little how-to on cloning the repository.
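
In a nutshell (the repository URL below is just a stand-in for the real one linked above), cloning and opening a Fossil repository takes only a couple of commands:

    # Clone the whole repository into a single database file
    fossil clone https://example.org/eerie eerie.fossil

    # "Open" it, i.e. check the files out into a working directory
    mkdir eerie-pkgbuilds && cd eerie-pkgbuilds
    fossil open ../eerie.fossil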

Join the feast!

Building an Arch-like Linux distribution on musl is a gigantic task. For that reason I could really use every bit of help I can get. If you care about musl and like Arch, please consider supporting this project. And no, I’m not asking for money. If you think you have a bit of free time on your hands and the skill (or the will to learn, since that’s what we all start with) to mess with package building, just get in touch with me! Oh, and even if you don’t think you can help by making more packages available, you may just invest the one or two minutes that it takes to write me a comment here and show some interest in the project. That would also help and is greatly appreciated!

What’s next?

I’m busy getting the repos with binary packages up so that an EERIE system can be pacstrapped. I’ll try to make a release announcement as soon as possible. Probably in early May. Please bear with me!
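
For reference, pacstrap can be pointed at a custom pacman.conf, which is exactly what makes this possible. Once the repos are up, installing an EERIE system might look roughly like this (the config file name is hypothetical for now):

    # Install a base system into /mnt using a pacman.conf that
    # points at the EERIE repositories instead of Arch's
    pacstrap -C /etc/pacman-eerie.conf /mnt base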

Arch:E5 ditches eglibc and goes for musl libc!

Those of you who follow the development of Linux for embedded devices (or simply older hardware) will probably have noticed that eglibc is effectively dead. According to their website, “EGLIBC is no longer developed and such goals are now being addressed directly in GLIBC.”!

E5 and eglibc

Arch:E5 was built to be a bit lighter than today’s mainline Arch. To do just that, it’s obviously necessary to deal with the root of the trouble (yeah, a pretty lame pun, I know). The standard C library is – together with the kernel – the very heart of every Linux distro, that’s for sure! And since ordinary glibc is quite a memory hog, eglibc was an obvious choice to keep things a little smaller and prettier. But as this variant of the GNU C library will cease to exist after the 2.19 branch, E5 had to look for alternatives.

There are actually quite a few alternative C libraries around. Dietlibc, for example. But it’s primarily meant for static linking. Yet there are more candidates. Like µclibc. A pretty nice libc, actually, which is also proven and tested: there are even a few distributions built upon it. So could that be an option for E5? Nah, not really! Wanna know why not? ’cause it would be rather boring. And even worse: it has already been done for Arch and is even described on its own page in the ArchWiki!

So – is E5 doomed to eat humble pie and return to glibc, losing one of the main characteristics which made it diverge from mainline Arch? But no, fear not, dear brethren! Luckily for us there’s also musl, a rather young libc which you may not even have heard of. It has reached version 1.0 just these days, and thus it practically begs to be used in place of our old C library! There are a few distros already which are based upon musl. But everything’s experimental and pretty much a mess to work with. In other words: absolutely ideal preconditions!

I’ve been secretly working to make E5 run with musl as the distro’s libc for about two weeks now and just a few minutes ago I succeeded in getting E5 “Musl edition” to run! It was quite a bit of work and took quite some time. So far I have not spoken a word about it with anybody. But today’s the big day to announce it to the public!

Like the idea of Arch based on musl? If you don’t have a clue why you should like it, you may want to take a look at this table which compares musl to other libc projects. Convinced? Great! Now as proof that I’m not lying about the successful porting, here’s a screenshot which should confirm my claims:

Arch:E5 Musl’s boot screen!

Kernels…

While I was at it anyway, I thought that I might change the kernel as well. Ok, I have to admit that I’m still not cool enough to use some BSD kernel. But it’s clear that E5 (being an experimental distro!) needs something more… extraordinary. In the end I settled on a minimalistic Linux kernel, compiled with the kernel settings from the TinyCore Linux project.
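
Reproducing this is straightforward enough if you’re curious: take TinyCore’s published kernel config, drop it into a matching kernel tree and build. Roughly (the config path is a placeholder; the kernel version matches the screenshot below):

    cd linux-3.10.33

    # Start from TinyCore's kernel configuration
    cp /path/to/tinycore-kernel.config .config
    make olddefconfig    # answer any new options with their defaults

    # Build the compressed kernel image and check its size
    make bzImage
    ls -lh arch/x86/boot/bzImage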

The result is really astounding if you ask me: that kernel image is just 2.5 megs in size!! No, I’m not kidding. Don’t believe me? I can well understand you. But behold, all you sceptics out there: here’s one more screenshot of the secret project for you:

Arch:E5 Musl with a TinyCore-like kernel!

So what do ya say now? The “Arch Linux 3.10.33-1-LIBRE-LTS” line?? Uhm, well… *cough* Guess I gave that kernel a wrong name then! Yes, exactly, that must be the case…

What’s next?

The logical next step would be publishing an E5-Musl repository, wouldn’t it??