Keyboards, layouts and ergonomics (pt. 1)

This post is about keyboards, keyboard layouts and… ergonomics. No surprise here if you’ve read the title! But why am I writing about such a topic? Well, I invested some of the money I got from my family for Christmas into a new keyboard. I was skeptical at first, because I’d have really regretted buying it if it hadn’t been good (it wasn’t exactly cheap). However, just days later I began asking myself how I could ever have typed on anything else. And after a few more days, all I regretted was the fact that I’d typed for more than 20 years on more or less conventional keyboards!

Yes, I’ve been getting on a lot of people’s nerves praising my new gem. So I figured that I might as well complete the circle and publish my paean here on the blog, too! (Ok, the more serious reason is that, like many people who type a lot, I suffer from severe pain in my hands, and I’d like to talk about today’s options for avoiding this.)

This first part deals with a few general things about keyboards and the problems that typists often have. The second part will be about optimizing your typing with alternative layouts, plus the actual review of my new keyboard.

I had hoped to be able to add two photos to the post this weekend, but I didn’t manage to do that. However I do not want to delay the post further, so the pictures will be added later.

Any keyboard does the job, right?

If all you want is to play computer games or enter your password when you log in – probably, yes. Though it is funny that there are “gaming keyboards”, which can be rather costly, too. But that’s a very different case.

For simple things every keyboard suffices. I own an old laptop that I still use sometimes for the very reason that I can. Its keyboard is missing some keys, which makes it not exactly pleasant to type on. But it’s still possible. Honestly: I have no clue how many keyboards I’ve had over the years on which some keys no longer worked or no longer worked properly. Whether it was cola making some keys stick or a worn-out space bar which would only trigger if you pressed it with considerable force.

I used to retire those keyboards from daily use (simply because they were annoying to work with) but usually kept them somewhere – just in case. You never know if you’ll ever need them again, right? Eventually I got rid of them, but in fact I did find a use for them a few times now and then!

So I can say that I’ve had quite a few keyboards in my life and that probably any one of them does the job – depending on what that job is!

A matter of taste

There are a lot of different keyboards out there, and in many respects it really is a matter of taste. Some people like black keyboards while others want white ones. There are people who think it’s cool if the keys are backlit. Some prefer the flat keys which you often find on laptops these days – others hate them. While many people like keys which do not need much force to be pressed, there are also those who prefer the opposite. I often hear that somebody would never again purchase a wired keyboard, but I know at least as many people who don’t like wireless ones.

It’s not hard to see that there simply cannot be one single “perfect” keyboard for everybody. People have different needs, and this alone defeats the idea of a universal keyboard. And as long as you don’t type a lot, it probably remains just a matter of taste. However, if you type on a daily basis, this may no longer be the case. Sure: no matter how much you type, the color remains pretty irrelevant. But other characteristics are worth a closer look.

But when are you “typing a lot”? If you cannot manage more than 30 WPM (words per minute) even if your life depends on it, you probably don’t qualify. Which of course doesn’t mean that you could not benefit from reconsidering your keyboard (everybody should try to get the right tool, after all, right?). If you type faster than 40 WPM, you’ve clearly typed enough in your life to make it worthwhile thinking about how you type. Especially if you type “eagle finger” (“hunt and peck”) or use “your own system”, as a whole lot of people do.

Typing – or typing properly

It’s perfectly sensible that people who barely type at all don’t bother to learn touch-typing. Unfortunately, a surprisingly high number of people who type regularly think the same. Why on earth invest time and effort that pays off hundredfold or thousandfold? Yeah, why? Wait a minute – thousandfold? That’s a lot. And yes, I do mean it.

I was lucky enough to learn to touch-type properly while I was still at school. The hours that I needed for training were relatively few. Once I knew where all the keys were, I just typed and typed and kept improving my skills while primarily doing other things. Even by a very pessimistic calculation, the time that I saved thanks to proper typing by the end of the year far exceeded the time I had “lost” to typing training in the first place. Then it started to pay off and really saved me time for other things. And best of all: it still does, and it will continue to do so for all of my working life and probably beyond.
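To illustrate the order of magnitude, here is a rough back-of-the-envelope calculation. Every number in it is a made-up assumption for illustration, not a measurement:

```shell
# All numbers below are hypothetical assumptions, chosen pessimistically.
words_per_day=3000    # what a regular typist might produce per day
slow_wpm=30           # hunt-and-peck speed
fast_wpm=60           # touch-typing speed
training_hours=40     # one-off effort to learn touch-typing

# Minutes saved per day by typing at the faster speed.
saved=$(( words_per_day / slow_wpm - words_per_day / fast_wpm ))
# Days of typing until the training effort is paid back.
breakeven=$(( training_hours * 60 / saved ))

echo "Saved per day: ${saved} minutes"            # prints: Saved per day: 50 minutes
echo "Break-even after: ${breakeven} days"        # prints: Break-even after: 48 days
```

Even with these pessimistic figures the training effort is recovered within a couple of months of regular typing; everything after that is pure profit.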

Well, I’m the type-a-lot kind of guy; in my younger years I typed whole books into my PC, because I can read texts written in Gothic type (a.k.a. blackletter) pretty well, and being interested in history, I thought that preserving old writings which are no longer copyrighted was cool. So learning to touch-type has paid off more than thousandfold for me. It might be a bit less for the casual typist, but I cannot recommend learning to touch-type enough. If you use a computer regularly (and chances are you do, since you’re reading a blog like this!), invest that time. Unless you’re an old guy who is already counting the days to retirement, you will benefit from learning it.

Ergonomics in keyboards

When I talk to young people about Repetitive Strain Injury (RSI), tenosynovitis or Carpal Tunnel Syndrome (CTS), they tend to dismiss it: “Ah, why should I of all people get this?”. I can fully understand that. Why? Because I was young once myself and thought exactly the same.

As I grew older I began taking these possibilities more seriously – at least a bit. I had heard about those “ergonomic keyboards” that Microsoft was selling. More out of curiosity than anything else, I bought one of those “curved” things. Thinking of the conventional keyboard as the “normal” form, I found it strange at first. But I could type on it pretty well and it felt alright. It took me less than three days to adapt to the new keyboard, and it was not so different from conventional ones that I had any problem typing on those, too.

I got more of these and used such keyboards for years. Not because the ergonomic design promised to protect my hands, but because I liked how it felt to type on them. Then I began to suffer from tenosynovitis. At first I just thought that I had a temporary problem with my right hand and that it would go away soon. It went away, yes. But it has been coming back again and again. Then I got it in the other hand as well. In the end there were only “better days” and worse ones. In the meantime I have a rigid splint that helps a bit, but it is not very comfortable to wear for a whole work day, so I use it only on the bad days.

So are those ergonomic keyboards a fraud? Mine didn’t save me from getting an injury, after all! Well, I’m not really sure what to make of the curved keyboard in this regard. The injury might simply have happened even earlier on a conventional keyboard. Then again, there are even people who say that those keyboards are no better than ordinary ones. I wouldn’t go that far, but they are probably far less effective than the marketing guys want you to believe. Still, this is the kind of keyboard that I’ll give to my children when they begin to type. Why? Because I think that it’s slightly better than the conventional form but still close enough to it that you can switch between them without much trouble (and let’s face it: there are simply far more ordinary keyboards out there!).

What’s next?

I switched to another ergonomic keyboard quite a while ago and learned an alternative layout. And finally I got what I consider a really great keyboard. That’s what I’ll blog about in the next post.

An interview with the Nanolinux developer

2014 is nearly over, and for the last post this year I have something special for you again. Last year I posted an interview with the EDE developer, and I thought that another interview would conclude this year of blogging quite nicely.

In the previous post I reviewed Nanolinux (and two years ago XFDOS). Since I was in email contact with the author about another project as well, it seemed natural to ask him whether he’d agree to give me an interview. He did!

So here’s an interview with Georg Potthast (known for a variety of projects: DOSUSB, Nanolinux and NetRider – to name just a few) about his projects, the FLTK toolkit, DOS and developing Open Source software in general. Enjoy!

Interview with Georg Potthast

This interview was conducted via email.

Please introduce yourself first: How old are you and where are you from?

I am 61 years old and live in Ahlen, Germany. This is about 30 minutes drive from Dortmund where they used to brew beer and where the BVB Dortmund soccer team is currently struggling.

Do you have any hobbies which have nothing to do with the IT sector?

Not really. I did some genealogy, which has a lot to do with IT these days. But now I have several IT projects I am working on.

DOS

You’re involved in the FreeDOS community and have put a lot of effort into XFDOS. A lot of people shake their heads and mumble something like “It’s 2014 now and not 1994…” – you know the score. What is your motivation to keep DOS alive?

I have been using DOS for a long time and wish it would not go away completely. So I developed these DOS applications, hoping to get more people to use DOS. But I have to agree that I have not been successful with that.

Potential software developers find only very few users for their applications, which is demotivating. Also, there is simply no hardware available today that is so limited that you would be better off using DOS on it. Everything is 32/64-bit, has at least 4 GB of memory and terabytes of disk space. And even the desktop PC market is suffering from people moving to tablets and smartphones.

People are still buying my DOSUSB driver frequently. They are using it mostly for embedded applications which are not to be ported to a different operating system for one reason or another.

Do you have any current plans regarding DOS?

I usually port my FLTK applications to DOS if it is not too much effort to do so. So they are available for Linux, Windows and DOS. Such as my FlDev IDE (Link here).

Recently I made a Qemu/FreeDOS bundle named DOS4WIN64 (Link here) that you can run as an application on any Windows 7/8 machine. This includes XFDOS. I see this as a path to running 16-bit applications on 64-bit Windows.

How complicated and time consuming is porting FLTK applications from Linux to DOS or vice versa?

It depends on the size and the dependencies on external libraries. I usually run ./configure on Linux and then copy the makefile to DOS, where I replace -lXlib with -lNXlib plus -lnano-X. Then, provided the required external libraries can be downloaded from the djgpp site, it will compile if the makefile is not too complicated (recursive). Sometimes I also compile needed libraries for DOS, which is usually not difficult if they have a command line interface.

You then have to test whether all the features of the application work on DOS and make some adjustments here and there. Often you can use the Windows branch, if available, for the path definitions.

Porting DOS applications to Linux can be more complicated than vice versa.
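For readers who want to picture the linker-flag swap Georg describes, it might look roughly like this. The makefile content and file names below are my invention for illustration; only the -lXlib to -lNXlib plus -lnano-X substitution comes from his answer:

```shell
# Hypothetical makefile fragment standing in for one generated by ./configure.
printf 'LIBS = -lXlib -lm\n' > Makefile.orig

# Swap the Xlib linker flag for the Nano-X/NXlib equivalents,
# as described in the interview answer above.
sed 's/-lXlib/-lNXlib -lnano-X/' Makefile.orig > Makefile.dos

cat Makefile.dos   # prints: LIBS = -lNXlib -lnano-X -lm
```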

Linux

For how long have you been using Linux?

I have been using Linux on and off. I began with SCO Unix. However, I did not like setting things up with (case-sensitive) configuration files scattered over many directories. It took me over a week to get serial communications working to connect a modem. When I asked Linux developers for help, they recommended recompiling the kernel first – which means they did not know how to do it either. So I returned to DOS at that time. But I have been using Linux a lot for several years now.

What is your distribution of choice and why?

I mainly use SUSE, but I think Ubuntu may work just as well. This may sound dull, but you do not have to spend time adding drivers to the operating system or porting the libraries you need. The mainstream Linux distributions are well tested and documented, and you do not have to spend the time tailoring the distro to your needs. They do much more than you need, so you are all set to start right away.

My own distro, Nanolinux, is a specialized distro which is meant to show how small a working Linux distro can be. It can be used on a flash disk, as an embedded system, a download on demand system or to quickly switch to Linux from within Windows.

However, if you have a 2 Terabyte hard disk available I would not use Nanolinux as the main distribution.

FLTK

Which programming languages do you prefer?

I like assembler. To be able to use X11 and FLTK I learned C and C++, which I currently use. I have not done any assembler in a while though.

You seem to like the idea of minimalism. Do you use those minimalist applications on a daily basis, or are they more of a nice hobby?

Having a DOS and assembler background, I always try not to use more disk space than necessary. Programming is just my hobby.

Many of your projects use the FLTK toolkit. Why did you choose this one and not e.g. FOX?

I had ported Nano-X to DOS to provide an Xlib alternative for DOS developers. In addition, I ported FLTK to DOS, since FLTK can be used on the basis of Nano-X. So I am now used to FLTK.

Compared to the more common toolkits, FLTK suffers from a lack of applications. Which three FLTK applications that don’t exist (yet) do you miss the most?

I think FLTK is a GUI toolkit for developers, so it is not so important what applications are available based on FLTK.

If you look at my Nanolinux – given I add the NetRider browser and my FlMail program to the distro – it comes with all the main office applications done in FLTK. However, the quality of these applications is not as good as LibreOffice, Firefox or GIMP. I do not expect anyone to write LibreOffice with an FLTK GUI.

When you awake at night, a strange light surrounds you. The good FOSS fairy floats in the air before you! She can do absolutely everything FOSS related; whether it’s FLTK 3 being completed and released this year, a packaging standard that all Linux distros agree on or something a bit less unlikely. ;) She grants you three wishes!

As for FLTK 3, I wish it would change its name and development would concentrate on FLTK 1.3.

Regarding the floating fairy, I would wish that the internet were used by nice and friendly people only. Currently I see it endangered by massive spam, viruses, criminals and even cyber war – such as the attack North Korea apparently waged over the movie its ruling dictator wanted to stop from being shown.

Back to serious matters. What do you think: why is FLTK such a little-known toolkit? And what could be done about that?

I do not think it is little known; it is just that most people use GTK, so that is the “market leader”. If you work in a professional team, the team will usually decide to go for GTK, since most members will be familiar with it.

What could be done about that? If KDE and GNOME were based on FLTK, I think the situation would change.

From your perspective as a developer: what do you miss in FLTK that the toolkit really should provide?

Frankly speaking, as a DOS developer the alternative would be to write your own GUI. And FLTK provides more features than you could ever develop on your own.

What I do not like is the lack of support for third party schemes. Dimitrj, a Russian FLTK developer who frequently posts as “kdiman” on the FLTK forums, created a very nice Oxy scheme. But it is not added to FLTK since the developers do not have the time to test all the changes he made to make FLTK look that good.

What do you think about the unfortunate FLTK 2 and the direction of FLTK 3?

I think these branches have been very unfortunate for FLTK. Many developers expected FLTK 2 to supersede FLTK 1.1 and waited for FLTK 2 to finish before developing an FLTK application. But FLTK 2 never got into a state where it could replace FLTK 1.1. Now the same seems to happen with FLTK 3.

So they should have named FLTK 2/3 the XYZ toolkit and not FLTK 2, to avoid stopping people from choosing FLTK 1.1.

Currently there is no development on FLTK 2/3 that I am aware of and I think the developers should concentrate on one version only. FLTK 1.3 works very well and does all that you need as a software developer as far as I can say.

Somebody with a bit of programming experience and some free time would like to get into FLTK. Any tips for him/her?

I wrote a tutorial which should allow even beginners in C++ programming to use FLTK successfully (Link here).

Nanolinux

You’ve written quite a number of such applications yourself. Which of your projects is the most popular one (in terms of downloads or feedback)?

That is the Nanolinux distro. It has been downloaded 30,000 times this year.

NanoLinux… Can you describe what it is?

Let me cite Distrowatch, I cannot describe it better: “Nanolinux is an open-source, free and very lightweight Linux distribution that requires only 14 MB of disk space. It includes tiny versions of the most common desktop applications and several games. It is based on the ‘MicroCore’ edition of the Tiny Core Linux distribution. Nanolinux uses BusyBox, Nano-X instead of X.Org, FLTK 1.3.x as the default GUI toolkit, and the super-lightweight SLWM window manager. The included applications are mainly based on FLTK.”

After compiling the XFDOS distro I thought I would gain more users if I ported it to Linux. The size makes Nanolinux quite different from the others, and I got a lot of downloads and reviews for it.

The project is based on TinyCore which makes use of FLTK itself. Is that the reason you chose this distro?

TinyCore was done by the former main developer of Damn Small Linux. So he had a lot of experience and did set up a very stable distro. Since I wanted to make a very small distro this was a good choice to use as a base. And I did not have to start from scratch and test that part of the distro forever.

Nanolinux uses an alternative windowing system. What can you tell us about the differences between Nano-X and X.Org’s X11?

Nano-X is simply a tiny Xlib-compatible library which has been used in a number of embedded Linux projects. Development started about 15 years ago, as far as I recall. At that time many Linux application developers used X11 directly and were therefore willing to use an alternative like Nano-X for their projects.

Since Nano-X is not fully compatible with X11, a wrapper called NXlib was developed, which provides this compatibility and allows FLTK and other X11 applications to be based on Nano-X without code changes. The compatibility is not 100% of course, but it is sufficient for FLTK and many X11 applications.

Since nano-X supported DOS in the early days I took this library and ported the current version to DOS again.

Netrider

The project you are currently working on is NetRider, a browser based on WebKit and FLTK. Please tell us how you came up with the idea for it.

Over the years I looked at other browser applications and thought how I could build my own browser, just out of interest. Finally Laura, another developer from the US, and I discussed it together. She came up with additional ideas and thoughts. That made me have a go at WebKit with FLTK.

What are your aims for NetRider?

I wanted to add a better browser to my Nanolinux distro replacing the Dillo browser. Also, as a FLTK user I wanted to provide a FLTK GUI for the WebKit package as an alternative to GTK and Qt.

There’s also the project Fifth which has quite similar aims at first sight. Why don’t you work together?

Lauri, the author of Fifth, and I started out about the same time with our FLTK browser projects, not knowing of each other’s plans. Now our projects run in parallel. Even though we both use FLTK, the projects are quite different.

We have not discussed working together yet and our objectives are different. He wants to write an Opera compatible browser and competes with the Otter browser while I am satisfied to come up with something better than Dillo.

I did not ask Lauri whether he thinks we should combine the projects. I am also not sure if this would help us both, because we implemented different WebKit APIs for our browsers, so we would have to make a WebKit library featuring two APIs. This could be done, though. Also, he is not interested in supporting Windows, which Laura and I want to support.

Would you say that NetRider is your biggest project so far? And what plans do you have for it?

Setting up Nanolinux and developing/porting all the applications for it was a big project too, and I plan to make a new release at the beginning of next year.

As for NetRider, it depends on whether people like to use it or are interested in developing for it or porting it. Depending on the feedback I will make my plans. Recently I incorporated some of the observations I got from beta testers, added support for additional languages, initial printing support etc.

The last one is yours: Which question would you have liked me to ask in addition to those and what is the answer to it?

I think you already asked more questions than I would have been able to come up with. Thank you for the interesting questions.

Thanks a lot Georg, for answering these questions! Best wishes for your current and future projects!

What’s next?

I have a few things in mind… But I don’t know yet which one I’ll write about next. A happy new year to all my readers!

Tiny to the extreme: Nanolinux

It has been more than two years since I wrote about XFDOS, a graphical FreeDOS distribution with the FLTK toolkit and some applications for it (the project’s home is here.)

Mr. Potthast didn’t stop after this achievement, however. Soon afterwards he published Nanolinux. And now I have finally found the time to revisit the world of tiny FLTK applications – this time on a genuine Linux system! And while it shows that it is closely related to XFDOS (starting with the wallpaper), Nanolinux does not follow the usual pattern according to which newer things are “bigger, badder and better”. It is rather “even smaller, more sophisticated and simple to use”!

I needed three attempts to capture the startup process properly because Nanolinux starts up very fast. Probably the most important difference from the DOS version is that Nanolinux can run multiple applications at the same time (something that goes without saying today). But of course there’s more to it than that. If there weren’t, this review wouldn’t make much sense, would it?

The startup process of Nanolinux

TinyCore + NanoX + FLTK apps = Nanolinux?

Yes, that is what Nanolinux basically is. But that’s in fact more than you might expect. The first noteworthy thing is the size of Nanolinux: just like the name suggests, it’s very small. It runs on systems with as little as 64 MB of RAM – and the whole ISO image is only 14 MB in size.

The Nanolinux desktop (second cursor is from the host machine)

While many people will be impressed by this fact, I can hear some of you yawn. Don’t dismiss the project just yet! It’s true that people have stuffed a Linux 2.2 kernel onto a single floppy and still had enough space remaining to pull together a somewhat usable system. But Nanolinux can hardly be compared to one of those. You get a Linux 3.0 kernel here – and it features a graphical desktop together with a surprisingly high number of useful applications!

Applications

Speaking of applications: most of those which are part of XFDOS can be found in Nanolinux, too, e.g. FlWriter, FlView and Dillo. There are a few exceptions as well: the DOS media player, PDF viewer etc. However, there are also a few programs on board which you won’t know from the graphical DOS distribution. I’m going to concentrate on these.

Showing off the Nanolinux menu

A nice one is the system stats program: as you would expect, it gives you an overview of system resources like CPU and RAM usage. But it does a lot more than that! It also lists running processes, shows your mounts, can display the dmesg output – and more. A pretty useful little tool!

Then we have Fluff from TinyCore. It is a minimalist file manager. Don’t start looking for icons or things like that. It follows a text-based approach you may know in the form of some console file managers. It’s small but functional and works pretty well once you get used to it.

System stats and the Fluff file manager

Want to communicate with others on the net? Not a problem for Nanolinux. While it comes with Dillo, this browser is not really capable of displaying today’s websites correctly. But Nanolinux also has FlChat – a complete IRC client! So it allows you to talk to people all over the world without much trouble.

FlChat – a FLTK IRC client!

Or perhaps you want to listen to music? In this case you’ve got the choice between two FLTK applications: FlMusic and FlRadio. The former is a CD player and the latter lets you listen to web radio stations. Since Nanolinux runs from RAM after it has started, it is no problem to eject the boot CD and put in an audio CD of your choice instead.

FlMusic and FlRadio for your ears

Extensions

Even though that’s a pretty formidable collection of programs, there is of course always the point where you need something Nanolinux does not provide. Like its parent, TinyCore, Nanolinux supports extensions in this case. These are binary packages which add pre-built applications to your system.

Let’s imagine you want to burn a CD. Nanolinux has an extension for FlBurn available. After clicking on it from the extension list, the system downloads and installs the extension. Once this is finished, FlBurn will be available on the system.

FlBurn installed from the extensions

There are a few extensions available. And what do you do if you need a program that has not been packaged for Nanolinux? Well, you can always try to build it yourself. If you feel like it, there’s the compile_nl package, which provides what you need.

Don’t be too ambitious however! Nanolinux comes with Nano-X, remember? That means any program which depends on some Xorg library won’t compile on your system. You’ll just end up with an error message like the one shown in the screenshot below!

Compiling your own packages with “compile_nl”

Summary

Nanolinux builds upon the core of the TinyCore Linux distribution – and while it remains smaller than the ordinary TinyCore, it comes with many useful applications by default. It can run on a system with as little as 64 MB of RAM and is extensible if you need any programs which did not fit into the 14 MB ISO image.

This little distribution manages that thanks to the use of Nano-X (think X11’s little brother) and a special version of the FLTK toolkit modified to cope with that slim windowing system. It is definitely worth a try if you’re at all into the world of minimalism. And even if you’re not – it can be nice to play around with, just to see what is possible.

What’s next?

While I do have something in mind which would be fitting after this post, I’m not completely sure that I’ll manage to get it done within the remaining time of this year. Just wait and see!

The concepts of complexity and simplicity

Life in general is a very complex thing. Society is a complex matter, too. Also the IT world is a complex one. And so are many of today’s programs – for the good or the bad.

In many fields complexity is a necessity. You cannot build a simple microprocessor that satisfies today’s needs. And there is no way to have a very simple kernel that can do everything we need. I accept that, and I do not want to condemn complexity as a whole. But – and I cannot stress this enough – while more and more sophisticated programs are being developed, projects have a tendency to become overly complex. And this is what I criticize.

A bit of history

Most of my readers are probably happy users of some Unix-like operating system. Some may be old enough to have witnessed how these systems changed over time. Many of us younger ones did not, and so we only know what we have read about those times (or perhaps not even that).

Thinking about the heritage of Unix, another OS called Multics comes to mind. This system was jointly developed by AT&T, GE and MIT. It was a sophisticated operating system which had many remarkable or even truly revolutionary features for its time. A lot of effort and money went into it, and high expectations were placed on Multics. And then eventually – it failed.

AT&T had pulled out of the project when they realized that it was rather slow and overly complex. They learned from it and attempted to create a system which followed the opposite approach: aim for simplicity. This system led to an incredible success: Unix.

So it is important to know that enthusiasm for technology and the urge to develop ever more complex programs is not a new phenomenon at all. In fact, I’d claim that it is the logical consequence of how we think. While all things begin in relatively simple forms, complexity as a concept does not follow after the concept of simplicity. On the contrary: simplicity is the lesson learned after realizing the downsides of complexity.

Universalism and particularism

Some people seem to be fascinated by the idea of having one tool that does nearly everything. Let’s assume we had such a tool available today. The result would be an extremely complex application with an overwhelming number of features. There would hardly be any single person who knew all these features (let alone put all of them to use).

Now, each feature you don’t use wastes space on your drive. While this is true, it is certainly the smallest problem as long as you’re not working in the embedded field. A bigger one is that such a tool will surely be of low quality: while it can do a hell of a lot of things, it is very unlikely that all of its features will be done well. The program is likely to be rather slow, because optimizing a very complex program is extremely difficult. The worst thing, however, is that it is bound to contain a high number of bugs, too!

It is a well-known fact that code whose functions are longer than what fits on one screen contains far more bugs. For some reason a lot of programmers seem uninterested in writing good code: they either just want to get something done or aim at goals so ambitious that the project becomes overly complex.

On the other hand there are projects which specialize in a single, narrow field. If you suggest a new feature it may very well happen that it will be rejected. The people who work on this project do not care for stuff just because that’s currently ultra-hip. Instead they often refer to features which are not really needed as unnecessary bloat. These programs cannot do a lot of things by themselves but excel at what they can do.

The latter idea is the Unix way of doing things. The true power comes from the combination of specialized tools, which can yield mind-blowing results in the hands of an experienced user.
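That combination of specialized tools can be sketched with a classic example: counting word frequencies by chaining tr, sort, uniq and head. None of the tools knows anything about the others, yet together they solve a task none of them could solve alone (the sample text here is made up):

```shell
# A made-up sample text, standing in for any real input file.
printf 'the cat and the dog and the bird\n' > sample.txt

# Each tool does one narrow job:
#   tr    splits the text into one word per line
#   sort  groups identical words together
#   uniq  counts each group
#   sort  orders groups by count, highest first
#   head  keeps only the top entries
tr -cs '[:alpha:]' '\n' < sample.txt | sort | uniq -c | sort -rn | head -3
# first line printed: "3 the"
```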

Featuritis?

There are quite a few programs which suffer from a strange illness that could be called “featuritis”. It often makes the host look handsome and appealing to many people. This illness is usually not deadly and often invisible for quite some time. But it bears a very destructive aspect, too…

Two of the programs recently found to be infected are OpenSSL and BASH. The former kept so much legacy code around and even re-implemented things done better by others that it was impossible to keep a good overview of the whole code base. The latter implements a lot of features which are hardly ever used by anybody, along with some functions of its own that are arguably wasted code since better alternatives exist.

Both projects succeeded in being widely distributed but were read by few and understood by even fewer. And even those few didn’t look at all the obscure parts of that unclear and confusing code. This is why severe bugs could exist for a very long time before anybody noticed.

Probably the most important project where I diagnose a particularly intense form of featuritis is Systemd. It acts like an init system but has by now absorbed the functionality of so many other programs that I get dizzy just thinking about it. Worse: a lot of people who have looked at it more than just a bit claim that it is badly designed and that the code is rather unclean. Even worse: the Systemd developers had a conflict with Linus Torvalds because they broke things with their code and refused to fix them, insisting that it was not their problem! And the true tragedy is that it has spread to a great many Linux distros. Once a really bad bug concerning Systemd is found, it will probably take suffering for admins and users to a whole new level.

An exit strategy for the brave

My respect for the OpenBSD guys continues to grow the more I read about them. They claim to have a very secure OS, and from what they do I can only say that they mean it. The LibreSSL fork and the SystemBSD project are just two examples that show how dead serious they are. A lot of people seem to ridicule them because there are not too many OpenBSD users out there compared to Linux. That’s true, of course. Their OS may also not feel very familiar from a Linux user’s point of view, and the OpenBSD guys may not be too friendly towards newbies. But they are nice enough to make their projects portable so that everybody can profit from them!

And in case you want to stick with Linux, there’s a great source for this platform as well. The guys over at suckless aim at creating programs “that suck less”. Go ahead and read a bit – especially the sucks and rocks pages! On the first one you’ll be flabbergasted at how bad the situation really is with a lot of programs. Yes, they are fundamentally broken – and their developers don’t care. Code correctness doesn’t pay off if you just want to target the masses. But if you want to do things right, it does.

Are there really people out there who care? You bet there are. Think about this topic again and try out a few alternatives. You might well find a real gem here and there – if you are able to overlook some of the shortcomings compared to the well-known, featureful and bloated defaults.

Shocked by the shell

The title of this post really suggested itself. I’m not writing about shell shock’s technical details; people who care have surely read about it more than enough by now.

The funny thing is that I had in fact already decided to write this month’s blog post about shells before shell shock happened! For quite a while I’ve been under the impression that BASH, while widely available and convenient to use, is fat, slow and ugly. Two weeks ago I began playing with a variant of the Korn shell called mksh, and realized that I might finally have found the alternative shell I had been looking for.

Laziness (learn to use a whole new shell properly? Is that really worth so much effort?) and the usual lack of time soon left me in two minds about the topic. But I guess I just received the final bit of motivation… So I’ll likely write about it soon.


The “shell shock” BASH bug hit us all

Shocked!

Back in the days when Linux was “just a hobby” for me, I began to follow the big incidents in the *nix world – “just for fun” (and because it was interesting). Now that I work for a hosting provider it is more important for me to catch these things and react to them.

While most of our customers have no preference when it comes to the server OS, some do insist on a specific distribution. And since the company I work for bought a competitor some years ago, their infrastructure was taken over as well, adding even more operating systems – the reason for our quite diverse server landscape. So before long I had the opportunity to learn the differences between FreeBSD, OpenBSD, Ubuntu, Debian, Gentoo, Fedora, CentOS, …

Doing the normal updates already takes quite a bit of time. But it is a whole different thing if you have to do updates in a hurry. It was a lot of work when the infamous OpenSSL bug suddenly made our hearts bleed not even half a year ago. Now the second catastrophic bug has hit us – and this one is even more off the scale than the previous one.


The “heartbleed” bug logo

Vulnerable? / Still vulnerable?

In the case of the OpenSSL bug there were a lot of systems which didn’t have the hole; many distributions shipped older OpenSSL versions which weren’t affected. This time things are far worse: just about every Linux server has a BASH shell – and the hole has existed for more than two decades…

The only exceptions are some embedded Linux systems, which often use Busybox because it is much smaller and doesn’t need as many system resources as BASH does. That – and *BSD. The BSDs don’t use BASH by default: FreeBSD uses the tcsh and OpenBSD comes with the ksh in the base system. Still, BASH is a dependency for some packages, so chances are that some version of it is installed on many BSD systems as well.

As one would expect, the distributions are reacting to the problem in different ways and at different times. When I turned on my computer at work and did the usual update, I noticed that BASH received an upgrade. A while later I read about shell shock and pasted the test line into my terminal emulator – receiving an error. Obviously Arch Linux had already closed the hole with that update. My colleagues, running different distributions on their workstations (we are allowed to choose the operating system and distribution ourselves), did updates as well. Most of them were left with a BASH that was still vulnerable.
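For reference, the test line that made the rounds looked like this (it checks for CVE-2014-6271, the original shell shock hole): a vulnerable bash executes the command smuggled in after the environment-variable function definition.

```shell
#!/bin/sh
# Shell shock quick check: a vulnerable bash evaluates the trailing
# 'echo vulnerable' while importing the function from the environment.
# A patched bash only prints the "this is a test" line.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

If your bash is fixed you should see only “this is a test” (possibly with a warning about the ignored function definition on stderr).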

The next day, BASH received another update on my system. Soon I heard that the first “fix” didn’t close the hole completely. Again my system wasn’t affected anymore, while more or less all servers that we had updated showed the string “still vulnerable! :(” with the latest test. So they had to be updated again – including the ones that had already been problematic the day before. Not fun…

Problem solved?

After a long day at work I updated my machines at home, too. On a FreeBSD system I noticed that a new config option had been added to restore the old behavior without giving the --import-functions parameter. Well, perhaps some people really need it. At least disabling that functionality by default is a much better idea than messing with prefixes for the imported functions…

This morning I found some time to look at the issue again. Seems like it’s not over yet… A lot of people are taking a closer look at BASH right now – which is a good thing without any question. But the big mess has already happened, and of course we’re the target of mockery and scorn from the advocates of closed source software. While I don’t think we deserve it (the BASH bug was communicated and fixed rather quickly after all, and now people do look at the source code – which they couldn’t if it weren’t available), it will not exactly be helpful in building and maintaining a good reputation.

So what is the real problem? IMHO it is that simplicity is traded for complexity far too often. Right, in some cases complex software is needed. But there’s no excuse for making things more complex than necessary. The idea of cramming as many features as possible into a single program is certainly not Unix!

Where do we go?

We are currently living to see what looks like the victory of complexity. Systemd conquers all major distributions; DEs like GNOME are already beginning to count on it being present. Monsters like cmake are called “modern solutions” and spread like cancer. Proprietary blobs are ubiquitous. All these things won’t make life easier for us. They might seem to at first sight, but in the long run we will continue to run into trouble.

Fortunately there are people who have understood the problems of over-complexity and try to react to it. I’ll write about this topic next time.

RISC-V – Open Hardware coming to us?

There is no question: Open Source Software is a success. In the beginning of computing it went without saying that software was distributed along with its source code. That was before commercialism came up with the idea of closing the source and hiding it away. But even though there are still many companies making millions of dollars with their closed software solutions, OSS has a strong standing.

There are various operating systems developed under an open source model, and there are open source applications for just about every use you could imagine. Today it’s not hard at all to run your computer with nothing but OSS for all of your daily tasks (unless you have very special needs). I do – and I am happier with it than I was back in the over-colored world of put-me-under-tutelage operating systems and the software typically running on them.

Open Source Hardware?

While – like I just said – OSS has largely succeeded, Open Source Hardware is really just in its very early stages. But what is OSH after all? Unlike software it does not have any “source code” in the common meaning of the word. And hardware certainly can’t simply be copied and thus easily duplicated!

The latter is completely true, of course. And it’s also what makes open hardware a thing very different from free software. While anybody can learn to code, write a program and give it away for free without any expenses, this is simply not possible with hardware. You cannot do the work once and be done with it (except for maintenance or perhaps improvements). Every single piece of hardware takes time to assemble. You cannot create hardware ex nihilo (“from nothing”) – you need material and machinery for it.

So while we often come across the term Free and Open Source Software, there can’t be Free Hardware (unless it’s donated – which still means that there’s only a limited amount available). So we are left with the next best thing: Open Source. In case of hardware this means that the blueprints for the hardware are available. But what is the benefit of this?

Research, Innovation, Progress

Of course OSH doesn’t enable you to have a look at the source, change it and recompile. But while modifying it is a lot more complicated than that, it is at least possible at all! Interested in how some piece of hardware works? If it is conventional hardware – tough luck (unless you work as a specialist analysing hardware from your competitors). If it’s OSH, however, just get the circuit diagram and the other papers, and provided your knowledge of electronics suffices, you won’t have too many problems.

Usually hardware vendors try to keep their products as opaque as possible. The idea is to keep the gap between company A and its competitors as large as possible by making it hard for them to make use of any results from company A’s research. From a purely commercial point of view this is the most natural thing to do. From a technical one, however, it is not good practice at all: any new vendor is doomed to repeat much of the research that was already conducted by others.

With OSH, not only the results of the research (the hardware pieces) are made available, but the complete documentation is also published. This means that others can have a look at how that hardware is being built. They can copy the design and concentrate on improving it instead of having to do fundamental research beforehand, maybe only to find a solution to essential problems that have been solved several hundred times before… This means that while individual companies profit a lot from closed sources, this practice is a major waste of resources and time. Just imagine how much better hardware we might already have today if all these wasted resources had been put to good use!

Trust and Security

Another important issue is security. We know that e.g. just about all routers available contain backdoors, intended mostly for intelligence services. But even if you don’t care about this: keep in mind that if there are holes in your hardware, ordinary criminals can exploit them as well. Do you want that? Is it a pleasant thought that the hardware you bought for a lot of money keeps spying on you? Think about it for a moment. The bitter conclusion will be that there is not much you can do.

Some people recommend not buying electronics from the US anymore. A good idea at first glance. Still, let’s be realistic: the USA may have the biggest spying apparatus in the world, and thanks to Mr. Snowden we are alarmed about it. But that doesn’t mean other nations don’t do the same – that’s rather unlikely if you ask me. In the end it currently comes down to the choice of whether your hardware tells everything about you to the Americans or to the Chinese…

This is a very dissatisfying situation, and Open Source Hardware could help a lot here. If you’re really a serious target of an intelligence service, you probably can’t avoid them anyway: in that case they have the means to intercept your hardware and make subtle changes you probably won’t ever notice. But OSH could make today’s mass surveillance much harder or even impossible. Got your new piece of OSH? Thanks to the detailed specifications you could even substitute a custom firmware, thwarting the purpose of the one installed by whatever person or group put it there.

Open Source CPUs

One of the most important pieces of OSH would surely be a computer’s CPU. What matters most in this regard is – next to the actual design – the instruction set. It is the instruction set which determines a processor’s capabilities. Each family of CPUs (i.e. those which share the same or a compatible instruction set) is its own platform: programs have to be compiled for it and won’t run on another (unless emulated).

There have been various attempts to create OSH CPUs, or at least come somewhere close. Since the beginning of the 1990s, MIPS CPU designs have been licensable. While this does not mean the diagrams are available to the public, it is at least possible to acquire them if you are willing to pay. If you are, you can produce your own CPUs based on their design. Sun attempted to go the same way with its SPARC architecture but was less successful.

In recent years the ARM platform has gained a lot of attention – thanks to basically following the same strategy and licensing its CPU designs to its customers. This development is a good step in the right direction and certainly commendable: one company specializes in conducting research and designing CPUs, and others license the designs and build their own based on them.

But then there are projects which really qualify as OSH. Yes, they are underdogs compared to the other hardware, which is no wonder since they lack the financial backing the others have. But we are getting there. Slowly but steadily.

RISC-V

This month UC Berkeley released what was originally started just for teaching purposes but grew into a very promising project: RISC-V. RISC stands for “reduced instruction set computing”, which basically means the decision to build a CPU that uses a simpler instruction set to reach high performance.

There’s also another OSH project known as OpenRISC, but it never gained enough traction. Its developers managed to design a 32-bit CPU, and the architecture has been implemented in emulators such as Qemu and can run Linux. A 64-bit variant has been in development for quite some time, and while the project collected donations for years, it has not managed to actually produce hardware. So OpenRISC exists merely as a soft CPU running on FPGAs (field-programmable gate arrays). OpenRISC is licensed under the (L)GPL.

RISC-V is made available under the proven BSD license, which – in contrast to the GPL – is a permissive license. While I kept my fingers crossed for OpenRISC for years, now I am really excited about this news! Especially with such a reputable entity as Berkeley behind the project – and the fact that things look like they are really moving forward this time!

One of the most promising efforts is lowRISC: this British endeavor is a not-for-profit organisation claiming to work closely with the University of Cambridge and some people of Raspberry Pi fame. A dream come true! The idea is to implement a 64-bit RISC processor on an FPGA over the next months and have a test chip ready by the end of 2015. They estimate that about a year later, at the end of 2016, the first chips could be produced. Sounds like a plan, doesn’t it?

I will definitely keep an eye on this. And even if the release must be postponed, it will surely be worth the wait. Open Source Hardware is coming to us. It will most likely be “expensive” (since it can only be produced in much lower numbers than conventional hardware for the mass market) and quite a bit “weaker” in terms of performance. Nevertheless it will be trailblazing the future for OSH and thus drawbacks like these are very much acceptable.

No RISC no fun!

Craven New World – or how to ruin the net

Alright. I never expected to write about anything remotely “political” on my blog… It’s about technical things, right? Ok, ok, free software is “political” all by itself. Kind of. But that’s about it.

While at times I’m really sick of what happens in the world, that doesn’t fit well on a blog about computer topics. I admit that I was tempted two or three times to write something about all the blatant and ruthless lies against Russia and things like that. But this is not the right place for those topics. So I resisted. Then came July 1st…

I began writing a full-sized rant that day but in the end decided to drop it and rethink things once I had calmed down again. Since I’m still stunned and angry at the same time, I’ve simply got to write an article now nevertheless.

The one and only

That morning I read about how Paypal froze ProtonMail’s account. While it is nothing new that Paypal freezes accounts, the rationale was quite interesting. ProtonMail is a provider of email services. What makes them stand out is that they are developing an easy-to-use email system featuring end-to-end encryption.

Now it’s a well-known fact that there are powers out there who have no respect at all for your privacy. They want to know where you go, what you download and what you talk about when you mail grandma. You could be a dangerous villain, skilled at pretending the contrary, after all – and if they can’t find out what color your underwear is, you might even get away with it!

From that point of view, encryption is… well, irritating, to say the least. Which makes it a clear thing that ProtonMail sucks big time. How dare they help people who prefer to keep private things private? So Paypal froze their account because the company “wasn’t sure whether ProtonMail had government approval” for its business. As a matter of fact, the US has quite a few strange laws. But that’s another matter, and it’s perfectly fine if an American company doesn’t wish to assist another American company in doing something unlawful. Except – ProtonMail is not an American company… It’s based in Switzerland!

How can it be that a Swiss company requires US approval for its business? And it’s not even the first time that something like this has happened. The USA blackmailed Switzerland not too long ago. And with their “compliance” ideology they are choking many others, too. This is a very alarming and gross practice. But it is – I cannot repeat it often enough – nothing new.

Just hand it to us!

A while later I read about how Microsoft had just seized more than 20 domains owned by no-ip, cutting off almost two million users from the no-ip service. And what was the reason for such a draconian action? Was the life of the president at stake? Nope. Was the whole country threatened by some ancient evil, perhaps? Not really. It was far worse than that: Microsoft had found a judge who allowed the domain seizure because Microsoft claimed that two accounts were involved in spreading malware…

This was the moment I had to take a look at the calendar just to make sure that I didn’t mess up things and it was actually April 1st! But no – unfortunately not.

I just want to add that I am not a no-ip user and wasn’t affected personally. But I know people who were – one was even affected enough to finally give Linux more room, both for private use and in his company. So while the whole thing is pretty much insane, it has its good sides, too. Especially since I expect more people to be really upset after what Microsoft did. Maybe they should rather spend their time fixing their own broken windows than throwing stones at other people’s businesses?

Oy vey, we want your money!

Ah, what a day. We had some news which would be hard to believe if such things weren’t happening over and over again. Then there was some news which left me incredulously shaking my head. What Microsoft did was ludicrous, and the fact that some judge ruled in their favor is downright absurd. That cannot possibly be surpassed, can it? Yes. Unfortunately, it can.

The last piece of news is just so completely off the scale that I cannot find words for it (not even in my native language). While the Microsoft case makes you question your sanity, this one makes you struggle for your faith in mankind. Seriously.

So what happened? Well. More or less this:

Group A (private individuals) who are citizens of
state B (Israel) mandate
organisation C (a Jewish law firm) to sue
state D (the Shiite (!) theocracy Iran) in
state E (the USA) for alleged financial support of
organisation F (the Sunni (!) Hamas) who are accused of
action G (a terrorist attack) in
territory H (the Gaza Strip) which belongs to
state I (Palestine), as group A claims they have suffered from action G.

Now under normal circumstances you’d laugh at any weirdo who came up with such an idea – let alone actually carried it out… And when you’ve finished laughing and wiped the tears from your eyes, you’d wish him luck finding a good psychiatrist.

The story is not over, however. The US court ruled in favor of the claimant – and since Iran did what any sane person would do and rejected this arrogant impertinence, there’s now the fine (like I said, I’m at a loss for words) idea of distraining the Iranian TLD (.ir)!!

Come on! Distraining a TLD on the net? Seems like they are really working hard to ruin the net. Congratulations to all the bright people involved.

What’s the world coming to?

In my country (Germany) the phenomenon of anti-Americanism is on the rise. Many people are enraged because of what the NSA did (and without any doubt continues to do). This is rather sad, actually, but in many cases I agree with what people say. The US government is one of the most corrupt and unsound entities in the world. Yet – and that deserves to be emphasized – that doesn’t make all Americans warmongers or liars.

The government in my country is run by criminals as well, so I’m probably not in the best position to complain. After all, former chancellor Schröder openly admitted (in one of the biggest newspapers of the country!) that the NATO bombings in Yugoslavia (which he supported) were against international law. By stating so he confessed to being a war criminal – and that had no consequences whatsoever. Funny, isn’t it? And still I’d admit any time that I consider him a more “honest” person than current chancellor Merkel…

Action!

I’d really like to ask each and every American to try hard and reclaim their country. But there’s not too much that people who value freedom can do right now. Yet there is one thing we can all do: start using encryption. Yes, invest that half hour to teach your grandmother how to write and read encrypted mail. It’s not that hard.

You are telling me that you have nothing to hide? That’s great! Why? Simple: same here. It’s great because it is this important little fact that makes us qualified to begin encrypting. Currently using encryption makes you a suspect. Well, I can live with that.

I also don’t mind if those who think they absolutely have to know what I mail my grandmother break the encryption. But if they want to, they will have to invest quite a bit of effort. If they find it worth the time and resources to learn how much my children have grown since we last visited her, that’s fine by me. If everybody used encryption it would be a normal activity. Let’s aim for that!
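To show how low the barrier really is, here is a minimal GnuPG round trip as a sketch. I’m using symmetric (passphrase-based) mode so no key pair is needed; real mail encryption would use --encrypt --recipient with your grandmother’s public key instead. This assumes a reasonably recent gpg (2.1 or later) is installed; the file names and passphrase are of course just placeholders.

```shell
#!/bin/sh
# Encrypt and decrypt a short message with GnuPG's symmetric mode.
echo 'Hello grandma!' > letter.txt

# Encrypt; --batch and --pinentry-mode loopback let us pass the
# passphrase non-interactively (for this demo only - don't script
# real passphrases like this).
gpg --batch --yes --pinentry-mode loopback --passphrase 'demo only' \
    --symmetric --output letter.gpg letter.txt

# Decrypt again to verify the round trip.
gpg --batch --yes --pinentry-mode loopback --passphrase 'demo only' \
    --decrypt --output letter.out letter.gpg

cat letter.out
```

With public-key mail the workflow is the same idea, just with keys instead of a shared passphrase – which is exactly the part worth that half hour with grandma.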

So – what about you?