A SPARC in the night – SunFire v100 exploration

While we see a total dominance of x86_64 CPUs today, there are at least some alternatives like ARM and in the long run hopefully RISC-V. But there are other interesting architectures as well – one of them is SPARC (the Scalable Processor ARChitecture).

This article is purely historic; I’m not reviewing new hardware here. It’s more of a “20 years ago” thing (the v100 is almost that old) written for people interested in the old Sun platform. The intended audience is people who are new to the Sun world – who are either too young, like me (while I had a strong interest in computers back in the day, I hadn’t even finished school yet, and heck… I was still using Windows!), or never had the chance to work with that kind of hardware in their professional career. Readers who know machines like that quite well and don’t feel like reading this article for nostalgic reasons might just want to skip it.

The SPARC platform

SPARC is a Reduced Instruction Set Computing (RISC) Instruction Set Architecture (ISA) developed by Sun Microsystems and Fujitsu in 1986. Up to the Sun-3 series of computers, Sun had used m68k processors, but with the Sun-4 it switched to 32-bit SPARC processors. The first implementation is known as SPARCv7. In 1992 Sun introduced machines with v8, also known as SuperSPARC, and in 1995 the first SPARCv9 processors became available. Version 9, known as UltraSPARC, is a 64-bit architecture that is still in use today.

SunFire v100: Top and front view

SPARC is a fully open ISA, taken care of by SPARC International. Architecture licenses are available for free (only an administration fee of $99 has to be paid), and thus any interested corporation can start designing, manufacturing and marketing components conforming to the SPARC architecture. And Sun really meant it with OpenSPARC: they released the Verilog code for their T1 and T2 processors under the GPLv2, making them the first 64-bit processors ever to be open-sourced. And that’s not all – they also released a lot of tools along with it, like a verification suite, a simulator, hypervisor code and more!

After Sun was acquired by Oracle in 2010, the future of the platform became unclear. Initially, Oracle continued development of SPARC processors, but in 2017 completely terminated any further efforts and laid off employees from the SPARC team.

Fujitsu has made official statements that it is continuing to develop SPARC-based servers, even speaking of a “100 percent commitment”. At the beginning of this year, they even wrote about a resurgence of SPARC/Solaris on the company’s blog, and since they are the last vendor providing SPARC servers (which are still highly valued by some customers), chances are that they will continue improving SPARC. According to their roadmap, a new generation is even due in 2020.

So while SPARC is not getting a lot of attention these days, it’s not a dead platform either. But will it survive in the long run? Time will tell.

SunFire v100

I’m working for a company that offers various hosting services. We run our own data center where we also provide colocation for customers who want it. Years ago a customer ran a root server on a (now old) SunFire v100 machine. I don’t remember when it was decommissioned and removed from the rack, but that must have been quite a while ago.

SunFire v100: Back view

That customer was meant to come over and collect the old hardware, so we put the machine in the storage room. For whatever reason, he never came to get it. Since it had been sitting there for years now, I decided to mail the customer and ask if he still wanted the machine. He didn’t and would in fact prefer to have us dispose of it. So I asked if he’d be OK with us shredding the hard drives and me taking the actual machine home. He didn’t have any objections, and thus I got another interesting machine to play with.

The SunFire v100 is a 1U server that was introduced in 2001 and went EOL in 2006. According to the official documentation, the machine came with 64-bit Solaris 8 pre-installed. It was available with an UltraSPARC IIe or IIi processor and had a 40 GB, 7200 RPM IDE HDD built in. My v100 has 1 GB of RAM and a 550 MHz UltraSPARC IIe. I also put a 60 GB IBM HDD into it.

It has a single PDU, two Ethernet ports as well as two USB ports. It also features two serial ports – and these are a little special. Not only are they RJ-45, but they have two different use cases. One is for the LOM (we’ll come to that a little later), the other is a regular serial port that can be used e.g. to transfer data uninterrupted (i.e. not processed by the LOM). The serial connection uses 9600 baud, no parity, one stop bit and full duplex mode.

RJ-45 to DB9 cable and DB9 to USB cable

The other interesting thing is the system configuration card. It stores the host ID and MAC address of the server as well as the NVRAM settings. What is NVRAM? It’s an acronym for Non-Volatile Random-Access Memory, a means of storing information that must not be lost when the power goes off, as it would be from regular RAM. If you’re thinking “CMOS” in PC terms, you’re right – except that Sun used proper NVRAM and not an actually volatile medium made “non-volatile” by keeping the data alive with a battery. The data is stored on a dedicated chip, or in this case on a card. The advantage of the latter is that it can easily be transferred to another system, taking all the important configuration with it! Pretty neat.

Inside the v100

When I opened up the box, I was actually astonished by how much space there was inside. I know some old 1U x86 servers from around that time (or probably a little later) that really are a pain to work with. Fitting two drives into them? It’s sure possible, but certainly not fun at all. At least I hated doing anything with them. And those at least used SATA drives – I haven’t seen any IDE machines in our data center, not even with the oldest replacement stuff (it was all thrown out way before I got my job). But this old Sun machine? I must say that I immediately liked it.

SunFire v100: Inside view

Taking out the HDD and replacing it with another drive was a real joy compared to what I had feared that I’d be in for. The drive bays are fixed using a metal clamp that snaps into a small plastic part (the lavender ones in the picture). I’ve removed the empty bay and leaned it against the case so that it’s easier to see what they look like. It belongs where the ribbon cable lies – rotated 90 degrees of course.

Old x86 server for comparison – getting two drives in there is very unpleasant to do…

All the other parts are easily accessible as well: The PDU in the upper left corner of the picture, the CDROM drive in the lower right, as well as the RAM modules in the lower left one. It’s all nicely laid out and well assembled. Hats off to Sun, they really knew what they were doing!

Lights out!

I briefly mentioned the LOM before. It’s short for Lights-Out Management. You might want to think IPMI here. While this LOM is specific to Sun, its basic idea is the same as that of the widespread x86 management system: it allows you to do things to the machine even when it’s powered off. You can turn it on, for example. Or change values stored in the NVRAM.

LOM starting up

How do we access it? Well, the machine has an RJ-45 socket for serial connections, appropriately labeled “LOM”. The server came with two cables to use with it: one RJ-45 to DB26 (“parallel port”) used with e.g. a Sun workstation, and one RJ-45 to DB9 (“serial port” a.k.a. “COM port”). Then you can use any of the various tools commonly used for serial connections like cu, tip or even screen.

Just plug one end of the cable into, say, your laptop and the other into the A/LOM port; you can then access the serial console. If you now plug in the power cable of the SunFire machine, you will see the LOM starting up. Notice that the actual server is still off. It’s in standby mode now, but the LOM is independent of that.
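For reference, attaching to the LOM port from a modern Unix laptop with one of the tools mentioned above might look like this – the device nodes are just examples, since they depend on your USB serial adapter and operating system:

```
# cu on a BSD system (adjust the device node to your adapter):
cu -l /dev/cuaU0 -s 9600

# screen works just as well, e.g. on Linux:
screen /dev/ttyUSB0 9600
```

Both simply open the line at 9600 baud; the 8N1 settings match the defaults of these tools.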

LOM help text

By default, the LOM port operates in mixed mode, allowing access to both the LOM and the serial console. The two can be separated if desired; the A port is then dedicated to the LOM only and the console can be accessed via the B port.

In case you have no idea how to work with the LOM, there’s a help command available that at least gives you an idea of which commands are supported. Most of these commands have names that make it pretty easy to guess what they do. Let’s try out some!

LOM monitoring overview (powered off)

Viewing the environment gives some important information about the system. Here it reveals that ALARM 3 is set. Alarms 1, 2 and 3 are software flags that don’t do anything by themselves. They can be set and used by software installed on the Solaris operating system that came with the machine.

I really have no idea why the alarm is set. It was that way when I got the server. Even though it’s harmless, let’s just clear it.
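Going by the LOMlite2 command set documented for these machines, clearing the flag and re-checking should look something like this (a sketch, not a verbatim session from my box):

```
lom>alarmoff 3
lom>environment
```

There is a matching alarmon command for setting a flag, which is presumably what the customer’s software once did.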

Disabling alarm, showing users and booting to the ok prompt

The LOM is pretty advanced, even supporting users and privileges. Up to four LOM users can be created, each with an individual password. There are four privileges these users can have: A for general LOM administration like setting variables, U for managing LOM users, C for console access and R for power-related commands (e.g. resetting the machine). When no users are configured, the LOM prompt is not password-protected and has full privileges.
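As a sketch of how user management works – command names as I understand them from the documentation, so double-check against the help output on your machine (the user name “admin” is just an example):

```
lom>useradd admin        # create a LOM user (up to four are possible)
lom>userpassword admin   # assign that user an individual password
lom>userperm admin cuar  # grant console, user admin, general admin and reset privileges
```

Once at least one user exists, the LOM prompts for a login instead of granting full access.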

OpenBoot prompt

It is also possible to set the boot mode in the LOM. By doing this, the boot process can e.g. be interrupted at the OpenBoot prompt which (for obvious reasons) is also called the ok prompt. In case you wonder why the command is “boot forth” – this is because of the programming language Forth, in which the loader is written (and can be programmed).
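Stopping at the ok prompt from a powered-off machine should then go roughly like this (a sketch; the boot mode takes effect at the next power-on or reset):

```
lom>bootmode forth
lom>poweron
```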

ok prompt help

At the ok prompt you can also get help if you are lost. As you can see, the help system is fairly extensive, and you can request more detail on the respective areas that interest you.

Resetting defaults and probing devices

OpenBoot has various variables to control the boot sequence. Since I got a used machine, it’s probably a good idea to reset everything to the defaults.

From the ok prompt it’s also possible to probe for devices built into the server. In this case, an HDD and a CDROM drive were found which is correct.
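The two steps together would look roughly like this at the ok prompt (probe-ide being the IDE counterpart of the probe-scsi command known from SCSI-based Sun machines; a sketch):

```
ok set-defaults
ok probe-ide
```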

Setting NVRAM variables, escaping to LOM, returning to the ok prompt and resetting the machine

The ok prompt of course allows setting variables, too. Here I create an alias for the CDROM drive to avoid working with the long and complex device path. Don’t ask me about the details of the latter, however. I found this alias on the net and it worked. I don’t know enough about Solaris’ device naming to explain it.

Next I set the boot order to CDROM first and then HDD. Just to show it off here, I switch back to the LOM – using #. (hash sign and dot). That is the default LOM escape sequence; it can be reconfigured if desired. In the LOM I use the date command to display how long the LOM has been running and then switch back to the ok prompt using break.
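The steps above can be sketched as follows – setenv and boot-device are standard OpenBoot, while the alias target path is machine-specific, so I leave it out here:

```
ok setenv boot-device cdrom disk
#.                 ( the escape sequence drops you back to the LOM )
lom>date
lom>break          ( back to the console, i.e. the ok prompt )
```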

LOM monitoring overview while the machine is running

Finally I reset the machine, so that the normal startup process is initiated and an attempt at booting from the CDROM is made. I threw in a FreeBSD CD and escaped to the FreeBSD bootloader (which was also written in Forth until it was recently replaced with a Lua-based one).

Showing the monitoring overview while the machine is actually running is much more interesting, of course. Here we can see that all the devices still work fine, which is great.

LOM log and date, returning to console and powering off

Finally I wanted to show the LOM log and the return to the console. The latter shows the OK prompt now. Mind the case here! It’s OK, not ok. Why? Because this is not the OpenBoot prompt of the SunFire but the prompt of the FreeBSD loader, which is the second-stage loader in my case!

That’s it for exploring this old machine’s capabilities and special features. I just went back to the LOM again and powered down the server.

Conclusion

The SunFire v100 is a very old machine now and probably not that useful anymore (can you say: IDE drive?). Still it was an interesting adventure for me to figure out what the old Sun platform would have been like.

While I’m not entirely sure if this is useful knowledge (SPARC servers in the wild are more exotic than ever – and who knows what the platform has evolved into in almost 20 years!), I enjoy digging into Unix history. And Sun’s SPARC servers are most definitely an important piece of that big picture!

What’s next?

Reviewing this old box without installing something on there would feel very incomplete. For that reason I plan to do another article about installing a BSD and something Solaris-like on it.

4 thoughts on “A SPARC in the night – SunFire v100 exploration”

  1. This is a nice little article on the V100!

    I have some of these, and I was just considering resurrecting them (which is how I came across your blog!)

    My thought was to use solid state IDE boot disks, IDE flash for ZFS L2ARC, external USB for storage, and tune down the ZFS ARC to make an OpenSolaris compatible system usable.

    Have a great day!

    1. Thank you, glad you liked the article! Your project sounds like an interesting one. What Operating System did you have in mind? I wrote another article (https://eerielinux.wordpress.com/2019/10/30/illumos-v9os-on-sparc64-sunfire-v100) about trying out illumos on the SFv100 and remember that there wasn’t too much choice. Unfortunately most illumos distros seem to be amd64-only these days…

      I also have a spare Sun Ultra 80 that I wanted to play with and write about. Your comment makes me want to finally figure out what the problem with it was and get to it! 🙂

      1. I will be using an Illumos based split… there is a big push to make some fully functional SPARC ports, before a fork from the main project.

        I just powered up a T5120 and that will become my primary, since the chassis can handle more RAM and can handle LDoms.

        My thought is we will be able to do much faster booting & installation to an LDom, possible for new builds, and testing builds.

        The V100’s and V120’s will likely also get started back up again. The T2 system will take priority right now.

        My Ultra60 died some time back, but I have spare parts for it, if you have need. It will likely be a pretty wild ride, over the next few months!
