Advance!BSD – thoughts on a not-for-profit project to support *BSD (2/2)

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/advance_bsd_pt2.gmi

The previous article introduced the Advance!BSD project idea, providing some background as well as discussing the “why”. This article focuses on the “what”.

Shared services or virtualization?

Another general question is what makes more sense to focus on: Some form of virtualization (either full-fledged VMs or OS-level virtualization like FreeBSD’s jails, with or without VNET) or rather providing shared services? There certainly is a need for both. And while I’m certain that the vast majority of BSD people would be perfectly capable of renting a VM and doing all the things they need themselves, I’m not sure if everybody wants to.

Thinking about what my own use cases would be, the result is likely a mixture of both. If by chance a) the shared service was built upon exactly the software components that I prefer, b) there was a nice interface providing the level of control that I need and c) somebody else kept the whole thing nicely up to date, I wouldn’t say no! If however some service is provided by something that I loathe (let’s say, exim or even sendmail for email!), I’d rather do things myself.

Depending on what preferences people have, the focus of the project might be either way. Personally I’d like to see some high-quality shared services at some point. But that doesn’t necessarily mean the project would have to start with that.

Which services?

It may look like a logical decision to start with domains. For practical reasons I’d rather stay away from that in the beginning, however. Here’s why: The domain business is hard. The profit margin is very low even for the big players – and you need to have a lot of domains in your portfolio to get decent prices. Until you reach that volume, you’re losing money if you want to compete with the established providers. In addition to that, it’s not exactly simple to properly automate things like domain transfers, which otherwise keep real people occupied. Considerations like these don’t make that field particularly attractive.

DNS is probably a much better field to start with (provided we can get at least one person on board who’s experienced in advanced DNS topics like DNSSEC and DANE). The initial time investment for properly configuring nameservers is not negligible, but it’s far more doable than domain registration and migration for multiple TLDs. And since DNS always requires at least two servers, this would also be a great opportunity to make use of two different BSD operating systems right from the start!

Plain Web hosting would certainly be the easiest thing to start with. However, you definitely want to support TLS certificates from Let’s Encrypt today – and wildcard certificates require working DNS, as they are only issued via the DNS-01 challenge. Also, intermediate hosting is a bit more complex already: A lot of people will want PHP. Some require Python. Others still want Perl. Throw in things like mod_security & friends and the topic gets a lot more complicated pretty quickly. Plus: Which webserver to use? Will OpenBSD HTTPd suffice? Should it be Lighty? Nginx perhaps? Or do people need Apache HTTPd?

Email is a huge topic. We’d have to investigate anti-spam and anti-virus software. We’d need to build up reputation for our IPs so that the major mail providers would take email messages from us. And – of course – we should get DNS working first for things like the MX records, SPF, …

Another possibility would be offering complete “products” like a Gitea instance, an XMPP chat server, the SOGo groupware, etc. This only makes sense if we find out that by chance a lot of people are interested in the same services. On the Web in general I’d say WordPress is a sure bet. But my guess is that many people who might be interested in a project like this would rather use Hugo, Jekyll and the like.

Each of those topics would of course need to be discussed further if actually a real group forms to make Advance!BSD happen.

Involving neighbors and others?

While I have no doubts that the BSD community is big enough for such a project to succeed, it might still make sense to connect with some other communities that are also “niche” (i.e. underrepresented due to mainstream exclusivism) but nevertheless important or interesting.

One “neighbor” that comes to mind is the illumos community. While I’d clearly like to focus on BSD, I’d be open to letting them participate if they wish. Eventually supporting illumos, too, if people who are capable of providing it join in, would not be a bad thing. We could even offer to give all the revenue that comes from illumos hosting back to that community. It wouldn’t be a loss for us, since customers who want that wouldn’t have booked a BSD jail / VM / service in the first place. At the same time it would further diversify the project and help it grow. So there could be a mutual interest.

I’m personally interested in the Gemini project (see the FAQ link at the top of this article if you don’t know what it is). As an effort within the so-called “Small Internet” it’s not an incredibly huge community yet. Due to its nature it’s a pretty technically versed one, though. There’s probably a majority of Linux users there, but taking it into consideration could be of mutual benefit again! Running a Gemini server and maintaining a site in Geminispace (in addition to the Web) that informs about Advance!BSD and the services provided is really not much hassle. It might well get the project some wider recognition, however. And if we’ve got that Gemini server anyway, we might consider Gemini hosting as a bonus provided for free to our customers (if they want it). It’s very light on resources and Gemini people would get another free hosting service while we’d benefit from them spreading the word. Win-win.

If there’s enough interest regarding Gemini in the future, we could even decide to go all in and provide a means of interacting with our services via an experimental interface that uses the Gemini protocol! We’d have to explore how to do something like this. Gemini is deliberately a simple protocol, but people have built search engines with it, there’s a wiki system and so on. One nice thing is that client certificates are in common use with Gemini and are often supported by the browser applications. That’s a pretty nice way of proving your identity, either without the need for a password or as a second factor.

Just one more example: The POWER platform. It has a much lower market share than amd64 or aarch64 obviously, but there are some people who like it quite a bit. Hosting on ARM is nothing too exciting anymore these days, but hosting on POWER certainly is. We might consider platform diversity as another field that people could select to fund with their money. I can imagine that some people who are not necessarily into *BSD would consider using our services (initially on amd64, of course!) if the money would eventually be used to purchase a POWER 9 server.

Hosting on *BSD running on POWER would certainly be experimental for some time to come, but enthusiasts probably wouldn’t mind that too much. This could be a great chance for the BSD operating systems to improve support on that architecture and it would also be a nice opportunity to diversify the POWER ecosystem. In other words: Another win-win situation (if there’s enough interest in such a thing).

How to get this started?

Well, I’m not going to claim that I know the best way or even have a solid master plan. But here are some steps that come to mind when I think about it:

Phase 0: Figure out if there’s any interest in something like this at all. Check – there is!

Phase 1: Determine what means of communication would work best for most people interested in the project: A Subreddit? Classic mailing list? IRC / some messenger (which one)? A Web forum? Something else?

Phase 2: Form an early community, collect ideas and thoughts that people have and discuss them! Consider problems that are likely to arise.

Phase 3: Decide on project governance and for example found a core team.

Phase 4: Try to reach some kind of consensus on the concrete goals of the project as well as which are short-term, mid-term and long-term.

Phase 5: Ask people interested about their areas of proficiency, make a list of who can do what. In parallel come up with a list of necessary tasks, then find people who can do the actual work and get things going.

Phase 6+: Let’s not decide on this before phase 2 ends!

Requirements

There are two kinds of resources that we need: server-related ones (both hardware and software) and person-related ones (skills as well as some free time). Regarding servers, any reasonably new machines will probably do. When it comes to the skill sets of the people required for the project, things become a bit more complicated. Here are just a few roles that would need to be filled:

  • We obviously need people who install machines, keep them up to date and make sure services continue to run (by setting up proper monitoring and paying attention to notifications) – i.e. administrators.
  • Very early on we need web developers: While it is theoretically possible to only interact with customers via email, this is absolutely not a feasible solution. There needs to be some kind of platform where people can log in and view / change their services. Also, without a somewhat respectable website it will be hard to find anybody who’d actually book a service.
  • DevOps people. We live in the age of “automate or die!”. Proper automation not only means less manual work but also fewer human mistakes.
  • Supporters. We need people who agree to dedicate a bit of time every week to look into problems people have, find a solution to them and interact with the customers via mail.

And so on.

Project identity

I’d like the project to follow some kind of theme early on. It should convey “yes, we are BSD!”, and as long as it’s a community-driven project, a bit of humor would certainly be appealing to people. We have the daemon and the fork that kind of symbolize BSD as a whole. Which means that we could either use those or try to fit all the specific BSD mascots into the picture. The latter might be harder to do without making it look too artificial and affected. So what about Project: Trident? Just kidding of course! 😉

Here’s one proposal: The mascot could be the polar beast: A daemon riding a wild-looking polar bear in a knightly fashion with a long fork in hand as a lance, a small orange pennant tied to the latter and a halo over its head. Thanks to the bear it’s clear that he’s at the north pole. This theme would allow for a couple of jokes like this: “Pretty cool but not cold(-hearted)” [the south pole has much lower overall temperature] and “Best of all: No clumsy flightless birds here!” 😉

We should make clear, though, that at the end of the day we’re all friends in Open Source and, despite regularly making fun of each other, are not hostile towards Linux but simply want to go our own way.

Things like that. I’m sure that people will come up with all kinds of funny themes if they give it a try.

What’s next?

I’m hoping that this provides some more food for thought and that we’ll be able to establish some kind of communication channel that people interested in this will use to really start discussing things. Regarding my usual articles I’ll probably write about Poudriere next.

Advance!BSD – thoughts on a not-for-profit project to support *BSD (1/2)

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/advance_bsd_pt1.gmi

There are multiple reasons why I am a BSD user and enthusiast. For one thing I enjoy using systems where the design benefits from the holistic approach (whole OS vs. “kernel + some packages”). I’ve come to appreciate things like good documentation, preference of simplicity over unnecessary complexity and permissive licensing. There’s a different spirit to the community that makes me feel more at home. Well, and truth be told: I do have a heart for “outsiders” who are doing well but get far less attention due to the towering popularity of a certain other system. In addition to that I’m very much concerned about the new de-facto monopoly in open source.

While the broader BSD community is far less fragmented than what you know from the Linux ecosystem, it’s also much smaller. Considering the little manpower that the BSD projects have, they are doing an outstanding job. But a lot of people seem to agree that due to the small amount of resources available, the BSDs are pretty far away from reaching their full potential.

There are in fact a lot of people out there who’d like to help improve the situation. But coordination of volunteer effort is hard. Linux is what it is for a substantial part due to corporate funding. And while there are also companies that support BSD, the question is: Could we perhaps do better?

A (nonprofit) BSD-first service provider?

After thinking about this for quite a while, I finally just asked on Reddit what other people think about such a project – and I’ve been blown away by the response!

While I had hoped that most people would agree that this could be an interesting thing or even consider supporting it, I had not anticipated that the most popular option on the poll would be the one where people express their interest in the project with the prospect of perhaps participating actively in getting it started!

20 votes (after only one day) for maybe participating in the project!

A lot of projects struggle for years to find people who might be willing to join. With projects that haven’t even started yet, and thus have nothing to show off, it’s even harder to get some attention. Getting 20 people to support the cause in just one day was quite a surprise for me. Sure, that’s only a poll vote on Reddit and thus completely without any obligation. But let’s assume that 1/4 would actually join such a project and contribute – 5 people is not bad at all. Depending on what skills they bring in, even 2 or 3 might suffice to get something started, further increasing the odds that more people join in. The hardest part in getting a project on the way is finding people who are willing to pioneer it and make something that sounds interesting actually work well.

Why “Advance!BSD”?

The name is just a working title, BTW. I won’t insist on it and am in fact pretty sure that something better could easily be found.

Recently there have been two longer discussions (also on Reddit) about what the BSDs lack to be more competitive. There were a lot of great ideas and I’m one of the people who’d like to see at least some of them implemented eventually. But let’s be realistic: This is not very likely to happen. There’s enough work going on regarding the must-haves, and when developers decide to work on a nice-to-have, it’ll be something that they want. If we really want to see some of the things that are not of very high priority to the various projects eventually land, we need to take care of that ourselves. If you’re a developer with the required skills and enough free time: Great! Nobody can really stop you. If you aren’t: Too bad.

I’ve thought about the possibilities of crowd-funding that we have today. In theory we could try to find enough people who’d like to see a feature and are willing to spend some money so that the group could contract a developer. Even though I believe that there are enough developers out there who’d love to do some paid work on *BSD and would probably even prefer such a task over one that pays better, I’m not very optimistic about it. There’s not such a high chance of finding people who want the same feature implemented and would be willing to pay for it. Certainly not over the longer period of time that would be required to collect enough money. And I don’t really like that begging-for-people’s-money approach, either. So I came up with a different idea:

If you like *BSD you’re almost certainly an IT person. IT people all have a couple of things in common: We love tech. We use email. Almost everybody has his or her own homepage. Which means: We need domains. We need DNS. We need Webspace, etc. And we get that from service providers. Some of which are somewhat BSD-friendly, many of which are not. And even cloud providers for example that offer BSD as an option usually don’t do that because they love it (and regularly that shows…).

So what about providing services instead of just asking for money? Imagine starting a “hosting club” where like-minded people get something rolling that works for them and that could at some point be turned into a not-for-profit hosting provider that lives and breathes BSD! The latter would use the money acquired to pay the running costs and spend the rest on improving the BSDs. What could that look like?

Well, for example like this: Members could vote for a general topic for each year (like e.g. “desktop”, “drivers”, “de-GPLing”, “porting efforts”, …) and propose (as well as discuss) concrete project ideas over the year, then vote again at the end of the year when the available money is to be spent. Any project that is beneficial for more than one specific BSD gets bonus points.

Which BSD operating system(s)?

In a follow-up poll I asked the people interested in the project which BSDs they are proficient with. Considering the general market share within *BSD, the result came as no surprise:

The largest group of votes was for FreeBSD, somewhat closely followed by OpenBSD. Votes for NetBSD, DragonFly BSD or multiple systems followed only after a huge gap.

Most people favor FreeBSD and OpenBSD

Pretty early on one person commented that even though NetBSD was his favorite OS, it might make most sense to go with FreeBSD for such a project for example due to jails being a very useful feature for that. I can imagine that there might be more people who decided to put their personal preferences aside and vote for what they think makes most sense for this use case.

To be honest, I’d like to see all of the BSDs being part of this in some way or another. But to be realistic, I think we need to start with something. This should be one or two BSD systems that project members are familiar with. Right now FreeBSD and OpenBSD seem to be the reasonable choices here. Of course we need to take into account what those systems excel in and which one to use for what.

Problems with the not-for-profit status

Like with everything, there are pros and cons regarding the aim for this to eventually become a not-for-profit.

Pro:

It ensures that nobody involved in it could ever become greedy and try to sabotage the original idea of providing money to support *BSD. It would also protect such an organization from becoming attractive for buyout by a for-profit competitor should it go well. There would be benefits regarding taxes. And I’d imagine that it gives a good feeling to the customers which could be turned into a competitive advantage.

Contra:

The price to pay is inflexibility. A not-for-profit can donate money only to other not-for-profit organizations – and very likely only in the country that it was formed in. With e.g. the FreeBSD Foundation and the OpenBSD Foundation we have two potential organizations that we might want to donate to; however, one is US-based while the other one is Canadian. A for-profit company is free to spend money however it wishes. There might be other limitations that I’m not aware of. Going with not-for-profit would require consulting lawyers upfront.

A federated model?

One interesting idea that came up on Reddit was that of possibly going with a federated model. In this case project members / supporters who own a server but don’t need all of its resources would dedicate some percentage of CPU, memory and disk space to a VM that could be used for the project. The user who suggested this envisions some sort of marketplace where people can offer those resources to people who want a VM. The person who donates the resources gets to decide what field the biggest part of the money made from this would be spent on. The customer on the other hand can also influence the direction by picking e.g. VM a) where the money goes to desktop improvements over option b) where he’d support permissively licensed alternatives.

I like this idea and think that we should discuss it. It has the obvious benefit of being able to start without the somewhat high upfront costs for hardware. Also, more people might be OK with a “donate some spare resources” approach for machines that are running anyway. The downside certainly is reliability: Donated resources might basically be withdrawn at any time. What could / should we do about that?

What’s next?

While this part 1 mostly covered the “why”, part two of the article is more about the “what” and the “how”.

FreeBSD package building pt. 5: Sophisticated Synth

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/freebsd_package_building_pt5.gmi

In the previous posts of this series, there was an introduction to package building on FreeBSD and we discussed basic Synth usage. The program’s configuration, working with the compiler cache and using the tool to update the installed applications were covered as well, and so were Synth’s web reports, serving repositories over HTTP and building package sets. We also took a brief look at the logs to find out why some ports failed.

In this article we are going to sign our package repository and explore make.conf options. We will also add an additional profile to build 32-bit packages for an obsolete FreeBSD release and automate package building with cron.

Changing the structure

So far we’ve only considered building basically one local package set (even though we’ve shared it). If we want to have a single build server manage multiple package sets, we will need a somewhat more complex directory structure to allow for separate repositories and such. Should you be building on a UFS system, it’s easy: Just create the additional directories. I’m using ZFS here, though, and need to decide whether I want to create the whole structure as several datasets or just two single datasets with custom mountpoints. As usual there are pros and cons to both. I’m going with the former here:

# rm -r /var/synth
# zfs create zroot/var/synth
# zfs create zroot/var/synth/www
# zfs create zroot/var/synth/www/log
# zfs create zroot/var/synth/www/log/13.0_amd64
# zfs create zroot/var/synth/www/packages
# zfs create zroot/var/synth/www/packages/13.0_amd64
# synth configure

Now we need to adapt the Synth configuration for the new paths:

B needs to be set to /var/synth/www/packages/13.0_amd64 and E to /var/synth/www/log/13.0_amd64. Normally I’d create a custom profile for that, but as I’m covering profiles a little later in this article, we’re going to abuse the LiveSystem default profile for now.

Next is re-configuring the webserver:

# vi /usr/local/etc/obhttpd.conf

Remove the block return directive in the location “/” block on the synth.local vhost and replace it with:

directory auto index

Then change the location to “/*”. I’m also removing the second location block. Create a new .htpasswd and bring over authentication to the main block if you want to.

# service obhttpd restart

Repository signing

To be able to use signing, we need a key pair available to Synth. Use the openssl command to create a private key, change permissions and then create a public key, too:

# openssl genrsa -out /usr/local/etc/synth/LiveSystem-private.key 2048
# chmod 0400 /usr/local/etc/synth/LiveSystem-private.key
# openssl rsa -pubout -in /usr/local/etc/synth/LiveSystem-private.key -out /usr/local/etc/synth/LiveSystem-public.key

Mind the filenames here! The LiveSystem part refers to the name of the profile we’re using. If you want to sign different repositories resulting from various profiles, make sure that you place the two key files for each of the profiles in /usr/local/etc/synth.

While you’re at it, consider either generating a self-signed TLS certificate or using Let’s Encrypt (if you own a proper domain). If you opted to use TLS, change the webserver configuration once more to have it serve both the log and the package vhosts via HTTPS. There’s an example configuration (obhttpd.conf.sample) that comes with obhttpd in case you want to take a look. It covers HTTPS vhosts.
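For the self-signed route, a single openssl invocation is enough. Here’s a sketch (the key and certificate locations as well as the one-year lifetime are arbitrary picks for this example):

# openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=synth.local" -keyout /etc/ssl/private/synth.key -out /etc/ssl/synth.crt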

Alright! Since we changed the paths, we don’t currently have any repository to sign. Let’s build a popular browser now:

# synth build www/firefox

Firefox failed in the configure phase!

Firefox failed to build. This is what the log says:

DEBUG: Executing: `/usr/local/bin/cbindgen --version`
DEBUG: /usr/local/bin/cbindgen has version 0.18.0
ERROR: cbindgen version 0.18.0 is too old. At least version 0.19.0 is required.

Please update using 'cargo install cbindgen --force' or running
'./mach bootstrap', after removing the existing executable located at
/usr/local/bin/cbindgen.

===>  Script "configure" failed unexpectedly.
Please report the problem to gecko@FreeBSD.org [maintainer] and attach the
"/construction/xports/www/firefox/work/.build/config.log" including the output
of the failure of your make command. Also, it might be a good idea to provide
an overview of all packages installed on your system (e.g. a
/usr/local/sbin/pkg-static info -g -Ea).
*** Error code 1

Stop.
make: stopped in /xports/www/firefox



--------------------------------------------------
--  Termination
--------------------------------------------------
Finished: Friday, 11 JUN 2021 at 03:03:06 UTC
Duration: 00:03:06

Oh well! We’ve hit a problem in the ports tree. Somebody updated the Firefox port in our branch to a version that requires a newer cbindgen port than is available in the same branch! Breakage like this does happen sometimes (we’re all human after all). What to do about it? In our case: Ignore it, as it’s only an example. Otherwise I’d advise you to update to a newer ports tree, as these problems are usually remedied quickly.

Synth is asking whether it should rebuild the repository. Yes, we want to do that. Then it asks if it should update the system with the newly built packages. And no, not now. Also note: The synth build command that we used here is interactive and thus not a good fit if you want to automate things:

Would you like to rebuild the local repository (Y/N)? y
Stand by, recursively scanning 1 port serially.
Scanning existing packages.
Packages validated, rebuilding local repository.
Local repository successfully rebuilt
Would you like to upgrade your system with the new packages now (Y/N)? n

What else do we have to do to sign the repository? Nothing. Synth has already done that and even changed the local repository configuration to make the package manager verify the signature:

# tail -n 3 /usr/local/etc/pkg/00_synth.conf
  signature_type: PUBKEY,
  pubkey        : /usr/local/etc/synth/LiveSystem-public.key
}

That wasn’t so hard now, was it? You might want to know that Synth also supports using a signing server instead of signing locally. If this is something you’re interested in, do a man 1 synth and read the appropriate section.

Global options with make.conf

FreeBSD has two main configuration files that affect the compilation process when using the system compiler: the more general /etc/make.conf and /etc/src.conf, which is only used when building FreeBSD from source. Since we’re talking about building ports, we can ignore the latter.

There’s a manual page, make.conf(5), which describes some of the options that can be put in there. Most of the ones covered there are only relevant for building the system. By all means leave things like CFLAGS alone if you don’t know what you’re doing! Regarding the ports tree, it’s most useful to set or unset common options globally. It’s very tedious to set all the options for your ports manually like this:

# make -C /usr/ports/sysutils/tmux config-recursive

You need to do this for specific ports that you want to change the options for. But if there’s some that you have a global policy for, it’s better to use make.conf. Let’s say we want to never include documentation and examples in our ports. This would be done by adding the following line to /etc/make.conf:

OPTIONS_UNSET+=DOCS EXAMPLES

This affects all ports, whether built by Synth or not, as well as each and every Synth profile. Let’s say we also want no foreign language support in our packages for the default Synth profile (but want to keep it in all others). In that case we’d create the file /usr/local/etc/synth/LiveSystem-make.conf and put the following in there:

OPTIONS_UNSET+=NLS

That setting will build packages without NLS in addition to building without DOCS and EXAMPLES – if “LiveSystem” is the active profile.

If you want to build all ports that support it with e.g. the DEBUG option, add another line:

OPTIONS_SET+=DEBUG

Some common options that you might want to use include (see the example snippet after the list):

  • X11
  • CUPS
  • GTK3
  • QT5
  • JAVA
  • MANPAGES
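For instance, a make.conf snippet that enables manpages everywhere while dropping X11 and CUPS support (a purely hypothetical policy for illustration) would be:

OPTIONS_SET+=MANPAGES
OPTIONS_UNSET+=X11 CUPS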

After unsetting DOCS and EXAMPLES globally as well as NLS for the default profile, we’re going to rebuild the installed packages next:

# synth prepare-system

Package rebuild done

Note that Synth only rebuilt the packages that were affected by the changed options either directly or via their dependencies. For that reason only 219 of the 344 packages actually installed were rebuilt. If we now use pkg upgrade, this is what happens (see screenshot):

pkg upgrade will free space due to removing docs, examples and nls files

Ignore the 3 packages getting updated; these stem from packages that were originally skipped due to the Rust failure. That port built successfully when we were trying to build Firefox, so our latest package run built three more packages that the system had not been updated to yet.

More interestingly: There’s 31 reinstalls. Most of them due to the package manager detecting changed options and one due to a change to a required shared library. It’s not hard to do the math and figure out that 31 is quite a bit less than 219. It’s a little less obvious that build-time dependencies count towards that greater number while they don’t appear in the count of packages that are eventually reinstalled. Still it’s true that Synth takes a “better safe than sorry” approach and tends to rebuild some packages that pkg(8) will end up not reinstalling. But this is not much of a problem, especially if you’re using ccache.

Alternative profiles

If you want to use Synth to build more than just one specific set of packages for exactly one platform, you can. One way to achieve this would be to always change Synth’s configuration. But that’s tedious and error prone. For that reason profiles exist. They allow you to have multiple different configurations available at the same time. If you’re simply running Synth from the command line like we’ve always done so far, it will use the configuration of the active profile.

To show off how powerful this is, we’re going to do something a little special: Building 32-bit packages for the no longer supported version of FreeBSD 12.1. Since amd64 CPUs are capable of running i386 programs, this does not even involve emulation. We need to create a couple of new directories first:

# mkdir /var/synth/www/packages/12.1_i386
# mkdir /var/synth/www/log/12.1_i386
# mkdir -p /var/synth/sysroot/12.1_i386

The last one is something that you might or might not be familiar with. A sysroot is a somewhat common term for – well, the root of a system. The sysroot of our running system is /. But we can put the data for other systems somewhere in our filesystem. If we put the base system of 32-bit 12.1-RELEASE into the directory created last, that’ll be a sysroot for 12.1 i386. Technically we don’t need all of the base system and could cherry-pick. It’s easier to simply use the whole thing, though:

# fetch -o /tmp/12.1-i386-base.txz http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases/i386/12.1-RELEASE/base.txz
# tar -C /var/synth/sysroot/12.1_i386 -xf /tmp/12.1-i386-base.txz
# rm /tmp/12.1-i386-base.txz
# synth configure

Alright. Now we’re going to create an additional profile. To do so, press > (the greater-than key), then choose 2 (i.e. Create new profile) and give it a name like e.g. 12.1-i386. Then change the following three settings:

B: /var/synth/www/packages/12.1_i386
E: /var/synth/www/log/12.1_i386
G: /var/synth/sysroot/12.1_i386

That’s all; after you save the configuration you’re ready to go. Create a list of packages you want to build and let Synth do its thing:

# synth just-build /var/synth/pkglist.12.1_i386
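In case you’re wondering about the format: such a list is just a plain text file with one port origin per line. Hypothetical example contents:

ports-mgmt/pkg
www/obhttpd
sysutils/tmux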

The build will fail almost immediately. Why? Let’s take a look. Building pkg(8) failed and here’s why:

--------------------------------------------------------------------------------
--  Phase: check-sanity
--------------------------------------------------------------------------------
/!\ ERROR: /!\

Ports Collection support for your FreeBSD version has ended, and no ports are
guaranteed to build on this system. Please upgrade to a supported release.

No support will be provided if you silence this message by defining
ALLOW_UNSUPPORTED_SYSTEM.

*** Error code 1

Stop.
make: stopped in /xports/ports-mgmt/pkg

Ok, since we’re trying to build for a system that’s not supported anymore, the ports infrastructure warns us about that. We have to tell it to ignore that. How do we do that? You might have guessed: By using a make.conf for the profile we’re using for this set of packages:

# echo ALLOW_UNSUPPORTED_SYSTEM=1 > /usr/local/etc/synth/12.1-i386-make.conf

Then try again to build the set – and it will just work.

Successfully built all the packages using the i386 profile

Automation & Hooks

Last but not least let’s put everything we’ve done so far together to automate building two package sets. We can make use of cron(8) to schedule the tasks. Let’s add the first one to /etc/crontab like this:

0	23	*	*	7	root	env TERM=dumb /usr/local/bin/synth just-build /var/synth/pkglist-12.1-i386

What does it do? It will run synth at 11pm every Sunday to build all of the packages defined in the package list referenced there. There are two things to note here:

  1. You need to disable curses mode in all the profiles you’re going to use. Synth still expects the TERM environment variable to be set in order to figure out the terminal’s capabilities. You can set it to dumb as done here or to xterm or other valid values. If you don’t set it at all, Synth will not run.
  2. The cron entry as we’re using it here will use the active profile for Synth. It’s better to explicitly state which profile should be used. Let’s add another line to crontab for building the amd64 packages for 13.0 on Friday night:
0	23	*	*	5	root	env TERM=dumb SYNTHPROFILE=LiveSystem /usr/local/bin/synth just-build /var/synth/pkglist-13.0-amd64

In general I’d recommend not calling synth directly from cron but writing small wrapper scripts instead. You could for example back up the current package set before actually starting the new build, or you could snapshot the dataset after a successful build and zfs send it off to another system.
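Here’s a minimal sketch of what such a wrapper could look like (the script name, the backup location and the snapshot naming are made up for this example):

#!/bin/sh
# Hypothetical cron wrapper: build the 12.1 i386 set, then snapshot the repo.
export TERM=dumb
export SYNTHPROFILE=12.1-i386

# Keep a copy of the current repository in case the new build goes wrong.
rm -rf /var/synth/packages-backup.12.1_i386
cp -a /var/synth/www/packages/12.1_i386 /var/synth/packages-backup.12.1_i386 || exit 1

/usr/local/bin/synth just-build /var/synth/pkglist-12.1-i386 || exit 1

# Snapshot the dataset holding the package repositories after a successful build.
zfs snapshot zroot/var/synth/www/packages@build-$(date +%Y%m%d)

The crontab line would then simply call this script instead of synth.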

One last thing that you should be aware of is that Synth provides hooks like hook_run_start, hook_run_end, hook_pkg_failure and so on. If you’re considering using hooks, have a look at the Synth manpage; they are covered well there.
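To give an idea of the mechanics: a hook is just an executable file that Synth runs when the respective event occurs. Here’s a sketch (the exact naming and placement conventions as well as the environment variables passed to hooks are described in the manpage; the PROFILE variable used below is an assumption based on that description):

#!/bin/sh
# Hypothetical hook_run_end script: record when a package run finishes.
echo "$(date): package run finished (profile: ${PROFILE:-unknown})" >> /var/log/synth-runs.log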

What’s next?

Next topic would be covering Poudriere. However I’m considering taking a little break from package building and writing about something else instead before returning to this topic.

FreeBSD package building pt. 4: (Slightly) Advanced Synth

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/freebsd_package_building_pt4.gmi

In the previous posts of this series, there was an introduction to package building on FreeBSD and we discussed basic Synth usage. The program’s configuration, working with the compiler cache and using the tool to update the installed applications were covered as well.

This article is about some advanced functionality of Synth like using web reports, building package sets and serving repositories as well as taking a look at logs.

Reporting

Synth comes with very useful reporting functionality. For the default LiveSystem profile, Synth writes its logs to /var/log/synth. There it also creates a subdirectory called Report and puts an advanced web report in there. It looks like this:

% ls -1 /var/log/synth/Report 
01_history.json
favicon.png
index.html
progress.css
progress.js
summary.json
synth.png

We’re going to build and set up a web server to make it accessible. I will use OpenBSD HTTPd for a reason that we’re going to talk about in a minute (besides me liking it quite a bit). Let’s use Synth to install it first. Then we’re going to enable it and create a log directory for it to use:

# synth install www/obhttpd
# sysrc obhttpd_enable=YES
# mkdir /var/log/obhttpd

OpenBSD HTTPd successfully installed by Synth

Alright. Remember how I told you not to change Synth’s directory configuration unless you have a reason for it? Now we have one: We’re going to serve both the web report and the packages over HTTP and we’re using OpenBSD HTTPd for that. That webserver is chrooted: not just by default, BTW, but by design! You cannot turn it off. Unless told otherwise, Synth has a directory structure that doesn’t fit this scenario well. So we’re going to change it.

First thing is creating a new directory for the logs and then changing the configuration so Synth uses that:

# mkdir -p /var/synth/www/log
# synth configure


Change setting E to /var/synth/www/log and save. The new directory will of course be empty. We could copy the old files over, but let’s just rebuild HTTPd instead. Using synth build or synth just-build doesn’t work, though; the tool will detect that an up-to-date package has already been built and that nothing needs to be done. That’s what the force command is handy for:

# synth force www/obhttpd

Web server setup for reports

Now that we have something to serve, we can edit the webserver’s configuration file. Simply delete everything and put something like this into /usr/local/etc/obhttpd.conf:

chroot "/var/synth/www"
logdir "/var/log/obhttpd"

server synth.local {
    listen on * port 80
    root "/log"
    log style combined
    location "/" {
        block return 302 "http://$HTTP_HOST/Report/index.html"
    }
}

This defines where the chroot begins and thus where the unprivileged webserver processes are contained. It also defines the log directory as well as a virtual host (simply called “server” in OpenBSD HTTPd) with the name “synth.local”. Either replace this with a proper domain name that fits your scheme and configure your DNS accordingly or use /etc/hosts on all machines that need to access it to define the name there.

The server part is pretty simple, too. It makes HTTPd bind on port 80 on every usable network interface. The web root is defined relative to the chroot, so in this case it points to /var/synth/www/log. I’ve grown a habit of using the more detailed combined log style; if you’re fine with the default common format, you can leave the respective line out. Finally the configuration block defines a special rule for location / which means somebody accesses the virtual host directly (i.e. http://synth.local in this case). It will make the browser be redirected to the report index instead. Getting a special file (like e.g. lang___python37.log in the log directory) will not trigger the rule and thus still work. This is just a convenience thing and if you don’t like it leave it out.

All that’s missing now is starting up the webserver:

# service obhttpd start

You should now be able to point your browser at the vhost’s name (if you made it resolve). Just using the machine’s IP address is also going to work in this case since it’s the default vhost. But better make it reachable using the configured name as we’re adding another vhost in a moment.

Synth web report for the latest package run

Authentication

But for now what about security? Let’s say you don’t want to share your report with the whole world. One easy means of protecting it is by using HTTP basic auth. OpenBSD HTTPd uses standard .htpasswd files. These can however use various cryptographic hashes for the passwords – whereas HTTPd only supports one: bcrypt.

The first time I tried to do authentication with OpenBSD HTTPd, it drove me completely nuts as I couldn’t get it working. Fortunately I own Michael W. Lucas’ excellent book “Httpd and Relayd Mastery”. After digging it out the index indicated that I might want to read page 28. I did, banged my head against the table and immediately got it working using that hashing algorithm. Don’t be like me, skip trying to use foreign tools you may be used to and just do it right in the first place. HTTPd comes with its own htpasswd binary. Use that.
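Creating the password file boils down to a single command. A sketch, assuming the htpasswd utility installed along with obhttpd follows OpenBSD’s usage of file first, then login (it will prompt for the password):

# htpasswd /var/synth/www/.htpasswd synth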

In this example I’m adding a user called “synth”. Use whatever you prefer. Then give the password two times. This leaves you with a valid .htpasswd file that HTTPd could use – if it was allowed to access it! Let’s fix that problem:

# chown root:wheel /var/synth/www/.htpasswd
# chmod 0640 /var/synth/www/.htpasswd

Having the authentication information in place, we only need to add another location block to the webserver’s vhost configuration. Put the following in there after the line that closes the previous location block:

    location "/Report/* {
        authenticate with "/.htpasswd"
    }

Note the htpasswd file’s location! It’s within the chroot (or it couldn’t be accessed by the webserver), but outside the webroot directory. So HTTPd could never accidentally serve it to somebody who knew that it was there and requested the file.

The only thing that remains is restarting the webserver. Next time you visit the report page, you’ll be asked to authenticate first.

# service obhttpd restart

Package repository

So far all of our packages have been created in a directory outside of the webserver’s chroot. If we want to make them available via HTTP, we need to use another path for them. Therefore we’re going to create a directory and reconfigure Synth again:

# mkdir -p /var/synth/www/packages
# synth configure

This time it’s setting B. Change it to /var/synth/www/packages and save. Now let’s build a package that draws in a couple of dependencies:

# synth just-build chocolate-doom

We can watch it now via the web reports while it’s building. Since it’s a new directory where no packages exist yet, Synth is first going to build the package manager again. During this early stage no report is available, but once that’s finished the reports work.

While we’re rebuilding all packages due to the new package directory, Synth can take advantage of ccache as we haven’t modified its path. Wondering how much of a difference that actually makes? Building llvm10 on its own, one time using the cache and one time (for testing purposes) without it, shows the difference:

Duration: 00:13:32 (with ccache)
Duration: 02:09:37 (uncached)

Synth web report while it’s building

It gives us all the information that the curses UI holds – and more. The number of entries for completed packages can be changed. You can browse those page-wise back to the very first packages. It’s possible to use filters to e.g. just list skipped or failed packages. You can view / download (whatever your browser does) the log files for all those packages. And there’s even a search (which can be very handy if you’re building a large package set).

Report with only 10 entries per page

As long as packages are being built, the report page also shows the builder information and automatically refreshes the page every couple of seconds. Once it completes, it removes builder info (which would only waste space) and stops the polling. You can always come back later and inspect everything about the latest package run. The next one will overwrite the previous information, though.

Synth report search function

Now that we have a bunch of newly built packages, let’s see what that looks like:

# ls -1 /var/synth/www/packages/All
autoconf-2.69_3.txz
autoconf-wrapper-20131203.txz
automake-1.16.3.txz
binutils-2.33.1_4,1.txz
bison-3.7.5,1.txz
ca_root_nss-3.63.txz
ccache-3.7.1_1.txz
celt-0.11.3_3.txz
chocolate-doom-3.0.1.txz
cmake-3.19.6.txz
curl-7.76.0.txz
db5-5.3.28_7.txz
docbook-1.5.txz
docbook-sgml-4.5_1.txz
docbook-xml-5.0_3.txz
docbook-xsl-1.79.1_1,1.txz
doom-data-1.0_1.txz
evdev-proto-5.8.txz
expat-2.2.10.txz
flac-1.3.3_1.txz
[...]

Showing only ignored packages in the report (none in this case)

The packages are there. But what’s in the main directory?

# ls -l /var/synth/www/packages
total 18
drwxr-xr-x  2 root  wheel  150 Jun  7 23:57 All
drwxr-xr-x  2 root  wheel    3 Jun  7 23:21 Latest

This is not a valid pkg(8) repository. Which is no wonder since we used just-build. So we’re going to have Synth create an actual repository from these packages next:

Searching in the report after the build was completed

# synth rebuild-repository
# ls -l /var/synth/www/packages
total 117
drwxr-xr-x  2 root  wheel    150 Jun  7 23:57 All
drwxr-xr-x  2 root  wheel      3 Jun  7 23:21 Latest
-rw-r--r--  1 root  wheel    163 Jun  8 00:02 meta.conf
-rw-r--r--  1 root  wheel    236 Jun  8 00:02 meta.txz
-rw-r--r--  1 root  wheel  40824 Jun  8 00:02 packagesite.txz

Here we go, that’s all that pkg(8) needs. Synth should have automatically updated your repository configuration to use the new location. Have a look at /usr/local/etc/pkg/repos/00_synth.conf – the URL should point to the new directory.
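On this machine the relevant line now looks like this (illustrative, with the spacing these files usually have):

  url      : file:///var/synth/www/packages,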

Serving the repository

The next step is to make the repository available in the network, too. So edit /usr/local/etc/obhttpd.conf once more and add another “server” (i.e. vhost):

server pkg.local {
    listen on * port 80
    root "/packages"
    log style combined
    location "/*" {
        directory auto index
    }
}

One service restart later you should be able to access the repository via a web browser from any machine in the same subnet (if you got your DNS right):

# service obhttpd restart

Looking at the package repository with a browser

This is already it, but let’s prove that it works, too. I’m adding the “pkg.local” name to the machine’s 127.0.0.1 definition in /etc/hosts.
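The resulting line could look like this (illustrative):

127.0.0.1    localhost pkg.local

Then I change the URL in the Synth repository configuration to fetch packages via HTTP: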

  url      : http://pkg.local,

I’ve also created a FreeBSD.conf to disable the official repository.
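Such a file goes into the same repos directory and only needs to override the repository’s enabled flag. A minimal sketch of /usr/local/etc/pkg/repos/FreeBSD.conf:

FreeBSD: {
  enabled: no
}

Let’s stop the webserver for a second and then try to update the package DB: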

# service obhttpd stop
# pkg update
Updating Synth repository catalogue...
pkg: Repository Synth has a wrong packagesite, need to re-create database
pkg: http://pkg.local/meta.txz: Connection refused
Unable to update repository Synth
Error updating repositories!

Ok, so there’s no other repository configured anymore and this one is not accessed via the filesystem. So we’re going to start the webserver once more (give it a sec) and then try again:

# service obhttpd start
# pkg update
Updating Synth repository catalogue...
Fetching meta.conf: 100%    163 B  0.2kB/s    00:01
Fetching packagesite.txz: 100%   40 KiB  40.8kB/s   00:01
Processing entries: 100%
Synth repository update completed. 148 packages processed.
All repositories are up to date.

Great! So now we can install DooM on this box or on any other machine running FreeBSD 13.0 which can reach it over the network.

Package sets

So far we’ve only either built all packages for what was already installed on the machine or for single ports that we selected at the command line. But now that we can serve packages over the network, it’s rather tempting to use a powerful build machine to build packages for various other FreeBSD machines, isn’t it? Let’s assume that you’re going to share packages with a KDE lover.

First we should prepare a list of packages that we need, starting with what is installed on our machine.

# pkg info | wc -l
345

Wow, that’s already quite some packages for such a pretty naked system! But we don’t need to consider them all as most of them are just dependencies. Let’s ask pkg(8) for the origin of all packages we explicitly installed (i.e. which were not recorded as automatically installed):

# pkg query -e %a=0 %o > /var/synth/pkglist
# cat /var/synth/pkglist
x11-wm/awesome
devel/ccache
graphics/drm-kmod
devel/git
www/obhttpd
ports-mgmt/pkg
ports-mgmt/portmaster
x11/sakura
x11/setxkbmap
security/sudo
ports-mgmt/synth
sysutils/tmux
x11/xfce4-screenshooter-plugin
x11/xorg-minimal

That’s better! But we certainly don’t need portmaster anymore, so we can take it off the list (and actually deinstall it). Let’s add www/firefox and x11/kde5 for our pal (and sort the list since it’s a bit ugly right now); the result is shown below.
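Here’s the edited list (an illustrative reconstruction):

devel/ccache
devel/git
graphics/drm-kmod
ports-mgmt/pkg
ports-mgmt/synth
security/sudo
sysutils/tmux
www/firefox
www/obhttpd
x11-wm/awesome
x11/kde5
x11/sakura
x11/setxkbmap
x11/xfce4-screenshooter-plugin
x11/xorg-minimal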

Once that’s done, we should be able to do a simple:

# synth build /var/synth/pkglist
Error: port origin 'devel/git' not recognized.
Perhaps you intended one of the following flavors?
   - devel/git@default
   - devel/git@gui
   - devel/git@lite
   - devel/git@svn
   - devel/git@tiny

Oh yes, right! We need to edit our list and specify the flavor to build! I’m going with the lite variant here, so the git line needs to be changed to this:

devel/git@lite

Then we can try again – and yes, it starts building after calculating the required dependencies.

Logs

Whoopsie! After about an hour stupid me removed the network cable for a moment. This has caused a couple of build failures (see screenshot). The report will display the phase that the build failed in. In this case it’s the fetch phase (and we don’t have to look for the reason as we already know it). Sometimes a distfile mirror is temporarily down or the distfile has been removed. In that case you will have to manually get the files and put them into the distfiles directory. Skipped ports also display the reason, i.e. which dependency failed previously.

Failed and skipped ports due to a connection problem

I better re-attach that cable right away and start the building over… Many hours later it has finished. But what’s this? Rust has failed again (and this time it wasn’t me)! And it failed at the stage phase. When this happens it’s usually because a broken port got committed. Update your ports tree and hope that it has been fixed in the meantime. This is not the reason in our case, however.

Another phase, another failure!

But how do we find out what actually happened? Well, by looking at the logs, of course. Here’s the last 15 lines of the relevant log:

        finished in 183.141 seconds
  < Docs { host: TargetSelection { triple: "x86_64-unknown-freebsd", file: None } }
Install docs stage2 (Some(TargetSelection { triple: "x86_64-unknown-freebsd", file: None }))
running: "sh" "/construction/xports/lang/rust/work/rustc-1.51.0-src/build/tmp/tarball/rust-docs/x86_64-unknown-freebsd/rust-docs-1.51.0-x86_64-unknown-freebsd/install.sh" "--prefix=/construction/xports/lang/rust/work/stage/usr/local" "--sysconfdir=/construction/xports/lang/rust/work/stage/usr/local/etc" "--datadir=/construction/xports/lang/rust/work/stage/usr/local/share" "--docdir=/construction/xports/lang/rust/work/stage/usr/local/share/doc/rust" "--bindir=/construction/xports/lang/rust/work/stage/usr/local/bin" "--libdir=/construction/xports/lang/rust/work/stage/usr/local/lib" "--mandir=/construction/xports/lang/rust/work/stage/usr/local/share/man" "--disable-ldconfig"
install: creating uninstall script at /construction/xports/lang/rust/work/stage/usr/local/lib/rustlib/uninstall.sh
install: installing component 'rust-docs'
###  Watchdog killed runaway process!  (no activity for 78 minutes)  ###



--------------------------------------------------
--  Termination
--------------------------------------------------
Finished: Wednesday, 9 JUN 2021 at 00:04:47 UTC
Duration: 04:27:03

Ha! The build process was killed by the watchdog! Bad doggy? It does happen that the process would eventually have finished. Not this time. We have to dig a little deeper. In /var/log/messages of the build machine I can find the messages kernel: swap_pager: out of swap space and kernel: swp_pager_getswapspace(4): failed. This machine has 24 GB of RAM and 8 GB of swap space configured. And by building 6 huge ports concurrently, it exceeded these resources! Keep in mind that package building can be quite demanding, especially if you use tmpfs (which you should if you can).

So, there we are. We’ve configured our build server for web reports and serving the repository. We’ve looked at building package sets and covered a few examples of what can go wrong. And that’s it for today.

What’s next?

The last article about Synth will cover make.conf, signing repositories and using cron for automated builds. We’ll also take a brief look at profiles.

FreeBSD package building pt. 3: Intermediate Synth

[New to Gemini? Have a look at my Gemini FAQ.]

This article was bi-posted to Gemini and the Web; Gemini version is here: gemini://gemini.circumlunar.space/users/kraileth/neunix/2021/freebsd_package_building_pt3.gmi

In this article we’ll continue exploring Synth. After covering a general introduction to package building as well as Synth’s basic operation in the previous articles, we’re going to look at the program’s configuration and using it for updating the system.

The first thing to do here is to get rid of the old ports tree and replace it with a newer one so that some updates become available:

# zfs destroy zroot/usr/ports
# zfs create zroot/usr/ports
# git clone --depth 1 -b 2021Q2 https://git.freebsd.org/ports.git /usr/ports
# mkdir /usr/ports/distfiles
# synth status

Synth regenerates the flavor index again, then gives the output. This time it will not just show all packages as “new” but most are either updates or rebuilds.

Synth status with new ports tree

Configuration

Let’s look at how you can change Synth’s behavior next:

# synth configure

Synth’s configuration is pretty straight-forward. Options A to G configure the various directories that the program uses. Should you change anything here? If you know what you are doing and have special needs: Maybe. If you actually ask the question whether to change any such directory, the answer is no. Option H (Compiler cache) is a special one. We’ll get to that in a minute.

With I you set the number of builders that Synth uses. J means how many threads each builder may use; think make -j $SOMENUMBER. When compiling a single piece of software, it’s usually recommended to set the number of jobs equal to the machine’s core count + 1. Take a look at the screenshot above: Doing the math, we’re configuring for 24 cores here – on a system that has 8 (with Hyper-Threading).

Synth’s configuration menu

Why does Synth choose to over-provision the resources so much? The reason is simple: It’s only over-provisioned when more than three builders are active at the same time. Often enough not all builders will be at the build stage (where the compiling happens) at the same time. Most other stages are much lighter on the CPU – which would mean that you’re wasting available resources (and thus prolonging the total build time). Also, in the previous post you’ve seen that LLVM and Rust took hours to build (all four remaining builders were idle most of the time!). If the cap had been lower, build times would have increased even more.

So what’s the best setting for builder count and max jobs? There’s no single answer to that. It depends on both your machine and on the set of ports that you build. Play with both values a bit and get a feeling for what works best for you. Or leave them at the defaults that Synth calculated for your system, which will probably work well enough.

Options K and L are real speed boosters. They control whether the base system for the builder (“localbase”) and the directory where the program is built use tmpfs or not. Tmpfs is a memory-backed filesystem. If you disable one or both options, compilation times increase a lot, because all the data that needs to be copied when setting up a builder and compiling software will be written to your disk. For an extreme example: On one of my machines, building and testing a port went from slightly over 30 seconds with tmpfs to over 4 minutes (!) without it. Yes, that machine uses an HDD and it was occupied with other things besides building packages. But there is more than a negligible impact if you disable tmpfs.

So when would you disable it in the first place? If you’re on a machine with little RAM you might have to. Some ports like LLVM or Firefox require lots of RAM to build. If your system starts swapping heavily, disable tmpfs. Ideally build those ports separately and leave tmpfs on for all the rest.

Then there’s option M which toggles the fancy colored text UI on or off. If you turn it off, you’ll get a simple monochrome log-like output of which builder started or finished building which port. Every now and then the info that’s in the top bar of the UI (e.g. number of packages completed, number of packages remaining, skips, etc) gets printed.

Finally we have option N which toggles using pre-built packages on or off. It’s off by default which means that Synth will build everything required for the selected package set. If you enable this option and have e.g. the official FreeBSD repository enabled as well, it will try to fetch suitable packages from there that can be used as the buildtime or runtime dependencies of the packages that still need to be built. If you’re mixing official and custom packages this could be a huge time saver. I admit that I have never used this option.

And what’s this profile thing? Well, OK. The configuration we’ve been looking at is for the default profile. You can create additional ones if you want to. And that’s where the directories that I told you to ignore come into play. You could for example create a profile to use a different ports tree (e.g. the quarterly branch) and switch between the profiles to build two different package sets on one machine. While Synth can do this, that is the point where I’d advise you to try out Poudriere instead. The main benefit of Synth over Poudriere is ease of use, and when you’re trying to do clearly advanced things with Synth you might as well go for the officially supported FreeBSD package builder instead.

Compiler cache

Let’s disable the curses UI just to see what Synth looks like without it and save the config change. If you plan to build packages regularly on your system, you will definitely want to set up the compiler cache. To be able to use it, we first need another package installed, though: ccache. We’re going to build it and then install it manually this time:

# synth just-build devel/ccache
# pkg add /var/synth/live_packages/All/ccache-3.7.1_1.txz

Then we’re going to create a directory for it and go back to Synth’s configuration menu:

# mkdir -p /var/tmp/ccache/synth
# synth configure

Now change H to point to the newly created directory and save. Synth will use ccache now. But what does it do? Ccache caches the results of compilation processes. It also detects if the same compilation is happening again and can provide the cached result instead of actually compiling once more. This means that the first time you compile something it doesn’t make a difference, but after that the duration of building packages will drop significantly. The only cost is that the cached results take up a bit of space. Keep ccache disabled if drive space is your primary concern. In all other cases definitely turn it on!
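If you want to check later whether the cache is actually being used, ccache can print its statistics. Since we’re not using ccache’s default cache directory, point CCACHE_DIR at the one configured above:

# env CCACHE_DIR=/var/tmp/ccache/synth ccache -s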

Ccache directory and config after the system upgrade

Updating the system

Next is using Synth to update the system.

# synth upgrade-system

After determining which packages need to be built / rebuilt, Synth will start doing so. We’ve turned off the curses UI, so now we get only the pretty simplistic output of the standard text mode.

Package building in pure text mode

It provides you with the most important information about builders: Which builder started / finished building which package when. You don’t get the nice additional info about which state it’s in and how long it has been busy building so far. As mentioned above, Synth will print a status line every couple of minutes which holds the most important information. But that’s all.

Status lines amidst the builder output

What happens if something goes wrong? I simulated that by simply killing one of the processes associated with the rust builder. Synth reports failure to build rust and prints a status line that shows 17 package skips. In contrast to the curses UI it does not tell you explicitly which ones were skipped, though!

Simulated package build failure

When Synth is done building packages, it displays the tally as usual and does some repository cleanup by removing the old packages. Then it rebuilds the repository.

Tally displayed after completion and repository cleanup

Since we asked Synth to upgrade the system, it invokes pkg(8) to do its thing once the repository rebuild is complete.

Package repository rebuilt, starting system upgrade

And here’s why I strongly prefer prepare-system over upgrade-system: The upgrade is initiated whether there were failed packages or not. And since pkg(8) knows no mercy on currently installed programs when they block upgrades, it will happily remove them by default! To be safe it makes sense to always review what pkg(8) would do before actually letting it do it. Yes, it’s an additional step. Yes, most of the time you’ll be fine with letting Synth handle things. But it might also bite you. You’ve been warned.
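The safer routine is therefore to only build with Synth and then drive the upgrade yourself; pkg(8) displays its complete plan and asks for confirmation before touching anything:

# synth prepare-system
# pkg upgrade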

System upgrade in progress – it may remove packages!

Every now and then you may want to run synth purge-distfiles (unless you have unlimited storage capacity, of course). It will make the tool scan the distinfo files of all ports and then look for distfile archives of obsolete program versions to remove.

Cleaning up old distfiles with purge-distfiles

There might not be a large gain in our example case, but things do add up. I’ve had Synth reclaim multiple GB on a desktop machine that I regularly upgraded by building custom packages. And that’s definitely worth it.

What’s next?

The next article will cover some leftover topics like logs, web report and repository sharing.