03 Feb 2020, 11:30

How to do Pull-ups to pkgsrc-stable

I am part of the pkgsrc releng (release engineering) team. My main task there is handling pull-ups into the stable branch.

pkgsrc creates a stable branch every three months and names it after the respective quarter – for example, the last branch was called 2019Q4. Pull-ups are tickets asking to “pull up” one or more commits from the development branch into the stable branch. Typical justifications for pull-ups are:

  • security updates
  • build fixes
  • important bug fixes (such as when the package crashes on startup)

In addition, sometimes we pull up updates to packages if they are “leaves” and stop working without regular updates. Some web scrapers, for example, need regular updates to keep up with changes in the sites they scrape.

Pull-up tickets can only be sent by pkgsrc developers, to a special mail alias. Unfortunately, the mails come in all kinds of formats, which makes things harder for me and the others on the team.

There is a Python script, in an internal repository, that we use for pull-ups to pkgsrc.

The ideal input is just the commit messages, concatenated, up to and including the blurb about diffs not being public domain. Links to https://mail-index.netbsd.org/ are okay too, but I have to copy/paste the corresponding messages manually. Patches usually mean even more manual work, because they typically do not contain an appropriate commit message.

When there have been intermediate commits, most of the time they need to be pulled up too. For example, if the stable branch is at version 1.0 and you want version 1.2 pulled up, you typically need to add the 1.1 update commit, for whatever patch or PLIST changes.

If an intermediate commit is a revbump touching a million packages, it is probably better to leave that one out; the resulting PKGREVISION merge conflicts are almost trivial to fix.
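
To make the mechanics concrete: a pull-up boils down to a cvs update -j merge in a checkout of the stable branch. Here is a rough sketch, with an invented package and invented revision numbers:

cd pkgsrc/www/example-package            # invented package
cvs update -r pkgsrc-2019Q4 .            # make sure the checkout is on the stable branch
cvs update -j 1.41 -j 1.42 Makefile      # merge the development commit onto the branch
# a conflict here is usually just the PKGREVISION line, resolved by hand before committing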

Finally, if you are interested, there is a public web interface tracking the status of pkgsrc pull-ups at http://releng.netbsd.org/cgi-bin/req-pkgsrc.cgi.

15 Jul 2019, 20:44

A Tale of Two Spellcheckers

This is a transcript of the talk I gave at pkgsrcCon 2019 in Cambridge, UK. It is about spellcheckers, but there are much more general software engineering lessons that we can learn from this case study.

The reason I got into this subject at all was my paternity leave last year, when I finally had some more time to spend working on pkgsrc. It was a tiny item in the enormous TODO file at the top of the source tree (“update enchant to version 2.2”) that made me go down this rabbit hole.

A short history of spellchecking

spell

The oldest spellchecker, spell, appeared in Version 6 AT&T UNIX, but it was actually written before that, in 1975. The great Doug McIlroy (who is also the inventor of the concept of a “pipe”, by the way) worked on spell, added it to UNIX and wrote a 1982 paper about it. Today, NetBSD still contains a version of spell(1) in the base system.

To say that spell is not user-friendly is an understatement. You give it a text (or troff!) file to check, and it outputs a list of all the misspelled words on stdout. It supports both kinds of languages, British and American: in American mode, it flags all British spellings as incorrect, and vice versa. In British mode, that includes insisting that verbs ending in -ize be written with an -ise ending, which is highly questionable from a linguistic point of view too.
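
For a feeling of the interface, or lack thereof, here are typical invocations (whether your spell(1) has a -b flag for British mode depends on the implementation):

spell draft.txt        # prints each misspelled word on stdout, one per line
spell -b draft.txt     # British mode, where supported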

ispell and aspell

Next came a program called ispell, which stands for “interactive spell”. Its main innovation was interactive operation: it stops at each misspelled word and presents you with suggestions for what you might have meant. You choose the correct spelling, and ispell replaces the word in the text. It supports different languages, and there is a comprehensive set of dictionaries.

aspell (advanced spell?) set out to replace ispell as the standard spellchecker. Its main distinctive feature was that its suggestions are far better than the ones that ispell provides (“even better than Word 97!”, its documentation claims). It also understands encodings, including UTF-8, which is a big deal for most languages.

Both ispell and aspell are in active use today. Their dictionary formats are different.
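
Typical invocations look something like this; the exact dictionary names depend on what is installed:

ispell -d american notes.txt    # interactive: stops at each misspelling and offers suggestions
aspell check notes.txt          # aspell's interactive mode
aspell list < notes.txt         # non-interactive: just print the misspelled words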

A digression on agglutinating languages

Imagine that you would like to spellcheck a text written in Finnish. The problem with writing a dictionary for Finnish is the near-infinite number of words it would need to contain. Finnish is an agglutinating language, which means that you can stick words together without a space. As an example, consider the word “kasvihuoneilmiö”, which is composed of the individual words for “plant”, “room” and “phenomenon” and means “greenhouse effect”. Such composites do not obey vowel harmony rules (indeed, this is one way to see where the separation is). In addition, there are about 15 different cases for nouns, since much of what other languages express with prepositions is expressed by a case (that is, a suffix). Some cases use the strong stem of the word, some the weak stem. Et cetera.

So what do you do if you would like to keep the dictionary as small as possible? Easy: you take a team of linguists and have them carefully model all the rules of word construction as a library! Such a thing exists in fact for Finnish (voikko), for Turkish (zemberek) and for Hungarian (hunspell).

Hunspell is particularly interesting: while it does contain special word formation rules for Hungarian, it is also an excellent spellchecker for other languages. It can use aspell dictionaries but is faster and gives even better suggestions, apparently. So using hunspell is a fairly popular choice among users, no matter what language.
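
Since hunspell keeps the ispell-style command-line interface, trying it out is easy (dictionary availability varies by system):

echo "recieve" | hunspell -d en_US -l    # list mode: prints the misspelled words from stdin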

Needs more abstraction

The upshot of the previous sections is:

  • multiple spellcheckers are in active use;
  • users would like to be able to choose which one to use;
  • that choice may depend on the language of the document.

So what should you do as an application developer? You need an abstraction library over all these different programs. Such a library exists, and it is called Enchant.

Enchant gives you a uniform interface over all these spellcheckers (including the system one on macOS, and a few more), handles the user's choice of backend, and manages the user dictionaries.
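
Enchant 2 also ships small command-line front-ends, which are handy for checking which backend ends up handling which language. If I remember the binary names correctly, a quick session looks like this:

enchant-lsmod-2                            # list the available backends and their dictionaries
echo "helo wrold" | enchant-2 -d en_US -a  # ispell-compatible pipe mode through the configured backend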

The messy Enchant 2 transition

Enchant 2.0.0 was released in August 2017. Its release notes contain this (emphasis mine):

The major version number has been incremented owing to API/ABI changes, but in practice upgrading from 1.6.x should be easy.

Previously-deprecated APIs have been removed.

The little-used enchant_broker_get/set_param calls have been removed.

Some trivial API changes have been made to fix otherwise-unavoidable compilation warnings both in libenchant and in application code. This is strictly an API change (although the ABI may not actually have changed, depending on the platform).

So there is a new major release, and it is incompatible with the previous one in both API and ABI. In a surprising development, uptake of the new version was extremely low. For someone to adopt the new version, they have to replace the old one with it, at which point all programs that use Enchant stop working until they are fixed. Thus, the developers had created a chicken-and-egg problem.

They spent the next couple of releases re-adding bits that had been removed and declared that the new release was now API-compatible with 1.x, “except for some really deprecated calls”. It just so happened that many Enchant-using programs actually used those calls, since they were more convenient than their non-deprecated replacements!

Then, in November 2017, Enchant 2.1.3 had this in its release notes (again, emphasis mine):

This release adds support for parallel installation with other major versions of Enchant, and fixes a crash in the Voikko provider when it has no supported languages.

2.2.0 fixed parallel installation fully. You can now install Enchant versions 1 and 2 in parallel, since they go into different subdirectories and have different pkg-config files (enchant.pc vs. enchant-2.pc).
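
In practice, this means that a build system has to ask pkg-config for the right module explicitly:

pkg-config --cflags --libs enchant      # Enchant 1.x (enchant.pc)
pkg-config --cflags --libs enchant-2    # Enchant 2.x (enchant-2.pc)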

Adoption by other software is still really low: no one checks for the enchant-2 package, and for an application developer, there has never been a compelling reason to use version 2 rather than 1.

Enchant in pkgsrc

Back to pkgsrc. I tried to make Enchant 2 the only Enchant version in our tree.

And failed.

As stated above, almost no software has explicit support for checking enchant-2.pc, so I resorted to a trick to avoid patching all those configure scripts. textproc/enchant2/buildlink3.mk has this bit:

# Lots of older software looks for enchant.pc instead of enchant-2.pc.
${BUILDLINK_DIR}/lib/pkgconfig/enchant.pc:
        ${MKDIR} ${BUILDLINK_DIR}/lib/pkgconfig
        cd ${BUILDLINK_DIR}/lib/pkgconfig && ${LN} -sf enchant-2.pc enchant.pc

buildlink-enchant2-cookie: ${BUILDLINK_DIR}/lib/pkgconfig/enchant.pc

What this does is symlink enchant-2.pc to enchant.pc within the buildlink tree that is created for a single package build. We can do that because no enchant1 files are present in that tree.

But what broke the whole thing was PHP. Of course.

php-enchant supports only enchant1. Worse, it translates the entire API, including those deprecated bits, to PHP. So there is no way to make it use the newer version: if the calls that Enchant 2 no longer provides were dropped from the bindings, software using php-enchant might break at runtime, which is not acceptable for web applications.

So this is where I am stuck.

General advice for library authors

In lieu of a conclusion, I would like to offer some general advice if you are the author or maintainer of a library.

The most important is this: An incompatible V2 of a library is like a new product.

Importantly, this means that if you stop maintaining V1 the moment you release V2, it is as if you had abandoned your library and created a new one.

Think of other projects that depend on you as customers. Think about migration paths. Think about the cost-benefit ratio of an upgrade by your customers.

Consider sending pull requests to your customers! If you look at pkgsrc, Debian, etc., it is easy to see which other projects depend on your library. Many of them are on GitHub. All of them probably have a way of accepting patches. Send them a patch to upgrade the dependency. Do the work for them.

Otherwise, you are developing for no one.

02 Jul 2019, 19:29

pkgsrccon 2019: Talk Announcement

In a few weeks, on the weekend of July 13 and 14, the annual pkgsrc conference, pkgsrcCon 2019, will take place in Cambridge, UK. Whether you are a user or developer of pkgsrc, this is a really nice place to meet the developers and spend some time hacking together and listening to talks.

My talk this year was originally supposed to be about Go module support in pkgsrc, but that work did not get done in time. So instead, I will talk about something entirely different:

A Tale of Two Spellcheckers

This talk is about the obscure and intricate world of spell checkers and how they are packaged in pkgsrc, the NetBSD package collection.

There are many general-purpose spell checkers in existence. ispell and aspell are the most famous ones, but hunspell, originally written for Hungarian, has become the spell checker of choice. In addition, there are a number of specialized checkers tailored to the idiosyncrasies of a single language, e.g. voikko (Finnish) and zemberek (Turkish).

There is a separate library, called Enchant, that aims to abstract away the spell checker implementation from the application code. Enchant went through a botched transition of major versions, from V1 to V2. To this day, most apps only support V1. We’ll talk about general lessons from this case.

30 Apr 2019, 20:19

Supporting Go Modules in pkgsrc (Part 2)

This announcement dropped today:

I realized that this is the missing piece for supporting Go modules in pkgsrc. If you go back and reread the “fetch” section in Supporting Go Modules in pkgsrc, it seems a bit awkward compared to a standard fetch action. The reason is that go mod download re-packs the source into its own zip format archive.

The module proxy (https://proxy.golang.org/) solves this problem and enables a simple solution for modules, very similar to lang/rust/cargo.mk. Basically, we need a target similar to show-cargo-depends that outputs a Makefile fragment containing the names of the modules that the current package depends upon. All of these become distfiles fetched from a hypothetical $MASTER_SITES_GOPROXY. Crucially, this means that the distfiles do not have to be stored in a LOCAL_PORTS subdirectory but can use the normal fetch infrastructure.
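
To illustrate the shape of the URLs involved: every module version is served by the proxy as a set of plain files, so it can be fetched like any other distfile. The module and version below are just the termbox-go example from the original proposal:

curl -O https://proxy.golang.org/github.com/nsf/termbox-go/@v/v0.0.0-20180613055208-5c94acc5e6eb.zip
curl -O https://proxy.golang.org/github.com/nsf/termbox-go/@v/v0.0.0-20180613055208-5c94acc5e6eb.mod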

Now all that remains is implementing this :) There is some more time to do that: Go 1.13 (to be released some time in summer) will use module support by default. What’s more, a bunch of new software (including the various golang.org/x/* repositories) has go.mod files these days, using module-based builds by default.

04 Feb 2019, 17:15

Pkgsrc Buildbots

After talking to Sijmen Mulder on IRC (thanks, TGV Wi-Fi!), I began thinking more about how you could automate the pkgsrc release engineers away.

The basic idea for a buildbot would be this:

  1. Download and unpack latest pkgsrc.tar.gz for the stable branch.
  2. Run the pullup script with the ticket number, then run whatever pullup script it outputs.
  3. Figure out the package that this concerns (perhaps from filenames).
  4. Go to the package in question, install its dependencies from binary packages.
  5. Build (make package is probably enough, or perhaps also install?).
  6. Upload build log to Cloud Storage.
  7. Post an email to the pullup thread with status and a link to the log.

For extra points, do this in a fresh, ephemeral VM, triggered by an incoming mail.
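
A very rough shell sketch of steps 1 through 6; the pull-up tool invocation, the tarball URL and the bucket name are all made up:

#!/bin/sh
# hypothetical buildbot run for a single pull-up ticket
set -eu
ticket=$1

# 1. download and unpack the latest stable-branch tarball (URL placeholder)
ftp -o pkgsrc.tar.gz "$PKGSRC_STABLE_TARBALL_URL"
tar -xzf pkgsrc.tar.gz
cd pkgsrc

# 2. generate and run the pull-up commands for this ticket
pullup-tool "$ticket" > pullup.sh
sh pullup.sh

# 3.-5. find the affected package, install its dependencies, build it
pkgpath=$(pullup-tool --print-pkgpath "$ticket")   # or derive it from the filenames in the ticket
cd "$pkgpath"
make depends                                       # ideally satisfied from binary packages
make package > "/tmp/pullup-$ticket.log" 2>&1

# 6. upload the build log
gsutil cp "/tmp/pullup-$ticket.log" gs://pullup-build-logs/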

You would also need a buildbot supervisor that receives mails (to know that it should build something) and that launches the VM. I know that Google App Engine could do it, as it can receive emails. But maybe Cloud Functions would be the way to go?

In any case, this would be a cool project for someone, maybe myself :)

Issues with Pull-up Ticket Tracking

This project is largely orthogonal to improvements in the pullup script. Right now, there are a number of issues with it that make it require manual intervention in many cases:

  • The tracker (req) doesn’t do MIME, so sometimes mails are encoded with base64 or quoted-printable. This breaks parsing the commit mails.
  • Sometimes, submitters of tickets insert mail-index.netbsd.org URLs instead of copies of the message.
  • Some pullup tickets include a patch instead of, or in addition to, a list of commits. For instance, this may happen when backporting a fix to an older release instead of pulling up a bigger update.
  • Sometimes, commit messages are truncated, or there are merge conflicts. This mostly happens when there has been a revbump before the change that is to be committed – in the majority of cases, the merge conflicts only concern PKGREVISION lines.

I am wondering how much we could gain, e.g. in terms of MIME support, from changing the request tracking software. admins@ uses RT, which has more features. Perhaps that could be brought to pullup tickets?

29 Dec 2018, 13:12

Supporting Go Modules in pkgsrc, a Proposal

Go 1.11 introduced a new way of building Go code that no longer needs a GOPATH at all. In due course, this will become the default way of building. What’s more, sooner or later, we are going to want to package software that only builds with modules.

There should be some package-settable variable that controls whether you want to use modules or not. If you are going to use modules, then the repo should have a go.mod file. Otherwise (e.g. if there is a dep file or something), the build could start by doing go mod init (which needs to be after make extract).

fetch

There can be two implementations of the fetch phase:

  1. Run go mod download.

    It should download required packages into a cache directory, $GOPATH/pkg/mod/cache/download. Then, I propose tarring up the whole tree into a single .tar.gz and putting that into the distfile directory for make checksum. Alternatively, we could have the individual files from the cache as “distfiles”. Note however (see below) that the filenames alone do not contain the module name, so there will be tons of files named v1.0.zip and so on.

  2. “Regular fetch”

    Download the .tar.gz (or the set of individual files) created in (1) from the LOCAL_PORTS directory on ftp.n.o, as usual.
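
A rough sketch of method (1), run from the extracted source; the variable names are the usual pkgsrc ones and the distfile name is a placeholder:

cd "$WRKSRC"                                   # the extracted source containing go.mod
GOPATH=$PWD/.gopath go mod download            # fills .gopath/pkg/mod/cache/download
tar -czf "$DISTDIR/example-1.0-modules.tar.gz" -C .gopath/pkg/mod/cache download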

The files that go mod download creates are different from any of the ones that upstream provides. Notably, the zip files are based on a VCS checkout followed by re-zipping. Here is an example for the piece of a cache tree corresponding to a single dependency (ignore the lock files):

./github.com/nsf/termbox-go/@v:
list
list.lock
v0.0.0-20180613055208-5c94acc5e6eb.info
v0.0.0-20180613055208-5c94acc5e6eb.lock
v0.0.0-20180613055208-5c94acc5e6eb.mod
v0.0.0-20180613055208-5c94acc5e6eb.zip
v0.0.0-20180613055208-5c94acc5e6eb.ziphash

As an additional complication, (1) needs to run after “make extract”, since go mod download needs the extracted go.mod. Method (1) also cannot always be the default, as it needs access to some kind of hosting for the result: a non-developer cannot easily upload the distfile.

extract

In a GOPATH build, we do some gymnastics to move the just-extracted source code into the correct place in a GOPATH. This is no longer necessary, and module builds can just use the same $WRKSRC logic as other software.

build

The dependencies tarball (or the individual dependency files) should be extracted into $GOPATH, which in non-module builds is propagated through the buildlink3.mk files of dependent packages. After this, we set GOPROXY=file://$GOPATH/pkg/mod/cache/download in all invocations of the go tool, as per this passage from the go help output:

A Go module proxy is any web server that can respond to GET requests for URLs of a specified form. The requests have no query parameters, so even a site serving from a fixed file system (including a file:/// URL) can be a module proxy.

Even when downloading directly from version control systems, the go command synthesizes explicit info, mod, and zip files and stores them in its local cache, $GOPATH/pkg/mod/cache/download, the same as if it had downloaded them directly from a proxy.
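
In shell terms, the build phase would then look roughly like this (again with placeholder names, and matching the tarball layout sketched in the fetch section):

tar -xzf "$DISTDIR/example-1.0-modules.tar.gz" -C "$GOPATH/pkg/mod/cache"
cd "$WRKSRC"
env GO111MODULE=on GOPROXY="file://$GOPATH/pkg/mod/cache/download" go build ./...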

20 Nov 2018, 20:10

Race Condition at the Pool

Recently, I stumbled upon an odd race condition, at the local public pool of all places. The following workflow, which should be standard, does not work:

  1. Buy a 10-entry ticket and pay with debit card.
  2. Immediately try to redeem one entry to, well, go for a swim.

The freshly printed card will be declined, and you have to ask for help. When you leave (because this is Switzerland and everyone is honest, right!?), you hand the card in at the cash desk, and by then it works perfectly fine.

They tell me that this is a common issue that they see a lot.

The only explanation I can find for this behavior is that the system handling the tickets only declares the ticket as valid once it has fully cleared the card transaction – perhaps to reduce the risk of fraud. This fits with the observation that paying with cash does not trigger this issue. Also, I note that the ticket itself is apparently only a record number in a database.

So this is how fraud prevention can annoy the hell out of your customers :)

10 Nov 2018, 18:46

pkgsrc: Upgrading, Part 1

I found this text in my post drafts, where it had been sitting for a bit. Consider this the first part of a series on keeping pkgsrc up to date.

If you have not upgraded the packages in your pkgsrc installation in a while, you might be so far behind on updates that most or all of your packages are outdated. Now what?

The easiest way to update your packages in the right order is to use pkg_rolling-replace. Update your pkgsrc tree (either to the latest from cvs, or to a supported quarterly release), then simply run

$ pkg_rolling-replace -uv

This will rebuild the required set of packages, in the right order. It takes a while, since everything is rebuilt from source, and it is somewhat likely to break in the middle: when the compilation of a package fails, the tool just stops and leaves you with an inconsistent (and, in the worst case, non-working) set of packages. Good luck fixing things. Making a backup of your /usr/pkg and /var/db/pkg* directories before you start is a good idea.
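
That backup can be as simple as this (the paths are the defaults; adjust for your LOCALBASE and PKG_DBDIR):

tar -czf /root/pkg-backup-$(date +%Y%m%d).tar.gz /usr/pkg /var/db/pkg*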

10 Nov 2018, 17:53

Build Systems: CMake and Autotools

I think I am finally warming up to CMake.

Eight years ago (at FOSDEM 2010), I gave a talk on build systems that explains the fundamentals of automake, autoconf and libtool:

As far as I can see, everything in that talk is still valid today, though CMake was “newfangled” then and is a lot less so now. In any case, my conclusion still stands:

  • Don’t try to reinvent the wheel, use a popular build system.
  • You cannot write a portable build system from scratch – so don’t try.

My advice came from pent-up frustration over software that did not build on my platform (MirBSD, at the time), but it remains true today. And to be clear:

Autotools is still a good choice for new code.

CMake

However, recent experience has made me like CMake a lot more. For one, it is more common today, which means that packaging systems such as pkgsrc have good support for using it. For instance, in a pkgsrc Makefile, configuring using CMake is as easy as specifying

USE_CMAKE=	yes

For the user of a package (i.e. the person who compiles it), CMake builds are compelling because they (a) configure faster and (b) build faster.

Regarding configuring, it is infuriating (to me) how the run time of an autotools configure script totally dominates the build time once you run make -j12 or similar. CMake typically checks fewer things (I think) and does not run giant blobs of shell, so it is faster.

As for build speed, I have noticed that CMake builds typically manage to use all the cores of the machine, while automake-based builds do not. I think (again, this is speculation) that this is because automake encourages one Makefile per directory (which are run sequentially, not in parallel) and one directory per target, while CMake builds everything in one go. Automake can do one Makefile for all directories too, but support for that was added only a few years ago, and it seems rarely used.
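
For comparison, the two flows from the user's point of view look roughly like this (the -S/-B and --build -j options need a reasonably recent CMake):

# autotools: a long, serial configure step, then an (often recursive) make
./configure --prefix=/usr/pkg
make -j12

# CMake: a shorter configure step, then a build that parallelizes across the whole tree
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=/usr/pkg
cmake --build build -j 12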

CMake builds also have different diagnostics (console output), optionally in color. Some people hate the colors, and they can be garish, but I do like the percentages that are shown for every line.

Concrete case: icewm

When I recently packaged wm/icewm14, I noticed that you now have the choice between CMake and autotools, and I ended up with CMake. There were a few things to fix, but its CMakeLists.txt file is reasonably easy to edit. Note that it contains both configuration tests and target declarations. Here is a small example:

ADD_EXECUTABLE(genpref${EXEEXT} genpref.cc ${MISC_SRCS})
TARGET_LINK_LIBRARIES(genpref${EXEEXT} ${EXTRA_LIBS})

# ... other targets ...

INSTALL(TARGETS icewm${EXEEXT} icesh${EXEEXT} icewm-session${EXEEXT} icewmhint${EXEEXT} icewmbg${EXEEXT} DESTINATION ${BINDIR})

Compared to the same thing in automake:

noinst_PROGRAMS = \
	genpref

genpref_SOURCES = \
	intl.h \
	debug.h \
	sysdep.h \
	base.h \
	bindkey.h \
	themable.h \
	default.h \
	genpref.cc
genpref_LDADD = libice.la @LIBINTL@

So if anything, the syntax is no worse but the result is a bit better. I was able to rummage around in CMakeLists.txt without reading any documentation.

05 Aug 2018, 14:43

Working categories

Vacation is a good time for some housekeeping. So I managed to get categories for posts working! In the process, I learned a bit about how hugo works.

Hugo automatically creates so-called taxonomies for tags and categories. The theme I have been using only shows categories in the header itself. And I managed to disable the categories taxonomy in the config file. And I got the syntax for specifying them wrong.

It turns out that the config file is TOML while the front matter of posts is YAML, for some inexplicable reason. So the correct syntax for categories is a YAML list:

categories:
 - NetBSD
 - Cloud

Hugo will then create pages for each category under /categories, with a filtered list of posts. Click a category on a post below to see it.

Google Analytics disabled

While here, I also got rid of the Google Analytics include (yay data protection!). However, the whole site remains hosted on Firebase Hosting, which is a Google product.