04 Jul 2020, 14:53

Using CPU Subsets for Building Software

Like many ARM CPUs, the one in the Pinebook Pro has a “big.LITTLE” architecture, where some cores are more powerful than others:

[     1.000000] cpu0 at cpus0: Arm Cortex-A53 r0p4 (v8-A), id 0x0
[     1.000000] cpu1 at cpus0: Arm Cortex-A53 r0p4 (v8-A), id 0x1
[     1.000000] cpu2 at cpus0: Arm Cortex-A53 r0p4 (v8-A), id 0x2
[     1.000000] cpu3 at cpus0: Arm Cortex-A53 r0p4 (v8-A), id 0x3
[     1.000000] cpu4 at cpus0: Arm Cortex-A72 r0p2 (v8-A), id 0x100
[     1.000000] cpu5 at cpus0: Arm Cortex-A72 r0p2 (v8-A), id 0x101

The A72 is more powerful than the efficiency-oriented A53: it has out-of-order execution and reaches a higher maximum clock rate (1.4 GHz for the A53 versus 2.0 GHz for the A72 in the Pinebook Pro).

On NetBSD-current, the kernel scheduler prefers the big cores over the little ones. However, when building software, you may want to force the build process onto the big cores only. One advantage is that the little cores remain free to deal with user input and such, yet your build still runs at the highest performance. Also, building with all cores at the highest clock rate quickly leads to overheating.

NetBSD has a somewhat obscure tool named psrset that allows creating “sets” of cores and running tasks on one of those sets. Let’s try it:

$ psrset
system processor set 0: processor(s) 0 1 2 3 4 5

Now let’s create a set that comprises cpu4 and cpu5. You will have to do that as root for obvious reasons:

# psrset -c 4 5
1
# psrset
system processor set 0: processor(s) 0 1 2 3
user processor set 1: processor(s) 4 5

The first invocation printed “1”, which is the ID of our new processor set. Now we can run something on this set; everything started this way sees only two cores, cpu4 and cpu5. Note the “1” in the command below – it is the set ID from before.

# psrset -e 1 make package-install MAKE_JOBS=2

If you run htop or similar while your package is building, you will see that only cpu4 and cpu5 are busy. If you have installed estd to automatically adjust CPU clocks, you will notice that cpu4 and cpu5 are at 2 GHz while the four little cores are running at a cool 400 MHz.
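
If you do this regularly, the two commands can be combined into a small script. This is only a sketch, assuming that psrset -c prints nothing but the ID of the new set (as in the session above):

# run as root: create a "big cores" set and start the build on it
SET=$(psrset -c 4 5)
psrset -e "$SET" make package-install MAKE_JOBS=2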

20 Jun 2020, 18:09

Getting Started with NetBSD on the Pinebook Pro

If you buy a Pinebook Pro now, it comes with Manjaro Linux on the internal eMMC storage. Let’s install NetBSD instead!

The easiest way to get started is to buy a decent micro-SD card (what sort of markings it should have is a science of its own, by the way) and install NetBSD on that. On a warm boot (i.e. when rebooting a running system), the micro-SD card has priority over the eMMC, so the system will boot from there.

As for which version to run, there is a conundrum:

  • There are binary packages but only for NetBSD-9. On -current, you have to compile everything yourself, which takes a long time.
  • Hardware support is better in NetBSD-current.

The solution is to run a userland from NetBSD-9 with a NetBSD-current kernel.

As the Pinebook Pro is a fully 64-bit capable machine, we are going to run the evbarm-aarch64 NetBSD port on it. Head over to https://armbsd.org/arm/ (thanks, Jared McNeill!) and grab a NetBSD 9 image for the Pinebook Pro. Then (assuming you are under Linux), extract it onto the memory card with the following command:

zcat netbsd-9.img.gz | dd of=/dev/mmcblk2 bs=1M status=progress

Be sure to check that mmcblk2 is really the correct device, e.g. by examining dmesg output! When the command is done, you can reboot. Once the system has booted, you can log in as root with no password. The first thing you should do is set one, using passwd.
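
One quick way to double-check under Linux (assuming lsblk is available) is to list all block devices and compare the sizes with your card:

lsblk -d -o NAME,SIZE,MODEL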

To the eMMC!

Would you like to replace the pre-installed Manjaro Linux on the eMMC?

It makes sense to have your main OS on the built-in storage, since it is quite a bit faster than the typical micro-SD card. In my tests, I get write speeds of about 70 MiB per second on the eMMC.

By the way, if you want more and even faster storage, PINE64 will sell you an adapter board for adding an NVMe drive (a fast SSD).

Once you have booted NetBSD from the memory card, mount the Linux volume and copy over the image file from before, then unmount and extract it in exactly the same way as above. The only difference is that the target device is called /dev/rld0. Shut down the system, remove the memory card, switch it back on and watch NetBSD come up :)
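
Concretely, the write step becomes the following (note that status=progress is a GNU dd extension, which NetBSD’s dd does not understand, so it is simply dropped here):

zcat netbsd-9.img.gz | dd of=/dev/rld0 bs=1m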

Getting a -current Kernel

To have better driver support, I recommend installing a NetBSD-current kernel. To do that, you just need to replace the /netbsd file with the new kernel – no changes to the bootloader are required.

You can find a pre-built kernel at https://nycdn.netbsd.org/pub/NetBSD-daily/HEAD/latest/evbarm-aarch64/binary/kernel. Download the file and install it:
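
Any HTTP client works for the download; for example, with the ftp(1) from base (assuming it was built with HTTPS support):

ftp https://nycdn.netbsd.org/pub/NetBSD-daily/HEAD/latest/evbarm-aarch64/binary/kernel/netbsd-GENERIC64.gz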

cp /netbsd /netbsd.old
gunzip netbsd-GENERIC64.gz
install -o root -g wheel -m 555 netbsd-GENERIC64 /netbsd

You will find that there is now a driver for the built-in Broadcom Wi-Fi (as the bwfm0 interface) but the firmware is missing. To fix this, download the base.tgz set from the same location and extract the firmware blobs only (as root):

cd /
tar xvpfz /path/to/base.tgz libdata/firmware
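
To check whether driver and firmware are picked up, something like this should be enough (purely a sanity check):

dmesg | grep bwfm
ifconfig bwfm0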

In my experience however, the Broadcom Wi-Fi driver is extremely likely to make the system crash or hang. I tend to rely mostly on an old Apple Ethernet-USB adapter (an axe interface).

NOTE: I have since back-pedaled and returned to NetBSD 9. Besides the unstable Wi-Fi, I also had crashes when running npm install, among other issues.

Update 2022-08-27: changed the device name for writing the image, thanks Nikita!

31 May 2020, 16:04

Pinebook Pro, First Impressions

Note: This post was written on the Pinebook Pro :)

After seeing it in action at FOSDEM (from afar, as the crowd was too large), I decided to buy a Pinebook Pro for personal use. From the beginning, the intention was to use it for pkgsrc development, with NetBSD as the main OS. It was finally delivered on Thursday, one day earlier than promised, so I thought I would write down my first impressions.

If you have never heard of the Pinebook Pro: It is a cheap, open, hackable laptop with an ARM processor, the successor of the original Pinebook (which I thought was too low-end to be a useful daily driver) with generally more premium components.

As I alluded to in the first paragraph, the enthusiasm of the Free Software community around this device is incredibly strong! Nothing showed this better than the resonance to a tweet with a quick snapshot after the first boot.

You’ll note, in passing, that in the photo, I am downloading the NetBSD install image :)

What makes this device attractive, apart from the price, is the ARM architecture without the baggage of the PC world. What’s more, in the age of MacBooks with everything soldered on, tablets that you cannot open, and “secure” boot that severely limits what you can run, an open and hackable laptop is something of a counterculture device. The Pine64 folks have built a great community that embodies the true hacker spirit.

The Hardware

Here is where I am going to be harsh: in some ways, using this device feels like a regression.

I was previously using a Samsung Chromebook Pro, a Pixelbook and a Pixel Slate as laptops. Compared to these, the Pinebook Pro has

  • no HiDPI screen,
  • no touchscreen,
  • a barrel connector power supply,

and its peripheral ports are mostly USB-A. To be fair, there is a USB-C port, and it can be used to charge the machine, so I haven’t used the original charger yet.

The display resolution is 1920x1080, equivalent to about 100 dpi. While I regret the absence of HiDPI, the panel is bright and easy to read. The viewing angles are wide, and the colors are crisp. The default Manjaro Linux wallpaper is a great showcase for this.

The Pinebook Pro is vaguely shaped like a MacBook Air from a few years ago, with the same curved bottom. At 14", it is surprisingly large – the machines mentioned above are 12 to 13". Compared to the MacBook and Pixelbook in their quest for ever thinner devices, the Pinebook Pro feels strangely empty. I guess there is actually free space on the inside that you can use for upgrades and such.

The most similar laptop I have used is the HP Chromebook 14. Against that one, the Pinebook Pro holds up really well: it is lighter, has a better keyboard and display, and is actually cheaper!

Keyboard and Trackpad

The keyboard is an absolute joy to use. Really, it’s great. The keys have a large amount of travel, comparable to older MacBook Pros, before they introduced the terrible keyboard. The layout (I have ANSI, i.e. US) is exactly what you would expect. This is definitely made for typing a lot.

On the other hand, I have not made friends with the trackpad yet; I am hoping I can get used to it at some point. The way it tracks small finger movements is … weird and counter-intuitive, and I am having a hard time hitting small click targets. It has two mouse buttons under the bottom left and bottom right corners, so clicking in the middle usually has no effect. For dragging, you need to keep one finger on the button in the corner and move another finger, which sometimes triggers multi-touch gestures.

Battery life

I have not done detailed measurements, but it seems pretty good at about 7 hours. Here is the envstat output while writing this:

                               Current  CritMax  WarnMax  WarnMin  CritMin  Unit
[cwfg0]
            battery voltage:     3.935                                         V
            battery percent:        79                                      none
  battery remaining minutes:       319        0        0        0        0  none
[rktsadc0]
                        CPU:    42.778   95.000   75.000                    degC
                        GPU:    43.333   95.000   75.000                    degC

Performance

Compared to all those ARM SBCs I have used (Raspberry Pi, Orange Pi, Pine A64), the Pinebook Pro feels really fast. Storage (at this point I am using a memory card) is decent speed-wise, and compilations are reasonably fast – though my five-year-old Intel NUC with an i7 still beats it by far, of course. But my workload involves compiling lots of stuff, so this seems like a good fit.

Graphics performance has not blown me away. Animations on Manjaro stutter a bit, Midori on NetBSD (the first browser that I tried) is really testing my patience.

Hackability?

I noticed that the bottom of the housing is attached with standard Phillips-head screws and is easy to open. There are no rubber feet glued on top of the screws, and no special tools are needed.

You can easily boot your custom OS from the micro-SD card reader. As mentioned, I bought this machine for running NetBSD on it, which works well.

There are upgrade kits available, for example an adapter to add an NVMe disk instead of the eMMC. For the original Pinebook, there was even an upgrade kit with a better processor.

Because we are on ARM, there is no “Intel Inside”, and consequently, there are also no stickers, except for one on the underside that gives the model number.

Conclusion

This has become longer than I intended. Despite my criticism, I really like this machine. I am hoping I can get some good development work done with it and use NetBSD for my daily computing tasks.

Stay tuned for another post with some NetBSD tips!

03 Feb 2020, 11:30

How to do Pull-ups to pkgsrc-stable

I am part of the pkgsrc releng (release engineering) team. My main task there is handling pull-ups into the stable branch.

pkgsrc creates a stable branch every three months and names it after the respective quarter – for example, the last branch was called 2019Q4. Pull-ups are tickets to “pull up” one or more commits from the development branch into the stable branch. Typical justifications for pull-ups are:

  • security updates
  • build fixes
  • important bug fixes (such as when the package crashes on startup)

In addition, sometimes we pull up updates to packages if they are “leaves” and stop working without regular updates. Some web scrapers, for example, need regular updates to keep up with changes in the sites they scrape.

Pull-up tickets can only be sent by pkgsrc developers, to a special mail alias. Unfortunately, the mails come in all kinds of formats, which makes things harder for me and the others on the team.

There is a Python script, in an internal repository, that we use for pull-ups to pkgsrc.

The ideal input is just the commit messages, concatenated, up to and including the blurb about diffs not being public domain. Links to https://mail-index.netbsd.org/ are okay too, but I have to copy/paste the corresponding messages manually. Patches usually mean even more manual work, because they typically do not come with an appropriate commit message.

When there have been intermediate commits, most of the time they need to be pulled up too. For example, if the stable branch is at version 1.0 and you want version 1.2 pulled up, you typically need to add the 1.1 update commit, for whatever patch or PLIST changes.

If the intermediate commit is a revbump touching a million packages, it is probably better to leave that out. PKGREVISION merge conflicts are almost trivial to fix.

Finally, if you are interested, there is a public web interface tracking the status of pkgsrc pull-ups at http://releng.netbsd.org/cgi-bin/req-pkgsrc.cgi.

15 Jul 2019, 20:44

A Tale of Two Spellcheckers

This is a transcript of the talk I gave at pkgsrcCon 2019 in Cambridge, UK. It is about spellcheckers, but there are much more general software engineering lessons that we can learn from this case study.

The reason I got into this subject at all was my parental leave last year, when I finally had some more time to spend working on pkgsrc. It was a tiny item in the enormous TODO file at the top of the source tree (“update enchant to version 2.2”) that sent me down this rabbit hole.

A short history of spellchecking

spell

The oldest spellchecker, spell, appeared in version 6 AT&T Unix, but it was actually written before that, in 1975. The great Doug McIlroy (who is also the inventor of the concept of a “pipe”, by the way) worked on spell, added it to UNIX and wrote a 1982 paper. Today, NetBSD still contains a version of spell(1) in the base system.

To say that spell is not user-friendly is an understatement. You give it a text (or troff!) file to check, and it outputs a list of all the misspelled words on stdout. It supports both kinds of languages, British and American. In American mode, it flags all British spelling as incorrect, and vice versa. This includes verbs ending in -ize needing to be written with an -ise ending, which is highly questionable from a linguistic point of view too.

ispell and aspell

Next came a program called ispell, which stands for “interactive spell”. Its main innovation was interactive operation: it would stop when it found a misspelled word and present you with suggestions for what you meant. You choose the correct spelling, and ispell replaces it in the text. It supports different languages, and there is a comprehensive set of dictionaries.

aspell (advanced spell?) set out to replace ispell as the standard spellchecker. Its main distinctive feature was that its suggestions are far better than the ones that ispell provides (“even better than Word 97!”, its documentation claims). It also understands encodings, including UTF-8, which is a big deal for most languages.

Both ispell and aspell are in active use today. Their dictionary formats are different.

A digression on agglutinating languages

Imagine that you would like to spellcheck a text that is written in Finnish. The problem with writing a dictionary for Finnish, though, is the near-infinite number of words that it would need to contain. Finnish is an agglutinating language, which means that you can stick words together without a space. As an example, consider the word “kasvihuoneilmiö”, which is composed of the individual words for “plant”, “room” and “phenomenon” and means “greenhouse effect”. Such composites do not obey vowel harmony rules (indeed, this is one way to see where the separation is). In addition, there are about 15 different cases for nouns, as a number of prepositions are replaced by a case (as in, a suffix). Some cases use the strong stem, some the weak stem of the word. Et cetera.

So what do you do if you would like to keep the dictionary as small as possible? Easy: you take a team of linguists and have them carefully model all the rules of word construction as a library! Such a thing exists in fact for Finnish (voikko), for Turkish (zemberek) and for Hungarian (hunspell).

Hunspell is particularly interesting: while it does contain special word formation rules for Hungarian, it is also an excellent spellchecker for other languages. It can use aspell dictionaries but is faster and gives even better suggestions, apparently. So using hunspell is a fairly popular choice among users, no matter what language.

Needs more abstraction

The consequence of the previous section is:

  • multiple spellcheckers are in active use;
  • users would like to be able to choose which one to use;
  • that choice may depend on the language of the document.

So what should you do as an application developer? You need an abstraction library over all these different programs. Such a library exists, and it is called Enchant.

Enchant gives you a uniform interface over all spellcheckers (including the system one on macOS, and a few more), handles the user choice of backend and the user dictionaries.

The messy Enchant 2 transition

Enchant 2.0.0 was released in August 2017. Its release notes contain this (emphasis mine):

The major version number has been incremented owing to API/ABI changes, but in practice upgrading from 1.6.x should be easy.

Previously-deprecated APIs have been removed.

The little-used enchant_broker_get/set_param calls have been removed.

Some trivial API changes have been made to fix otherwise-unavoidable compilation warnings both in libenchant and in application code. This is strictly an ABI change (although the ABI may not actually have changed, depending on the platform).

So there is a new major release, and it is incompatible with the previous release for ABI and API. In a surprising development, uptake of the new version was extremely low. For someone to adopt the new version, they have to replace the old version with it, at which point all programs that use Enchant stop working until you fix them. Thus, the developers have created a chicken-and-egg problem.

They spent the next couple of releases re-adding bits that had been removed and declared that the new release was now API-compatible with 1.x, “except for some really deprecated calls”. It just so happened that many Enchant-using programs actually used these calls, since they were more convenient than their non-deprecated replacements!

Then, in November 2017, Enchant 2.1.3 had this in its release notes (again, emphasis mine):

This release adds support for parallel installation with other major versions of Enchant, and fixes a crash in the Voikko provider when it has no supported languages.

2.2.0 fixed parallel installation fully. You can now install Enchant versions 1 and 2 in parallel, since they go into different subdirectories and have different pkg-config files (enchant.pc vs. enchant-2.pc).
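
For an application, picking whichever version is installed is a small piece of configure-time logic. A rough sketch using pkg-config (the variable names are purely illustrative):

# Prefer Enchant 2 if present, fall back to Enchant 1.
if pkg-config --exists enchant-2; then
        ENCHANT_CFLAGS=$(pkg-config --cflags enchant-2)
        ENCHANT_LIBS=$(pkg-config --libs enchant-2)
else
        ENCHANT_CFLAGS=$(pkg-config --cflags enchant)
        ENCHANT_LIBS=$(pkg-config --libs enchant)
fi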

Adoption by other software is still really low: no one checks for the enchant-2 package, and for an application developer, there has never been a compelling reason to use version 2 rather than 1.

Enchant in pkgsrc

Back to pkgsrc. I tried to make Enchant 2 the only Enchant version in our tree.

And failed.

As stated above, almost no software has explicit support for checking enchant-2.pc, so I resorted to a trick to not have to patch all those configure scripts. textproc/enchant2/buildlink3.mk has this bit:

# Lots of older software looks for enchant.pc instead of enchant-2.pc.
${BUILDLINK_DIR}/lib/pkgconfig/enchant.pc:
        ${MKDIR} ${BUILDLINK_DIR}/lib/pkgconfig
        cd ${BUILDLINK_DIR}/lib/pkgconfig && ${LN} -sf enchant-2.pc enchant.pc

buildlink-enchant2-cookie: ${BUILDLINK_DIR}/lib/pkgconfig/enchant.pc

What this does is create an enchant.pc symlink pointing to enchant-2.pc within the buildlink tree that is created for a single package build. We can do that because no enchant1 files are present in that tree.

But what broke the whole thing was PHP. Of course.

php-enchant supports only enchant1. Worse, it translates the entire API, including those deprecated bits, to PHP. So there is no way to make it use the newer version: if you were to remove the APIs that are no longer provided, software using php-enchant might break, at runtime. This is not acceptable for web applications.

So this is where I am stuck.

General advice for library authors

In lieu of a conclusion, I would like to offer some general advice if you are the author or maintainer of a library.

The most important is this: An incompatible V2 of a library is like a new product.

Importantly, this means that if you stop maintaining V1 the moment you release V2, it is as if you had abandoned your library and created a new one.

Think of other projects that depend on you as customers. Think about migration paths. Think about the cost-benefit ratio of an upgrade by your customers.

Consider sending pull requests to your customers! If you look at pkgsrc, Debian, etc., it is easy to see which other projects depend on your library. Many of them are on GitHub. All of them probably have a way of accepting patches. Send them a patch to upgrade the dependency. Do the work for them.

Otherwise, you are developing for no one.

02 Jul 2019, 19:29

pkgsrccon 2019: Talk Announcement

In a few weeks, on the weekend of July 13 and 14, the annual pkgsrc conference, pkgsrcCon 2019, will take place in Cambridge, UK. Whether you are a user or developer of pkgsrc, this is a really nice place to meet the developers and spend some time hacking together and listening to talks.

My talk this year was originally supposed to be about Go module support in pkgsrc, but that work did not get done in time. So instead, I will talk about something entirely different:

A Tale of Two Spellcheckers

This talk is about the obscure and intricate world of spell checkers and how they are packaged in pkgsrc, the NetBSD package collection.

There are many general-purpose spell checkers in existence. ispell and aspell are the most famous ones, but hunspell, originally written for Hungarian, has become the spell checker of choice. In addition, there are a number of specialized checkers tailored to the idiosyncrasies of a single language, e.g. voikko (Finnish) and zemberek (Turkish).

There is a separate library, called Enchant, that aims to abstract away the spell checker implementation from the application code. Enchant went through a botched transition of major versions, from V1 to V2. To this day, most apps only support V1. We’ll talk about general lessons from this case.

30 Apr 2019, 20:19

Supporting Go Modules in pkgsrc (Part 2)

This announcement dropped today:

I realized that this is the missing piece for supporting Go modules in pkgsrc. If you go back and reread the “fetch” section in Supporting Go Modules in pkgsrc, it seems a bit awkward compared to a standard fetch action. The reason is that go mod download re-packs the source into its own zip format archive.

The module proxy (https://proxy.golang.org/) solves this problem and enables a simple solution for modules, very similar to lang/rust/cargo.mk. Basically, we need a target similar to show-cargo-depends that outputs a Makefile fragment containing the names of the modules that the current package depends upon. All of these become distfiles fetched from a hypothetical $MASTER_SITES_GOPROXY. Crucially, this means that the distfiles do not have to be stored in a LOCAL_PORTS subdirectory but can use the normal fetch infrastructure.

Now all that remains is implementing this :) There is some more time to do that: Go 1.13 (to be released some time in summer) will use module support by default. What’s more, a bunch of new software (including the various golang.org/x/* repositories) has go.mod files these days, using module-based builds by default.

04 Feb 2019, 17:15

Pkgsrc Buildbots

After talking to Sijmen Mulder on IRC (thanks, TGV Wi-Fi!), I began thinking more about how you could automate the pkgsrc release engineers away.

The basic idea for a buildbot would be this:

  1. Download and unpack latest pkgsrc.tar.gz for the stable branch.
  2. Run the pullup script with the ticket number, then run whatever script it outputs.
  3. Figure out the package that this concerns (perhaps from filenames).
  4. Go to the package in question, install its dependencies from binary packages.
  5. Build (make package is probably enough, or perhaps also install?).
  6. Upload build log to Cloud Storage.
  7. Post an email to the pullup thread with status and a link to the log.

For extra points, do this in a fresh, ephemeral VM, triggered by an incoming mail.
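
A very rough sketch of steps 4 to 6 (the PKGPATH and TICKET variables, the log location and the bucket name are all made up for illustration):

# assumes the affected package is already known and binary packages for its
# dependencies are available
cd /usr/pkgsrc/$PKGPATH
make depends && make package 2>&1 | tee /tmp/build-$TICKET.log
gsutil cp /tmp/build-$TICKET.log gs://pullup-build-logs/$TICKET.log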

You would also need a buildbot supervisor that receives mails (to know that it should build something) and that launches the VM. I know that Google App Engine could do it, as it can receive emails. But maybe Cloud Functions would be the way to go?

In any case, this would be a cool project for someone, maybe myself :)

Issues with Pull-up Ticket Tracking

This project is largely orthogonal to improvements in the pullup script. Right now, there are a number of issues with it that make it require manual intervention in many cases:

  • The tracker (req) doesn’t do MIME, so sometimes mails are encoded with base64 or quoted-printable. This breaks parsing the commit mails.
  • Sometimes, submitters of tickets insert mail-index.netbsd.org URLs instead of copies of the message.
  • Some pullup tickets include a patch instead of, or in addition to, a list of commits. For instance, this may happen when backporting a fix to an older release instead of pulling up a bigger update.
  • Sometimes, commit messages are truncated, or there are merge conflicts. This mostly happens when there has been a revbump before the change that is to be committed – in the majority of cases, the merge conflicts only concern PKGREVISION lines.

I am wondering how much we could gain, e.g. in terms of MIME support, from changing the request tracking software. admins@ uses RT, which has more features. Perhaps that could be brought to pullup tickets?

29 Dec 2018, 13:12

Supporting Go Modules in pkgsrc, a Proposal

Go 1.11 introduced a new way of building Go code that no longer needs a GOPATH at all. In due course, this will become the default way of building. What’s more, sooner or later, we are going to want to package software that only builds with modules.

There should be some package-settable variable that controls whether you want to use modules or not. If you are going to use modules, then the repo should have a go.mod file. Otherwise (e.g. if there is a dep file or something), the build could start by doing go mod init (which needs to be after make extract).

fetch

There can be two implementations of the fetch phase:

  1. Run go mod download.

    It should download required packages into a cache directory, $GOPATH/pkg/mod/cache/download. Then, I propose tarring up the whole tree into a single .tar.gz and putting that into the distfile directory for make checksum. Alternatively, we could have the individual files from the cache as “distfiles”. Note however (see below) that the filenames alone do not contain the module name, so there will be tons of files named v1.0.zip and so on.

  2. “Regular fetch”

    Download the .tar.gz (or the set of individual files) created above from the LOCAL_PORTS directory on ftp.n.o, as usual.

The files that go mod download creates are different from any of the ones that upstream provides. Notably, the zip files are based on a VCS checkout followed by re-zipping. Here is an example for the piece of a cache tree corresponding to a single dependency (ignore the lock files):

./github.com/nsf/termbox-go/@v:
list
list.lock
v0.0.0-20180613055208-5c94acc5e6eb.info
v0.0.0-20180613055208-5c94acc5e6eb.lock
v0.0.0-20180613055208-5c94acc5e6eb.mod
v0.0.0-20180613055208-5c94acc5e6eb.zip
v0.0.0-20180613055208-5c94acc5e6eb.ziphash

As an additional complication, (1) needs to run after “make extract”, since go mod download needs the go.mod file from the extracted source. Also, method (1) cannot always be the default, as it needs access to some kind of hosting: a non-developer cannot easily upload the distfile.
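
To make method (1) more concrete, here is a rough sketch of how the dependency tarball could be produced (the paths and the archive name are illustrative):

# run after "make extract"; WRKDIR, WRKSRC and DISTDIR as usual in pkgsrc
cd $WRKSRC && GOPATH=$WRKDIR/gopath go mod download
cd $WRKDIR/gopath/pkg/mod/cache && tar czf $DISTDIR/example-modules.tar.gz download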

extract

In a GOPATH build, we do some gymnastics to move the just-extracted source code into the correct place in a GOPATH. This is no longer necessary, and module builds can just use the same $WRKSRC logic as other software.

build

The dependency tarball (or the individual dependency files) should be extracted into $GOPATH, which in non-module builds is propagated through the buildlink3.mk files of dependent packages. After this, in all invocations of the go tool, we set GOPROXY=file://$GOPATH/pkg/mod/cache/download, as per this comment from the help:

A Go module proxy is any web server that can respond to GET requests for URLs of a specified form. The requests have no query parameters, so even a site serving from a fixed file system (including a file:/// URL) can be a module proxy.

Even when downloading directly from version control systems, the go command synthesizes explicit info, mod, and zip files and stores them in its local cache, $GOPATH/pkg/mod/cache/download, the same as if it had downloaded them directly from a proxy.
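
Putting it together, the build phase could then look roughly like this (a sketch only; the exact variables would come from the pkgsrc infrastructure):

# the dependency files have already been extracted into $GOPATH/pkg/mod/cache/download
GOPATH=$WRKDIR/gopath; export GOPATH
GOPROXY=file://$GOPATH/pkg/mod/cache/download; export GOPROXY
cd $WRKSRC && go build ./...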

20 Nov 2018, 20:10

Race Condition at the Pool

Recently, I stumbled upon an odd race condition, at the local public pool of all places. The following workflow, which should be standard, does not work:

  1. Buy a 10-entry ticket and pay with debit card.
  2. Immediately try to redeem one entry to, well, go for a swim.

The freshly printed card will be declined, and you have to ask for help. When you leave (because this is Switzerland and everyone is honest, right!?), you hand in the card at the cash desk, and it is perfectly fine.

They tell me that this is a common issue which they see a lot.

The only explanation I can find for this behavior is that the system handling the tickets only declares the ticket as valid once it has fully cleared the card transaction – perhaps to reduce the risk of fraud. This fits with the observation that paying with cash does not trigger this issue. Also, I note that the ticket itself is apparently only a record number in a database.

So this is how fraud prevention can annoy the hell out of your customers :)