29 Dec 2018, 13:12

Supporting Go Modules in pkgsrc, a Proposal

Go 1.11 introduced a new way of building Go code that no longer needs a GOPATH at all. In due course, this will become the default way of building. What’s more, sooner or later, we are going to want to package software that only builds with modules.

There should be a package-settable variable that controls whether a package uses modules or not. If it does, the upstream repository should contain a go.mod file. Otherwise (e.g. if upstream uses dep or some other pre-modules dependency tool), the build could start by running go mod init (which has to happen after “make extract”).
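As a sketch, the per-package part could look like the fragment below. The GO_MODULE variable name and the module path are invented for illustration; post-extract is the usual pkgsrc hook for work that has to happen right after extraction.

# Hypothetical: GO_MODULE is an invented name for the proposed
# package-settable variable.
GO_MODULE=	yes

# For upstreams without a go.mod, synthesize one after extraction.
post-extract:
	cd ${WRKSRC} && ${SETENV} GOPATH=${WRKDIR}/gopath go mod init github.com/example/project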

fetch

There can be two implementations of the fetch phase:

  1. Run go mod download.

    It should download the required modules into a cache directory, $GOPATH/pkg/mod/cache/download. Then, I propose tarring up the whole tree into a single .tar.gz and putting that into the distfile directory for “make checksum” (a rough sketch of this follows the list). Alternatively, we could treat the individual files from the cache as “distfiles”. Note however (see below) that the filenames alone do not contain the module name, so there would be tons of files named v1.0.zip and so on.

  2. “Regular fetch”

    Download the .tar.gz (or the set of individual files) created in method (1) from the LOCAL_PORTS directory on ftp.n.o, as usual.
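For illustration, method (1) could boil down to shell steps like these; the variables are the usual pkgsrc ones, and the tarball name is invented:

# Run inside the extracted source; this fills ${GOPATH}/pkg/mod/cache/download.
cd ${WRKSRC} && env GOPATH=${WRKDIR}/gopath go mod download
# Tar up the download cache as a single distfile for "make checksum".
cd ${WRKDIR}/gopath/pkg/mod/cache && tar -czf ${DISTDIR}/${DISTNAME}-modules.tar.gz download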

The files that go mod download creates are different from any of the ones that upstream provides. Notably, the zip files are based on a VCS checkout followed by re-zipping. Here is an example for the piece of a cache tree corresponding to a single dependency (ignore the lock files):

./github.com/nsf/termbox-go/@v:
list
list.lock
v0.0.0-20180613055208-5c94acc5e6eb.info
v0.0.0-20180613055208-5c94acc5e6eb.lock
v0.0.0-20180613055208-5c94acc5e6eb.mod
v0.0.0-20180613055208-5c94acc5e6eb.zip
v0.0.0-20180613055208-5c94acc5e6eb.ziphash

As an additional complication, (1) needs to run after “make extract”, since go mod download reads the go.mod file from the extracted source. Method (1) also cannot always be the default, as it needs access to some kind of hosting for the resulting distfile: a non-developer cannot easily upload it.

extract

In a GOPATH build, we do some gymnastics to move the just-extracted source code into the correct place in a GOPATH. This is no longer necessary, and module builds can just use the same $WRKSRC logic as other software.
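To illustrate the difference (with an invented import path): a GOPATH build has to place the source at its import path inside a GOPATH, while a module build does not care where the source lives.

# GOPATH build: the source must sit at its import path.
mkdir -p ${WRKDIR}/gopath/src/github.com/example/project
cp -R ${WRKSRC}/. ${WRKDIR}/gopath/src/github.com/example/project/
# Module build: just build in ${WRKSRC}, wherever that is.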

build

The dependencies tarball (or the individual dependency files) should be extracted into $GOPATH, which in non-module builds is propagated through the buildlink3.mk files of dependent packages. After this, in all invocations of the go tool, we set GOPROXY=file://$GOPATH/pkg/mod/cache/download, as per this comment from the help:

A Go module proxy is any web server that can respond to GET requests for URLs of a specified form. The requests have no query parameters, so even a site serving from a fixed file system (including a file:/// URL) can be a module proxy.

Even when downloading directly from version control systems, the go command synthesizes explicit info, mod, and zip files and stores them in its local cache, $GOPATH/pkg/mod/cache/download, the same as if it had downloaded them directly from a proxy.
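Concretely, and only as a sketch, the build phase could then look like this:

# Unpack the dependency tarball into a fresh GOPATH...
mkdir -p ${WRKDIR}/gopath/pkg/mod/cache
tar -xzf ${DISTDIR}/${DISTNAME}-modules.tar.gz -C ${WRKDIR}/gopath/pkg/mod/cache
# ...then let the go tool resolve everything from that local cache.
cd ${WRKSRC} && env GOPATH=${WRKDIR}/gopath \
	GOPROXY=file://${WRKDIR}/gopath/pkg/mod/cache/download go build ./...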

10 Nov 2018, 18:46

pkgsrc: Upgrading, Part 1

I found this text in my post drafts, where it had been sitting for a bit. Consider this the first part of a series on keeping pkgsrc up to date.

If you have not upgraded the packages in your pkgsrc installation in a while, you might be so far behind on updates that most or all your packages are outdated. Now what?

The easiest way to update your packages in the right order is to use pkg_rolling-replace. Update your pkgsrc tree (either to the latest from cvs, or to a supported quarterly release), then simply run

$ pkg_rolling-replace -uv

This will rebuild the required set of packages, in the right order. This takes a while, as the rebuild is from source, and is somewhat likely to break in the middle. When the compilation of a package fails, the tool just stops and leaves you with an inconsistent (and in the worst case, non-working) set of packages. Good luck fixing things. Making a backup of your /usr/pkg and /var/db/pkg* directories before you start is a good idea.
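A minimal version of such a backup, assuming the default NetBSD paths:

$ tar -czf /root/pkg-backup.tar.gz /usr/pkg /var/db/pkg /var/db/pkg.refcount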

10 Nov 2018, 17:53

Build Systems: CMake and Autotools

I think I am finally warming up to CMake.

Eight years ago (at FOSDEM 2010), I gave a talk on build systems that explains the fundamentals of automake, autoconf and libtool.

As far as I can see, everything in that talk is still valid today, though CMake was “newfangled” then and is a lot less so now. In any case, my conclusion still stands:

  • Don’t try to reinvent the wheel, use a popular build system.
  • You cannot write a portable build system from scratch – so don’t try.

My advice came from pent-up frustration over software that did not build on my platform (MirBSD, at the time), but it remains true today. And to be clear:

Autotools is still a good choice for new code.

CMake

However, recent experience has made me like CMake a lot more. For one, it is more common today, which means that packaging systems such as pkgsrc have good support for using it. For instance, in a pkgsrc Makefile, configuring using CMake is as easy as specifying

USE_CMAKE=	yes
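Similarly, a package Makefile can pass CMake cache variables through CMAKE_ARGS; the option name in this example is invented:

CMAKE_ARGS+=	-DENABLE_SOUND=OFF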

As the user of a package (i.e. the person who compiles it), CMake builds are compelling because they (a) configure faster and (b) build faster.

Regarding configuring, it is infuriating (to me) how the run time of the autotools configure script totally dominates the build time once you run make -j12 or similar. CMake typically checks fewer things (I think) and does not run giant blobs of shell, so it is faster.

For the latter, I have noticed that CMake builds typically manage to use all the cores of the machine, while automake-based builds do not. I think (again, this is speculation) that this is because automake encourages one Makefile per directory (the sub-makes run sequentially, not in parallel) and one directory per target, while CMake builds everything in one go. Automake can put all directories into one Makefile too, but support for that was added only a few years ago, and it seems rarely used.
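To make the contrast concrete, here is a minimal sketch of the two automake styles, with an invented file layout:

# Recursive style: each subdirectory gets its own Makefile.am,
# and the sub-makes run one after the other.
SUBDIRS = lib src

# Non-recursive style: a single top-level Makefile.am lists all
# sources, so one make instance can parallelize the whole tree.
AUTOMAKE_OPTIONS = subdir-objects
bin_PROGRAMS = src/tool
src_tool_SOURCES = src/main.cc lib/util.cc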

CMake builds also have different diagnostics (console output), optionally in color. Some people hate the colors, and they can be garish, but I do like the percentages that are shown for every line.

Concrete case: icewm

When I recently packaged wm/icewm14, I noticed that you now have the choice of CMake or autotools, and I ended up with CMake. There were a few things to fix, but its CMakeLists.txt file is reasonably easy to edit. Note that it contains both configuration tests and target declarations. Here is a small example:

ADD_EXECUTABLE(genpref${EXEEXT} genpref.cc ${MISC_SRCS})
TARGET_LINK_LIBRARIES(genpref${EXEEXT} ${EXTRA_LIBS})

# ... other targets ...

INSTALL(TARGETS icewm${EXEEXT} icesh${EXEEXT} icewm-session${EXEEXT} icewmhint${EXEEXT} icewmbg${EXEEXT} DESTINATION ${BINDIR})

Compared to the same thing in automake:

noinst_PROGRAMS = \
	genpref

genpref_SOURCES = \
	intl.h \
	debug.h \
	sysdep.h \
	base.h \
	bindkey.h \
	themable.h \
	default.h \
	genpref.cc
genpref_LDADD = libice.la @LIBINTL@

So if anything, the syntax is no worse but the result is a bit better. I was able to rummage around in CMakeLists.txt without reading any documentation.