
I've mentioned I work on Baserock for Codethink. I've gotten out of practice writing, so I'm going to talk about something I know well: how we build software in Baserock.

Morphologies

The software recipes are called morphologies. The unusual name was chosen deliberately, to prevent the name from imposing design choices. If we called them recipes, we would be tempted to implement them like BitBake recipes.

Chunk morphologies

Chunk morphologies describe how to build "packages". We don't call them packages since they aren't independently usable.

These consist of a set of commands to configure, build, then install the chunk.
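
As an illustrative sketch, a chunk morphology for an autotools-style project might look something like this (the field names are from memory, so treat them as assumptions rather than a reference):

$ cat >hello.morph <<'EOF'
{
    "name": "hello",
    "kind": "chunk",
    "configure-commands": ["./configure --prefix=\"$PREFIX\""],
    "build-commands": ["make"],
    "install-commands": ["make DESTDIR=\"$DESTDIR\" install"]
}
EOF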

Stratum morphologies

Strata are sets of descriptions of where to find chunks, along with their build dependencies.

These are specified externally to the chunks, rather than alongside them as is usual for packages, since a chunk builds differently depending on what else is available to build against, even though its commands stay the same.

As an example, vim will build gvim if graphics are available, but only the command line vim if they aren't.

As a far more extreme example, cpython builds python bindings for pretty much every library it finds installed, since the standard python distribution has a reputation for having batteries included.

This is somewhat analogous to layers in Yocto.

Strata describe where to find a chunk by the git repository the chunk is located in, the branch to use, and the name of the chunk morphology file to use.
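
Sketching that in the same illustrative notation as above (field names again from memory, not a reference), a stratum naming one chunk might look like:

$ cat >core.morph <<'EOF'
{
    "name": "core",
    "kind": "stratum",
    "chunks": [
        {
            "name": "hello",
            "repo": "git://git.example.com/hello.git",
            "ref": "master",
            "morph": "hello",
            "build-depends": []
        }
    ]
}
EOF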

System morphologies

System morphologies describe which strata to install on the system. These can be combined from different sources to produce different systems, while reducing repetition for common elements.

Building

Resolving morphologies and fetching sources

The first step is to load all the disjoint morphologies, so that they can be brought together to describe how to build the System.

The strata listed in the system are found by fetching the specified repository and reading the morphology file from it.

This information is also used to compute hashes of all the information required to build, so that if a part of the system has already been built, it can be reused without having to build everything again.
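
A minimal sketch of the idea in shell, not morph's actual implementation: hash every input that can affect the result, including the dependencies' own keys, so a change anywhere below a chunk gives it a new key.

# illustrative only: the real tool hashes a structured description
compute_cache_key() {
    # $1=repo $2=ref $3=build commands $4=dependencies' cache keys
    printf '%s\n' "$1" "$2" "$3" "$4" | sha256sum | cut -d' ' -f1
}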

Building Chunks

The hash of dependencies is used to find out if a chunk is already built. If it is then the following steps are not performed.

Constructing a staging area

The cached build artifacts of all the chunks that are build dependencies of the target chunk are extracted into a temporary directory. This temporary directory is named after the cache key, and is not extracted again if it already exists.

This acts as a cache so that chunks do not need to be extracted multiple times.

The staging area itself is constructed by hardlinking all of the chunks into a new temporary directory.
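
A rough sketch of that logic in shell (all the variable names are made up for illustration):

# unpack each dependency's artifact once, keyed by its cache key
for key in $dependency_cache_keys; do
    if [ ! -d "$extracted/$key" ]; then
        mkdir -p "$extracted/$key"
        tar -C "$extracted/$key" -xf "$artifact_cache/$key.tar"
    fi
done

# then assemble the staging area cheaply by hardlinking the unpacked trees
staging="$(mktemp -d)"
for key in $dependency_cache_keys; do
    cp -al "$extracted/$key/." "$staging/"
done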

The git repository with the chunk's source code is then cloned into place for the builds to be run in.

Running configure and build commands

Since the staging area is constructed from hardlinks, builds must not be able to alter the staging area except for directories deemed safe.

To do this we use linux-user-chroot, a command written by Colin Walters, which uses kernel namespaces to isolate a subprocess. It can be used to make subtrees read-only.
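
An invocation might look roughly like this; the flags come from linux-user-chroot's usage, but the exact command line morph uses is a guess on my part:

# run the build inside the staging area, with everything read-only;
# in practice the build directory would additionally be left writable
linux-user-chroot \
    --unshare-pid \
    --mount-proc /proc \
    --mount-readonly / \
    "$staging" /bin/sh -c "$build_commands"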

Configure commands are expected to prepare the build. They are told things like the prefix to install into and the GNU build triplet.

Build commands are similar, but also have MAKEFLAGS set to make use of all available compute resources.

Installing chunks

Chunks are expected to fill the directory pointed to by the environment variable DESTDIR.

The contents of this directory are archived and placed into the artifact cache.

Building Strata

Stratum artifacts are just the list of chunks they are made of, plus a little metadata.

Building Systems

Systems are made of all the files of all the chunks in all the strata listed. They do not include all the strata built, as a stratum can depend on another stratum that is not part of the System. This is currently the only way to handle pure build dependencies.

Bootstrapping

The staging area needs a way to be filled. Previously we had a staging filler, which was a tarball to extract first.

The filler was constructed by a complicated bootstrap script and was hard to maintain.

This caused many problems, such as chunks linking against things they would not have in the final system, since they were in the filler but were removed from the morphologies.

Now, we have a carefully constructed stratum, called build-essential, which contains chunks built in what's called "bootstrap mode": the staging area is constructed as usual, but we do not chroot into it. We do, however, use linux-user-chroot to make the host's root file system read-only for protection.

Posted Thu Aug 29 22:32:19 2013

And now another rant about build systems; this time it's more accurately about the state of an upstream's repository.

The state of the repository

$ git clone git://git.savannah.gnu.org/screen.git
$ ls screen
COPYING  incoming  mktar.pl  patches  src

That's right, they keep patches in version control and the source code in a subdirectory.

This is pointlessly different, serving only to make it harder to build from version control. It's yet further fetishising of tarballs.

I'm particularly annoyed by this because the Baserock build tool, morph, can automatically infer how to build something by which files are checked in to version control.

DESTDIR in the Makefile

As a bit of background, $DESTDIR is the traditional way to specify a directory to install into for packaging, instead of installing directly onto your system. You can then make a tarball of the contents of $DESTDIR, which you can later unpack onto many systems.

Then you get rules in your Makefile to install along the lines of install screen $(DESTDIR)$(bindir)/screen.

Baserock sets this environment variable during the install step, and tars it up.
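
Doing the same by hand looks like this:

$ make DESTDIR=/tmp/screen-pkg install
$ tar -C /tmp/screen-pkg -cf screen.tar .
$ # later, on as many systems as you like
$ tar -C / -xf screen.tar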

Irritatingly, the Makefile specifies DESTDIR =, which means it overrides the DESTDIR set in the environment by Baserock.

It is entirely superfluous, since if make is asked to substitute an undefined variable, it defaults to the empty string.

I can only think that the developer didn't know this fact, since the assignment has existed as long as screen has supported DESTDIR; it's either that, or the developer wanted you to have to run make DESTDIR="$DESTDIR" install.
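
The difference is easy to demonstrate, since make gives command-line assignments priority over the makefile, and the makefile priority over the environment:

$ DESTDIR=/tmp/pkg make install   # Makefile's empty DESTDIR wins: installs into /
$ make DESTDIR=/tmp/pkg install   # command line wins: installs into /tmp/pkg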

Posted Mon Apr 29 20:27:22 2013

What is Awesome?

Awesome is a window manager and window manager toolkit. The window manager configuration file is written in Lua.

You could write your own window manager with it, but the default configuration, a tiling window manager with a preference for keyboard shortcuts, meshes nicely with how I like to use my computer.

Why not just use Awesome on its own?

You get a bunch of theming and widgets if you use Awesome with GNOME.

Why write the guide?

I need something to refer to when I install a system. I used to follow a friend's guide, but he moved to Xmonad, and the guide needs amending for Ubuntu 12.10.

I'm going to deviate from the previous guide, in that I'm not going to install gdm, since you're likely to already have a working desktop manager, and I'm going to install the system-wide configuration instead of per-user.

Installing the required software

sudo apt-get install awesome gnome-settings-daemon nautilus
sudo apt-get install --no-install-recommends gnome-session 

Installing gnome-session without its recommendations prevents it from pulling in all of Unity and GNOME Shell.

Installing configuration files

These bits are mostly the same, but I'm going to try and not require installing gdm, since you will already have a desktop manager installed.

Create the desktop application entry

Desktop application entries are for databases of which applications are installed. Since a window manager isn't an application you'd usually launch, it has NoDisplay=true.

sudo tee /usr/share/applications/awesomewm.desktop >/dev/null <<'EOF'
[Desktop Entry]
Exec=/usr/bin/awesome
Name=Awesome
NoDisplay=true
StartupNotify=false
Type=Application
Version=1.0
EOF

Create the desktop session file

This defines the desktop session to run; gnome-session knows which window manager to run because Required-windowmanager references awesomewm, our desktop application entry.

sudo tee /usr/share/gnome-session/sessions/awesome-gnome.session >/dev/null <<'EOF'
[GNOME Session]
Name=GNOME Awesome Session
Required=windowmanager;filemanager;
Required-windowmanager=awesomewm
Required-filemanager=nautilus
DefaultApps=gnome-settings-daemon;
RequiredComponents=gnome-settings-daemon;awesomewm
EOF

Create the xsession file

Your desktop manager looks at these files to work out what desktop sessions it can run.

sudo tee /usr/share/xsessions/awesome-gnome.desktop >/dev/null <<'EOF'
[Desktop Entry]
Name=Awesome GNOME
TryExec=awesome
Exec=gnome-session --session=awesome-gnome
Type=Application
EOF

Now when you log out you should be able to select "Awesome GNOME" from the sessions list. In vanilla Ubuntu, you have to click the Ubuntu logo to pick the session.

Testing status

These steps are known to work on the following installs:

  • Xubuntu 12.04.2
  • Ubuntu 12.10
  • Ubuntu 13.04
Posted Sat Apr 27 16:36:37 2013

The problem

Smart phones tend to have flash storage. Memory cells in it have a finite number of writes. To avoid them being prematurely depleted there's a translation layer which redirects writes to unused cells.

Unfortunately this just changes the problem to running out of unused cells, since the file system needs to tell the device when it has stopped using a block.

This is done by either mounting the file system with the discard mount option, or running the fstrim program.

If you trim, you need to do it periodically, but it makes individual writes faster.
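
On a desktop Linux system the two options look like this (the mount point is just an example):

$ # either mount (or remount) the file system with discard...
$ mount -o remount,discard /home
$ # ...or trim unused blocks periodically instead
$ fstrim -v /home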

Unfortunately, such common desktop practices aren't followed in the mobile world. Android changed from its own file system, yaffs2, to ext4, but doesn't mount it with discard.

Vold always mounts without any extra flags, and has no way to specify them.

If you're lucky, your phone's /fstab.$devicename has it specified, but my S3 does not, nor did my brother's Note 2 or my friend's S2.

How to know if you need this

  1. Install the Android Terminal Emulator.
  2. Run cat /proc/mounts.
  3. Inspect the output: if there's a line with the word ext4 on it, but without discard, then you have a problem (a one-liner for this check follows).
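
The same check as a one-liner; any output means you're affected:

$ grep ext4 /proc/mounts | grep -v discard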

Failed attempts

Alter the fstab or initrc

initrc is able to remount devices with different options, and fstab changes how they're made in the first place.

Unfortunately these files come from the initramfs, so you can't modify them without intrusive changes to the bootloader, something I wasn't prepared to do.

Alter /system/etc/vold.fstab

vold.fstab is Android's abstraction layer over device mounting. Unfortunately there is no way to specify mount options in it; I looked at the source code, and it always calls mount with the data parameter as NULL.

The solution

Root your phone

Unfortunately this requires rooting your phone, so do that first. For the S3 I followed this guide, except I recompiled Heimdall myself, as I don't trust executables downloaded from the internet.

I also needed to make multiple attempts, since I didn't reboot immediately into the recovery partition after flashing it, which caused it to be reflashed to stock firmware.

Trim your file system

Download LagFix; it's advertised as a way to make your phone faster, since writes also get slower when you're running out of cells to write to.

Running this will discard any blocks your file system isn't using.

Pay attention to the warning on the app page: some phones have a broken discard operation that bricks them immediately. Fortunately there's a way to tell if it would brick your phone without finding out empirically.

If you're willing to pay, you can get the Professional version and schedule regular trims, which is sufficient to keep your phone going. If you're happy with this solution, you can stop here.

Remount with discard on boot

You need an app that starts at boot time and remounts; since at the time of writing no such app existed, I had to find an alternative solution.

Here comes Script Manager to the rescue.

Script Manager lets you run shell scripts. You can set these scripts to be run with SuperUser, or at boot time. Writing this shell script requires more useful shell utilities than are usually installed on your phone, so install BusyBox.

After this, create the following boot script in Script Manager:

#!/system/bin/sh
grep 'ext[34]' /proc/mounts | awk '{print $2}' | while read mp; do
    mount -o remount,discard "$mp"
done

It's probably possible to do this without the grep, but this is what I used.

How could this happen!

You can't test a whole product's life, but with sufficient instrumentation you could simulate the amount of wear the flash is going to receive.

If you're a conspiracy theorist, you could claim this is intentional, to ensure planned obsolescence; however my boss had his die in less than a year, which for a premium phone is just embarrassing.

However, this did happen not too long before the S4 started being advertised.

Posted Wed Apr 24 20:41:14 2013

Part 1 showed a rather trivial example of executing a Lua script. This time we're going to do something more interesting: the standard guess-a-number game, but instead of asking the user for input, a function is called to find the random number.

Basic implementation

Sources can be viewed here; the important functions follow.

check_match

This function is called by strategies to check a guess; its return value tells them whether to try higher or lower. It also counts how many times it is called, for statistics.

int check_match(int guess){
    call_count += 1;
    if (guess > target) {
        return 1;
    } else if (guess < target) {
        return -1;
    } else {
        return 0;
    }
}

strategy_iterate

This is a strategy function; it is given the check_match function and the range of values the random number is in.

int strategy_iterate(int (*check)(int guess), int lower, int upper){
    for (int i = lower; i <= upper; ++i){
        if (check(i) == 0){
            return i;
        }
    }
    return 0;
}

process_strategy

This takes arguments that have been processed on the command line, generates a random number, then calls the strategy, and prints stats on how often check_match was called.

int process_strategy(int min, int max){
    int guess;
    target = rand_range(min, max);

    guess = strategy_iterate(check_match, min, max);
    if (guess != target){
        fprintf(stderr, "Incorrect guess, got %d, wanted %d\n",
                guess, target);
        return 1;
    }

    printf("Guessed %d in %d attempts\n", target, call_count);

    return 0;
}

Strategy via Lua script

As is the point of this article, we're going to embed Lua in this program.

The main function now handles a -f strategy.lua parameter, the filename of a Lua script to run instead of the strategy_iterate function.

I won't show the diff, because it's boring, but you can see it here.

process_strategy changes

It now gets the path to the Lua script that should be used as the strategy, and creates a Lua state.

-int process_strategy(int min, int max){
+int process_strategy(char const *strategy, int min, int max){
    int guess;
+   int exit = 0;
+   lua_State *L = luaL_newstate();
    target = rand_range(min, max);

There's also a new exit code variable, since cleaning up the created state happens at the end. Yes, gotos are involved; if they're good enough for Linux, they're good enough for you.

    printf("Guessed %d in %d attempts\n", target, call_count);
-   return 0;
+cleanup:
+   lua_close(L);
+   return exit;
 }

Calling the strategy has become significantly more complicated.

-   guess = strategy_iterate(check_match, min, max);
+   if (luaL_loadfile(L, strategy)){
+       char const *errmsg = lua_tostring(L, 1);
+       fprintf(stderr, "Failed to load strategy file %s: %s\n",
+               strategy, errmsg);
+       exit = 1;
+       goto cleanup;
+   }

Instead of using luaL_dofile, I'm using luaL_loadfile. dofile loads and executes a file in one function. For my use-case I need to pass parameters in as if it were a function, and luaL_loadfile will turn a file into a function to be called later.

Arguably I could put the values I want into the Lua global environment, but I don't like global state, and it's cool that you can treat a file like a function in Lua.

+   lua_pushcfunction(L, check_match);
+   lua_pushinteger(L, min);
+   lua_pushinteger(L, max);

This pushes the parameters onto the stack: the function for checking a guess, and the range of values the target is in.

lua_pushcfunction makes our slightly modified check_match function into something that can be called in Lua.

Anyway, we call our function with lua_pcall; the 3 is the number of parameters on the stack it's called with, and the 1 is the number of return values expected.

It returns nonzero if the function call fails, in which case it leaves an error message on the stack.

+   if (lua_pcall(L, 3, 1, 0)){
+       char const *errmsg = lua_tostring(L, 1);
+       fprintf(stderr, "Failed to execute strategy file %s: %s\n",
+               strategy, errmsg);
+       exit = 1;
+       goto cleanup;
+   }

If calling the function works, it leaves the guess on the stack; here it's retrieved and checked, at which point the code is back to where it used to be.

+   guess = lua_tointeger(L, 1);
    if (guess != target){
        fprintf(stderr, "Incorrect guess, got %d, wanted %d\n",
                guess, target);
-       return 1;
+       exit = 1;
+       goto cleanup;
    }

check_match changes

check_match is functionally the same, except its calling convention has changed to that of Lua C functions, where the parameters are passed in the lua_State object and the C return value is the number of values returned on the Lua stack.

-int check_match(int guess){
+int check_match(lua_State *L){
+   int guess = lua_tointeger(L, 1);
    call_count += 1;
    if (guess > target) {
-       return 1;
+       lua_pushinteger(L, 1);
    } else if (guess < target) {
-       return -1;
+       lua_pushinteger(L, -1);
    } else {
-       return 0;
+       lua_pushinteger(L, 0);
    }
+   return 1;
 }

strategy_iterate changes

The strategy_iterate function is gone; instead it's replaced by a Lua script.

local check, lower, upper = ...

for i = lower, upper do
    if check(i) == 0 then
        return i
    end
end

return 0

local a, b, c = ... is the syntax for unpacking a tuple. When a file is used like a function, you can't name the parameters, but it's functionally similar to having the parameters declared like function(...).

Now running ./higher-lower -f higher-lower.lua is functionally equivalent to before, but how the numbers are guessed isn't hard coded, so we can start experimenting.

Goodbye globals, and good riddance!

Before we actually do that, I want to get rid of the global variables used to avoid passing around the target value and the call count.

The full changes can be viewed here, but the important parts are how the lua function is created, and how it's called.

Here, instead of creating it just as a function, it's created as a "closure". If you're not familiar with function closures, they're a useful way to create stateful functions.

In this case we're adding the target and a reference to the call count to the state of the function, in what Lua calls upvalues.

-   lua_pushcfunction(L, check_match);
+   lua_pushinteger(L, target);
+   lua_pushlightuserdata(L, &call_count);
+   lua_pushcclosure(L, check_match, 2);

As you can see, the header of the check_match function has changed: the target is no longer a global variable, it's extracted from the lua_State and put on the C function's stack; likewise the call count is extracted, and later incremented through the pointer.

 int check_match(lua_State *L){
+   int target = lua_tointeger(L, lua_upvalueindex(1));
+   int *call_count = lua_touserdata(L, lua_upvalueindex(2));
    int guess = lua_tointeger(L, 1);
-   call_count += 1;
+   (*call_count) += 1;

More interesting strategies

There's only one more change needed before interesting strategies can be tried: previously the environment the strategy is called in was completely bare. To fix this, call luaL_openlibs; you can see where in the changelog.

interactive.lua

This script asks the user for the value to guess, bringing it more into line with the standard guessing game.

local check, lower, upper = ...

while true do
    print(("Guess a number between %d and %d"):format(lower, upper))
    local guess = io.read("*n")
    local result = check(guess)
    if result == 0 then
        return guess
    end
    if result < 0 then
        print("Too low")
        lower = guess
    elseif result > 0 then
        print("Too high")
        upper = guess
    end
end

This can be easily made into a binary search, allowing the correct number to be guessed in ceil(log2(upper - lower)) attempts.
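
For example, a binary-search strategy might look like this; this is my own sketch, not necessarily the version in the linked commit:

$ cat >binary.lua <<'EOF'
local check, lower, upper = ...

while lower <= upper do
    local mid = math.floor((lower + upper) / 2)
    local result = check(mid)
    if result == 0 then
        return mid
    elseif result > 0 then
        upper = mid - 1  -- too high
    else
        lower = mid + 1  -- too low
    end
end

return 0
EOF
$ ./higher-lower -f binary.lua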

It's also possible to guess the correct value without ever calling check; the answers are in the previously linked commit.

Links

Source code for this example can be found at git://git.gitano.org.uk/personal/richardmaw/lua/higher-lower.git

Posted Thu Apr 18 23:31:05 2013

This is going to be a little different to my usual rants about build systems.

Codethink does Friday talks about varying topics, so people get practice speaking and knowledge is shared. My talk is going to be about embedding the Lua interpreter in a C program.

Starting at hello

We need somewhere to start, so here's a hello world, including a makefile.

Here's hello.c

#include <stdio.h>

int main(void){
    printf("Hello World!\n");
    return 0;
}

Here's the Makefile

CC = gcc
OBJS = hello.o
BINS = hello

all: $(BINS)

hello: $(OBJS)

.PHONY: clean
clean:
    $(RM) $(OBJS) $(BINS)

Install your distribution's build-essential and run make to build it. Run it as follows:

$ ./hello
Hello World!

Embedding Lua

Build-system changes

I'm going to use LuaJIT; it's API- and ABI-compatible with Lua 5.1, but faster. To install the required libraries in Ubuntu 12.04, run sudo apt-get install luajit libluajit-5.1-2 libluajit-5.1-dev.

To let the built executable use luajit, add the following to the Makefile.

CFLAGS = `pkg-config --cflags luajit`
LDLIBS = `pkg-config --libs-only-l luajit`
LDFLAGS = `pkg-config --libs-only-L luajit`

Adding CFLAGS lets you include the headers; LDLIBS adds the libraries to the linker command; LDFLAGS lets the linker find the libraries.

hello.c changes

I'm not as familiar with the C API as I am with the Lua language itself and its standard libraries, so I refer to The Lua 5.1 Reference Manual.

Given the nature of the changes, hello.c is almost entirely rewritten, so I'm going to show the new version commented inline.

#include <stdio.h>

/* lua.h provides core functions, these include everything starting lua_ */
#include <lua.h>
/* lauxlib.h provides convenience functions, these start luaL_ */
#include <lauxlib.h>

int main(void){
    /* The lua_State structure encapsulates your lua runtime environment
       luaL_newstate allocates and initializes it.
       There also exists a lua_newstate, which takes parameters for a custom
       memory allocator, since the lua runtime needs to allocate memory.
       luaL_newstate is a wrapper, which uses the standard C library's realloc,
       and aborts on allocation failure.
     */
    lua_State *L = luaL_newstate();
    if (!L) {
        perror("Creating lua state");
        return 1;
    }

    /* The following executes the file hello.lua, which returns the string
       intended to be printed.
       Likewise it is an auxiliary function, which reads a file,
       compiles the contents into a function, then calls it.
     */
    if (luaL_dofile(L, "hello.lua")) {
        /* dofile returns a string on the lua stack in the case of error
           which says what went wrong, a pointer to it is retrieved with
           lua_tostring, this exists inside the lua_State structure, so
           if you need it after the next call to lua you have to copy it
         */
        char const *errmsg = lua_tostring(L, 1);
        fprintf(stderr, "Failed to execute hello.lua: %s\n", errmsg);
        return 2;
    }

    /* A file can take parameters and return values like a function.
       The result is passed to printf, so IO is done outside of lua.
     */
    char const *result = lua_tostring(L, 1);
    if (!result) {
        fprintf(stderr, "hello.lua did not return a string\n");
        return 3;
    }
    printf("%s\n", result);

    /* lua_close frees up any resources used by the lua runtime
       This is not necessary here, since the program is about to exit.
     */
    lua_close(L);
    return 0;
}

hello.lua

To completely reproduce behaviour before we over-complicated "Hello World", write return "Hello World!" to hello.lua.

$ echo 'return "Hello World!"' >hello.lua
$ ./hello
Hello World!

Right now this is kind of pointless; however, Lua is a complete programming language, so to get the sum of the numbers 1 to 10, do the following.

$ cat >hello.lua <<'EOF'
local sum = 0
for i=1, 10 do
    sum = sum + i
end
return sum
EOF
$ ./hello
55

Links

The source code for this is available at git://git.gitano.org.uk/personal/richardmaw/lua/binding-lua-from-c.git. The different steps are tagged.

Posted Thu Apr 11 22:15:44 2013

What is it?

Gettext is a library used by GNU projects for internationalisation. Before you print a string, you wrap it, like printf(_("Hello")).

It then looks up these strings and finds appropriate replacements for the current language.

I don't pretend to know how it works very well, and this is not a criticism of the library itself; it's a criticism of its build system.

Ok what's wrong with the build system then

Well, to start with it uses gnulib, so it's off to a bad start.

autogen

First up is the ./autogen.sh script, which is used to generate the ./configure script from code in version control. Many projects have this pre-configure step, which, if the project uses autotools, is usually nothing more than autoreconf -fiv.
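
For most autotools projects, the whole of autogen.sh could be:

#!/bin/sh
exec autoreconf -fiv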

Gettext runs a full build, then does a distclean. This is decidedly sub-optimal for automated builds.

archive.dir.tar

archive.dir.tar is used by autopoint, which copies gettext infrastructure, so you can gettextize a project.

It's designed to be able to support different versions of gettext. This is possible, since the archive contains previous versions of the infrastructure.

One of the possible formats of the archive is a git repository, with tags for the different versions of gettext.

This archive is not kept in version control. Releases include it; however, to make a release of gettext, you have to copy whatever random archive you had on your host system.

This of course implies gettext is inherently unreproducible.

What's the alternative?

I don't know of a more sane project, but it's free software, so you could fork gettext and give it some sanity.

To start with I'd rip out gnulib, then I'd make autogen only generate enough to be able to run ./configure. Then I'd make it generate an archive with just the latest release, and fix anything that requires older versions.

However, in the short term I'll just avoid localising software, since I can't stand the tools.

Posted Sat Mar 16 23:56:14 2013

What it is

gnulib is one of the ways GNU software deals with unixes that don't use glibc as their C library runtime. It provides stubs for projects that use functions that aren't universally available.

Similar to this was libiberty, which had much the same goal, but is now pretty much just used by gcc and binutils.

Why it's bad

The reason why it's a terrible idea is that it isn't a library that you compile and link to; it's a bunch of C source code that you link into your source tree at build time and statically link into your executable.

A plethora of tags

It can't become a library because its interface is too unstable; everything that uses it relies on a specific version, and is otherwise likely to break. Just look at the list of tags in its repository.

git ls-remote git://git.savannah.gnu.org/gnulib.git will show you every branch and tag they have. We mirrored gnulib for Baserock, so you can also see it at git://git.baserock.org/delta/gnulib.git.

Submodules

It's usually included in a project as a git submodule, since this ensures that the right version of gnulib is used. As part of bootstrapping the source code from version control, the ./bootstrap script clones the submodule (or just plain clones the repository if the source code isn't maintained in git).

After this it copies or symlinks any "modules" you requested, modules meaning random bits of source code you want to re-use.
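
The copying is typically done via gnulib's gnulib-tool; invoked by hand it looks something like this (the module names are just examples):

$ gnulib-tool --import getline strndup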

Worse is something depending on gnulib being installed, which I have seen, but can't recall off the top of my head where.

Unversioned translations

Unless you tell it --skip-po, it will download translations at build time from http://translationproject.org, which leaves you with the problem that these translations aren't version controlled.

This causes problems for continuous integration, since your builds can fail independently to your source code, because the translations are broken, or the server hosting them is down.

This makes projects that use gnulib difficult to build only from the contents of version control, which causes unnecessary friction: the developers are still working on the tarball release model, which, with today's version control, is antiquated.

Alternatives

If you want such a compatibility library, I would recommend you try GLib instead; a dependency on GLib is easier to integrate into your build system than gnulib is.

An aside

Another issue I have with gnulib is its commit history. If you're not careful, a clone of gnulib can be huge, since every change is entered in its ChangeLog; unless you tell git to repack, it will store a million slightly different ChangeLog files of ever-increasing size.

We stopped needing Changelogs when version control was invented.

This can compress quite nicely, but you've got to remember to do so. Before our source import tool was told to repack git mirrors, gnulib had a disk footprint of the same order of magnitude as the Linux kernel.

Also, the repack had to be told to limit the amount of memory it used, since it would easily exhaust the system's resources and die because it couldn't allocate enough memory.

Posted Thu Mar 14 22:05:34 2013

I work for a company called Codethink; we're doing an embedded Linux operating system.

We're building our own build system as part of a project called Baserock, with our own build recipes based on upstreams' git repositories. As such, you get to see some horrible build systems.

This is why I started this blog: I need somewhere to rant.

Posted Sun Mar 10 19:17:24 2013

This blog is powered by ikiwiki.