25 May 2023, 17:15

Building a golang program with cgo

Recently, I needed to debug a particularly nasty interaction between two programs, one of which was a go tool. To get further in understanding the issue, I had to compile a little test program with cgo, the dreaded (by go programmers) compilation mode that allows go programs to call C code. Unfortunately, it’s surprisingly difficult to find out what you concretely need to do in order to build a program with cgo.

As with all semi-taboo knowledge1, there seems to be a strong reluctance in the respective community to provide straightforward guidance on how to do what you want (in this case, build a program with cgo, which is discouraged and intensely disliked, but is an integral part of building several popular & working go programs). Well, we’ll show them!

There are two main things you need to do: First, explicitly enable cgo by setting CGO_ENABLED=1 in the compiler’s process environment. (This defaults to on, but your environment might have it turned off! Best to enable it explicitly.)

Second, you also have to give the compiler a reason, in code, to compile your program with cgo: You have to make an FFI call into C. The easiest way, which Andrew pointed me at, is to call a no-op C function from a go init() function.

This looks like the following:

package main

/*
int dummy(void) { return 42; }
*/
import "C"

func init() {
    C.dummy()
}

In order to make a program that can optionally be compiled with cgo or the native go compiler, I structured the cgo-enabling parts such that the init function above lives in a file enablecgo.go, which starts with a line that says // +build enablecgo. You can find the whole source code in my bug report repro repo.
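For illustration, here’s what the whole enablecgo.go file can look like (a sketch; newer Go versions prefer the //go:build spelling of the same constraint, so I’m showing both, and main() is assumed to live in another file):

//go:build enablecgo
// +build enablecgo

package main

/*
int dummy(void) { return 42; }
*/
import "C"

// The C call gives the compiler its reason: this file only builds
// with cgo, and only when the enablecgo tag is set.
func init() {
    C.dummy()
}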

To build the app from that repo, you use the following commandlines:

  • without cgo (native go compiler): go build ./
  • with cgo: env CGO_ENABLED=1 go build -tags enablecgo

If you landed on this page, I hope it can help you get further in your debugging journey, and hope that the pain stops soon.


  1. Like the existence of nonguix if you want to use non-free software in Guix; choice tagline being “Please do NOT promote or refer to this repository on any official Guix communication channels”, which already tells you that stuff is juicy and there will be tons of terrible drama if it’s brought up. ↩︎

27 Jul 2019, 01:36

basename and dirname in Rust

I recently did some minor file name munging in Rust, and was reminded that one of the hard parts about learning a new language is the differences in vocabulary.

In UNIX, there are two command line tools, basename and dirname. They take a pathname as an argument and print a modified pathname to stdout, which is really handy for shell scripts. Several other languages copied that naming convention, and so I was really surprised to find that googling for rust dirname didn’t return anything useful1.

Here’s a usage example: Say you have the pathname /etc/ssh/sshd.config. Using dirname on it prints /etc/ssh, and basename prints sshd.config. Ruby, Python and Go all follow a similar pattern (ok, go calls its functions Dir and Base). Rust does not - it calls them something else2.

In Rust, the functions live under the Path struct and are called parent (the dirname equivalent), and file_name (the basename equivalent).
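A quick sketch of both, using the example path from above:

use std::ffi::OsStr;
use std::path::Path;

fn main() {
    let path = Path::new("/etc/ssh/sshd.config");

    // parent is the dirname equivalent (None for a bare root path)
    assert_eq!(path.parent(), Some(Path::new("/etc/ssh")));

    // file_name is the basename equivalent (an Option<&OsStr>)
    assert_eq!(path.file_name(), Some(OsStr::new("sshd.config")));
}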

These names make sense! They’re just way outside the range of vocabulary I’m used to.


  1. Maybe now that this post is published, it will! ↩︎

  2. Rust used to have functions under these names, up until late 2014-early 2015, but then the “Path reform” happened, which normalized the API a great deal and renamed a bunch of functions. ↩︎

27 Oct 2018, 01:50

Editing rustdoc comments in emacs

I’ve been writing a bunch of rust code lately, and it’s been a pretty great experience! The thing I enjoy most about it is that the documentation looks just so extremely good.

Which brings me to my major point of frustration with my rust-writing setup: Writing doc comments in emacs’s otherwise excellent rust-mode is a pain. You always have to insert the doc comment character sequence on every line, and writing doctest examples was even worse: You write rust code, inside markdown, in rust comments. Add smartparens and other helper packages, and editing gets really annoying pretty fast.

So, I decided to look around for solutions, and found something pretty cool: Fanael’s edit-indirect is an emacs package that will take lines from the current buffer, put them into a new buffer, transform them, apply a major mode, and then let you edit them. When you’re done, you apply the changes back to the original buffer. If this sounds like org-edit-src-code, that’s because it’s directly inspired by it. (-:

So I wrote this piece of elisp glue to help my rustdoc editing experience, and so far it’s pretty great: Navigate to a rustdoc comment, hit C-c ' (the same keys you’d use in a literate org file), up pops a buffer in markdown-mode; edit that and then hit C-c ' again to apply the changes back to the original buffer. Easy!
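The glue itself is small. As a toy sketch of the shape it can take (not the actual code - my/rustdoc-edit is a made-up name, and a real version would probably also want to strip and restore the /// prefixes), it could look like:

(require 'edit-indirect)
(require 'markdown-mode)

(defun my/rustdoc-edit ()
  "Edit the /// doc comment block at point in a markdown-mode buffer."
  (interactive)
  (let ((line-re "^[[:blank:]]*///"))
    (save-excursion
      (beginning-of-line)
      (unless (looking-at-p line-re)
        (user-error "Not in a rustdoc comment")))
    (let ((beg (save-excursion
                 (beginning-of-line)
                 ;; walk up to the first /// line of the block
                 (while (and (not (bobp))
                             (save-excursion (forward-line -1)
                                             (looking-at-p line-re)))
                   (forward-line -1))
                 (point)))
          (end (save-excursion
                 (beginning-of-line)
                 ;; walk down past the last /// line of the block
                 (while (and (looking-at-p line-re)
                             (zerop (forward-line 1))))
                 (point)))
          ;; have the pop-up buffer come up in markdown-mode
          (edit-indirect-guess-mode-function
           (lambda (&rest _) (markdown-mode))))
      (edit-indirect-region beg end t))))

(with-eval-after-load 'rust-mode
  (define-key rust-mode-map (kbd "C-c '") #'my/rustdoc-edit))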

If you write rust in emacs, I hope you’ll try this out and if you do, let me know how it works for you!

19 Nov 2017, 16:00

Enabling the F4 key in macOS

This problem has been a mystery to me, and I figure to a bunch of other people, too: If you hit F4 in Mac OS X (or macOS) since Lion, it does not have any effect. What.

It appears that the key (when hit without modifiers) is disabled for some reason. I mainly rely on the Function keys on my mechanical keyboard to switch windows in tmux, and, e.g., if you hit shift-F4 (the same thing, according to the terminal), it actually works.

There’s a bunch of forums that advise deleting ~/Library/Preferences/com.apple.symbolichotkeys.plist, which also removes all your custom app shortcuts. I have a bunch of those, and would prefer to keep them, thank you!

Turns out you don’t have to do that and can still get the desired behavior:

A milder fix

The main insight that led me to this fix is outlined in this post1: The symbolic hotkeys plist is a mapping of key codes to some parameters. So, after some experimentation, I cooked up this command line (if you try it, make sure you create a backup of the ~/Library/Preferences/com.apple.symbolichotkeys.plist file first!):

defaults write ~/Library/Preferences/com.apple.symbolichotkeys.plist AppleSymbolicHotKeys -dict-add 96 '{enabled = 1; value = {parameters = (96); type = standard; }; }'

This, I think, does the following: It adds key 96 to the plist (96 stands for F4, according to the krypted blog post), with a parameter that I can only guess makes it send the 96 keycode (and if it doesn’t, at least doesn’t do harm), as a “standard” key, and enables that key.
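To be on the safe side, back the file up before running the command, and read the entry back afterwards to check that it took (a sketch; adjust the backup path to taste):

cp ~/Library/Preferences/com.apple.symbolichotkeys.plist ~/symbolichotkeys.plist.backup

# the new entry should show up with enabled = 1:
defaults read ~/Library/Preferences/com.apple.symbolichotkeys.plist AppleSymbolicHotKeys | grep -A 4 '96 ='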

After logging out and back in, pressing my F4 key unmodified works, and all my custom app shortcuts are still there. Win!

Do let me know if this works for you!


  1. This post does not have any attribution on it, but it appears that it is written by Charles Edge. Thanks, Charles! ↩︎

10 Dec 2016, 16:47

Configuring iTerm2 for mosh: URLs

I use a Mac as my main typing/character-displaying computer, and on macOS, iTerm2 is the best terminal emulator that I’ve found so far. In addition to iTerm2, I also use mosh, the mobile shell, to get a fast, interactive and disconnection-resistant SSH-like connection to hosts on which I need to use the commandline.

So, in order to make getting to these hosts fast, I’ve made something that sets up bookmarks which open a new terminal window for me: The ruby gem ssh_bookmarker runs in a LaunchAgent anytime my ~/.ssh/known_hosts or ~/.ssh/config files change and drops a bunch of bookmarks in a directory that gets indexed by spotlight.

Now, whenever I want to open a remote shell, I use spotlight and type the host name. Very handy! (You can also use open ssh://my.cool.server.horse and get a new iTerm tab with the SSH session in it, and that’s exactly what goes on in the background.)

That works perfectly for SSH (to see how to set this up, see the FAQ and search for “handler for ssh://”), but I’d like to do this with mosh or other custom URL schemes, too! This is not as readily available as ssh:// URL handling, but it can be done.

For about 5 years now, I’ve had to look up how to do this and cobble together a solution from various rumors, stackoverflow articles and digging through source code. No more! This time I’m blogging the solution so future-me can have an easier time of it.

Prerequisites

First, you’ll need iTerm2 - I use version 3.0.12, but the newer the better. Then, you’ll need mosh - I install it from homebrew, and the program location is /usr/local/bin/mosh.

Throughout this post, we’ll also be using the jq and duti tools; you can get them from homebrew, too.

The iTerm profile and its GUID

First, you’ll need an iTerm profile dedicated to mosh-ing. Any settings you want are ok, but you need to set this as the command: /usr/local/bin/mosh $$HOST$$

Now that you have this profile, you’ll need its GUID. The easiest way to get it is to export your new profile as JSON from iTerm’s Profiles preferences:

  1. Select the Mosh profile you just created.
  2. Open the “Other Actions” gear menu below the profile list.
  3. Select “Copy Profile as JSON”.

To figure out the profile’s GUID, run:

pbpaste | jq '.Guid'

This should print a UUID in double quotes. Make a note of that string! We’re going to use it as THEGUID below.

URL handling - LaunchServices

URL handling in macOS comes in two steps: First, when you run open somescheme://host/, LaunchServices looks up what program handles the given URL scheme. To set iTerm2 up as the handler for mosh:// URLs, I use duti:

duti -s com.googlecode.iterm2 mosh

At this point, running open mosh://my.cool.server.horse should open a new iTerm tab, but it won’t open a mosh connection yet. What else do we need to do?

URL handling on iTerm’s end

Once iTerm gets instructed to open a mosh:// URL, it looks up the URL scheme in its scheme<>profile mapping. Since mosh is not in there yet, let’s fix this (replace THEGUID with the output from jq in the GUID section):

defaults write com.googlecode.iterm2 URLHandlersByGuid -dict-add mosh THEGUID

And then restart iTerm2.
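If you’d rather not shuffle GUIDs around by hand, the whole mapping can be scripted (a sketch, assuming the profile JSON from step 3 is still on the clipboard; jq -r strips the quotes):

# grab the GUID off the clipboard
THEGUID="$(pbpaste | jq -r '.Guid')"

# register iTerm2 as the handler for mosh:// URLs
duti -s com.googlecode.iterm2 mosh

# point the mosh scheme at the profile
defaults write com.googlecode.iterm2 URLHandlersByGuid -dict-add mosh "$THEGUID"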

Success

If all this worked correctly and all the IDs line up, running open mosh://my.cool.server.horse should open a new iTerm window running mosh, attempting to open a connection to a cool example server.

Next steps

You can save yourself the trouble of keeping track of these GUIDs, especially if you use some sort of management tool (like ansible) to automatically set up your Macs. I have started experimenting with Dynamic Profiles and specifying GUIDs as host names, and that might have some pleasing results, too. I’ll post an update when I get this fully working.

Also, this doesn’t yet work for mosh:// URLs with a user name specified (or rather, the user name gets ignored and only the host part gets passed to mosh). It’s likely that you’ll have to wrap the mosh tool with another tool in order to get that to work.

In the meantime, I hope you enjoy.

21 Jan 2016, 18:57

Better filters for gmail with google apps scripts

At my workplace, we use github pretty extensively, and with github, we use organization teams. They allow assigning permissions on different repos to groups of people, but they are also a really great way of @-mentioning groups of people. This is wonderful, but sadly, github doesn’t make it easy for gmail filters to tell the difference between an email notification that you got because it was interesting to you, and one that you got because somebody sent a heads-up @-mention to a team you’re on.

I thought that was impossible to solve, but I was so wrong!

The setup: github notification email basics

Github makes it relatively easy to opt into getting all sorts of notifications that might interest you. Sadly, it doesn’t make it easy to stop it from notifying you about things that aren’t of interest to you anymore: Either you can’t turn off a notification in the first place, or you have to visit every single thing that it notifies about and hit “Unsubscribe”. Not optimal!

In theory, it should be easier to filter github’s notification emails by relevance than it is to filter on their webface; at least with emails, you can use third-party filtering tools, right?1

If you’re using gmail, you’re shaking your head now (as I did): All the criteria that you could usefully use in gmail filters (From address, Subject, To address) are the same across all sorts of notifications you get from github. Ugh.

However, they do set a header field, X-Github-Reason: It is set to team_mention if the sole reason you’re getting an email is because somebody mentioned one of your teams (not because you subscribed to an issue on purpose, say). However, there’s a snag: Gmail can’t match on that with its default filters.

Fortunately, Lyzi Diamond has written up a wonderful, and completely working solution to this problem using a mechanism that I was vaguely aware of in the past, but didn’t look at in detail: Google Apps Scripts.

(Go on, read her article; I’ll wait.)

Google Apps Scripts?!

Some time ago, Google made Google Docs, and for some reason they added a feature where you can edit JavaScript software projects (it’s mostly ok; the editor is no Emacs, but you can get by). And they also added a facility that lets you trigger those scripts at regular intervals, say once a minute. And they added lots and lots of bindings into their Apps For Business product suite, with much better functionality than they expose in their user-facing APIs.2

In effect, Apps Scripts are really powerful cron jobs that google runs for you, and which can process your email.

My current github notification filter setup

So, as you may have gathered above, I have Opinions on how a notification should affect my life:

  • If a person in the work org writes in about one of “my” issues or pull requests, I would like to know immediately (this means the email should go into my inbox).

  • Same if they @-mention me personally. This probably means they’re blocked, or need help or are asking for a review.

  • If somebody @-mentions only a team I’m on, the email should be available under a label, but not go into my inbox.

I’ve modified Lyzi’s script for my purposes (also, I made it parse simple RFC822 headers, but not multi-line ones). The resulting script is in this gist.
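The gist has the real thing; stripped way down, the core of it looks roughly like this (a sketch - the label names are from my setup, and the routing logic is simplified):

// Runs from a timed trigger. Routes freshly-labeled github mail:
// team-mention-only notifications stay out of the inbox.
function processMessages() {
  var incoming = GmailApp.getUserLabelByName('_github_incoming');
  var teamLabel = GmailApp.getUserLabelByName('github-team-mention');

  var threads = GmailApp.search('label:_github_incoming');
  for (var i = 0; i < threads.length; i++) {
    var raw = threads[i].getMessages()[0].getRawContent();
    // pull the reason out of the raw headers (single-line headers only)
    var match = raw.match(/^X-Github-Reason:[ \t]*(.+)$/mi);
    var reason = match ? match[1].trim() : '';

    if (reason === 'team_mention') {
      teamLabel.addToThread(threads[i]);   // findable under the label...
    } else {
      threads[i].moveToInbox();            // ...or surfaced, per the rules above
    }
    incoming.removeFromThread(threads[i]);
  }
}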

Setting this up in your gmail account

This is a pretty manual process, sorry there’s no shell script you can pipe into bash (-:

  1. Create a gmail filter to match from:notifications@github.com that assigns a label (mine is _github_incoming) and archives the email. (The google apps script will send github notifications to your inbox according to the criteria above!)

  2. Create a new script project and copy/paste the script from my gist as indicated in Lyzi’s blog post. It has screenshots! It’s great!

  3. Adjust the variables at the top to reflect the labels that you want email to be tagged with.

  4. Set up a trigger: I set mine up to call processMessages once a minute.

  5. Set up notifications for that trigger: If anything should go wrong (I have a bug, there was a syntax error while pasting), you should get a notification. Click on “notifications” and set up a notification to email you hourly (or immediately if you like to get lots of email in case something goes wrong).

That’s it! Now your inbox should accumulate much less clutter!

I am pretty impressed with the things that Apps Scripts can let you do; my dream is a thing that cleans out email in small batches during off-hours (since bulk-deleting hundreds of thousands of messages can render your account unusable for hours). Maybe I’ll experiment with this soon!


  1. For my purposes, I’m focusing only on filtering out notifications that I’m getting solely because a team name that I’m on is @-mentioned in a pull request; you could imagine all sorts of other, more complex criteria! ↩︎

  2. Just look at the meager offerings in the public API for managing gmail filters; you can create filters… and that’s it. I could go on about this API for days. ↩︎

04 Jan 2016, 19:03

Deptyr, or how I learned to love UNIX domain sockets

Let’s say you have a program that needs to do I/O on a terminal (it draws really nice ascii graphics!), but it usually runs unsupervised. If the program crashes, you want a thing like s6 or systemd to restart that program. The problem here is the terminal I/O: Since most process supervision tools redirect standard I/O to a log file, the wonderful terminal graphics just end up being non-ascii chunder that confuses you if you try to tail the log file.

My usual approach would have been to start the program under screen (screen -D -m if you’re interested), but that way you lose part of your process supervision tools’ capabilities: There’s a process in between the supervisor and your actual program, so you can’t send e.g. SIGKILL with your standard tools (e.g., svc -k /svc/your-tool) to force it to exit.

However, this approach is generally what I want – I’d like the crashy program to run under a pseudo terminal like screen to have its I/O be available elsewhere, and also make the pseudo-terminal’ed process be a direct child of the process supervisor. One feels reminded of a cake that is had & eaten.

I searched up and down, and besides some djb announcement in the early 90s of a tool that might be made to do what I want (which doesn’t compile under modern OSes anymore, and is also fantastically underdocumented), I didn’t find anything. screen -Dm was my best bet, but ugh! Time to see if we can do something hilarious with UNIX semantics. Spoiler: We totally can.

First: Pseudo Terminals - how do they work?

Pseudo Terminals (aka pseudo TTYs or PTYs) are a fun and kinda horrible facility in UNIX: A process can allocate a PTY, and it gets a controlling and a client end1. If you’re writing a terminal-emulation program like xterm, it would keep the controlling end - this is what allows it to read what’s being written to the client end and send text to the client, as if that text appeared in a real terminal. Your terminal emulator would pass the client end to a shell session and then read what the shell sends to stdout or stderr.2

The one thing you really need to know about PTYs here is that the controlling and the client end both come as UNIX file descriptors. They’re a number attached to a process, much like file handles, sockets or other silly things you can use with read/write.

So, my thinking goes: Let me write a little UNIX tool that sets up a new PTY, then sends the controlling end to another process, then retains the client end for itself and calls exec to start my crashy program. Calling exec doesn’t adjust the process hierarchy, and would be exactly what other tools do to start programs under process supervision.

If only there was a way to send that controlling end elsewhere…

But… uh, can you send the controlling end of a PTY to another process? Turns out you can!

UNIX domain sockets3 are what they call a socket facility (“Internet” is another socket facility). These are file-like objects that behave almost exactly like real network sockets to localhost - they have two ends, you can send and receive data via sendmsg and recvmsg, but they have a few more functions! One is that one end can query the other end’s user ID and other authentication data.

Another cool function of UNIX domain sockets is that you can send structured data like file descriptors over them. Remember file descriptors? Both ends of a PTY are file descriptors!

Yay! Just send the controlling end of the PTY through a UNIX domain socket to a process that’s running under a terminal emulator like screen! We can do this!
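The fd-passing itself is the standard SCM_RIGHTS dance with sendmsg; here’s a condensed sketch of the sending side (the receiver mirrors it with recvmsg):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send file descriptor fd across the connected UNIX domain socket sock,
 * as ancillary data attached to a one-byte message. Returns 0 on success. */
int send_fd(int sock, int fd) {
    char dummy = '!';  /* at least one byte of real data must go along */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {            /* correctly aligned buffer for the control message */
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } ctrl;
    memset(&ctrl, 0, sizeof(ctrl));

    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;  /* this message carries file descriptors */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}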

Oh right: Prior art & introducing deptyr!

My amazing colleague Nelson had already written a tool called reptyr, which did the things I wanted to do, just almost exactly in reverse: It uses ptrace to attach to a process that’s running under another terminal and force it to set up a new PTY, then makes the process send the controlling end to reptyr through a UNIX domain socket so it can proxy your input and the process’s output.

Since reptyr’s code base is geared towards doing just that re-PTY-ing of existing programs (it’s really not my pun), I decided to rearrange it in a new tool for starting processes headlessly, called deptyr.

Deptyr has two modes of operation: One is to act as the “head”: It’s the thing that receives the controlling end of a PTY and acts as a proxy for your program’s output & any user input.

The other mode is the one that runs under process supervision - it sets up a PTY, connects to the “head” deptyr, and then execs your program with stdin/stdout redirected.

Once I’ve got the original thing I wanted working, I’ll post an update with the config I used to actually run it under supervision. Initial experiments point to yes, but we’ll see (-:


  1. the standard terminology for the controlling and client end is the “master” and “slave” ends. I find the standard terms extremely distasteful; in addition to extreme lack of taste, they don’t even correctly convey what’s going on, so controlling/client ends it is. ↩︎

  2. This is what tools like screen and xterm do! It’s pretty interesting to learn about this in detail – it’s pretty easy to run into a situation where you want to control a tool like a terminal emulator would. Sadly, I don’t know a lot of literature on PTYs. Send me your favorites! ↩︎

  3. Beej has a pretty good intro to programming UNIX domain sockets. ↩︎

02 Jan 2016, 20:24

Hosting my blog on Google App Engine with Letsencrypt

Editing my last post in Octopress was such a pain that I decided to switch the blog over to Hugo. While doing that, I decided that the yak stack wasn’t deep enough and that I should be moving my blog to https in the process. Here is my story (and links to automation shell scripts!)

(This is what happens when you give me a pot of black tea on New Year’s Day after 6 hours of sleep!)

The Yaks

I was hosting this blog on Amazon S3 - it’s static files, so that seemed reasonable. However, you can only host non-https sites on S3 - to get https, you have to use Cloudfront, and then that would require that cloudfront talks to S3 over http - that’s pretty ridiculous.

My colleague Carl found a great solution, though: If you write a tiny amount of configuration, and a go file containing package dummy, you can get Google App Engine (GAE) to host your weblog’s static files on their infra, with a reasonable HTTPS story!
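The configuration really is tiny. On the Go runtime of the day, it’s roughly this app.yaml (illustrative - the public/ directory name is whatever your static site generator spits out), next to a one-line dummy.go that declares package dummy so the app builds:

runtime: go
api_version: go1

handlers:
# everything is a static file; the go "app" itself never runs
- url: /(.*)
  static_files: public/\1
  upload: public/(.*)
  secure: always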

All that you need now is an SSL certificate, and hey - letsencrypt gives you free certificates with reasonable (and most importantly, automatable) processes - perfect!

Getting that SSL Certificate

The default letsencrypt client expects to run on your web server as root. Google app engine however doesn’t give you any of that - you get no web server, no code exec and most certainly no root.

This sounds displeasingly impossible, but thankfully, we don’t have to use the letsencrypt client, except to set up an account. Once I had the private key file, I used letsencrypt.sh by Lukas Schauer to automate the SSL certificate issuance process.

Background

This is how letsencrypt operates (they have a really really good technical document too, so feel free to skip this section): They first check that you have access to the domain that you request the certificate for, by providing you a challenge URL and a response body that they expect to get back when they hit that URL. Once they can see the right response (within a timeout), they issue a certificate for your private key.

The Automation Caper

With google app engine, we can deploy web apps, so I initially wrote a little go program that would respond to these requests and kept it under source control. This wasn’t great for a number of reasons, and the biggest one was that I had to copy/paste these tokens back and forth - a toilsome process.

Now, letsencrypt.sh has a “hook” facility for the certificate issuance process: It calls a shell script or function for every step of the challenge/response flow, and writing a script that does the right thing was pretty trivial (follow the links if you like bash scripts).
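The hook itself is short; schematically, it does something like this (a sketch - letsencrypt.sh passes the phase name first, then the domain, the challenge token’s filename, and the expected response value):

#!/bin/sh
case "$1" in
deploy_challenge)
  # $3 is the token filename, $4 the response body letsencrypt wants
  # to see under /.well-known/acme-challenge/ - write it there and
  # push a new version of the app:
  printf '%s' "$4" > ".well-known/acme-challenge/$3"
  make deploy
  ;;
clean_challenge)
  rm -f ".well-known/acme-challenge/$3"
  make deploy
  ;;
esac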

All this is held together by a kinda convoluted Makefile - here are the most important targets:

  • make deploy calls this script to generate the latest HTML, and deploy the app to GAE.
  • make certificates calls letsencrypt.sh with the right arguments and should allow me to renew the certificates that I created once they are closer to expiring (2016-03-31!)

Annoying Things That Cost Me Way Too Much Time

Two things in this setup were really pretty frustrating:

One, letsencrypt.sh requires a perl program to extract your regular letsencrypt client’s private key into usable format (they store its RSA parameters in JSON, everything else under the sun expects the key format to be PEM).

This perl program requires Crypt::OpenSSL::Bignum and ::RSA, which were serious pains to install under El Capitan. What I ended up doing was install openssl from homebrew and link the headers (which they place out of the way) into place so that the install process could find them, like so:

ln -sf /usr/local/opt/openssl/include/openssl/ /usr/local/include/openssl

With the symlink in place, these two modules could install, and I could finally convert the private key to the right format. (Finding the right combination of cpan and file system things took me about an hour, ugh.)

Conclusion: letsencrypt, your client’s private key format sucks & converting it into anything remotely useful is annoyingly difficult.

The second frustrating/unfamiliar thing that cost me time was that if you have two GAE apps (one for a live blog and one for a “test” blog) and a certificate that covers both blogs’ domains, you have to upload the same certificate to both apps so that the GAE custom domain picker can even refer to it.

Conclusion: The GAE SSL cert upload form is convoluted and annoying, and I really want an API for this.

How well does it work?

I could bring my blog up under SSL in less than 4 hours, and that included a bunch of hacking. If you use the automation scripts and the tricks for avoiding pitfalls I mentioned above, you should be able to get this running in far less time (I hope)!1

My weblog’s git repo is here. If you do use this, please let me know how it goes!


  1. I’ll probably write an update full of screams of frustration if cert renewal time comes and everything fails.2 ↩︎

  2. …but you won’t be able to read that update because my blog’s SSL config will be broken. So it goes! (-: ↩︎

19 Jan 2013, 00:00

Elixir: First Impressions

For the longest time now, I’ve admired Erlang from afar. It always seemed to be a bit daunting to take on. For one, there was the slightly weird and inconsistent Prolog-inspired syntax (I was always scratching my head over why this place needs a period and that place doesn’t), and then there was just plain weird stuff like one-based indexes.

While you don’t end up needing indexes very often, a nice syntax on top of Erlang is something I always kind of wanted, but nothing really could deliver. Then I saw Jose Valim demoing Elixir at Strange Loop 2012. It has a ruby-inspired (but more regular) syntax, it can do macros(!), it has protocols(!!!), and it has a very enthusiastic developer community behind it (see expm for an example of the packages that people have written/ported over to Elixir). That its data structures use zero-based index access certainly helps, too (-:

On top of all these nice things, it also lets you use any Erlang library (with only minimally less nice syntax by default). I think I’m sold.

What is all that hair on the floor?

As an initial just-for-fun project, I tried porting over the progress I’d made on a node.js-based gmail->localhost IMAP backup tool that I’d optimistically named gmail-syncer.1 So far, this has required a ton of yak shaving, but I’m enjoying the hell out of every single step down the fractal yak ranch.

  • First, there is no suitable IMAP client library. The thing that comes closest is erlmail. It is somewhat abandoned, and its IMAP client isn’t very usable for my purposes (doesn’t implement capabilities the way I need them, doesn’t really follow the one relatively sane guide to writing an IMAP client). So I’ll have to write my own IMAP interaction code.

  • To write my own IMAP code, I need to parse server responses; this requires parsing the highly weird IMAP protocol, with its somewhat lisp-inspired (but definitely not lispy) ideas of how to represent things. For example, the way a UID FETCH response looks makes it pretty impractical to tokenize & parse the response using a parser generator - unless you enjoy concatenating potentially dozens of megabytes of text that would do better to remain as an opaque binary buffer.

  • Hence, to parse server responses in a smarter way, I have to have a smarter parser. While that can use a pretty nice heuristic (despite its lispy nature, the IMAP server responses are specified to terminate in newlines at certain points), I still need it to cooperate well with something that manages buffers received from the network somewhat smartly. Aaaand that’s where I am right now.

Introducing gmail_synchronize, the tool that doesn’t do very much right now other than fill a buffer and let you read lines or N-byte-long binaries from it. But I’m sure there will be more stuff eventually (-:

To come this far, I’ve written some kilobytes of code (on various levels of the aforementioned yak stack) and thrown them away. The results in the git repo are the best I have come up with, so far. This isn’t much, and so you should take the following opinions with a mine of salt.

My impression of Elixir so far

Here’s a brain dump of what about the language stood out to me:

So far, I really like Elixir (and, by extension, Erlang). There’s a lot to be said about its pattern matching (which is as powerful as Erlang’s), but I don’t think I fully understand it yet. There’s a bit of terminology I still have to learn, but even at this level of (non-)proficiency, it’s making my job way easier.

There’s a very helpful channel on freenode, #elixir-lang. It has the creator of the language in it, and a bunch of very enthusiastic, knowledgeable and helpful people (hi, yrashk and cmn!). This has been invaluable in my learning to use the language.

I still don’t quite get why some of the decisions in it were made the way they were made. For example, it would seem natural to me to have a way to pattern-match binary buffers to test whether some bytes appear next to each other in the buffer, but there isn’t. I guess this may have to do with being able to unambiguously resolve the pattern, but it’s still a bit unsatisfactory. I’m sure this will pass as I learn more of its vocabulary and integrate it into mine.

Testing in Elixir is very cool. Instead of mocking or stubbing things like I would in, say, Ruby, I factor things such that tests can implement a protocol that the part being tested uses, and I’m set. I love protocols, and I think Elixir lets you use them in a very nice way. See here for how the tests interact with a library that follows a protocol. Note the re_buffered variable - in Ruby, I’d be using a method call expectation instead - this is way more satisfying.
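As a sketch of that shape (hypothetical names, not the actual gmail_synchronize protocol):

defprotocol Buffering do
  @doc "Refills the buffer, returning {new_source_state, binary_chunk}"
  def re_buffer(source)
end

# In tests, a plain list of prepared chunks stands in for the network:
defimpl Buffering, for: List do
  def re_buffer([chunk | rest]), do: {rest, chunk}
end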

Non-modifiable data structures are way less of a pain than I’d imagined (they are in fact pretty pleasing). The pattern matching makes things much easier to follow, and the way updates (which return a new object) work is also pretty cool: You can write stuff like:

some_record.buffer("foo").number(20)

…and this returns a record that is like some_record, except its buffer and number components are replaced by the values passed in the function argument list. Pretty pleasing.

I would not have been able to write code so relatively painlessly if it weren’t for the emacs mode that I’ve painfully adjusted to automatically indent Elixir code correctly. Emacs’s smie is really pretty cool, and I wish more emacs modes used it (-:

That’s all so far. I urge you to check out Elixir, and hope you have as much fun with it as I do!


  1. Why write a new tool over using offlineimap? Offlineimap is a huge pain - when used with gmail, it’ll sometimes run into UIDVALIDITY mismatches (which require a re-download of potentially huge mailboxes, which run for days), it’s slow, and its thread-based design is so horrible that it manages to mess up its own UI even when using a single worker thread, and then it can’t even exit cleanly on anything other than a SIGKILL. Arrrrgh. ↩︎

26 Dec 2012, 00:00

Write gmail filters in a nice Ruby DSL: gmail-britta

I’ve just finished (mostly) documenting and writing tests for my latest little library, gmail-britta, so I thought I should release it to the world as a sort of holiday gift.

Gmail-britta is a library that lets you write Gmail filters (hah, Britta, get it?) in a way that doesn’t drive you insane - you write them as a ruby program, run that program, and out comes XML that you can import into Gmail’s filter settings.
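For a taste, the shape of a gmail-britta program is roughly this (a sketch; the address and label are made up):

require 'gmail-britta'

fs = GmailBritta.filterset(me: ['me@example.com']) do
  filter {
    # match github notification mail...
    has %w(from:notifications@github.com)
    # ...then label it and keep it out of the inbox
    label 'github'
    archive
  }
end

# prints XML, ready to import into gmail's filter settings
puts fs.generate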

It does a bunch of other nice things, but I guess it’s better to let the README explain.

So far, I (and a few colleagues of mine) have been successfully using this for the past few months to generate filters for work email. Just yesterday I took the step and ported my 156 filters over to a gmail-britta program (yep, those are my filters, with sensitive email addresses stubbed out), resulting in 34 easier-to-maintain, more accurate filters.

If you’re interested, please give it a try. Also, please let me know in the issues if you find anything that it doesn’t do, or if you’re feeling super generous, please open a pull request and send me improvements!