Most people don’t like flying, I think. No one likes standing in long lines or sitting around, but I don’t mind the extra free hours. It gives me a chance to hack on fun things I normally wouldn’t have time for. I’m on my way back home from San Francisco, so I took advantage of the downtime to start building a simple RPC protocol in OpenResty. It’s been a good chance to work with binary data and exercise the Nginx stream module. I’m tossing some notes and snippets in here as an outlet; I’m hoping to have a more formal follow-up in a few weekends as life settles back to normal.
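For a sense of what this kind of hacking looks like, here’s a minimal sketch of a stream-module handler using stream-lua-nginx-module. This is illustrative only, not the protocol from the post: the listen port and the 4-byte big-endian length prefix are arbitrary assumptions on my part.

```nginx
stream {
    server {
        listen 9000;

        content_by_lua_block {
            -- grab the raw downstream socket (full duplex)
            local sock = assert(ngx.req.socket(true))

            -- read a 4-byte big-endian length prefix...
            local hdr = assert(sock:receive(4))
            local b1, b2, b3, b4 = hdr:byte(1, 4)
            local len = b1 * 2^24 + b2 * 2^16 + b3 * 2^8 + b4

            -- ...then the payload itself, and echo both back
            -- with the same framing (a trivial request/response)
            local payload = assert(sock:receive(len))
            assert(sock:send(hdr .. payload))
        }
    }
}
```

Manually unpacking bytes with `string.byte` is the simplest portable option in LuaJIT; a real implementation would likely reach for the FFI or a struct-packing library instead.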
Until fairly recently, Nginx development was something of an adventurous journey. Official documentation was largely nonexistent; Evan Miller’s decade-old guide was the oft-referenced canonical source of material. An authoritative development guide was published only a few years ago, significantly lowering the barrier to entry for third-party developers. It’s an excellent resource, but it doesn’t cover every aspect of authoring and extending Nginx, meaning that complex or uncommon features still require a bit of blog browsing, copypasta, and diving into the Nginx source to figure out what’s going on.
Module feature testing is one of these aspects. Writing simple Nginx modules is straightforward, but discussion of module config files is often glossed over or ignored entirely. The canonical development guide does a reasonable job of touching on some of the functionality and definitions available to config files, but it’s short on examples, and it ignores a crucial aspect of developing complex modules or integrations with third party libraries: feature testing.
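To make the idea concrete, here’s a sketch of what a feature test in a module’s `config` file can look like, using the `auto/feature` helper that Nginx’s configure script provides. The library (`libfoo`) and its header and function are hypothetical stand-ins:

```sh
# config file fragment: probe for a hypothetical libfoo at configure time
ngx_feature="libfoo"
ngx_feature_name="NGX_HAVE_LIBFOO"
ngx_feature_run=no
ngx_feature_incs="#include <foo.h>"
ngx_feature_path=
ngx_feature_libs="-lfoo"
ngx_feature_test="foo_init();"
. auto/feature

# auto/feature compiles the test snippet and sets $ngx_found
if [ $ngx_found = yes ]; then
    CORE_LIBS="$CORE_LIBS $ngx_feature_libs"
else
    echo "$0: error: the module requires libfoo."
    exit 1
fi
```

When the probe succeeds, `auto/feature` also defines the `ngx_feature_name` macro, so the module’s C code can guard optional functionality with `#if (NGX_HAVE_LIBFOO)`.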
Working on a WAF solution for the Nginx ecosystem provides a lot of opportunities for discussion, given that such work sits at a crossroads. Mixing high-performance engineering, WAF technologies, the ModSecurity DSL, and the OpenResty community puts lua-resty-waf in a unique context. I’m often asked about other WAF solutions for Nginx, including Naxsi, and how they compare to ModSecurity, lua-resty-waf (and other OpenResty security libs), and commercial offerings. I haven’t spent a lot of time working with the Naxsi project, but I’ve poked at it enough to start putting some thoughts on paper.
I’ve spent a good chunk of the last three months hacking away at ModSecurity compatibility in FreeWAF, and thanks to some employer sponsorship and noise from the community, the project is feature-complete and stable enough to cut the latest tagged release. It also wears a shiny new name tag: lua-resty-waf.
Some time ago I wrote a comparison of lua-nginx-module’s per-request context table and per-worker shared memory dictionaries. Silly me: that examination of ngx.ctx usage was pretty naive. Of course, constantly performing the magic lookup to reach the table will be expensive. What happens when we localize the table, do our work, and shove it back into the ngx API at the end of our handler?
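The pattern in question can be sketched as follows. This is a minimal illustration of the localize-then-write-back idea, with made-up bookkeeping fields, not a benchmark harness:

```nginx
location /t {
    content_by_lua_block {
        -- work against a plain local table: no metamethod
        -- dispatch, no per-access lookup into the ngx API
        local ctx = {}

        for i = 1, 100 do
            ctx.counter = (ctx.counter or 0) + 1
        end
        ctx.note = "some per-request state"

        -- pay the "magic" cost exactly once, at the end
        ngx.ctx = ctx

        ngx.say(ngx.ctx.counter)
    }
}
```

The hot loop touches only a plain Lua table; the single `ngx.ctx = ctx` assignment at the end is the one place the expensive lookup machinery gets involved.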
January tends to be a pretty quiet month in the admin/operations world. Most people are still coming back from holiday, new yearly plans are being made, meetings are held, and the server monkeys… sit and watch the graphs scroll by. The rest of the world’s gradual return to work means the start of a seasonal upswing, but we’re still in a relatively low point, so that generally means a light workload. That extra free time has given me a chance to put in a good chunk of work towards FreeWAF, cleaning up code, adding new features, and interacting with a total stranger (score!). I’ve just tagged a new release, v0.4, which provides a handful of new features that were sorely missing:
One of the advantages of a rotating graveyard schedule (two weeks of 10 PM shifts, followed by two months of normal living) is that quiet nights leave copious amounts of time to muck around on the Internet. One topic that’s piqued my interest over the last few days is tarpitting. Purposefully delaying responses seems more interesting than strict rate limiting; it carries a bit of a retaliatory attitude without causing any damage at the other end of the connection. Most of the writing I’ve found on the idea focuses on lower-layer implementations (e.g. the TARPIT iptables module) or SMTP, so I decided to roll my own for HTTP services.
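As a starting point, a crude HTTP tarpit can be sketched in a few lines of OpenResty. The trigger condition here (a hard-coded documentation IP) is a placeholder assumption; real logic would consult a blocklist or anomaly score:

```nginx
location /login {
    access_by_lua_block {
        -- placeholder check: stand-in for a real blocklist lookup
        if ngx.var.remote_addr == "203.0.113.7" then
            -- ngx.sleep yields the request without blocking the
            -- worker, so we can hold the offender's connection
            -- open cheaply before letting the request proceed
            ngx.sleep(30)
        end
    }

    # ... normal handling of the request follows
}
```

Because `ngx.sleep` is non-blocking, each tarpitted client costs the server little more than an idle connection, while the client burns thirty seconds per request.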