Resty Core – The Good, The Bad, and the JIT-y

OpenResty’s biggest selling point is its performance. Embedding Lua allows some pretty cool features to pop into a simple Nginx proxy, and the synchronous-but-non-blocking paradigm that comes from hooking into the event loop (thanks, epoll!) is awesome. But the OpenResty stack as a whole really shines when everything is tied together in the service of approaching, crossing, and then shattering C10k. Out of the box, Lua code embedded in any number of phase handlers runs at an impressive speed, and with the flick of a switch we can kick things into even higher gear.
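
The switch in question is lua-resty-core, which reimplements much of the ngx.* API on top of LuaJIT’s FFI so the hot paths can actually be JIT-compiled. As a minimal sketch (recent OpenResty bundles the library; on older builds it must be installed separately), flipping it on is a one-liner in nginx.conf:

    init_by_lua_block {
        -- swap the default C-API-based ngx.* implementations for the
        -- FFI-based lua-resty-core versions, which LuaJIT can compile
        require "resty.core"
    }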

Continue reading

Benchmarking Simple String Comparison Options for OpenResty

Perhaps one of the most powerful primitives that lua-nginx-module provides out of the box is a sane, simple wrapper for regular expression operations (via PCRE). Matching, replacing (and now splitting!) strings via regex allows for much greater flexibility in string processing than Lua’s native string library. Recently, while cleaning up an OpenResty InfluxDB client, I needed to do some simple string comparison. My knee-jerk reaction was to use a simple expression in ngx.re.find, but I had a hunch that the overhead of the PCRE lib would be a waste, and that native Lua pattern searches would be quicker. Time for a benchmark to figure out the sanest solution!
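
A rough sketch of the kind of harness involved, run from a content handler; the sample string, needle, and iteration count here are arbitrary placeholders, not the post’s actual benchmark:

    local s = "server: openresty/1.19.9.1"

    ngx.update_time()
    local t0 = ngx.now()
    for _ = 1, 1e6 do
        ngx.re.find(s, "openresty", "jo")  -- PCRE, JIT-compiled and cached
    end
    ngx.update_time()
    ngx.say("ngx.re.find: ", ngx.now() - t0)

    ngx.update_time()
    t0 = ngx.now()
    for _ = 1, 1e6 do
        s:find("openresty", 1, true)       -- Lua string.find in plain (non-pattern) mode
    end
    ngx.update_time()
    ngx.say("string.find: ", ngx.now() - t0)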

Continue reading

Properly Scoping lua-nginx-module’s ngx.ctx

Some time ago I wrote a comparison of lua-nginx-module’s per-request context table and shared memory dictionaries. Silly me: that post’s treatment of ngx.ctx usage was pretty naive. Of course constantly performing the magic lookup to reach the table is expensive. What happens when we localize the table, do our work, and shove it back into the ngx API at the end of our handler?
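
Something like this, as a minimal sketch of the pattern in question (the handler fields here are made up):

    -- pay the expensive metamethod lookup once...
    local ctx = ngx.ctx

    -- ...then do all the work against a plain local table
    ctx.auth_user = "alice"
    ctx.started   = ngx.now()
    ctx.hits      = (ctx.hits or 0) + 1

    -- write the table back through the API at the end of the handler
    ngx.ctx = ctx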

Continue reading

OpenResty Performance – ngx.ctx vs ngx.shared.dict

The Nginx Lua module provides two structures for maintaining Lua-land data: per-request context tables and shared memory zones. Each has its pros and cons. ngx.ctx can store arbitrarily complex Lua structures, but its contents live only within a single transaction’s lifecycle. Conversely, shared dictionaries store key/value pairs as Lua primitives (complex Lua structures must be serialized to be stored in a shared dictionary), and the lifetime of a shared dictionary’s contents is the lifetime of the master nginx process (dictionary contents survive HUP signals). Additionally, the Lua module documentation notes that ngx.ctx lookups are slow:

The ngx.ctx lookup requires relatively expensive metamethod calls and it is much slower than explicitly passing per-request data along by your own function arguments. So do not abuse this API for saving your own function arguments because it usually has quite some performance impact.

A similar note appears in the documentation for the ngx.var API; no such warning lives in the shared dictionary documentation. Given that shared dictionaries can be leveraged to store per-request data in a manner similar to the context table (for primitive values, at least), I wanted to see whether there is a noticeable difference between the two approaches:
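
In code, the two look like this; a minimal sketch assuming a lua_shared_dict cache 10m; zone declared in the http block (the zone and key names are illustrative):

    -- per-request context: arbitrary Lua values, gone when the request ends
    ngx.ctx.user = { name = "alice", roles = { "admin" } }

    -- shared dictionary: primitives only, shared across all workers,
    -- contents survive HUP
    local cache = ngx.shared.cache
    cache:set("user:alice:role", "admin")
    local role = cache:get("user:alice:role")

    -- complex structures must be serialized first, e.g. with cjson
    local cjson = require "cjson"
    cache:set("user:alice", cjson.encode({ name = "alice", roles = { "admin" } }))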

Continue reading

Efficient Bulk Search in High Level Languages

Certainly an oxymoronic subject, and not without justification: any decent high-level language developer will leverage existing libraries that have been designed for optimal performance (at least across the majority of use cases). Optimization and high performance in these environments often center on architectural and engineering discipline, not on algorithm design and memory management. Any developer worth their salt should understand the basic concepts of computer science, but you’re not likely to find a vast array of efficient search algorithms implemented in JavaScript or Ruby. Development platforms (and in turn, high-level languages) are built with the best possible performance in mind precisely so that developers can create and build without focusing on plumbing code. That does not, however, limit a language or environment’s ability to provide efficient methods of data management, nor should it preclude a developer from reaching for the most efficient (and safest, and sanest) solution available.
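
As a toy illustration of the point in Lua (the addresses are made up): the “efficient” option is often just picking the right built-in structure, here a hash-backed set for membership tests instead of a linear scan over an array:

    local addrs = { "10.0.0.1", "10.0.0.2", "192.168.1.50" }

    -- naive O(n) scan: every lookup walks the whole array
    local function scan(list, ip)
        for _, v in ipairs(list) do
            if v == ip then return true end
        end
        return false
    end

    -- hash-backed set: build the index once, then each lookup
    -- is a single table access
    local set = {}
    for _, v in ipairs(addrs) do set[v] = true end

    print(scan(addrs, "10.0.0.2"))  -- true, after walking the list
    print(set["10.0.0.2"] ~= nil)   -- true, in one step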

Continue reading

Enabling PCRE JIT in OpenResty on CentOS

I’m quickly finding that OpenResty is an excellent stack, bundling a large number of nginx modules along with Lua functionality. One of the most useful features I’ve found so far is the nginx Lua API’s ability to perform PCRE matches (Lua offers string.find, whose pattern syntax is similar to PCRE but far less robust). PCRE performance can be improved with PCRE JIT (just-in-time) compilation, which requires JIT support in the underlying system’s PCRE library. JIT was introduced in PCRE 8.21, and, go figure, the upstream package on CentOS is PCRE 7.8.
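
Once a JIT-capable libpcre is in place, lua-nginx-module can take advantage of it via the j regex option, typically paired with o to cache the compiled pattern. A minimal sketch (the URI pattern here is just an example):

    -- "j" enables PCRE JIT compilation; "o" compiles the regex once
    -- and caches it worker-wide
    local from, to, err = ngx.re.find(ngx.var.uri, [[^/api/v\d+/]], "jo")
    if from then
        ngx.say("matched at offset ", from)
    elseif err then
        ngx.log(ngx.ERR, "regex error: ", err)
    end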

Continue reading