OpenResty Performance – ngx.ctx vs ngx.shared.dict

The Nginx Lua module provides two structures for maintaining Lua-land data: per-request context tables and shared memory zones. Each has its pros and cons: ngx.ctx can store arbitrarily complex Lua structures, but its contents live only within a single transaction’s lifecycle. Conversely, shared dictionaries can only store key/value pairs of Lua primitives (complex Lua structures must be serialized before being stored in a shared dictionary), but the lifetime of a shared dictionary’s contents is the lifetime of the master nginx process (dictionary contents survive HUP signals). Additionally, the Lua module documentation notes that ngx.ctx lookups are slow:

The ngx.ctx lookup requires relatively expensive metamethod calls and it is much slower than explicitly passing per-request data along by your own function arguments. So do not abuse this API for saving your own function arguments because it usually has quite some performance impact.

A similar note appears in the documentation for the ngx.var API call; no such note lives in the shared dictionary documentation. Given that shared dictionaries can be leveraged to store per-request data in a manner similar to the context table (for primitive values, at least), I wanted to see whether there is a noticeable performance difference between the two methods:
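For reference (this is a minimal sketch of the two APIs being compared, not the benchmark from the post), the shared dictionary side assumes an nginx.conf declaring a zone; the zone name "stash" and the sample values are made up for illustration:

    -- Assumed nginx.conf directive (the name "stash" is arbitrary):
    --   lua_shared_dict stash 1m;

    -- Per-request context table: holds arbitrary Lua values,
    -- discarded when the current transaction ends.
    ngx.ctx.user = { id = 42, roles = { "admin" } }
    local user = ngx.ctx.user

    -- Shared dictionary: Lua primitives only, visible to all workers,
    -- and its contents outlive the individual request.
    local stash = ngx.shared.stash
    stash:set("user_id:" .. ngx.var.remote_addr, 42, 60)  -- 60s expiry
    local id = stash:get("user_id:" .. ngx.var.remote_addr)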

(more…)

Never Code Like Me, Part I

I’m a sysadmin, both by trade and by hobby, and that tends to have an impact on a number of facets of my life, not the least of which is how I develop and manage my personal infrastructures. There are a few implications involved in this that we need to get out in the open before going any further:

(more…)

Efficient Bulk Search in High Level Languages

Certainly an oxymoronic subject, and not without justification: any decent high-level-language developer will leverage existing libraries that have been designed for optimal performance (at least across the majority of use cases). Optimization and high performance in these environments often focus on architectural and engineering disciplines, not on algorithm optimization and memory management. Certainly, any developer worth their salt should understand the basic concepts of computer science, but you’re not likely to find a vast array of efficient search algorithms implemented in JavaScript or Ruby. Development platforms (and, in turn, high-level languages) are built with the best possible out-of-the-box performance in mind, enabling developers to create and build without focusing on plumbing code. This does not, however, limit a language or environment’s ability to provide efficient methods of data management, nor should it preclude a developer from leveraging the most efficient (and safest, and sanest) solution possible.
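As a generic illustration of that last point (not taken from the post), here is the difference between a linear scan and a hash-backed set for bulk membership tests in Lua, the language most of this blog works in; the blacklist and function names are hypothetical:

    -- Hypothetical example: membership tests against a small blacklist.
    local blacklist = { "10.0.0.1", "10.0.0.2", "192.168.1.50" }

    -- Naive approach: linear scan, O(n) work per lookup.
    local function is_blocked_scan(ip)
        for _, candidate in ipairs(blacklist) do
            if candidate == ip then return true end
        end
        return false
    end

    -- Leveraging the language's own data management: build a hash-backed
    -- set once, then each lookup is a single constant-time probe.
    local blocked = {}
    for _, ip in ipairs(blacklist) do blocked[ip] = true end

    local function is_blocked_set(ip)
        return blocked[ip] == true
    end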

(more…)

Minimal Resources, Maximum Ludicrousness

There are already plenty of articles on optimizing Nginx performance, so I’m not going to focus on that here. Instead, I’m going to demonstrate a quick-and-dirty method of measuring traffic without the use of the biggest performance drag in a standard Nginx setup: on-disk logging.
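The method itself is behind the jump; one plausible shape for this kind of in-memory accounting (not necessarily the one the post demonstrates) is a shared dictionary incremented from the log phase. The zone name, file path, and key scheme below are invented for the sketch, and the three-argument incr() requires a reasonably recent lua-nginx-module:

    -- Assumed nginx.conf directives:
    --   lua_shared_dict traffic 10m;
    --   log_by_lua_file /etc/nginx/lua/count.lua;

    -- /etc/nginx/lua/count.lua
    -- Count requests per host and status in shared memory instead of
    -- writing an access log entry to disk.
    local traffic = ngx.shared.traffic
    local key = ngx.var.host .. ":" .. ngx.var.status

    -- incr() with an init value atomically creates-or-increments the counter.
    local newval, err = traffic:incr(key, 1, 0)
    if err then
        ngx.log(ngx.ERR, "traffic counter failed: ", err)
    end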

(more…)

More WordPress XMLRPC Brute Force Attacks

If I never see this file again, it might be too soon.

We’re seeing an uptick in requests to xmlrpc.php, the API endpoint for WordPress. The wp.getUsersBlogs method is being hit across a large number of domains; I managed to grab a series of request payloads:

(more…)

Enabling PCRE JIT in OpenResty on CentOS

I’m quickly finding that OpenResty is an excellent stack, bundling a large number of nginx modules and Lua functionality. One of the most useful features I’ve found so far is the nginx Lua API’s ability to perform PCRE matches (Lua offers string.find, which uses a search syntax loosely similar to PCRE but lacks its robustness). PCRE performance can be increased with PCRE JIT (just-in-time) compilation, but this requires JIT support in the underlying system’s PCRE package. JIT was introduced in PCRE 8.21, and, go figure, the upstream package on CentOS is PCRE 7.8.
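For context (a sketch, not taken from the post): once a JIT-capable PCRE is in place, the nginx Lua API exposes it through the "j" regex option, with "o" enabling compile-once caching. The route pattern below is made up for the example:

    -- PCRE matching through the nginx Lua API. The "j" option requests
    -- PCRE JIT compilation (effective only when the underlying PCRE is
    -- 8.21+ and built with JIT support); "o" caches the compiled regex.
    local m, err = ngx.re.match(ngx.var.request_uri,
                                [[^/api/v(\d+)/users/(\d+)$]], "jo")
    if m then
        ngx.say("api version: ", m[1], ", user id: ", m[2])
    elseif err then
        ngx.log(ngx.ERR, "regex error: ", err)
    end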

(more…)