WordPress Trivia, Part 2

Oh, crap, I did it again.

WordPress core will send headers that prevent caching by browsers and RFC-compliant reverse proxies (like Ledge, which I’m forking to implement aggressive shared dictionary caching in addition to leveraging Redis storage) when a user is logged in or when an error (such as a 404) is returned. These headers are defined in wp-includes/functions.php.
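From memory, the defaults built up by wp_get_nocache_headers() translate to response headers along these lines (the exact Cache-Control string has shifted between releases, so treat it as approximate):

    Expires: Wed, 11 Jan 1984 05:00:00 GMT
    Cache-Control: no-cache, must-revalidate, max-age=0
    Pragma: no-cache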

Nothing crazy. But what’s so special about January 11, 1984? Beats me, I wasn’t alive then. But we have the Internet (and copious amounts of free time while I’m stuck on a graveyard rotation)!

(more…)

Bypassing CloudFlare’s Layer 7 DDoS Protection

Volumetric layer 7 (HTTP) DDoS typically relies on overwhelming the target by inundating it with a large number of (pseudo) legitimate HTTP requests, the end goal being resource starvation (typically CPU cycles or available bandwidth, e.g. NIC saturation). Because layer 7 attacks require a full three-way handshake, spoofing source information is impossible (though using a proxy is a viable alternative; remember the XMLRPC issues earlier this year?); as such, the ability to control a large number of attacking machines becomes critical as the size of the target increases. Of course, other forms of HTTP DoS exist outside of volumetric resource starvation (such as Slowloris), but I wanted to take a look at common methods of defending against resource starvation attacks via HTTP, and of circumventing those defenses. This will also serve to demonstrate the weakness of deploying WAFs that rely exclusively on signature-based matching.
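To make that last point concrete, here’s a toy example of the kind of brittle rule I have in mind (the tool names and pattern are made up, and it assumes an OpenResty access phase so ngx.re, ngx.var, and ngx.exit are available):

    -- a toy fingerprint rule of the sort a purely signature-based filter
    -- might ship: block requests whose User-Agent matches a known flood tool
    local function looks_like_flood_tool()
        local ua = ngx.var.http_user_agent or ""
        -- "ijo": case-insensitive, PCRE JIT, compile-once regex cache
        return ngx.re.find(ua, [[(?:BadFloodTool|EvilHTTPHammer)/\d]], "ijo") ~= nil
    end

    if looks_like_flood_tool() then
        return ngx.exit(ngx.HTTP_FORBIDDEN)
    end

    -- an attacker who rotates User-Agent strings lifted from real browsers
    -- never trips this rule, and every request still costs the origin a full
    -- application/database round trip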

(more…)

Spectacularly Bad

Another round of notable WordPress plugin vulnerabilities made the rounds today: nothing particularly noteworthy, just a smattering of XSS, SQLi, and form uploads. The Rich Counter upload vulnerability caught my eye, as the PoC notes the exploit vector is the User-Agent header. Given that unsanitized user input is more often exploited via query string and POST argument vectors, I thought this was worth a quick once-over.
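Nothing exotic is required to deliver a payload this way; any HTTP client lets you set the User-Agent to whatever you like. A rough sketch using lua-resty-http (run from any OpenResty phase with cosocket support; the target URL and payload are placeholders, not the actual PoC):

    -- the User-Agent is just another string the client controls; with
    -- lua-resty-http (or curl -A, for that matter) stuffing a payload
    -- into it is a one-liner
    local http = require "resty.http"
    local httpc = http.new()

    local res, err = httpc:request_uri("http://victim.example/", {
        method  = "GET",
        headers = {
            ["User-Agent"] = "<?php echo 'attacker-controlled'; ?>",
        },
    })

    if not res then
        ngx.log(ngx.ERR, "request failed: ", err)
    else
        ngx.log(ngx.INFO, "status: ", res.status)
    end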

(more…)

OpenResty Time Resolution With FFI – A More Accurate Approach

In looking for ways to further optimize my poorly-written Lua WAF (don’t worry, we’ll get there eventually), I’ve recently found that OpenResty’s timing API is a bit lacking. Specifically, the time calls used within each transaction seem to have only millisecond granularity. This is fine for more complex applications that run for tens or hundreds of milliseconds, but in analyzing small sections of framework code, I wanted a clearer picture of how long each phase handler was taking (and the patronizingly PR-esque phrase “sub-millisecond processing time” was getting really annoying to hear in my head). A quick bounce to the OpenResty mailing list got me on the right track:
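Namely: skip ngx.now() and hit gettimeofday(2) directly through LuaJIT’s FFI. A minimal sketch of that approach (not necessarily the exact snippet from the list thread, and assuming 64-bit Linux, where both timeval fields are longs):

    -- call gettimeofday(2) via LuaJIT's FFI for microsecond granularity,
    -- instead of ngx.now()'s cached, millisecond-resolution clock
    local ffi = require "ffi"

    ffi.cdef[[
        struct timeval {
            long tv_sec;   /* seconds */
            long tv_usec;  /* microseconds */
        };
        int gettimeofday(struct timeval *tv, void *tz);
    ]]

    local tv = ffi.new("struct timeval")

    local function now_usec()
        ffi.C.gettimeofday(tv, nil)
        return tonumber(tv.tv_sec) * 1e6 + tonumber(tv.tv_usec)
    end

    -- e.g., timing a chunk of a phase handler:
    local start = now_usec()
    -- ... handler body ...
    ngx.log(ngx.DEBUG, "elapsed: ", now_usec() - start, " usec")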

(more…)

OpenResty Performance – ngx.ctx vs ngx.shared.dict

The Nginx Lua module provides two structures for maintaining Lua-land data: per-request context tables, and shared memory zones. Each has its pros and cons; ngx.ctx can store arbitrarily complex Lua structures, but its contents live only within a single transaction’s lifecycle. Conversely, shared dictionaries are capable of storing key/value pairs only as Lua primitives (complex Lua structures must be serialized to be stored in a shared dictionary); the lifetime of a shared dictionary’s contents is the lifetime of the master nginx process (dictionary contents survive HUP signals). Additionally, the Lua module documentation notes that ngx.ctx performance is slow:

The ngx.ctx lookup requires relatively expensive metamethod calls and it is much slower than explicitly passing per-request data along by your own function arguments. So do not abuse this API for saving your own function arguments because it usually has quite some performance impact.

A similar note appears in the documentation under the ngx.var API call; no such note lives in the shared dictionary documentation. Given that shared dictionaries can be leveraged to store per-request data in a manner similar to the context table (for primitive values), I wanted to see if there is a noticeable difference between the two methods:
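The gist of the comparison is a rough sketch along these lines, run in a content handler (the dictionary name, key, and iteration count are arbitrary, and it assumes a matching lua_shared_dict bench 1m; declaration in the http{} block):

    local dict       = ngx.shared.bench
    local iterations = 1e5

    -- ngx.ctx: plain Lua table scoped to this request
    ngx.update_time()
    local start = ngx.now()
    for i = 1, iterations do
        ngx.ctx.counter = i
        local _ = ngx.ctx.counter
    end
    ngx.update_time()
    local ctx_elapsed = ngx.now() - start

    -- shared dict: set/get through the zone's API
    ngx.update_time()
    start = ngx.now()
    for i = 1, iterations do
        dict:set("counter", i)
        local _ = dict:get("counter")
    end
    ngx.update_time()
    local dict_elapsed = ngx.now() - start

    ngx.say("ngx.ctx: ", ctx_elapsed, "s  shared dict: ", dict_elapsed, "s")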

(more…)

Never Code Like Me, Part I

I’m a sysadmin, both by trade and by hobby, and that tends to have an impact on a number of facets of my life, not the least of which is how I develop and manage my personal infrastructures. There are a few implications involved in this that we need to get out in the open before going any further:

(more…)

Efficient Bulk Search in High Level Languages

Certainly an oxymoronic subject, and not without justification: any decent high-level language developer will leverage existing libraries that have been designed for optimal performance (at least across the majority of use cases). Optimization and high performance in these environments often focus on architectural and engineering disciplines, not on algorithm design and memory management. Of course, any developer worth their salt should understand the basic concepts of computer science, but you’re not likely to find a vast array of efficient search algorithms implemented in JavaScript or Ruby. Development platforms (and in turn, high-level languages) are built with the best possible performance in mind so that developers can create and build without focusing on plumbing code. This does not, however, limit a language or environment’s ability to provide efficient methods of data management, nor should it preclude a developer from leveraging the most efficient (and safest, and sanest) solution possible.

(more…)