Volumetric layer 7 (HTTP) DDoS typically relies on inundating the target with a large volume of (pseudo-)legitimate HTTP requests, the end goal being resource starvation (typically CPU cycles or available bandwidth, e.g. NIC saturation). Because layer 7 attacks require a full three-way handshake, spoofing source information is impossible (though using a proxy is a viable alternative; remember the XMLRPC issues earlier this year?); as such, the ability to control a large number of attacking machines becomes critical as the size of the target increases. Of course, other forms of HTTP DoS exist outside of volumetric resource starvation (such as Slowloris), but I wanted to take a look at common methods of defending against resource starvation attacks via HTTP (and of circumventing those defenses). This will also serve to demonstrate the weakness of deploying WAFs that rely exclusively on signature-based matching.
Another round of notable WordPress plugin vulnerabilities made the rounds today: nothing particularly noteworthy, just a smattering of XSS, SQLi, and file upload flaws. The Rich Counter upload vulnerability caught my eye, as the PoC notes the exploit vector is the User-Agent header. Given that unsanitized user input is more often exploited via query string and POST argument vectors, I thought this was worth a quick once-over.
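For a sense of how this class of bug gets caught at the edge: any client-controlled header that gets persisted and later rendered or executed is fair game, so inspecting User-Agent values for PHP payloads is a reasonable heuristic. A minimal OpenResty access-phase sketch (the rule is hypothetical; it's not the plugin's code or any official signature):

```lua
-- access_by_lua sketch: flag User-Agent values carrying PHP tags
-- (illustrative heuristic only)
local ua = ngx.var.http_user_agent or ""

local m, err = ngx.re.match(ua, [[<\?php]], "ijo")
if err then
    ngx.log(ngx.ERR, "regex error: ", err)
elseif m then
    ngx.log(ngx.WARN, "suspicious User-Agent: ", ua)
    return ngx.exit(ngx.HTTP_FORBIDDEN)
end
```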
In looking for ways to further optimize my poorly-written Lua WAF (don’t worry, we’ll get there eventually), I’ve recently found that OpenResty’s timing API is a bit lacking. Specifically, the time calls used within each transaction seem to only have millisecond granularity. This is fine for more complex applications that run over tens or hundreds of milliseconds, but in analyzing small sections of framework code, I wanted a clearer picture of how long each phase handler was taking (and the patronizingly PR-esque phrase “sub-millisecond processing time” was getting really annoying to hear in my head). A quick bounce to the OpenResty mailing list got me on the right track:
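The usual trick (my paraphrase of the advice, not the thread verbatim) is to skip ngx.now() entirely and call gettimeofday(2) through the LuaJIT FFI, which gets you microsecond resolution. A minimal sketch, assuming LuaJIT and a 64-bit POSIX libc:

```lua
-- microsecond timing via the LuaJIT FFI (sketch; type widths assume x86_64 Linux)
local ffi = require "ffi"

ffi.cdef[[
typedef long time_t;
typedef struct timeval {
    time_t tv_sec;
    time_t tv_usec;
} timeval;
int gettimeofday(struct timeval *tv, void *tz);
]]

local tv = ffi.new("timeval")

local function now_usec()
    ffi.C.gettimeofday(tv, nil)
    return tonumber(tv.tv_sec) * 1e6 + tonumber(tv.tv_usec)
end

local start = now_usec()
-- ... phase handler code under test ...
ngx.log(ngx.INFO, "elapsed: ", now_usec() - start, " usec")
```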
The Nginx Lua module provides two structures for maintaining Lua-land data: per-request context tables and shared memory zones. Each has its pros and cons: ngx.ctx can store arbitrarily complex Lua structures, but its contents live only within a single transaction’s lifecycle. Conversely, shared dictionaries can only store key/value pairs as Lua primitives (complex Lua structures must be serialized before being stored in a shared dictionary), but the lifetime of a shared dictionary’s contents is the lifetime of the master nginx process (dictionary contents survive HUP signals). Additionally, the Lua module documentation notes that ngx.ctx performance is slow:
ngx.ctx lookup requires relatively expensive metamethod calls and it is much slower than explicitly passing per-request data along by your own function arguments. So do not abuse this API for saving your own function arguments because it usually has quite some performance impact.
A similar note appears in the documentation under the ngx.var API call; no such note lives in the shared dictionary documentation. Given that shared dictionaries can be leveraged to store per-request data in a manner similar to the context table (for primitive values), I wanted to see if there is a noticeable difference between the two methods:
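Something along these lines, dropped into a content handler, is enough for a rough comparison (the dictionary name and iteration count are mine; `timing_test` is assumed to be declared via `lua_shared_dict timing_test 1m;` in nginx.conf, and ngx.update_time() forces a refresh of nginx’s cached clock):

```lua
-- rough write/read timing sketch: ngx.ctx vs. a shared dictionary
local dict = ngx.shared.timing_test
local iterations = 100000

ngx.update_time()
local start = ngx.now()
for i = 1, iterations do
    ngx.ctx.foo = i
    local _ = ngx.ctx.foo
end
ngx.update_time()
ngx.log(ngx.INFO, "ngx.ctx: ", ngx.now() - start, " sec")

ngx.update_time()
start = ngx.now()
for i = 1, iterations do
    dict:set("foo", i)
    local _ = dict:get("foo")
end
ngx.update_time()
ngx.log(ngx.INFO, "shared dict: ", ngx.now() - start, " sec")
```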
If I never see this file again, it might be too soon.
We’re seeing an uptick in requests to xmlrpc.php, WordPress’s XML-RPC API endpoint. The wp.getUsersBlogs method is being hit across a large number of domains; I managed to grab a series of request payloads:
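Each payload is a standard XML-RPC method call carrying a username/password guess, which is exactly what makes wp.getUsersBlogs attractive for distributed brute forcing. The general shape looks like this (placeholder credentials; this is illustrative, not one of the captured requests):

```xml
<?xml version="1.0"?>
<methodCall>
  <methodName>wp.getUsersBlogs</methodName>
  <params>
    <param><value><string>admin</string></value></param>
    <param><value><string>hunter2</string></value></param>
  </params>
</methodCall>
```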
I’m quickly finding that OpenResty is an excellent stack, bundling a large number of nginx modules and a wealth of Lua functionality. One of the most useful features I’ve found so far is the nginx Lua API’s ability to perform PCRE matches (Lua offers string.find, whose pattern syntax is similar to PCRE but lacks its robustness). PCRE performance can be improved by using PCRE JIT (just-in-time) compilation, but JIT support must be present in the underlying system’s PCRE package. JIT was introduced in PCRE 8.21, and, go figure, the upstream package on CentOS is PCRE 7.8.
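The workaround is to build OpenResty against a newer PCRE source tree instead of the system package; nginx’s `--with-pcre=/path/to/pcre-8.x --with-pcre-jit` configure options pass straight through OpenResty’s ./configure (paths and versions here are examples, not gospel). Once nginx is linked against a JIT-capable PCRE, ngx.re calls can opt in per pattern:

```lua
-- "j" requests PCRE JIT compilation for this pattern; "o" caches the compiled
-- regex worker-wide so it's only compiled once (the pattern is just an example)
local m, err = ngx.re.match(ngx.var.request_uri, [[union\s+select]], "ijo")
if m then
    ngx.log(ngx.WARN, "possible SQLi in URI: ", ngx.var.request_uri)
end
```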
Seriously, this crap is really annoying.
mod_sec audit logs are atrocious. There have been a few attempts to make parsing audit data more palatable: BitsOfInfo recently wrote up a proof of concept for working through audit logs with Logstash, and the AuditConsole project from JWall provides a more comprehensive approach to collecting log data, but neither solution addresses the inherent problem of how messy native audit log data is.
As part of my Master’s thesis I need a way to efficiently parse, sort, and store audit logs; building my own parser, while doable, would be a waste of time, so I’ve forked the ModSecurity project on GitHub and built a patch that implements JSON logging directly in the application, replacing the old data log structures entirely. There’s still a long way to go, but the initial patch seems stable and produces valid JSON.
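For a sense of the direction (an illustrative shape I’d sketch, not the patch’s actual schema): the idea is to collapse the native log’s lettered sections into a single structured object per transaction, something like:

```json
{
  "transaction": {
    "time": "2014-09-18T14:02:11-04:00",
    "remote_address": "203.0.113.50",
    "request": {
      "method": "POST",
      "uri": "/xmlrpc.php",
      "headers": { "User-Agent": "Mozilla/5.0" }
    },
    "response": { "status": 403 }
  },
  "messages": ["Access denied with code 403 (phase 2)"]
}
```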