Properly Scoping lua-nginx-module’s ngx.ctx

Some time ago I wrote a comparison of lua-nginx-module’s per-request context table and per-worker shared memory dictionaries. Silly me: that examination of how we hit ngx.ctx was pretty naive. Of course, constantly performing the magic lookup to reach the table will be expensive. What happens when we localize the table once, do our work against the local reference, and shove it back into the ngx API at the end of our handler?
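A minimal sketch of that pattern looks like the following (the field names are purely illustrative):

```lua
-- Fetch ngx.ctx once into a local, mutate the plain Lua table, and
-- assign it back once at the end of the handler, so the metamethod-backed
-- ngx.ctx lookup only happens twice per request.
local ctx = ngx.ctx              -- first call: read the context table

ctx.auth = ctx.auth or {}        -- arbitrary nesting; illustrative fields
ctx.auth.user_id = 42
ctx.started_at = ngx.now()

-- ... the rest of the handler works against the local `ctx` ...

ngx.ctx = ctx                    -- second call: write back into the ngx API
```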

We’ve added an extra test that localizes the ctx table, iterates over it in the same fashion, and saves the results back. (We’ve also updated the test to use higher-resolution timing.) The performance impact of making just two calls to ngx.ctx turns out to be negligible.
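For reference, the localized-ctx case looks roughly like this; this is a sketch rather than the exact harness, and the iteration count is arbitrary:

```lua
-- Rough shape of the "localized ctx" benchmark case: read ngx.ctx once,
-- hammer the local table in a tight loop, then write it back.
-- ngx.update_time()/ngx.now() give millisecond-resolution timestamps.
local iterations = 1e6

ngx.update_time()
local start = ngx.now()

local ctx = ngx.ctx
for i = 1, iterations do
    ctx.counter = i
end
ngx.ctx = ctx

ngx.update_time()
ngx.say("localized ctx: ", ngx.now() - start, " seconds")
```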

So we can see that the functionality provided by ngx.ctx (arbitrary table nesting, per-request lifetime) is well worth the minimal overhead when it is properly scoped.

While digging through this, I had an opportunity to poke agentzh about the documentation’s wording on the dangers of improperly scoping ngx.ctx. He quickly updated the docs with a clear use case. This kind of responsive attitude and clear communication is why the OpenResty project is, in my opinion, one of the best-maintained FOSS projects around today.

5 thoughts on “Properly Scoping lua-nginx-module’s ngx.ctx”

  1. Thanks for the benchmark!
    If you add require “resty.core” to the top of the file, you will see a significant boost in all numbers, and ctx runs even faster than the shared memory dictionary. Maybe you can add a third post with those numbers 😉
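    For anyone trying this, the suggestion amounts to loading OpenResty’s LuaJIT FFI-based API (the lua-resty-core library bundled with OpenResty) before any code that touches ngx.ctx, roughly:

```lua
-- Load the FFI-based reimplementation of the Lua API so that ngx.ctx,
-- shared dictionaries, and friends use the faster code paths; it must be
-- required before the code that exercises them.
require "resty.core"
```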
