so you have to explicitly force reading the body if you need its content.

If you want to read the request body data from the `$request_body` variable, make sure that
your `client_max_body_size` setting is equal to
your `client_body_buffer_size` setting,
that the specified capacity can hold the biggest
request body that your app allows, and
that you have set `client_body_in_single_buffer` on. See
<http://wiki.nginx.org/NginxHttpCoreModule#client_body_in_single_buffer>
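
Putting these requirements together, a minimal configuration sketch might look like this (the location name and the `1m` size are arbitrary example values, not prescriptions):

```nginx
location /echo_body {
    # force ngx_lua to read the request body (it is not read by default)
    lua_need_request_body on;

    # these three settings must agree for $request_body to be usable:
    client_max_body_size 1m;
    client_body_buffer_size 1m;
    client_body_in_single_buffer on;

    content_by_lua '
        ngx.say(ngx.var.request_body or "no body")
    ';
}
```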
If the current location defines `rewrite_by_lua` or `rewrite_by_lua_file`,
then the request body will be read just before the `rewrite_by_lua` or `rewrite_by_lua_file` code is run (and also at the
They take the same values as `NGX_OK`, `NGX_AGAIN`, `NGX_DONE`, `NGX_ERROR`, and so on.
Note, however, that `ngx.exit()` only accepts two of these values, namely `NGX_OK` and
`NGX_ERROR`. The return values of the Lua `return`
statement will be silently ignored.

HTTP method constants
---------------------

* **Context:** `rewrite_by_lua*`, `access_by_lua*`, `content_by_lua*`
See <http://wiki.nginx.org/NginxHttpProxyModule#proxy_pass_request_headers> for more details.

For now, do not use the `error_page` directive or `ngx.exec()` or ngx_echo's `echo_exec`
directive within locations to be captured by `ngx.location.capture()`
or `ngx.location.capture_multi()`; ngx_lua cannot capture locations with internal redirections.
See the `Known Issues` section below for more details and workarounds.

ngx.location.capture_multi({ {uri, options?}, {uri, options?}, ... })
---------------------------------------------------------------------

* **Context:** `rewrite_by_lua*`, `access_by_lua*`, `content_by_lua*`
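
A common use of this API is to issue several subrequests concurrently and collect all the results at once. Here is a hedged sketch (the `/foo` and `/bar` locations are made-up examples, not part of this document):

```nginx
location /main {
    content_by_lua '
        -- issue two subrequests in parallel; this call returns
        -- only after both of them have completed
        local res1, res2 = ngx.location.capture_multi{
            { "/foo", { args = "a=1" } },
            { "/bar" },
        }
        ngx.say("foo status: ", res1.status)
        ngx.say("bar body: ", res2.body)
    ';
}
```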
* **Context:** `rewrite_by_lua*`, `access_by_lua*`, `content_by_lua*`

* **Context:** `set_by_lua*`, `rewrite_by_lua*`, `access_by_lua*`, `content_by_lua*`

Read and write the response status. This should be called
before sending out the response headers.
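
As a quick sketch (assuming this runs in a `content_by_lua` handler before any output has been emitted):

```lua
-- set the status before generating any response body output
ngx.status = ngx.HTTP_GONE
ngx.say("this resource is gone forever")
-- the status can also be read back, e.g. for logging:
ngx.log(ngx.WARN, "responded with status ", ngx.status)
```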

    ngx.say(ngx.cookie_time(1290079655))
        -- yields "Thu, 18-Nov-10 11:27:35 GMT"
* **Context:** `set_by_lua*`, `rewrite_by_lua*`, `access_by_lua*`, `content_by_lua*`

Returns true if the current request is an nginx subrequest, or false otherwise.
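
A tiny sketch of how this can be used (the location name is a made-up example):

```nginx
location /sub {
    content_by_lua '
        if ngx.is_subrequest then
            ngx.say("issued as a subrequest")
        else
            ngx.say("issued as a main request")
        end
    ';
}
```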

ndk.set_var.DIRECTIVE
---------------------

* **Context:** `rewrite_by_lua*`, `access_by_lua*`, `content_by_lua*`
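
For instance, calling the `set_escape_uri` directive of set-misc-nginx-module through NDK (assuming both NDK and set-misc-nginx-module are compiled in):

```lua
local res = ndk.set_var.set_escape_uri("a b")
-- res now holds the escaped string "a%20b"
ngx.say(res)
```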

HTTP 1.0 support
----------------

Sometimes you may want to use nginx's standard `ngx_proxy` module to proxy requests to
another nginx machine configured by a location with `content_by_lua`. Because
`proxy_pass` only supports the HTTP 1.0 protocol, the length of the response body
has to be known and the `Content-Length` header set before emitting
any data out. `ngx_lua` will automatically recognize HTTP 1.0 requests and try to send out an appropriate `Content-Length` header for you at the first invocation of `ngx.print()` or `ngx.say()`, assuming all the response body data
is in a single call of `ngx.print()` or `ngx.say()`. So if you want to
support HTTP 1.0 clients like `ngx_proxy`, do not
call `ngx.print()` or `ngx.say()` multiple times;
buffer the output data yourself wherever needed.

Here is a small example:

    location /internal {
        rewrite ^/internal/(.*) /lua/$1 break;
        proxy_pass http://B;
    }

    location = /lua/foo {

        data = "hello, world"

Then accessing machine A's `/internal/foo` using curl gives the result that we expect.

One caveat applies here: always send out the response body data in a single call of `ngx.print()` or `ngx.say()`; subsequent calls of `ngx.print()` or `ngx.say()` will have no effect on the client side.
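
One way to follow this advice is to accumulate the output chunks in a Lua table first and emit them all in one call at the end, as in this sketch:

```nginx
location /buffered {
    content_by_lua '
        -- collect all the output pieces first...
        local buf = {}
        for i = 1, 3 do
            buf[#buf + 1] = "line " .. i
        end
        -- ...then emit everything in a single ngx.print() call so
        -- that a proper Content-Length header can be computed
        ngx.print(table.concat(buf, "\n"))
    ';
}
```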

The HTTP 1.0 protocol does not support chunked outputs and always requires an
explicit `Content-Length` header when the response body is non-empty. So when
an HTTP 1.0 request is present, this module will automatically buffer all the
outputs of user calls of `ngx.say()` and `ngx.print()` and
postpone sending response headers until it sees all the outputs in the response
body; at that time ngx_lua can calculate the total length of the body and
construct a proper `Content-Length` header for the HTTP 1.0 client.

Note that common HTTP benchmark tools like `ab` and `http_load` issue
HTTP 1.0 requests by default. To force `curl` to send HTTP 1.0 requests, use
the `-0` option.

1. Download the latest version of the release tarball of this module from
the lua-nginx-module [file list](http://github.com/chaoslawful/lua-nginx-module/downloads).
(Mac 64-bit users need to edit ngx_lua's `config` file themselves; see the
Known Issues section below.)

1. Grab the nginx source code from [nginx.net](http://nginx.net/), for example,
the version 0.8.54 (see nginx compatibility), and then build the source with

* drizzle-nginx-module: <http://github.com/chaoslawful/drizzle-nginx-module>
* rds-json-nginx-module: <http://github.com/agentzh/rds-json-nginx-module>
* set-misc-nginx-module: <http://github.com/agentzh/set-misc-nginx-module>
* headers-more-nginx-module: <http://github.com/agentzh/headers-more-nginx-module>
* memc-nginx-module: <http://github.com/agentzh/memc-nginx-module>
* srcache-nginx-module: <http://github.com/agentzh/srcache-nginx-module>
* ngx_auth_request: <http://mdounin.ru/hg/ngx_http_auth_request_module/>

* Do not use the `error_page` directive or `ngx.exec()` or ngx_echo's `echo_exec` directive
within locations to be captured by `ngx.location.capture()`
or `ngx.location.capture_multi()`; ngx_lua cannot capture locations with internal redirections.
Also be careful with server-wide `error_page` settings that are automatically inherited by
*all* locations by default. If you're using the ngx_openresty bundle (<http://github.com/agentzh/ngx_openresty>),
you can use the `no_error_pages` directive within locations that are to be captured from within Lua, for example,

        # server-wide error page settings
        error_page 500 503 504 html/50x.html;

        # explicitly disable error_page setting inheritance
        # within this location:
        no_error_pages; # this directive is provided by ngx_openresty only

        set $memc_key $query_string;
        memc_pass 127.0.0.1:11211;

        local res = ngx.location.capture(
            "/memc", { args = 'my_key' }
        )
        if res.status ~= ngx.HTTP_OK then

* Because the standard Lua 5.1 interpreter's VM is not fully resumable, the
`ngx.location.capture()` and `ngx.location.capture_multi()` methods cannot be used within
the context of a Lua `pcall()` or `xpcall()`. If you rely heavily on the Lua exception model

        package.loaded.xxx = nil

* 64-bit Darwin OS (Mac OS X) needs special linking options to use LuaJIT. Change the line at the bottom of ngx_lua's `config` file from

        CORE_LIBS="-Wl,-E $CORE_LIBS"

  to

        CORE_LIBS="-Wl,-E -Wl,-pagezero_size,10000 -Wl,-image_base,100000000 $CORE_LIBS"

* It's recommended to always put the following piece of code at the end of your Lua modules that use `ngx.location.capture()` or `ngx.location.capture_multi()` to prevent casual use of module-level global variables that are shared among *all* requests, which is usually not what you want:

        getmetatable(foo.bar).__newindex = function (table, key, val)
            error('Attempt to write to undeclared variable "' .. key .. '": '
                    .. debug.traceback())
        end

  assuming your current Lua module is named `foo.bar`. This will guarantee that you have declared your Lua functions' local variables as "local" in your Lua modules; otherwise bad race conditions will happen while accessing these variables under load. See the `Data Sharing within an Nginx Worker` section below for the reasons behind this danger.

Data Sharing within an Nginx Worker
===================================

**NOTE: This mechanism behaves differently when the code cache is turned off, and should be considered a DIRTY TRICK. Backward compatibility is NOT guaranteed. Use at your own risk! We're going to design a whole new data-sharing mechanism.**

If you want to globally share user data among all the requests handled by the same nginx worker process, you can encapsulate your shared data into a Lua module, `require` the module in your code, and manipulate the shared data through it. This works because required Lua modules are loaded only once, and all coroutines will share the same copy of the module.

Here's a complete small example:

    module("mydata", package.seeall)

    function get_age(name)

and then accessing it from your nginx.conf:

    content_by_lua '
        local mydata = require("mydata")
        ngx.say(mydata.get_age("dog"))
    ';

Your `mydata` module in this example will only be loaded
and run on the first request to the location `/lua`,
and all subsequent requests to the same nginx
worker process will use the already-loaded instance of the
module, as well as the same copy of the data in it,
until you send a `HUP` signal to the nginx master
process to force a reload.

This data sharing technique is essential for high-performance Lua apps built atop this module. It is common to cache reusable data globally.

It is worth noting that this is *per-worker* sharing, not *per-server* sharing. That is, when you have multiple nginx worker processes under an nginx master, this data sharing cannot cross the process boundary. If you do need server-wide data sharing, you can:

1. Use only a single nginx worker and a single server. This is not recommended when you have a multi-core CPU or multiple CPUs in a single machine.
2. Use a real backend storage like `memcached`, `redis`, or an RDBMS like `mysql`.