Reverse proxy#
The location directive, used within the server context, specifies how NGINX should process requests based on the URI. See the corresponding section of the official documentation for more.
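For instance, here is a minimal sketch of a server with a single proxying location. The /api prefix and upstream_host are placeholder names, not the setup used below:

events {}
http {
    server {
        listen 80;
        # Requests whose URI starts with /api are handled by this block
        # and forwarded to the placeholder upstream_host.
        location /api {
            proxy_pass "http://upstream_host/";
        }
    }
}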
Setup#
It turns out that building examples that show how everything works is a fairly complex task, so this section describes the setup used throughout. In summary, we need:
- A proxied server: the server to which nginx will forward requests.
- Nginx itself, which can be configured differently for different examples.
- A network that connects the containers.
The following cell creates a Docker Compose file that satisfies all these requirements.
cat << EOF > reverse_proxy_files/docker-compose.yml
services:
  proxied:
    image: kennethreitz/httpbin
    container_name: client_container
    ports:
      - 81:80
  nginx:
    image: nginx
    container_name: experiment_nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
EOF
The following cell starts the containers.
docker compose -f reverse_proxy_files/docker-compose.yml up -d &> /dev/null
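To verify that both containers are up, you can list the services defined in the compose file (an optional check, not required for the rest of the section):

# Shows each service, its container name and current state
docker compose -f reverse_proxy_files/docker-compose.yml ps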
Note: Don’t forget to clean up the environment when you’re done.
docker compose -f reverse_proxy_files/docker-compose.yml down &> /dev/null
Proxy pass#
Find out more in:
- The dedicated section of the official documentation.
- The specific page on this site.
The proxy_pass directive in nginx specifies the URL of the proxied service, i.e. the URL that nginx will request on behalf of the client. The following example defines the location /recsys and ties it to http://client_container/anything/config/.
cat << EOF > reverse_proxy_files/nginx.conf
events {}
http {
    server {
        listen 80;
        location /recsys {
            proxy_pass "http://client_container/anything/config/";
        }
    }
}
EOF
docker exec -it experiment_nginx nginx -s reload
2024/09/09 08:06:03 [notice] 30#30: signal process started
The following cell demonstrates a request to <nginx address>/recsys/...; httpbin responds with the details of the request it received, so we can inspect the result.
curl -L http://localhost:80/recsys/101
{
  "args": {},
  "data": "",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Connection": "close",
    "Host": "client_container",
    "User-Agent": "curl/7.81.0"
  },
  "json": null,
  "method": "GET",
  "origin": "172.19.0.2",
  "url": "http://client_container/anything/config//101"
}
In the url field you can check the exact URL that nginx sent to httpbin.
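Note how /recsys/101 turned into /anything/config//101: because proxy_pass specifies a URI, nginx replaces the part of the request path matched by the location prefix (/recsys) with that URI and appends the rest. A sketch with an illustrative path that was not actually requested above:

# /recsys/foo/bar is mapped to http://client_container/anything/config//foo/bar
curl -L http://localhost:80/recsys/foo/bar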
Headers to server (proxy_set_header)#
The proxy_set_header directive allows redefining or appending fields of the request headers passed to the proxied server. In other words, you can add additional headers to the HTTP request sent to the destination server, using the syntax proxy_set_header <header field> <value>;.
The following example changes the nginx config to add two new header fields, Name and SecondName, to the proxied request and reloads nginx.
docker exec -i experiment_nginx sh -c 'cat > /etc/nginx/nginx.conf' << EOF
events {}
http {
    server {
        listen 80;
        location / {
            proxy_pass "http://client_container/headers";
            proxy_set_header Name Fedor;
            proxy_set_header SecondName Kobak;
        }
    }
}
EOF
docker exec -it experiment_nginx nginx -s reload
2024/09/09 07:44:06 [notice] 44#44: signal process started
First, consider what happens if we just request httpbin directly.
curl http://localhost:81/headers
{
  "headers": {
    "Accept": "*/*",
    "Host": "localhost:81",
    "User-Agent": "curl/7.81.0"
  }
}
There are no additional headers, just the basic ones generated by curl. Now make the same request through nginx:
curl http://localhost:80
{
  "headers": {
    "Accept": "*/*",
    "Connection": "close",
    "Host": "client_container",
    "Name": "Fedor",
    "Secondname": "Kobak",
    "User-Agent": "curl/7.81.0"
  }
}
If you compare the output of the original and proxied requests, you can see that the proxied requests have additional headers - just as we specified in the nginx configuration.
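In practice, proxy_set_header is most often used to pass client information to the upstream. Here is a sketch using standard nginx variables; this is not part of the configuration applied in this example:

location / {
    proxy_pass "http://client_container/headers";
    # Forward the original host and client address to the proxied server.
    proxy_set_header Host            $host;
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}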
Cache#
Nginx provides a caching facility that saves responses from proxied URLs and reuses them later. You can enable and configure it using the directives that start with proxy_cache. There is a dedicated tutorial on the nginx website that covers cache configuration.
Check:
- The caching guide in the official nginx documentation.
- The specific page on this website.
The following cell defines two locations, /cached and /no_cached, both proxying to the same httpbin URL that returns a specified number of random bytes. The /cached location uses caching, while /no_cached does not.
To achieve this, the following directives are used:
- proxy_cache_path /var/cache/nginx/proxy_cache keys_zone=my_cache:10m;
  - Sets /var/cache/nginx/proxy_cache as the folder for the cache.
  - Defines my_cache as the name of the cache zone to be used later.
  - Allocates 10 megabytes of shared memory to my_cache for cache keys and metadata.
- proxy_cache my_cache; specifies that the my_cache cache zone should be used in the corresponding context.
- proxy_cache_valid 200 10m; configures nginx to keep responses with a status code of 200 for 10 minutes.
cat << EOF > reverse_proxy_files/nginx.conf
events {}
http {
    proxy_cache_path /var/cache/nginx/proxy_cache keys_zone=my_cache:10m;
    server {
        listen 80;
        location /cached {
            proxy_cache my_cache;
            proxy_cache_valid 200 10m;
            proxy_pass http://client_container/bytes/50;
        }
        location /no_cached {proxy_pass http://client_container/bytes/50;}
    }
}
EOF
docker exec -it experiment_nginx nginx -s reload
2024/09/09 09:26:45 [notice] 80#80: signal process started
Now let’s try to request the /cached location twice.
echo $(curl -s localhost:80/cached)
echo $(curl -s localhost:80/cached)
v24� �˯[��#`h�7Dv�}(�b�-V����G�1H[!��}ß�
v24� �˯[��#`h�7Dv�}(�b�-V����G�1H[!��}ß�
So we got the same response both times, indicating that the answer was cached.
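You can additionally confirm that nginx wrote the response to disk by listing the cache directory configured above (an optional check that is not run in this example):

# Cache entries appear as hashed file names inside the configured cache folder
docker exec experiment_nginx ls /var/cache/nginx/proxy_cache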
Now let’s try to request the /no_cached location to show the difference.
echo $(curl -s localhost:80/no_cached)
echo $(curl -s localhost:80/no_cached)
f���� �=�a�e�[�4y89et�����j�]�۵u���/f"8
&�N�WE�'���R0/�l佟�@�@�q��{K��
Each response differs from the previous one, showing that the /no_cached location does not use caching.
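Comparing raw bytes works here, but nginx also records whether a request was served from the cache in the $upstream_cache_status variable. Here is a sketch of how the /cached location could expose it as a response header, so that curl -i would show HIT or MISS; this header is not added in the configuration above:

location /cached {
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;
    # Reports HIT, MISS, EXPIRED, etc. for each response served by this location
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://client_container/bytes/50;
}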