How to set up a reverse proxy server with Nginx step by step

Last update: 28/01/2026
Author: Isaac
  • Nginx acts as an intermediary layer that protects, optimizes, and distributes traffic to multiple backend servers.
  • Directives such as proxy_pass, proxy_set_header, and upstream blocks are used to implement load balancing, caching, and fine-tuning of requests.
  • Centralizing TLS, firewalls, and security headers in the reverse proxy strengthens the protection of the entire infrastructure.
  • Nginx logs and metrics allow you to monitor traffic and facilitate the diagnosis of problems in applications and services.

Configuring a reverse proxy with Nginx

Setting up a reverse proxy with Nginx has become one of those almost mandatory tasks when you want your web applications to be more secure, scalable, and easier to manage. You don't need a huge infrastructure to benefit from it: even with just one or two services, you'll notice the difference in performance and organization.

Throughout this guide we'll see how Nginx acts as an intermediary between clients and your applications, how it integrates with Apache, .NET, Docker, and containerized services, which configuration directives are truly important, and how to fine-tune everything on modern Linux servers like Ubuntu, including HTTPS, load balancing, caching, and some security optimizations.

What is an Nginx reverse proxy and what role does it play in your architecture?

When we talk about Nginx, many people only think of it as a web server for serving static pages, but for years one of its star uses has been to act as a high-performance reverse proxy server. In this role, it is placed in front of one or more backend servers (Apache, .NET applications, Docker containers, Gunicorn for Python, Node.js, etc.) and it is Nginx that receives all client requests.

Instead of the browser connecting directly to your application, that connection is always made through Nginx. From there, Nginx itself decides how to handle each request: whether it can serve the content itself (e.g., static files) or whether it must forward it to a specific backend that processes it and returns the response to the client.

This middle layer offers very clear advantages: it hides the real IP addresses of internal servers, allows TLS certificates to be centralized, enables security rules to be applied at a single point, and greatly simplifies the deployment of new applications. Furthermore, Nginx is capable of managing thousands of concurrent connections with very low resource consumption, which makes it ideal for high-traffic scenarios.

In more complex environments, Nginx is also used as a load balancer and entry point to Kubernetes architectures, as an ingress controller, or as an API gateway, handling the routing of requests to microservices and applying policies such as caching, rate limits, or authentication.

How Nginx processes requests as a reverse proxy

When a client sends an HTTP or HTTPS request to your domain, that request first reaches the server where Nginx is running. From that point on, the basic flow is: Nginx analyzes the URL, headers, method, and host; locates the server block and the location block that best match; and decides what to do with the request.

If the requested resource is static (images, CSS, JS, downloads, etc.), Nginx usually serves it directly from the file system or even from its internal cache, greatly reducing the response time and preventing backends from wasting time on content they don't need to process.

However, when the request requires business logic (for example, a WordPress-generated page, a .NET Core API, or a Jenkins application), Nginx forwards that request to one of the backend servers configured via the proxy_pass directive. It waits for the response and returns it to the client as if it had come directly from Nginx.

This forwarding is not random: Nginx can apply several load balancing algorithms. The simplest is round-robin, which distributes requests in order among all nodes. You can also use least_conn (sends the request to the backend with the fewest active connections) or ip_hash, which keeps each client tied to the same server so sessions work without having to be shared between instances.

In advanced environments, you can fine-tune even further with maps, variables, and additional modules: it's possible to make decisions based on headers, content type, routes, domains, or even custom code that extends the routing logic. In short, the reverse proxy role is not limited to "forwarding traffic"; it intelligently controls how requests travel within your infrastructure.

Prerequisites before mounting Nginx as a reverse proxy

Before you start editing settings willy-nilly, it's a good idea to make sure you meet certain basic requirements to avoid headaches. The minimum you need is a server with root or sudo access (typically a VPS or a cloud machine) where you can install Nginx and manage its services.

It is also important to have one or more domains pointing to the public IP of the Nginx server. It's possible to work with just an IP, but for real-world environments, and for the sake of SEO, TLS certificates, and browsing, it is common to use domain names and subdomains: for example, example.com, jenkins.example.com, api.example.com, etc.


Additionally, you must already have the backend servers that will sit behind the proxy up and running: they can be Apache, ASP.NET Core applications on port 5000, Dockerized services, WordPress on another server, or anything else that listens on HTTP/HTTPS and processes requests.

At the network level, ensure that the system firewall allows incoming traffic on ports 80 (HTTP) and 443 (HTTPS). On Ubuntu with UFW, simply enabling the "Nginx Full" profile is enough to open both ports without having to define manual rules.
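On Ubuntu, as a reference, that comes down to something like this (a minimal sketch; adjust if you use a different firewall):

    # Open ports 80 and 443 via the predefined Nginx profile
    sudo ufw allow 'Nginx Full'
    # Verify the active rules
    sudo ufw status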

Finally, it's helpful to have a basic understanding of Nginx syntax and, if you're going to serve encrypted content, to have valid TLS certificates (for example, from Let's Encrypt) for the domains you will be using, so that you can terminate HTTPS connections in Nginx and protect the traffic from the client side.

Nginx installation and basic configuration structure

In distributions like Ubuntu, installing Nginx comes down to practically a couple of commands. The usual procedure is to first update the system packages and then install the web server using the package manager, so that you get a stable version that is well integrated with systemd.
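On Ubuntu or Debian, the usual steps look like this (assuming the distribution's default repositories):

    # Refresh package lists and install Nginx from the official repos
    sudo apt update
    sudo apt install nginx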

Once installed, Nginx runs as a daemon managed with systemctl. You can check its status, start it, stop it, restart it, or enable it to start with the system. This service model makes both Nginx and the backend applications behave like background processes that start automatically after a machine restart.
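These are the typical service commands (standard systemd usage):

    sudo systemctl status nginx    # check that the service is running
    sudo systemctl enable nginx    # start automatically on boot
    sudo systemctl restart nginx   # full restart (drops connections)
    sudo systemctl reload nginx    # re-read the config without downtime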

The main configuration is located in /etc/nginx/nginx.conf, but in practice that file is rarely modified much. It in turn includes other key files and directories, such as /etc/nginx/conf.d/ and the pair /etc/nginx/sites-available and /etc/nginx/sites-enabled, where the definitions of virtual sites are stored.

The typical philosophy is to create a configuration file per domain or service in sites-available and then enable it with a symbolic link in sites-enabled. This allows you to easily activate or deactivate sites without deleting their files, and keeps the configuration organized even on servers with many different services.
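Enabling a site, then, usually looks like this (assuming a configuration file named example.com):

    # Link the site definition into sites-enabled
    sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
    # Always validate the syntax before reloading
    sudo nginx -t
    sudo systemctl reload nginx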

Within these files, two fundamental blocks stand out: the server block, which defines each virtual server (domain, port, certificates, logs), and the location blocks, which control how to handle specific routes: serving static files, proxying, redirecting, caching, etc.

Configure Nginx as a simple reverse proxy

To start with the basics, simply define a server block that listens on port 80 for your domain and, inside, a location block that sends all traffic to the appropriate backend. In practical terms, this makes Nginx the single entry point to your application.

The core of the proxy configuration relies on the proxy_pass directive, which indicates the internal URL of the backend server. It can point to an IP address and port, an internal domain, or an upstream block that groups several servers, depending on how complex your architecture is.

Together with proxy_pass, it is crucial to work with proxy_set_header to preserve relevant information from the original request: the requested host, the client's actual IP address, or the scheme (HTTP/HTTPS). These headers (Host, X-Real-IP, X-Forwarded-For, X-Forwarded-Proto, etc.) allow the backend to log traffic properly, generate consistent absolute URLs, and apply logic based on the client's origin.
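Putting those pieces together, a minimal reverse proxy definition might look like this (a sketch assuming a backend listening on 127.0.0.1:5000 and the domain example.com):

    server {
        listen 80;
        server_name example.com;

        location / {
            # Forward everything to the internal backend
            proxy_pass http://127.0.0.1:5000;

            # Preserve information about the original request
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }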

Furthermore, in almost any serious environment it is advisable to manage proxy timeouts with directives such as proxy_connect_timeout, proxy_send_timeout, and proxy_read_timeout. Adjusting them helps prevent slow connections or long-running processes from tying up the server, especially when you have heavy operations or background jobs that take a long time to respond.
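Inside the same location block, the timeouts can be tuned like this (the values shown are merely illustrative defaults to adapt to your workload):

    proxy_connect_timeout 60s;   # max time to establish a connection to the backend
    proxy_send_timeout    60s;   # max time between writes when sending the request
    proxy_read_timeout    300s;  # max time between reads while awaiting a response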

Load balancing and multi-backend management

When your application starts to grow or you have high availability needs, Nginx can work as a load balancer in front of multiple backends. To do this, an upstream block is defined listing the servers that will form part of the destination pool.

Within that upstream block you can add additional directives: weights to prioritize certain nodes, request distribution algorithms, keepalive parameters to maintain persistent connections, etc. The goal is to distribute incoming traffic efficiently, without a single server becoming a bottleneck.

Once the group has been defined, the proxy_pass of the server block points to the upstream name rather than a specific IP address. From there, each incoming request is sent to one of the backends following the chosen load balancing strategy, so that adding or removing one of those nodes only involves modifying the upstream block, without touching the rest of the configuration.
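One possible layout, assuming two hypothetical internal application nodes at 10.0.0.11 and 10.0.0.12:

    upstream app_pool {
        least_conn;                       # pick the backend with fewest active connections
        server 10.0.0.11:5000 weight=2;   # this node receives roughly twice the traffic
        server 10.0.0.12:5000;
        keepalive 32;                     # keep persistent connections to the backends
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://app_pool;
            # keepalive to upstreams requires HTTP/1.1 with an empty Connection header
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
        }
    }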

This same mechanism integrates very well with Docker or Swarm environments, where each service exposes an internal port and Nginx manages them externally. It's also common to group applications like Jenkins, GitLab, SonarQube, or Nexus, each in its own container, and have Nginx centralize all external access at a single point.


If your infrastructure is distributed across multiple machines, a proxy of this type allows you to isolate services in different instances, with dedicated resources, while continuing to offer a single domain or set of well-organized domains outward.

Static content management, caching, and performance

One of Nginx's greatest strengths compared to other servers is its efficiency in delivering static content. Leveraging this effectively in a reverse proxy scenario involves defining specific location blocks for static routes, so that they are not sent to the backend unless strictly necessary.

In these blocks, you can specify the path to the local directory where you store the files, add long expiration headers, and enable compression and caching mechanisms. This lets the user's browser retain resources for longer and spares both Nginx and the backend from recalculating anything each time files that barely change are requested.
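For instance, a static assets block might look like this (a sketch assuming the files live under /var/www/example.com):

    location /static/ {
        root /var/www/example.com;       # serves /var/www/example.com/static/...
        expires 30d;                     # long browser cache lifetime
        add_header Cache-Control "public";
        gzip on;                         # compress text-based assets on the fly
        gzip_types text/css application/javascript image/svg+xml;
    }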

Furthermore, Nginx offers a more advanced caching system within the proxy itself, using directives such as proxy_cache and proxy_cache_path. You can define a disk cache zone with a maximum size, idle time, and directory levels to store responses, and then activate that cache in the proxy blocks to your backends.
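A minimal sketch of that setup (the path, zone name, and sizes are illustrative, and app_pool refers to the upstream sketched earlier):

    # In the http context: a 10 MB key zone, up to 1 GB on disk
    proxy_cache_path /var/cache/nginx/proxy levels=1:2
                     keys_zone=app_cache:10m max_size=1g inactive=60m;

    # In the proxied location:
    location / {
        proxy_pass http://app_pool;
        proxy_cache app_cache;
        proxy_cache_valid 200 301 10m;        # how long to keep successful responses
        proxy_cache_use_stale error timeout;  # serve stale content if the backend fails
        add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS for debugging
    }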

When this is combined with dynamic backends (e.g., WordPress or applications in PHP, .NET, or Python), the result is that the most frequent responses are served from the Nginx cache without the backend having to recalculate them. This translates into less CPU and memory load, faster response times, and greater capacity to handle traffic spikes.

If your project includes many repeated resources, such as HTML snippets that appear on different pages, you can also rely on external tools like Varnish or the use of ESI (Edge Side Includes) in combination with reverse proxies to further accelerate the delivery of modular content.

Security: TLS termination, firewalls, and headers

One of the main reasons for placing a reverse proxy in front of your servers is to strengthen security. Since all external connections go through Nginx, you can concentrate encryption, filtering, and protection policies there without having to replicate them in each individual backend.

TLS termination is a clear example: certificates and keys are installed on Nginx, which is configured to accept HTTPS traffic on port 443. Internally, if you want, you can communicate with backends over plain HTTP, which simplifies their configuration. But externally, the user sees an encrypted and secure connection, with support for modern protocols such as TLS 1.3 and updated cipher suites.
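An HTTPS termination setup could be sketched like this (assuming Let's Encrypt certificates issued for example.com; the paths follow Certbot's usual layout):

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;   # force HTTPS
    }

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        ssl_protocols       TLSv1.2 TLSv1.3;

        location / {
            proxy_pass http://127.0.0.1:5000;   # plain HTTP to the internal backend
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }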

Offloading encryption to the proxy reduces the workload for your applications, which no longer need to worry about negotiating certificates or encryption algorithms on every connection. You can even incorporate modules or hardware optimized for SSL/TLS if you handle a very high volume of traffic and want to take performance a step further.

In addition to encryption, it is common to define security headers in the global configuration or in specific blocks: Content-Security-Policy restrictions, XSS protection, MIME type sniffing blocking, HSTS control, X-Frame-Options rules, among others. All of this strengthens your application's security posture without needing to modify the source code.
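A few typical add_header lines (the exact values, especially the CSP policy, must be adapted to each application):

    add_header X-Frame-Options        "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header Content-Security-Policy "default-src 'self'" always;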

If on top of this layer you add a well-configured firewall (for example, UFW on Ubuntu) and DDoS mitigation measures, such as limiting requests per IP address or integrating with external services, you end up with a fairly robust barrier that filters out common attacks before they reach your application servers.
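Per-IP request limiting, for instance, can be sketched with Nginx's limit_req module (the rate and burst values are illustrative):

    # In the http context: track clients by IP, allow 10 requests per second
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    # In the server or location block: tolerate short bursts of 20 requests
    limit_req zone=perip burst=20 nodelay;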

Integration with Docker, Gunicorn, and other application servers

In many current scenarios, applications don't run directly on the host system, but rather within containers or behind dedicated application servers. Nginx fits particularly well with this approach, since it doesn't go inside the container; it simply communicates via HTTP using the port exposed by each service.

For example, for a Python application, it's very common for the actual application server to be Gunicorn, which listens on an internal port (such as 8000) and manages several worker processes. Nginx, sitting in front, handles receiving public traffic and managing TLS, compression, timeouts, and caching, and only forwards "clean" requests to Gunicorn, which focuses on executing the application logic.
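The Gunicorn side of that pairing is typically started with something like this (assuming a WSGI application exposed by a hypothetical module named myapp.wsgi):

    # Three worker processes, bound only to the loopback interface
    gunicorn --workers 3 --bind 127.0.0.1:8000 myapp.wsgi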

Something similar happens with Java, Node.js, or Ruby applications that use specialized application servers. Nginx acts as a common facade for all of them, allowing your users to access a single domain or several well-organized subdomains, while internally each service runs in its own environment with the technology stack that best suits its needs.


In the .NET world, when you deploy ASP.NET Core applications on Linux, they typically listen on ports like 5000 or 5001. The usual pattern is to start the application as a service or daemon managed by systemd and configure Nginx to forward traffic from port 80 or 443 to that internal port. This way, the user simply accesses http://localhost or the public domain and never sees a port number.

When an error occurs (for example, the backend crashes or isn't listening on the correct port), Nginx typically returns a 502 Bad Gateway code. Reviewing Nginx's error logs and using tools such as netstat to check which ports are listening lets you determine on the fly whether the problem lies with the proxy or with the application, which makes diagnosis much easier.
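A couple of quick checks usually narrow it down (assuming the backend is expected on port 5000; netstat comes from the net-tools package, with ss as its modern replacement):

    # Watch proxy errors as they happen
    sudo tail -f /var/log/nginx/error.log
    # Confirm the backend is actually listening on its port
    sudo netstat -tlnp | grep 5000
    # Talk to the backend directly, bypassing Nginx
    curl -I http://127.0.0.1:5000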

Using Nginx with WordPress and other CMSs via reverse proxy

Another common use case is when you want to combine a main site that is not WordPress with a WordPress blog, keeping both under the same domain and in a subdirectory instead of a subdomain. For example, having example.com served by one platform and example.com/blog served by WordPress on another server.

From an SEO and content organization perspective, many administrators prefer subdirectories over subdomains, although at a technical level Google states that it treats both similarly. A reverse proxy with Nginx allows just that: requests to /blog are sent to another host or IP, while the user-facing domain remains the same.

To achieve this, a location block is configured that captures the subdirectory in question and proxy_passes to the address of the externally hosted blog, adding the appropriate headers so that the backend recognizes the real client and the HTTPS scheme if applicable.
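A sketch of the idea, assuming the blog lives on a hypothetical internal host at 10.0.0.20 and WordPress is already configured to run under /blog:

    location /blog/ {
        # No URI on proxy_pass: the /blog/ prefix is passed through unchanged
        proxy_pass http://10.0.0.20;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }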

At the same time, in the WordPress installation behind the proxy, the site and home URLs are adjusted to point to the main domain with the subdirectory, and some server variables are tweaked ($_SERVER['HTTP_HOST'], REQUEST_URI, etc.) to avoid redirect loops or inconsistent paths.

This pattern can be repeated as many times as you need: online stores in subdirectories, internal panels, SPA applications, etc., each hosted wherever you prefer but all served under the same domain umbrella thanks to Nginx's reverse proxy function.

Monitoring, logging, and troubleshooting

One of the additional advantages of centralizing traffic in Nginx is that you have a single vantage point from which to observe what is happening. The server generates access and error logs in /var/log/nginx/, which collect both incoming requests and failures that occur when communicating with backends.

Access logs indicate which resources were requested, from which IP address, with which status code, and how long they took to be served. From this, you can identify particularly slow routes, recurring errors, or traffic spikes at certain times, which is very useful for fine-tuning cache rules, rate limits, or scaling decisions.

Error logs, on the other hand, are key when there are 502, 503, or other internal communication problems. They usually include messages such as "unable to connect to the upstream," "timed out," or "syntax errors in configuration files," making it easier to pinpoint the fault without having to search blindly.

In addition to logging, Nginx can be integrated with external monitoring tools, both at the server level (CPU, memory, disk) and focused on HTTP metrics (response times, number of active connections, error rates). In this way, you gain a comprehensive view of the behavior of both the proxy and the services it protects, which is critical when you manage high-traffic sites or applications.
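For basic built-in metrics, the stub_status module exposes connection counters on a private endpoint (a minimal sketch; the module is usually compiled into the standard distribution packages):

    location = /nginx_status {
        stub_status;        # active connections, accepted/handled requests
        allow 127.0.0.1;    # only reachable from the machine itself
        deny all;
    }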

With a combination of good logs, alerts, and a couple of basic commands to check the status of services, it becomes much easier to keep the infrastructure healthy and react quickly if an incident occurs that affects availability or performance.

All of the above makes Nginx, used as a reverse proxy, a central component of many modern architectures: it improves performance by serving static files and caching, strengthens security by hiding backends and centralizing encryption policies, facilitates the deployment of multiple applications under the same domain, and provides a single point of monitoring and control over traffic entering and leaving your platform.