EDITED: 07/2024. Please take a look at THIS ARTICLE for a follow-up discussing the risks and nuances of using HTTPS reverse proxies. An HTTPS reverse proxy may not be the right tool for your circumstances, and using one carries certain risks.
Recently I was presented with a very common problem: offer up a service which listens on an unprivileged port, present that service through a reverse proxy of some kind, and keep the entire system secure. In our case the service is HashiCorp Vault. Frustratingly for such a popular application, I couldn't find any guides or implementation examples covering how to do this with the popular reverse proxy or load balancer solutions, so here I'm going to look at how to do it with NGINX.
Before jumping into this: there are legitimate downsides to using NGINX as a reverse proxy to Vault, but we live in the real world here and sometimes we have to work within constraints and secure things appropriately. In a follow-up article I'll look at those issues in a bit more depth and at what alternatives exist.
Intended Goal
In our configuration, HTTPS requests will be sent to NGINX on TCP port 443 and then proxied to Vault on TCP port 8200 via the localhost; in this configuration Vault can ONLY be accessed via NGINX. Both NGINX and Vault will use TLS to secure their connections, and NGINX will be configured with some hardening to prevent common security issues.
Our finished instance is going to look something like below:
I provisioned Certificates and Private Keys ahead of time for both Vault and NGINX and placed them on the target instance. I have also installed the CA Certificate used to sign these certificates on the target instance.
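Before placing certificates, it's worth confirming that each certificate and its private key actually belong together. A quick sketch of the check using openssl — here a throwaway self-signed pair is generated purely to demonstrate the commands; substitute your real paths under /etc/ssl/certs and /etc/ssl/private:

```shell
# Generate a throwaway self-signed pair just to demonstrate the check
# (with real certs, skip this step and point at your existing files)
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$workdir/demo.key" -out "$workdir/demo.crt" \
  -days 1 -subj "/CN=mc-vault.madcaplaughs.co.uk" 2>/dev/null

# A certificate and key are a pair when their RSA moduli are identical
cert_md5=$(openssl x509 -noout -modulus -in "$workdir/demo.crt" | openssl md5)
key_md5=$(openssl rsa -noout -modulus -in "$workdir/demo.key" | openssl md5)
[ "$cert_md5" = "$key_md5" ] && echo "certificate and key match"
```

Running the same two `-modulus` commands against a mismatched pair will produce differing hashes, which is an easy thing to catch before NGINX or Vault fails to start.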
Implementation – Vault
For the sake of brevity, we'll be using an almost identical setup and hardening of Vault as described in a previous article here; our Vault instance is going to be TLS encrypted and will accept connections from the localhost only. In advance I have provisioned a Certificate and Private Key for the Vault service.
Our vault.hcl should look like:
storage "file" {
    path = "/opt/vault"
}

ui = true

listener "tcp" {
    address       = "127.0.0.1:8200" #--Ensure that localhost is used rather than a hostname or host IP
    tls_disable   = 0 #--TLS encrypt API
    tls_cert_file = "/etc/ssl/certs/mc-vault-frontend.crt"
    tls_key_file  = "/etc/ssl/private/mc-vault-frontend.key"
}

max_lease_ttl     = "10h"
default_lease_ttl = "10h"
api_addr          = "https://127.0.0.1:8200" #--Ensure that localhost is used rather than a hostname or host IP
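With the config in place, the Vault service can be started against it and the local CLI pointed at the TLS listener. A sketch, assuming the config lives at /etc/vault/vault.hcl (the article doesn't fix a path, so this is illustrative) and that the CA certificate is the mcl-root-ca.pem used later for NGINX:

```shell
# Start Vault against the config above (config path is an assumption)
sudo vault server -config=/etc/vault/vault.hcl &

# Point the local CLI at the TLS listener on the localhost and trust our CA
export VAULT_ADDR="https://127.0.0.1:8200"
export VAULT_CACERT="/etc/ssl/certs/mcl-root-ca.pem"

# Should report the instance as sealed/uninitialised on first run
vault status
```

In practice you would run Vault under systemd rather than backgrounding it by hand, but the environment variables are the important part: without VAULT_CACERT the CLI will reject the certificate chain.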
Implementation – NGINX
Now the real meat. NGINX is installed with a single package; for Ubuntu:
sudo apt-get install nginx
With the Certificate and Private Key located in /etc/ssl/certs and /etc/ssl/private respectively, we can just use the NGINX default configuration file, which is located at /etc/nginx/sites-enabled/default. Edit the file using nano:
sudo nano /etc/nginx/sites-enabled/default
Delete the contents of the file and replace them with the functional configuration below:
server {
    listen 443 ssl; #--Enable TLS encryption on the proxy (the standalone "ssl on;" directive is deprecated)
    server_name mc-vault.madcaplaughs.co.uk;
    ssl_certificate /etc/ssl/certs/mc-vault-backend.crt;
    ssl_certificate_key /etc/ssl/private/mc-vault-backend.key;

    #--Tighten TLS configuration, disable caching, try and keep sensitive data out of the cache
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 1h;
    ssl_session_cache off;
    ssl_session_tickets off;
    ssl_protocols TLSv1.2;

    location / {
        #--Forward all requests to Vault running on the localhost
        proxy_pass https://127.0.0.1:8200;

        #--Ensures that NGINX will not attempt to connect to anything
        #--but a site signed by a certificate issued from our Certificate Authority
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/ssl/certs/mcl-root-ca.pem;

        #--Disable all caching, try and keep sensitive data out of the cache
        proxy_cache off;

        #--Session forwarding
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
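Before loading a hand-edited configuration, NGINX can syntax-check it for you, which catches typos without taking the service down:

```shell
# Validate the configuration files without (re)starting the service
sudo nginx -t
```

On success this prints that the syntax is ok and the test is successful; on failure it reports the offending file and line number.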
The magic here is happening in the location block, which forwards our requests from the server_name (defined in the server block) to the host and port defined by proxy_pass, with some additional headers also included. I have also included several hardening options which you may or may not wish to use depending on your environment.
In this example my server is named mc-vault.madcaplaughs.co.uk; yours should be substituted with the FQDN of your own host (which will also need to be reflected in the CN/SAN on your Certificate). This isn't an issue here as I'm reusing the same certificate that was previously applied directly to Vault.
Completing
With the configs in place, we’ll need to restart NGINX in order to load the configuration:
sudo /etc/init.d/nginx restart
# [ ok ] Restarting nginx (via systemctl): nginx.service.
Now if we attempt to connect to Vault via the GUI on TCP port 443 we should see a valid certificate presented:
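The same check can be made from the command line against Vault's unauthenticated health endpoint, going through the proxy rather than directly to port 8200. A sketch using the hostname and CA path from this article:

```shell
# Query Vault's health endpoint through NGINX on port 443;
# --cacert trusts the CA that signed the NGINX certificate
curl --cacert /etc/ssl/certs/mcl-root-ca.pem \
  https://mc-vault.madcaplaughs.co.uk/v1/sys/health
```

A JSON response (even one reporting the instance as sealed or uninitialised) confirms that TLS is valid end to end and the proxy pass is working.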
As this is a new Vault instance we will need to init the backend; this is covered in my previous Setup and Installation post here, so I'm not going to cover it again. As we can see, the proxy is working correctly.
One important trade-off to understand here is that Certificate Authentication with Vault will not work with this particular configuration, as TLS termination is occurring at NGINX and a second TLS session is being established with Vault. This can, however, be done with some more advanced configuration of NGINX. If anyone is interested in a more advanced configuration, let me know and I'll write it up.
How come you don’t set up https on the vault instance as well?
Nothing stops you doing that! It’s certainly something I’d do for real world systems. This example just makes it a little easier to understand.
Terminating HTTPS at the load balancer level is not recommended by HashiCorp; it compromises the security of Vault and also disables the TLS cert auth method.
Thanks for the tip, but it's not all that helpful… do you maybe think you could explain why it is not recommended and how it compromises security, or at least where in the documentation these recommendations are?
Besides the end users/software, Vault is supposed to be the only one who gets to see the secrets it shares.
If you put a layer 7 proxy between Vault and the users, this proxy will terminate the TLS session and create a new TLS session to the Vault servers. Therefore it will be able to see the secrets in the clear. Anyone with access to nginx will be able to enable logs and see all the secrets going through it.
Of course it’s doable but this is why it is strongly discouraged.
If you want to reverse proxy Vault, you should put a layer 4 load balancer with TLS Passthrough.
Nginx can do it too, but with a much different configuration.
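For reference, the layer 4 passthrough the commenter describes can be sketched with NGINX's stream module (this assumes the module is compiled in, and the block lives at the top level of nginx.conf, outside any http {} context — it cannot coexist with an http server also listening on 443):

```nginx
# TLS passthrough: NGINX forwards raw TCP and never terminates TLS,
# so the client's session runs end-to-end to Vault itself
stream {
    server {
        listen 443;
        proxy_pass 127.0.0.1:8200;
    }
}
```

Because no TLS termination occurs, Vault's own certificate is the one presented to clients, and TLS cert auth works — at the cost of losing layer 7 features such as header injection and path routing.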
This is a bit of an oversimplification of things. Yes a new TLS session will be initialised, but the session is still encrypted as far as the client is concerned. It is a big overstatement to say that “anyone with access to nginx can see all the secrets going through it”, how would they do that without access to the private key being used to encrypt Vault’s own TLS?
Even with logging turned up to maximum, secrets do not show up in the nginx logs under any circumstances, and as long as both endpoints are encrypted (i.e. both NGINX and Vault) no clear HTTP connections will appear in a packet capture; they will all appear as TLS 1.2 sessions and their contents will be unreadable.
What is true is that if the Vault endpoint is not encrypted, you will be able to packet-sniff anything that goes through NGINX (including secrets), and if the storage backend is local to the same box as Vault then you can read everything.
It would be useful if any of you could point me to where exactly these ideas are "not recommended by HashiCorp", as the security model they provide actually says that none of these things are within the concern of Vault's security design; really, I can think of totally legitimate reasons to configure a system this way 🙂
Fair is fair though, I am planning to write an article looking at options for haproxy.
You have to set up an environment variable:
export VAULT_ADDR='http://127.0.0.1:8200'
otherwise a local "vault status" will not work, as it tries to use HTTPS by default
I believe that you can pair this approach with AWS’s support for ACM certificates in Nitro Enclaves, and you get a reasonably secure implementation of Vault.
Plus, you don’t have to manage / monitor certificates for expiration.
https://docs.aws.amazon.com/enclaves/latest/user/nitro-enclave-refapp.html
[…] I should then be able to authenticate to it by using some form of nginx reverse proxy setup like this. It is not entirely clear to me how to secure Vault but I imagine that this must be core […]
Hello,
I'd like to add an intermediate page before accessing the Vault login page. I added something like:

location /vault {
    proxy_pass http://127.0.0.1:8200;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
}
To access this via a button on the intermediate page, I have this line in the intermediate HTML page: `Go to Vault`
but the button doesn’t display any info