Configure Let's Encrypt with Nginx to act as Reverse Proxy for Logstash

I have Logstash running in a Docker container in a private subnet on AWS. We are able to use Logstash's HTTP input plugin to read and process multi-line ASCII data before sending it on to Elasticsearch for storage. All of this works perfectly when I run the Elastic Stack Docker containers locally on my machine. Specifically, I am able to use a cURL command to send HTTP PUT requests to Logstash and have the data show up in Elasticsearch as expected. The same cURL command also works perfectly when I log on to the AWS EC2 instance that is hosting Nginx and send the data directly to the Logstash HTTP input on the AWS private subnet. My intent is to have Nginx act as the TLS termination point for Logstash, since I do not have SSL turned on for the Logstash HTTP input plugin.
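For reference, the Logstash side is just the stock http input plugin. A minimal sketch of that pipeline (filter section omitted; the elasticsearch host is an assumption on my part since everything runs on one instance, and the port/credentials match the cURL commands below):

input {
  http {
    # Plain HTTP; TLS is meant to be terminated by Nginx in front of this.
    port => 8080
    user => "elastic"
    password => "changeme"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}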

I placed Nginx in a public subnet on AWS and configured it to use proxy_pass to forward HTTP traffic to Logstash (similar to what the cURL command does). I'm still pretty new to Nginx, so I wouldn't be surprised if there are some things I'm missing. (For now I'm just experimenting, which is why a single AWS EC2 instance is hosting a single-node Elasticsearch, Logstash, and Kibana; this will be reconfigured once we get this piece of the networking working.)

Below is my /etc/nginx/nginx.conf file:

user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    # Kibana
    server {
        server_name elastic.example.com;
        location / {
            proxy_pass https://10.6.101.20:5601;
        }

        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/elastic.example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/elastic.example.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

    # Logstash HTTP input
    server {
        server_name elastic.example.com;
        location / {
            proxy_pass http://10.6.101.20:8080;
        }

        listen 8080 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/elastic.example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/elastic.example.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

    # Elasticsearch
    server {
        server_name elastic.example.com;
        location / {
            proxy_pass https://10.6.101.20:9200;
        }

        listen 9200 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/elastic.example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/elastic.example.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

    # Redirect plain HTTP on port 80 to HTTPS
    server {
        if ($host = elastic.example.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        server_name elastic.example.com;
        listen 80;
        return 404; # managed by Certbot
    }
}

Here is an example cURL command that works when running the Elastic Stack locally (after changing out the private-subnet IP address) or when run from the Nginx server itself, i.e. on the "back side" where Nginx is not processing the request:

$ curl -0 -v --user elastic:changeme -XPUT 'http://10.6.101.20:8080/dat/_doc/1' -H 'Content-Type: text/csv; charset=utf-8' --data-binary "@mock.dat"

When I issue this same command through Nginx (same data, but a different document number), the data that Logstash receives is wrong. I have some Ruby debug output in the pipeline, so I can see that Logstash received the data, but it was not what it expected and Logstash "gets angry" about it. The pipeline tries to run and process the data but can't come up with anything useful; it's as if it isn't getting the correct data.

$ curl -0 -v --user elastic:changeme -XPUT 'http://elastic.example.com:8080/dat/_doc/2' -H 'Content-Type: text/csv; charset=utf-8' --data-binary "@mock.dat"
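One thing I'm second-guessing as I write this up: the Logstash listener in the config above is "listen 8080 ssl", so presumably curl should be speaking TLS to that port. The HTTPS form of the same command would be something like:

$ curl -0 -v --user elastic:changeme -XPUT 'https://elastic.example.com:8080/dat/_doc/2' -H 'Content-Type: text/csv; charset=utf-8' --data-binary "@mock.dat"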

I just noticed the HTTP headers and the "--data-binary" option that I'm using when sending the curl command. I did not explicitly forward those in the part of the Nginx configuration that is specific to Logstash. Maybe I need to do that?
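If I do need to forward them explicitly, I'm guessing at something like the following for the Logstash location block; the proxy_set_header and proxy_http_version lines are my assumption about what might be missing, not something I've verified:

location / {
    proxy_pass http://10.6.101.20:8080;

    # Forward the original Host and Content-Type headers to Logstash.
    proxy_set_header Host $host;
    proxy_set_header Content-Type $content_type;

    # Talk HTTP/1.1 to the backend instead of the proxy default of HTTP/1.0.
    proxy_http_version 1.1;

    # Raise the 1m default in case the uploaded file grows.
    client_max_body_size 20m;
}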

Edit: I can see from the curl output that the correct header/MIME type is being sent, and the "--data-binary" body is accounted for in the Content-Length, but is it really getting through?

'Content-Type: text/csv; charset=utf-8' --data-binary "@mock.dat"

*   Trying 10.6.101.20:8080...
* Connected to 10.6.101.20 (10.6.101.20) port 8080 (#0)
* Server auth using Basic with user 'elastic'
> PUT /dat/_doc/21 HTTP/1.0
> Host: 10.6.101.20:8080
> Authorization: Basic ZWxhc3RpYzpjaGFuZ2VtZQ==
> User-Agent: curl/7.79.1
> Accept: */*
> Content-Type: text/csv; charset=utf-8
> Content-Length: 18539
>
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< content-length: 2
< content-type: text/plain
<
* Closing connection 0
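To double-check on the Nginx side, I was also thinking of making Nginx log the request body it actually proxies, along these lines (just a debugging sketch; $request_body is only populated when Nginx buffers the body in memory, which it does by default when proxying):

# In the http {} block:
log_format bodylog '$remote_addr "$request" ct=$content_type '
                   'len=$content_length body=$request_body';

# In the Logstash server {} block:
access_log /var/log/nginx/logstash_body.log bodylog;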

Any help would be appreciated.
