Collect logs from external deployments exposing a domain

I want to ship logs from remote deployments via Filebeat or Logstash to a central Elastic Stack over HTTP, with only domains exposed.

I have considered two solutions, but can't get either to work.

The setup is as follows:

  • there is a central framework on the company's server A,
  • and multiple client services, each on a different server: B, C, D, etc.

None of them are on the same network. All services run in Docker containers with Compose.

We set up Elastic Stack on A.
The initial plan was to add Filebeat on each of servers B, C, D and ship to Logstash/Elasticsearch on A.

However, A sits behind a company firewall, HAProxy, etc. They are able to open a domain and subdomains and map them to specific ports: 9200 for Elasticsearch and 5044 for Logstash.

  1. The plan to ship logs with Filebeat doesn't work, I guess because it can only connect via TCP and not HTTP.

The following two attempts fail:

  output.logstash:
    enabled: true
    hosts: [""]

Failed to connect to backoff(async(tcp:// lookup https on no such host

Setting just the domain, it appends the :5044 itself:

  output.logstash:
    enabled: true
    hosts: [""]

Failed to connect to backoff(async(tcp:// dial tcp [2606:asdfasdf::asdfasdf::]:5044: connect: network is unreachable
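
For completeness, the shape of the Filebeat output section is the one below; the host is a placeholder for our real domain, and if I understand correctly, Filebeat speaks its own protocol over TCP on that port:

```yaml
# filebeat.yml -- Logstash output (Beats protocol over TCP, not HTTP)
output.logstash:
  enabled: true
  # placeholder host; Filebeat appends :5044 when no port is given
  hosts: ["logs.example.com:5044"]
```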

Going to plan 2

  2. Deploy Filebeat and Logstash on each of servers B, C, D and ship to Elasticsearch on A.

The following setup doesn't work; :9200 is appended after the domain.

Logstash pipeline configuration (on servers B, C, D):

output {
    elasticsearch {
        hosts => [""]
        index => "logs-%{+YYYY.MM.dd}"
        document_type => "nginx_logs"
        user => "elastic"
        password => "changeme"
    }
    stdout { codec => rubydebug }
}

][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [][Manticore::ResolutionFailure]"}

I am thinking of another solution: maybe use the http plugin and send via REST requests? I need some help with this configuration; what I have so far is the following (not working):

output {
    http {
        url => ""
        http_method => "post"
        content_type => "text/xml;charset=UTF-8"
        index => "logs-%{+YYYY.MM.dd}"
        headers => {
            "Authorization" => "Basic ZWxhc3RpYzpjaGFuZ2VtZQ=="
        }
        format => "message"
    }
}

Thank you for your help!

Is your domain listening on port 5044 on the internet, or is it using another port, like 443, and redirecting it to 5044?

Filebeat does not use HTTP; it uses a custom protocol over TCP. Your HAProxy should also be configured to use TCP mode, not HTTP mode.
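
For example, a layer-4 passthrough for the Beats port would look roughly like this in haproxy.cfg (all names and addresses below are placeholders, adapt them to your setup):

```
# TCP (layer 4) passthrough so the Beats protocol is not terminated as HTTP
frontend beats_in
    mode tcp
    bind *:5044
    default_backend logstash_beats

backend logstash_beats
    mode tcp
    server logstash1 192.168.0.10:5044 check
```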

Also, can you test if you can connect to both endpoints from the external server using telnet? Your issue seems to be related to some network configuration; it won't work until you fix that, and changing the output plugin will make no difference if it still cannot connect.

@leandrojmp Thank you for your reply,

I don't know the answer to the first question, whether it's 443 or 5044.

The only clue I can give is that if I start a hello-world web service on server A listening on 192.168.x.x:5044 instead of Logstash, I can access it from outside the VPN network successfully. The same goes for 9200 and the Elasticsearch URL.

This means the connection is active and correct. Not sure how it's done internally though :slight_smile:

So the question is: how can I use the http plugin with this domain? When I enter it in the configuration, I get results where :9200 is appended at the end.

I have never used the http output to send data to Elasticsearch, so I cannot help with this, but it is not that simple, as you would need to put the index path directly in the url option.

Something like: url => "" for example.
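
As a hypothetical sketch (placeholder domain, static index name), the whole target would sit in url, since the http output has no index option:

```
output {
    http {
        # placeholder URL; the index and the _doc endpoint go in the path
        url => "https://elastic.example.com/logs-test/_doc"
        http_method => "post"
        format => "json"
        content_type => "application/json"
        headers => {
            "Authorization" => "Basic ZWxhc3RpYzpjaGFuZ2VtZQ=="
        }
    }
}
```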

But again, it still looks like you have a connectivity issue, and you need to troubleshoot that before anything else.

If the elasticsearch output cannot reach your server, why would the http output reach it, since it is the same endpoint?

Also, your elasticsearch error seems to indicate a DNS resolution failure.
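
For what it's worth, the elasticsearch output also accepts full URLs in hosts, so you can pin the scheme and port explicitly instead of letting Logstash append the default 9200 (placeholder domain below):

```
output {
    elasticsearch {
        # a full URL pins the scheme and port; without a port, 9200 is assumed
        hosts => ["https://elastic.example.com:443"]
        user => "elastic"
        password => "changeme"
        index => "logs-%{+YYYY.MM.dd}"
    }
}
```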


Can you check what is really exposed on the internet?

Is it exposed on the public internet on port 5044 using TCP, not just HTTP/HTTPS, since it is behind a firewall?

Can you connect to this domain on this port from one of the remote machines, using telnet for example?

Is it exposed on the public internet on port 9200 using HTTPS?

Can you run a curl from the Logstash server and get a response?

There is no issue with the domains; each is just like any domain exposing an IP and port. As I said, the hello-world tests work fine.

I was finally able to find a solution. For anyone who may stumble upon this question in the future:

Logstash-to-Elasticsearch and Beats-to-Logstash shipping are done with TCP protocols, so in our setup they cannot be used.

The solution is to use Logstash-to-Logstash communication, http output to http input plugin, as explained in the official documentation (Logstash-to-Logstash communication | Logstash Reference [8.3] | Elastic), in the section "HTTP-HTTP considerations".

That way the domains that accept only HTTP work fine, and the full flow is:
Servers B, C, D: logs -> Filebeat -> Logstash http output -> over the internet -> Logstash http input on server A -> Elasticsearch
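
A minimal sketch of the two pipelines, with placeholder domains and ports (real names and credentials differ):

```
# --- Servers B, C, D: sender pipeline (ships events over HTTP) ---
output {
    http {
        url => "https://logs.example.com"   # placeholder domain exposed by the proxy
        http_method => "post"
        format => "json_batch"
        content_type => "application/json"
    }
}

# --- Server A: receiver pipeline (a separate config file) ---
input {
    http {
        port => 5044   # the port the proxy maps to
    }
}
output {
    elasticsearch {
        hosts => ["http://elasticsearch:9200"]   # local container on server A
        index => "logs-%{+YYYY.MM.dd}"
    }
}
```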