Issues when running Logstash in Docker

I am trying to run the ELK stack in Docker, and I followed these fairly simple and straightforward steps:

Created the directory /opt/elk/logstash and placed a sample log file and the following logstash.conf there:

input {
  file {
    path => "/opt/elk/logstash/access.log" # sample Apache log file on local machine
    type => "apachelogs"
    start_position => "beginning"
  }
}

filter {
  if [type] == "apachelogs" {
    grok {
      match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
  }
}

output {
  elasticsearch { embedded => true }
}

Then I started 3 containers:

docker run -d --name elasticsearch -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.3.0

docker run -d --name kibana -p 5601:5601 --link elasticsearch:elasticsearch docker.elastic.co/kibana/kibana:5.3.0

docker run -d --name logstash -p 5400:5400 -v /opt/elk/logstash/:/usr/share/logstash/pipeline/ --link elasticsearch:elasticsearch docker.elastic.co/logstash/logstash:5.3.0

Elasticsearch and Kibana started without any errors, but Logstash shows some errors in its log file.

Could anybody explain what's wrong with Logstash and how to fix that issue, please?

Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2017-04-11T05:31:08,008][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2017-04-11T05:31:08,097][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"973b685c-7315-4fa5-ae7f-d7fb6fca3a78", :path=>"/usr/share/logstash/data/uuid"}
[2017-04-11T05:31:09,570][ERROR][logstash.agent           ] Cannot load an invalid configuration {:reason=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after "}
[2017-04-11T05:31:10,710][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s]}}
[2017-04-11T05:31:10,720][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2017-04-11T05:31:12,266][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x38cee0b4 URL:http://logstash_system:xxxxxx@elasticsearch:9200/>}
[2017-04-11T05:31:12,271][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x24cdf4b URL:http://elasticsearch:9200>]}
[2017-04-11T05:31:12,272][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2}
[2017-04-11T05:31:12,292][INFO ][logstash.pipeline        ] Pipeline .monitoring-logstash started
[2017-04-11T05:31:12,657][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-04-11T05:31:22,351][ERROR][logstash.inputs.metrics  ] Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}

One of your configuration files is invalid. Judging by the error message,

Cannot load an invalid configuration {:reason=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after "}

it looks like there's something at the very beginning of the file. Maybe a garbage character?

That aside, the embedded option is no longer valid for the elasticsearch output; use the hosts option to point at your Elasticsearch container instead.

I suggest that you don't use container linking (it's a legacy feature). Set up a dedicated Docker network for your Elastic containers instead.
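
A rough sketch of what that could look like (the network name elk is just illustrative; --network requires a reasonably recent Docker):

# Create a user-defined bridge network; containers attached to it
# can resolve each other by container name.
docker network create elk

docker run -d --name elasticsearch --network elk -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.3.0

docker run -d --name kibana --network elk -p 5601:5601 docker.elastic.co/kibana/kibana:5.3.0

docker run -d --name logstash --network elk -v /opt/elk/logstash/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:5.3.0

With all three containers on the same network, Logstash and Kibana can reach Elasticsearch at http://elasticsearch:9200 without any --link flags.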

This is very strange behavior, since there are no errors in the logstash.conf file. This is exactly what I have (without the embedded option):

input {
  file {
    path => "/opt/elk/logstash/sample.log"
    type => "apachelogs"
    start_position => "beginning"
  }
}

filter {
  if [type] == "apachelogs" {
    grok {
      match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}

And I am getting the same error about an invalid configuration. It also says that Elasticsearch is unreachable:

[2017-04-12T02:14:49,263][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"2c1c6963-a832-436e-8081-636acff59ee5", :path=>"/usr/share/logstash/data/uuid"}
[2017-04-12T02:14:50,731][ERROR][logstash.agent           ] Cannot load an invalid configuration {:reason=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after "}
[2017-04-12T02:14:52,145][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s]}}
[2017-04-12T02:14:52,147][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2017-04-12T02:14:55,685][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x7ce6ec6d URL:http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::SocketException] No route to host (Host unreachable)"}

Do you have any idea what's wrong?

Hexdump the file and look for garbage (possibly non-printable) characters. If that doesn't result in anything useful, create a new minimal file with input { stdin { } } in it. Does that work?
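
Something like this, for example (paths assumed from your earlier posts):

# Dump the start of the config in hex; look for anything that
# appears before the initial "input" keyword (a BOM, stray dashes, etc.).
hexdump -C /opt/elk/logstash/logstash.conf | head

# A minimal pipeline for testing: read from stdin, print to stdout.
echo 'input { stdin { } } output { stdout { } }' > /opt/elk/logstash/test.conf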

Thanks Magnus. That was probably because of the ----- symbols on line 1.
It works fine now, but Logstash is not able to connect to Elasticsearch (with the same logstash.conf file as posted above):

Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2017-04-12T05:15:10,695][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s]}}
[2017-04-12T05:15:10,759][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2017-04-12T05:15:11,751][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x78dc6c70 URL:http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2017-04-12T05:15:11,771][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x253a12a5 URL:http://elasticsearch:9200>]}
[2017-04-12T05:15:11,772][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2}
[2017-04-12T05:15:11,801][INFO ][logstash.pipeline        ] Pipeline .monitoring-logstash started
[2017-04-12T05:15:11,909][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2017-04-12T05:15:11,913][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2017-04-12T05:15:11,932][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x5a382205 URL:http://elasticsearch:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2017-04-12T05:15:11,947][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-04-12T05:15:12,018][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2017-04-12T05:15:12,024][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsea
...
[2017-04-12T05:15:53,163][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x5a382205 URL:http://elasticsearch:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
[2017-04-12T05:15:53,279][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x4d57ba1a URL:http://logstash_system:xxxxxx@elasticsearch:9200/>}
[2017-04-12T05:15:58,167][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2017-04-12T05:15:58,178][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x5a382205 URL:http://elasticsearch:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
[2017-04-12T05:16:03,184][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2017-04-12T05:16:06,158][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x5a382205 URL:http://elasticsearch:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
[2017-04-12T05:16:09,159][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2017-04-12T05:16:09,166][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x5a382205 URL:http://elasticsearch:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
[2017-04-12T05:16:14,168][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}

Do you have any idea what could be wrong?

Have you verified that ES really is running? If you step into the container with docker exec -it elasticsearch bash, can you use netstat -an, curl, or whatever is available to verify that the port is responding to requests?
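
For example (a quick check from inside the container; the 5.x x-pack images have security enabled, so the default elastic/changeme credentials are assumed here):

docker exec -it elasticsearch bash

# Inside the container: is anything listening on 9200?
netstat -an | grep 9200

# Does the HTTP endpoint answer? Even a 401 from an unauthenticated
# request would prove the port itself is up.
curl -u elastic:changeme http://localhost:9200/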

This is the output of netstat -an:

bash-4.3$ netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 :::9200                 :::*                    LISTEN      
tcp        0      0 :::9300                 :::*                    LISTEN      
tcp        0      0 ::ffff:XXX.XX.X.X:9200  ::ffff:XXX.XX.X.X:38260 ESTABLISHED 
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node Path
unix  2      [ ]         STREAM     CONNECTED     124334 
unix  2      [ ]         STREAM     CONNECTED     124460 

I can reach http://localhost:9200/ in the browser. I can also see the Elasticsearch cluster in the Kibana dashboard.

By the way, there are the following errors in the Elasticsearch log file:

[2017-04-12T05:15:14,439][INFO ][o.e.p.PluginsService     ] [uxSyVcL] loaded plugin [x-pack]
[2017-04-12T05:15:39,971][INFO ][o.e.n.Node               ] initialized
[2017-04-12T05:15:39,972][INFO ][o.e.n.Node               ] [uxSyVcL] starting ...
[2017-04-12T05:15:40,939][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 8d:f6:69:ea:4e:f2:bb:fb
[2017-04-12T05:15:41,404][INFO ][o.e.t.TransportService   ] [uxSyVcL] publish_address {172.20.0.2:9300}, bound_addresses {[::]:9300}
[2017-04-12T05:15:41,421][INFO ][o.e.b.BootstrapChecks    ] [uxSyVcL] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-04-12T05:15:48,782][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][young][3][5] duration [6.1s], collections [1]/[6.3s], total [6.1s]/[16.6s], memory [164.8mb]->[112mb]/[1.9gb], all_pools {[young] [121.2mb]->[35.3mb]/[133.1mb]}{[survivor] [16.6mb]->[11.7mb]/[16.6mb]}{[old] [26.9mb]->[64.9mb]/[1.8gb]}
[2017-04-12T05:15:48,857][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][3] overhead, spent [6.1s] collecting in the last [6.3s]
[2017-04-12T05:15:48,900][INFO ][o.e.c.s.ClusterService   ] [uxSyVcL] new_master {uxSyVcL}{uxSyVcLQQQik-8Auid_xBA}{74cgoakYQhS9_X2OWjMjKA}{172.20.0.2}{172.20.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-04-12T05:15:49,089][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [uxSyVcL] publish_address {172.20.0.2:9200}, bound_addresses {[::]:9200}
[2017-04-12T05:15:49,100][INFO ][o.e.n.Node               ] [uxSyVcL] started
[2017-04-12T05:15:50,260][ERROR][o.e.x.m.c.i.IndicesStatsCollector] [uxSyVcL] collector [indices-stats-collector] failed to collect data
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
	at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:70) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:47) ~[elasticsearch-5.3.0.jar:5.3.0]
...

[2017-04-12T05:15:50,334][ERROR][o.e.x.m.c.i.IndexStatsCollector] [uxSyVcL] collector [index-stats-collector] failed to collect data
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
	at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:70) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:47) ~[elasticsearch-5.3.0.jar:5.3.0]
...

[2017-04-12T05:15:50,367][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [uxSyVcL] collector [index-recovery-collector] failed to collect data
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
	at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:114) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:52) ~[elasticsearch-5.3.0.jar:5.3.0]
...

[2017-04-12T05:15:51,787][INFO ][o.e.g.GatewayService     ] [uxSyVcL] recovered [5] indices into cluster_state
[2017-04-12T05:16:06,154][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][young][14][6] duration [6.3s], collections [1]/[7.2s], total [6.3s]/[23s], memory [202.7mb]->[115.6mb]/[1.9gb], all_pools {[young] [126mb]->[1.1mb]/[133.1mb]}{[survivor] [11.7mb]->[13.3mb]/[16.6mb]}{[old] [64.9mb]->[101.2mb]/[1.8gb]}
[2017-04-12T05:16:06,160][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][14] overhead, spent [6.3s] collecting in the last [7.2s]
[2017-04-12T05:16:06,678][INFO ][o.e.c.r.a.AllocationService] [uxSyVcL] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.monitoring-es-2-2017.04.12][0]] ...]).
[2017-04-12T05:16:59,653][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:17:09,654][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:18:13,106][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][young][136][13] duration [5.5s], collections [1]/[5.7s], total [5.5s]/[28.9s], memory [246.4mb]->[116.7mb]/[1.9gb], all_pools {[young] [131.2mb]->[1.3mb]/[133.1mb]}{[survivor] [8.9mb]->[6.4mb]/[16.6mb]}{[old] [106.2mb]->[108.8mb]/[1.8gb]}
[2017-04-12T05:18:13,122][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][136] overhead, spent [5.5s] collecting in the last [5.7s]
[2017-04-12T05:18:23,219][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:18:33,219][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.

Hmm. Not sure what that's about, but it's probably something you should fix before continuing to pursue Logstash.
