I am trying to run the ELK stack in Docker and took the following pretty simple and straightforward steps:
Created the directory /opt/elk/logstash and moved a sample log file plus this logstash.conf file there:
input {
  file {
    path => "/opt/elk/logstash/access.log"  # sample Apache log file on local machine
    type => "apachelogs"
    start_position => "beginning"
  }
}

filter {
  if [type] == "apache-access" {
    grok {
      match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
  }
}

output {
  elasticsearch { embedded => true }
}
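For reference, access.log holds standard Apache combined-format lines; this is a made-up example of the kind of line I expect %{COMBINEDAPACHELOG} to match (not a real line from my file):

127.0.0.1 - - [11/Apr/2017:05:30:00 +0000] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/start.html" "Mozilla/5.0 (X11; Linux x86_64)"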
Then I started 3 containers:
docker run -d --name elasticsearch -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:5.3.0

docker run -d --name kibana -p 5601:5601 \
  --link elasticsearch:docker.elastic.co/elasticsearch/elasticsearch \
  docker.elastic.co/kibana/kibana:5.3.0

docker run -d --name logstash -p 5400:5400 \
  -v /opt/elk/logstash/:/usr/share/logstash/pipeline/ \
  --link elasticsearch:docker.elastic.co/elasticsearch/elasticsearch \
  docker.elastic.co/logstash/logstash:5.3.0
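Before digging into the log, these two checks can be run against the running Logstash container (I'm assuming the logstash binary is on the PATH inside the official 5.3.0 image; if not, it lives at /usr/share/logstash/bin/logstash):

# confirm the bind-mounted pipeline directory is visible inside the container
docker exec logstash ls -l /usr/share/logstash/pipeline/

# only parse the pipeline configuration, then exit
docker exec logstash logstash --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf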
Elasticsearch and Kibana started without any errors, but Logstash reports errors in its log.
Could anybody please explain what's wrong with Logstash and how to fix it?
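For completeness, this is how Elasticsearch can be checked from the host on the mapped port (the elastic/changeme credentials below are the stock X-Pack defaults in the 5.x images, assuming they haven't been changed):

curl -u elastic:changeme http://localhost:9200

Here is the Logstash log: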
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2017-04-11T05:31:08,008][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2017-04-11T05:31:08,097][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"973b685c-7315-4fa5-ae7f-d7fb6fca3a78", :path=>"/usr/share/logstash/data/uuid"}
[2017-04-11T05:31:09,570][ERROR][logstash.agent ] Cannot load an invalid configuration {:reason=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after "}
[2017-04-11T05:31:10,710][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s]}}
[2017-04-11T05:31:10,720][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2017-04-11T05:31:12,266][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x38cee0b4 URL:http://logstash_system:xxxxxx@elasticsearch:9200/>}
[2017-04-11T05:31:12,271][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x24cdf4b URL:http://elasticsearch:9200>]}
[2017-04-11T05:31:12,272][INFO ][logstash.pipeline ] Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2}
[2017-04-11T05:31:12,292][INFO ][logstash.pipeline ] Pipeline .monitoring-logstash started
[2017-04-11T05:31:12,657][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-04-11T05:31:22,351][ERROR][logstash.inputs.metrics ] Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}