Help configuring Logstash

Hello,

I'm new to Logstash; I am trying to ingest data into the Elastic Stack but am struggling to get anything working.
Both Elasticsearch/Kibana and Logstash are installed on a Windows Server 2016 machine.

I have two major issues:

  • first, no data at all shows up in Kibana :frowning:
  • second, the logs show multiple errors, as shown below

Here is my logstash.conf

input {
  udp {
    port => 514
    type => "syslog"
  }
}
output {
  elasticsearch { hosts => ["http://localhost:9200"] }
  stdout { codec => rubydebug }
}

The logs give me weird results:
Logstash starts correctly at first, but then for some reason multiple instances are started... I do not see why.

[2020-05-11T16:44:30,937][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.1"}
[2020-05-11T16:44:32,608][INFO ][org.reflections.Reflections] Reflections took 31 ms to scan 1 urls, producing 20 keys and 40 values
[2020-05-11T16:44:33,468][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://localhost:9200/]}}
[2020-05-11T16:44:33,655][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2020-05-11T16:44:33,702][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-05-11T16:44:33,702][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[2020-05-11T16:44:33,765][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2020-05-11T16:44:33,812][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-05-11T16:44:33,843][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-05-11T16:44:33,843][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["C:/logstash/config/logstash.conf"], :thread=>"#<Thread:0x622ec2ed run>"}
[2020-05-11T16:44:33,890][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-05-11T16:44:34,624][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-05-11T16:44:34,687][INFO ][logstash.inputs.udp ][main] Starting UDP listener {:address=>"0.0.0.0:514"}
[2020-05-11T16:44:34,702][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>}
[2020-05-11T16:44:34,749][INFO ][logstash.inputs.udp ][main] UDP listener started {:address=>"0.0.0.0:514", :receive_buffer_bytes=>"65536", :queue_size=>"2000"}
[2020-05-11T16:44:34,952][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-05-11T16:47:22,197][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-05-11T16:47:22,338][FATAL][logstash.runner ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
[2020-05-11T16:47:22,338][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2020-05-11T16:47:41,668][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-05-11T16:47:41,809][FATAL][logstash.runner ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
[2020-05-11T16:47:41,809][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
[2020-05-11T16:48:01,153][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified

Does someone have an idea?

Thank you,

Hi @Joggy_John - Welcome to our community forums!

From the logs you provided, this is the main error:

[2020-05-11T16:47:22,338][FATAL][logstash.runner ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.

This indicates that there are multiple instances of Logstash running. I am not sure how you are running Logstash on your Windows host, but you should check Task Manager (Processes/Services) and ensure that only one instance of Logstash is running.
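If it helps, here is one way to spot duplicates from the command line (a sketch; it assumes Logstash runs under the bundled JVM, so it shows up as a java.exe process, and that any installed service has "logstash" in its name):

```bat
:: List Java processes (Logstash runs on the JVM)
tasklist /FI "IMAGENAME eq java.exe"

:: Check whether a Logstash Windows service is installed/running
sc query | findstr /i logstash
```

If both a console-launched instance and a service instance appear, stop one of them before restarting.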

I hope that helps.

Thank you indeed! I was running Logstash from the command prompt while the Windows service was also trying to start.

It has now been solved.

However, while everything now seems to be running without errors, I do not see any change in Kibana; logs should be pouring into Elasticsearch, shouldn't they?

Below is the main log:

[2020-05-12T09:39:24,226][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-05-12T09:39:24,351][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.1"}
[2020-05-12T09:39:25,897][INFO ][org.reflections.Reflections] Reflections took 46 ms to scan 1 urls, producing 20 keys and 40 values
[2020-05-12T09:39:26,726][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://localhost:9200/]}}
[2020-05-12T09:39:26,897][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2020-05-12T09:39:26,944][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-05-12T09:39:26,960][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[2020-05-12T09:39:27,007][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2020-05-12T09:39:27,054][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-05-12T09:39:27,085][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-05-12T09:39:27,085][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["C:/logstash/config/logstash.conf"], :thread=>"#<Thread:0x6befa4c6 run>"}
[2020-05-12T09:39:27,132][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-05-12T09:39:27,851][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-05-12T09:39:27,929][INFO ][logstash.inputs.udp ][main] Starting UDP listener {:address=>"0.0.0.0:514"}
[2020-05-12T09:39:27,944][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>}
[2020-05-12T09:39:27,991][INFO ][logstash.inputs.udp ][main] UDP listener started {:address=>"0.0.0.0:514", :receive_buffer_bytes=>"65536", :queue_size=>"2000"}
[2020-05-12T09:39:28,179][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

The conf file:

input {
  udp {
    port => 514
    type => "syslog"
  }
}

output {
  elasticsearch { hosts => ["http://localhost:9200"] }
  stdout { codec => rubydebug }
}

When I look in Kibana, though, I do not see any new data. Did I miss anything?

Thank you for your support!

@Joggy_John - You're welcome.

Are you receiving any data on this Windows host on port 514? Is there any syslog server running on this Windows host? Any firewall in place that could block the traffic?
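One quick way to rule out basic UDP problems is a loopback test outside Logstash entirely. This sketch (plain Python, using an ephemeral port rather than 514 so it does not collide with the running listener) just confirms the host can send and receive a syslog-style datagram locally:

```python
import socket

def udp_loopback_test(message: bytes) -> bytes:
    """Send a datagram to a locally bound UDP socket and return what arrives."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))   # ephemeral port, not 514
    receiver.settimeout(2.0)
    port = receiver.getsockname()[1]

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(message, ("127.0.0.1", port))
    data, _ = receiver.recvfrom(65536)

    sender.close()
    receiver.close()
    return data

# A BSD-syslog-style test message (hypothetical content)
msg = b"<13>May 12 10:00:00 testhost app: hello logstash"
received = udp_loopback_test(msg)
print(received.decode())
```

If a loopback test like this works but nothing reaches Logstash on 514, a firewall rule or another syslog service already holding the port is the likely suspect.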

OK, I tried using a file input instead of a port (following this tutorial: https://dzone.com/articles/elk-stack-on-windows-server-part-3-customization ); Logstash is obviously not sending data to ES.

Maybe it's a port issue? Or should I reinstall the whole thing (on my own computer instead of the server, for instance)?

Thank you!

@Joggy_John - I am not sure what you are trying to do exactly. You could probably use the generator input plugin to generate random log events and verify that the logs are ingested into Elasticsearch.
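For example, a minimal test pipeline could look like this (a sketch; `count` and `message` are standard generator options, adjust as needed):

```conf
input {
  generator {
    count => 5
    message => "test event from the generator input"
  }
}
output {
  elasticsearch { hosts => ["http://localhost:9200"] }
  stdout { codec => rubydebug }
}
```

If these five events show up in Elasticsearch, ingestion works end to end and the problem is on the input side.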

From the previous logs you shared, the connection between Logstash and Elasticsearch is OK.
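To double-check on the Elasticsearch side, you could also list the indices directly (assuming Elasticsearch is still on localhost:9200):

```shell
# List logstash-* indices with document counts; an empty result or a
# docs.count of 0 means nothing has been indexed yet
curl -s "http://localhost:9200/_cat/indices/logstash-*?v"
```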

I am just trying to check whether ingestion actually works on my server or whether I have a port issue; I'll give the generator a try. Thank you!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.