No index created

I have the Logstash config below, but it is not creating any index.

input {
  file {
    type => "monitor1"
    path => "/root/Documents/scripts/data/monitor.out"
    start_position => "beginning"
    sincedb_path => "/opt/sincedb/.sincedb_monitor1"
  }
}
filter {
  if [type] == "monitor1" {
    grok {
      match => { "message" => "hostname:%{GREEDYDATA:hostname}|ipaddress:%{IP:ipaddress}|status:%{WORD:status}" }
    }
  }
}
output {
  if [type] == "monitor1" {
    elasticsearch {
      hosts => ["1XX.XX.XX.XX:9200"]
      index => "monitor1"
    }
  }
}

The grok filter matches successfully when tested with grokdebugger and grokconstructor.

Standard debugging advice:

  • Remove all conditionals.
  • Use (only) a stdout { codec => rubydebug } output until you've verified that events look like you want them to. Only then add back the elasticsearch output.
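
A minimal sketch of such a debugging config, reusing the path and pattern from the config above, might look like this:

input {
  file {
    path => "/root/Documents/scripts/data/monitor.out"
    start_position => "beginning"
    sincedb_path => "/opt/sincedb/.sincedb_monitor1"
  }
}
filter {
  grok {
    match => { "message" => "hostname:%{GREEDYDATA:hostname}|ipaddress:%{IP:ipaddress}|status:%{WORD:status}" }
  }
}
output {
  stdout { codec => rubydebug }
}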

I have tried the above. It seems I am not seeing any output from my log source.

[2018-06-29T13:24:05,083][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2018-06-29T13:24:06,785][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x3b76393f@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"}
[2018-06-29T13:24:06,904][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-06-29T13:24:06,935][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1d813b41@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
[2018-06-29T13:24:06,937][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-06-29T13:24:09,157][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x1d813b41@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"}
[2018-06-29T13:24:19,062][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-06-29T13:24:19,066][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-06-29T13:24:19,236][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-06-29T13:24:19,345][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-06-29T13:24:19,397][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-06-29T13:24:19,795][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-06-29T13:24:19,983][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x648dc3fa@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
[2018-06-29T13:24:19,994][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}

Your file input is probably tailing the file. Deleting the sincedb file while Logstash isn't running should help with that. If that doesn't help, increase Logstash's log level and look for log entries containing "glob".
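
For example, assuming the sincedb path from the config above, stop Logstash and then:

rm /opt/sincedb/.sincedb_monitor1

Alternatively, while testing, setting sincedb_path => "/dev/null" in the file input makes Logstash discard its read position, so the file is re-read from the beginning on every start.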

Where can I increase the Logstash log level?

In logstash.yml or via command line options. The details are described in the documentation.
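
For example, a sketch of both options (debug can be replaced with trace for even more detail):

# logstash.yml
log.level: debug

# or on the command line, alongside your usual options
bin/logstash --log.level debug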
