Config file is not processed by Logstash

Hello,

My pipeline is set to look for .conf files under /etc/logstash/conf.d/. I have multiple config files there, and I make sure to tag each one so I can filter into separate indices. Everything works fine for all the other configuration files, but Logstash doesn't process logs from the elastic.conf I placed there.

I have tested the config file by running it on its own:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/elastic.conf "--path.settings" "/etc/logstash"

All logs are filtered fine and shipped to Elasticsearch. However, when the Logstash service runs in DEBUG mode, it picks up the file when the server starts and I see no errors, but later on I can't find any more logs related to the specific index. It's extremely strange behavior, because I can't find WARN or ERROR logs related to the custom index from either Elasticsearch or Logstash.

Any recommendations for further troubleshooting would be greatly appreciated.

Thanks,
Tony

Hi @Antonis_Michael

Do you have multiple conf files in your conf.d directory?

If you are not specifically naming the pipelines, all those confs effectively get appended together into one big single pipeline (think of it as one big conf), and perhaps a filter in one of them is not letting all the events through. This is a pretty common issue, and it can manifest exactly as you describe: runs fine alone but not when running with the other confs.
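As a deliberately extreme sketch of what that concatenation means (file names and contents hypothetical, not from this thread), an unguarded filter in one conf acts on events from every input once the files are merged into a single pipeline:

```
# /etc/logstash/conf.d/a.conf (hypothetical)
input  { beats { port => 5044 } }
output { elasticsearch { hosts => ["localhost:9200"] } }

# /etc/logstash/conf.d/b.conf (hypothetical)
filter {
  # No "if [type] == ..." guard: after the confs are merged, this drop
  # discards the beats events from a.conf as well as b.conf's own events.
  drop { }
}
```

Guarding every filter and output with a conditional on type or tags is the usual workaround; separate pipelines in pipelines.yml avoid the problem entirely.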

Perhaps try naming each conf explicitly in the pipelines.yml file.

Hey Stephen,

Thanks for the quick reply! Yes, I have multiple config files, but I make sure that every part (input, filter, and output) includes a tag or type tied to its specific conf file.

I also left the main pipeline as it was and moved eslogs.conf to a different location during my troubleshooting, but I got the same results. This was the content of pipelines.yml:

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: elastic
  path.config: "/etc/logstash/eslogs.conf"

Thanks again for the quick response, and here is the conf example:

input {
  file {
    path => "/var/log/elasticsearch/*.log" # tail ES log and slowlogs
    type => "elasticsearch"
    start_position => "beginning" # parse existing logs, too
    codec => multiline { # put the whole exception in a single event
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}

filter {
  if [type] == "elasticsearch" {
    grok {
      match => [ "message", "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{DATA:severity}%{SPACE}\]\[%{DATA:source}%{SPACE}\]%{SPACE}(?<message>(.|\r|\n)*)" ]
      overwrite => [ "message" ]
    }

    if "_grokparsefailure" not in [tags] {
      grok {  # regular logs
        match => [
          "message", "^\[%{DATA:node}\] %{SPACE}\[%{DATA:index}\]%{SPACE}(?<short_message>(.|\r|\n)*)",
          "message", "^\[%{DATA:node}\]%{SPACE}(?<short_message>(.|\r|\n)*)" ]
        tag_on_failure => []
      }

      grok {  # slow logs
        match => [ "message", "took\[%{DATA:took}\], took_millis\[%{NUMBER:took_millis}\], types\[%{DATA:types}\], stats\[%{DATA:stats}\], search_type\[%{DATA:search_type}\], total_shards\[%{NUMBER:total_shards}\], source\[%{DATA:source_query}\], extra_source\[%{DATA:extra_source}\]," ]
        tag_on_failure => []
        add_tag => [ "elasticsearch-slowlog" ]
      }

      date { # use timestamp from the log
      match => [ "timestamp", "YYYY-MM-dd HH:mm:ss,SSS" ]
        target => "@timestamp"
      }

      mutate {
        remove_field => [ "timestamp" ]  # remove unused stuff
      }
    }
  }
}
output {
  if [type] == "elasticsearch" {
    elasticsearch {
      ............
    }
  }
}

I changed all the configurations to individual pipelines and the issue persists. I attached all the logs related to the eslogs pipeline.

[root@Kibana-MD01 bin]# cat /var/log/logstash/logstash-plain.log | grep eslogs
[2021-02-19T14:43:18,432][WARN ][logstash.outputs.elasticsearch][eslogs] ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
[2021-02-19T14:43:18,978][INFO ][logstash.outputs.elasticsearch][eslogs] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elasticuser:xxxxxx@ElasticSearch-02.scholarchip.ad:9200/, https://elasticuser:xxxxxx@ElasticSearch-01.scholarchip.ad:9200/, https://elasticuser:xxxxxx@Kibana-01.scholarchip.ad:9200/]}}
[2021-02-19T14:43:19,647][WARN ][logstash.outputs.elasticsearch][eslogs] Restored connection to ES instance {:url=>"https://elasticuser:xxxxxx@ElasticSearch-02.scholarchip.ad:9200/"}
[2021-02-19T14:43:19,707][INFO ][logstash.outputs.elasticsearch][eslogs] ES Output version determined {:es_version=>7}
[2021-02-19T14:43:19,714][WARN ][logstash.outputs.elasticsearch][eslogs] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-02-19T14:43:19,776][WARN ][logstash.outputs.elasticsearch][eslogs] Restored connection to ES instance {:url=>"https://elasticuser:xxxxxx@ElasticSearch-01.scholarchip.ad:9200/"}
[2021-02-19T14:43:19,866][WARN ][logstash.outputs.elasticsearch][eslogs] Restored connection to ES instance {:url=>"https://elasticuser:xxxxxx@Kibana-01.scholarchip.ad:9200/"}
[2021-02-19T14:43:19,910][INFO ][logstash.outputs.elasticsearch][eslogs] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://ElasticSearch-02.scholarchip.ad:9200", "https://ElasticSearch-01.scholarchip.ad:9200", "https://Kibana-01.scholarchip.ad:9200"]}
[2021-02-19T14:43:20,022][INFO ][logstash.outputs.elasticsearch][eslogs] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-02-19T14:43:20,061][INFO ][logstash.outputs.elasticsearch][eslogs] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-02-19T14:43:20,172][INFO ][logstash.javapipeline    ][eslogs] Starting pipeline {:pipeline_id=>"eslogs", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/pipelines/eslogs.conf"], :thread=>"#<Thread:0x2932a3a3 run>"}
[2021-02-19T14:43:22,616][INFO ][logstash.javapipeline    ][eslogs] Pipeline Java execution initialization time {"seconds"=>2.44}
[2021-02-19T14:43:23,089][INFO ][logstash.inputs.file     ][eslogs] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_5cd297ac93ac4edccc765c8fc26ed0c5", :path=>["/var/log/elasticsearch/*.log"]}
[2021-02-19T14:43:23,160][INFO ][logstash.javapipeline    ][eslogs] Pipeline started {"pipeline.id"=>"eslogs"}
[2021-02-19T14:43:23,290][INFO ][filewatch.observingtail  ][eslogs][414a155acf0ca2129a3fb5d3fcf6a1439e63fa931463ab314235ab946afb8a4c] START, creating Discoverer, Watch with file and sincedb collections
[2021-02-19T14:43:23,901][INFO ][logstash.agent           ] Pipelines running {:count=>6, :running_pipelines=>[:beats, :vpcflowlogs, :eslogs, :cloudflare, :rdslogs, :rdsalerts], :non_running_pipelines=>[]}

Perhaps check... it looks like maybe one pipeline is named elastic and the other is eslogs? Where is main?

Sorry for the confusion! Everything is named eslogs now.

pipelines.yml:

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: cloudflare
  path.config: "/etc/logstash/pipelines/cloudflare.conf"
- pipeline.id: vpcflowlogs
  path.config: "/etc/logstash/pipelines/vpcflowlogs.conf"
- pipeline.id: beats
  path.config: "/etc/logstash/pipelines/filebeat.conf"
- pipeline.id: rdslogs
  path.config: "/etc/logstash/pipelines/rdslogs.conf"
- pipeline.id: rdsalerts
  path.config: "/etc/logstash/pipelines/rdsalerts.conf"
- pipeline.id: eslogs
  path.config: "/etc/logstash/pipelines/eslogs.conf"

DEBUG logs related to eslogs:

[root@Kibana-MD01 bin]# cat /var/log/logstash/logstash-plain.log | grep eslogs
[2021-02-19T15:19:45,127][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T15:19:50,127][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T15:19:50,753][TRACE][filewatch.discoverer     ][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] discover_files {:count=>0}
[2021-02-19T15:19:51,757][DEBUG][filewatch.sincedbcollection][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] writing sincedb (delta since last write = 15)
[2021-02-19T15:19:51,758][TRACE][filewatch.sincedbcollection][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] sincedb_write: /var/lib/logstash/plugins/inputs/file/.sincedb_5cd297ac93ac4edccc765c8fc26ed0c5 (time = 2021-02-19 15:19:51 -0500)
[2021-02-19T15:19:55,129][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T15:20:00,127][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T15:20:05,127][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T15:20:05,771][TRACE][filewatch.discoverer     ][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] discover_files {:count=>0}
[2021-02-19T15:20:06,773][DEBUG][filewatch.sincedbcollection][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] writing sincedb (delta since last write = 15)
[2021-02-19T15:20:06,774][TRACE][filewatch.sincedbcollection][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] sincedb_write: /var/lib/logstash/plugins/inputs/file/.sincedb_5cd297ac93ac4edccc765c8fc26ed0c5 (time = 2021-02-19 15:20:06 -0500)
[2021-02-19T15:20:10,129][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T15:20:15,127][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T15:20:20,127][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T15:20:20,796][TRACE][filewatch.discoverer     ][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] discover_files {:count=>0}
[2021-02-19T15:20:21,798][DEBUG][filewatch.sincedbcollection][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] writing sincedb (delta since last write = 15)
[2021-02-19T15:20:21,799][TRACE][filewatch.sincedbcollection][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] sincedb_write: /var/lib/logstash/plugins/inputs/file/.sincedb_5cd297ac93ac4edccc765c8fc26ed0c5 (time = 2021-02-19 15:20:21 -0500)
[2021-02-19T15:20:25,130][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T15:20:30,136][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.

It says it is finding no logs to process... When you look in /var/log/elasticsearch/*.log, are there new log files? Are you running Logstash and Elasticsearch on the same host?

I am running both on the same server. My question is: why can it find the logs when running the configuration alone, but not when running all of them at once?

Understood / good question; something is not loading correctly...

It's most likely a typo somewhere...

You have changed the paths / file names, so I have lost track a little.

Now that you have renamed / moved it, what happens when you run it standalone?

/usr/share/logstash/bin/logstash -f /etc/logstash/pipelines/eslogs.conf "--path.settings" "/etc/logstash"

When I run it on its own, everything works fine. See some TRACE logs below:

[2021-02-19T15:53:43,183][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x6196a4a2>}
[2021-02-19T15:53:43,183][DEBUG][logstash.codecs.multiline][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Multiline {:pattern=>"^\\[", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,phases   ] GC(257)   Merge Heap Roots: 0.3ms", :match=>true,:negate=>true}
[2021-02-19T15:53:43,183][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x6196a4a2>}
[2021-02-19T15:53:43,183][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x5acbb475>}
[2021-02-19T15:53:43,183][DEBUG][logstash.inputs.file     ][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Received line {:path=>"/var/log/elasticsearch/gc.log", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,phases   ] GC(257)   Evacuate Collection Set: 39.4ms"}
[2021-02-19T15:53:43,183][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x1af129ef>}
[2021-02-19T15:53:43,183][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x70fb16d7>}
[2021-02-19T15:53:43,183][DEBUG][logstash.codecs.multiline][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Multiline {:pattern=>"^\\[", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,phases   ] GC(257)   Evacuate Collection Set: 39.4ms", :match=>true, :negate=>true}
[2021-02-19T15:53:43,184][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x1af129ef>}
[2021-02-19T15:53:43,184][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x70fb16d7>}
[2021-02-19T15:53:43,184][DEBUG][logstash.inputs.file     ][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Received line {:path=>"/var/log/elasticsearch/gc.log", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,phases   ] GC(257)   Post EvacuateCollection Set: 2.8ms"}
[2021-02-19T15:53:43,184][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x26b413a1>}
[2021-02-19T15:53:43,184][DEBUG][logstash.codecs.multiline][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Multiline {:pattern=>"^\\[", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,phases   ] GC(257)   Post Evacuate Collection Set: 2.8ms", :match=>true, :negate=>true}
[2021-02-19T15:53:43,184][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x4ceaff1f>}
[2021-02-19T15:53:43,186][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x26b413a1>}
[2021-02-19T15:53:43,186][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x4ceaff1f>}
[2021-02-19T15:53:43,186][DEBUG][logstash.inputs.file     ][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Received line {:path=>"/var/log/elasticsearch/gc.log", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,phases   ] GC(257)   Other: 0.5ms"}
[2021-02-19T15:53:43,186][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x68583929>}
[2021-02-19T15:53:43,186][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x73c8dea2>}
[2021-02-19T15:53:43,186][DEBUG][logstash.codecs.multiline][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Multiline {:pattern=>"^\\[", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,phases   ] GC(257)   Other: 0.5ms", :match=>true, :negate=>true}
[2021-02-19T15:53:43,186][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x68583929>}
[2021-02-19T15:53:43,187][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x36eb3d3d>}
[2021-02-19T15:53:43,187][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x73c8dea2>}
[2021-02-19T15:53:43,187][DEBUG][logstash.inputs.file     ][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Received line {:path=>"/var/log/elasticsearch/gc.log", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,heap     ] GC(257) Eden regions: 823->0(828)"}
[2021-02-19T15:53:43,187][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x36eb3d3d>}
[2021-02-19T15:53:43,187][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x105610a9>}
[2021-02-19T15:53:43,187][DEBUG][logstash.codecs.multiline][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Multiline {:pattern=>"^\\[", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,heap     ] GC(257) Eden regions: 823->0(828)", :match=>true,:negate=>true}
[2021-02-19T15:53:43,187][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x4a362b17>}
[2021-02-19T15:53:43,187][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Event now:  {:event=>#<LogStash::Event:0x105610a9>}
[2021-02-19T15:53:43,187][DEBUG][logstash.inputs.file     ][main][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] Received line {:path=>"/var/log/elasticsearch/gc.log", :text=>"[2021-02-19T16:49:12.147+0000][19294][gc,heap     ] GC(257) Survivor regions: 30->26(107)"}
[2021-02-19T15:53:43,188][DEBUG][logstash.filters.grok    ][main][4ffcb6dc67fb4b1d998b1739d4e0c5d3e59e00566ca8374d8cc45df0e2ecbc0c] Running grok filter {:event=>#<LogStash::Event:0x5a1cd020>}

So why not try it with the pipelines: comment out all the other pipelines, turn TRACE logging on, and look...

By the way, the cat command above will not show all the relevant logs, because you are just grepping for lines with eslogs in them (I am sure you know that).

Not sure what is going on; there is a typo / mismatch of some kind...

Hey Stephen,

I commented out all the other pipelines, leaving only eslogs available, and I still don't get any events.

[2021-02-19T16:26:22,132][INFO ][logstash.javapipeline    ][eslogs] Pipeline Java execution initialization time {"seconds"=>1.2}
[2021-02-19T16:26:22,314][TRACE][logstash.inputs.file     ][eslogs] Registering file input {:path=>["/var/log/elasticsearch/*.log"]}
[2021-02-19T16:26:22,361][INFO ][logstash.inputs.file     ][eslogs] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_5cd297ac93ac4edccc765c8fc26ed0c5", :path=>["/var/log/elasticsearch/*.log"]}
[2021-02-19T16:26:22,381][INFO ][logstash.javapipeline    ][eslogs] Pipeline started {"pipeline.id"=>"eslogs"}
[2021-02-19T16:26:22,390][DEBUG][logstash.javapipeline    ] Pipeline started successfully {:pipeline_id=>"eslogs", :thread=>"#<Thread:0x1fb42130 run>"}
[2021-02-19T16:26:22,392][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T16:26:22,407][TRACE][logstash.agent           ] Converge results {:success=>true, :failed_actions=>[], :successful_actions=>["id: eslogs, action_type: LogStash::PipelineAction::Create"]}
[2021-02-19T16:26:22,432][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:eslogs], :non_running_pipelines=>[]}
[2021-02-19T16:26:22,437][INFO ][filewatch.observingtail  ][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] START, creating Discoverer, Watch with file and sincedb collections
[2021-02-19T16:26:22,463][DEBUG][logstash.agent           ] Starting puma
[2021-02-19T16:26:22,477][DEBUG][filewatch.sincedbcollection][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] open: reading from /var/lib/logstash/plugins/inputs/file/.sincedb_5cd297ac93ac4edccc765c8fc26ed0c5
[2021-02-19T16:26:22,482][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
[2021-02-19T16:26:22,498][TRACE][filewatch.sincedbcollection][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] open: importing #<struct FileWatch::InodeStruct inode="201326700", maj=0, min=66305> => #<FileWatch::SincedbValue:0x6a312d4f @last_changed_at=1613764593.460017, @path_in_sincedb="/var/log/elasticsearch/elastic-cluster_index_indexing_slowlog.log", @watched_file=nil, @position=0>
[2021-02-19T16:26:22,506][TRACE][filewatch.sincedbcollection][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] open: importing #<struct FileWatch::InodeStruct inode="201326701", maj=0, min=66305> => #<FileWatch::SincedbValue:0x36ec659a @last_changed_at=1613764593.472921, @path_in_sincedb="/var/log/elasticsearch/elastic-cluster_index_search_slowlog.log", @watched_file=nil, @position=0>
[2021-02-19T16:26:22,507][TRACE][filewatch.sincedbcollection][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] open: importing #<struct FileWatch::InodeStruct inode="201328376", maj=0, min=66305> => #<FileWatch::SincedbValue:0x4dd9b9b8 @last_changed_at=1613764600.921809, @path_in_sincedb="/var/log/elasticsearch/elastic-cluster.log", @watched_file=nil, @position=8421121>
[2021-02-19T16:26:22,509][TRACE][filewatch.sincedbcollection][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] open: importing #<struct FileWatch::InodeStruct inode="201328371", maj=0, min=66305> => #<FileWatch::SincedbValue:0x2cef023a @last_changed_at=1613764593.480989, @path_in_sincedb="/var/log/elasticsearch/gc.log", @watched_file=nil, @position=0>
[2021-02-19T16:26:22,510][TRACE][filewatch.sincedbcollection][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] open: importing #<struct FileWatch::InodeStruct inode="201328333", maj=0, min=66305> => #<FileWatch::SincedbValue:0x300b223e @last_changed_at=1613764593.479392, @path_in_sincedb="/var/log/elasticsearch/elastic-cluster_deprecation.log", @watched_file=nil, @position=0>
[2021-02-19T16:26:22,512][TRACE][filewatch.sincedbcollection][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] open: count of keys read: 5
[2021-02-19T16:26:22,518][DEBUG][logstash.api.service     ] [api-service] start
[2021-02-19T16:26:22,539][TRACE][filewatch.discoverer     ][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] discover_files {:count=>0}
[2021-02-19T16:26:22,641][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2021-02-19T16:26:23,564][DEBUG][filewatch.sincedbcollection][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] writing sincedb (delta since last write = 1613769983)
[2021-02-19T16:26:23,569][TRACE][filewatch.sincedbcollection][eslogs][0a6bbd6a4f1c0e1f61a82480deb9812aca25fc4036a5a3706e31d72f0f190422] sincedb_write: /var/lib/logstash/plugins/inputs/file/.sincedb_5cd297ac93ac4edccc765c8fc26ed0c5 (time = 2021-02-19 16:26:23 -0500)
[2021-02-19T16:26:25,107][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-02-19T16:26:25,108][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-02-19T16:26:27,391][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T16:26:30,119][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-02-19T16:26:30,127][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-02-19T16:26:32,391][DEBUG][org.logstash.execution.PeriodicFlush][eslogs] Pushing flush onto pipeline.
[2021-02-19T16:26:35,135][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}

So it doesn't work under the service, but it runs fine standalone.
logstash.service:

[Unit]
Description=logstash

[Service]
Type=simple
User=logstash
Group=logstash
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

# When stopping, how long to wait before giving up and sending SIGKILL?
# Keep in mind that SIGKILL on a process can cause data loss.
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target

Not sure what to suggest... When Logstash is running as a service, are you checking whether new logs are being written to /var/log/elasticsearch/*.log? There is no obvious reason why it should not work.

How are you starting Elasticsearch? Which version of the stack?

Perhaps @Badger, our Logstash expert, might have a suggestion... I am out of ideas... it is probably right in front of us.

The answer is

[TRACE][filewatch.discoverer ][eslogs][bc5ab04f2d956de5667fd6d17f303124efa9f39b777b88a02840b4c697a8bf3a] discover_files {:count=>0}

count=>0 means it is not finding any logs that match the path option of the file input. Possibly when you run on the command line you have different permissions, perhaps via group memberships, compared to the user running the service.
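One quick way to check this on the host is `namei -l /var/log/elasticsearch` (util-linux), or, as a sketch of the same idea, walking the path component by component with `stat`. The service user needs the execute (x) bit on every directory in the path, and the read (r) bit on the final directory so the file input's glob can discover files inside it. `/var/log` is used below as a stand-in for the real path on the affected host:

```shell
#!/bin/sh
# Print the permission bits, owner, and group of each directory component
# of a path, similar to `namei -l`. The user running the Logstash service
# needs x on every directory here, and r on the last one for globbing.
# /var/log stands in for the real /var/log/elasticsearch path.
target=/var/log
p=""
oldifs=$IFS
IFS=/
for part in $target; do
  [ -n "$part" ] || continue
  p="$p/$part"
  stat -c '%A %U:%G %n' "$p"
done
IFS=$oldifs
```

Running the same check as the service user (for example with `sudo -u logstash ls /var/log/elasticsearch`) would confirm whether group membership is the difference between the command-line run and the service.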


Right on! I had checked the permissions on the log files before and everything seemed fine, but I forgot to check the directory's permissions.

@Badger @stephenb Thank you a ton for the quick response! You guys are amazing!

Best regards,
Tony

Do I get partial credit? I saw that above and said it's seeing no logs, but did not think about different permissions on the logs... :slight_smile:

Classic... we should know!

Typos, env vars & permissions... they get us every time!

Thanks @Badger, it took a 3rd set of eyes.