Not processing logstash data to elasticsearch with error: [org.logstash.instrument.metrics.gauge.LazyDelegatingGauge]

Hi,

I installed ELK on a CentOS 7 machine and got it working with more than 10 GB of data. After 2 weeks I stopped Logstash and cleaned all the data and indices in Kibana. It took 3 more weeks until I started it again, now with more memory on the machine. I'm currently testing a new grok pattern. The thing is that it doesn't work: Logstash starts correctly, but I can't find my indices in Kibana.

Here is my .conf file:

input {
  file {
    path => "/home/admin/envoirments/mytests/02/messages.csv*"
    sincedb_path => "/dev/null"
    mode => "read"
    ignore_older => "37 d"
    file_completed_action => "delete"
  }
}

filter {
  grok {
    match => { "message" => "I'M NOT GOING TO SHOW MY GROK BECAUSE IT'S UNNECESSARY AND CONTAINS WORK INFORMATION." }
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => ["mytest_02"]
  }
}
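
As a side note, whether the mytest_02 index was ever created on the Elasticsearch side (rather than just not showing up in Kibana) can be checked directly with something like:

curl -s 'http://127.0.0.1:9200/_cat/indices/mytest_02?v'

If that returns an index_not_found_exception, the events never made it into Elasticsearch at all.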

And here are my Logstash logs:

[2019-08-09T12:05:09,926][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-08-09T12:06:37,669][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2019-08-09T12:06:37,826][INFO ][filewatch.observingread  ] QUIT - closing all files and shutting down.
[2019-08-09T12:06:38,677][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"main"}
[2019-08-09T12:06:38,930][INFO ][logstash.runner          ] Logstash shut down.
[2019-08-09T12:07:02,721][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-08-09T12:07:09,604][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2019-08-09T12:07:09,837][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2019-08-09T12:07:09,889][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-08-09T12:07:09,893][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-08-09T12:07:09,924][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[2019-08-09T12:07:10,035][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-08-09T12:07:10,150][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-08-09T12:07:10,324][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-08-09T12:07:10,329][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x45e78d92 run>"}
[2019-08-09T12:07:10,946][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-08-09T12:07:11,034][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-08-09T12:07:11,208][INFO ][filewatch.observingread  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-08-09T12:07:11,648][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

I have read that many people solved their problem by changing to sincedb_path => "NUL", as happened here and here (both thanks to Badger). I could not solve my problem this way.

Any recommendation?

Is it possible that all of the files are older than this now?
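
One quick way to check, assuming GNU find is available:

find /home/admin/envoirments/mytests/02 -name 'messages.csv*' -mtime +37

Anything that prints has a modification time more than 37 days old, which ignore_older would skip.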

"NUL" is used on Windows. On UNIX "/dev/null" is correct.

Hi, I already checked that, but the files were created this week, so that's impossible.

Try enabling '--log.level trace' and see what filewatch has to say.
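
For example, assuming a standard package install (the config path here is just a placeholder):

/usr/share/logstash/bin/logstash --log.level trace -f /etc/logstash/conf.d/mytest.conf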

Hi, this is the result (I had to cut many lines, because the full output was 140,000 characters and couldn't be posted here as an answer):

[2019-08-09T16:05:31,702][DEBUG][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main"}
[2019-08-09T16:05:33,676][DEBUG][logstash.filters.grok    ] Grok patterns path {:paths=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns", "/usr/share/logstash/patterns/*"]}
[2019-08-09T16:05:33,688][DEBUG][logstash.filters.grok    ] Grok patterns path {:paths=>[]}
[2019-08-09T16:05:33,725][DEBUG][logstash.filters.grok    ] Match data {:match=>{"message"=>"MY GROK THAT ISN'T NECESARY TO SHARE"}}
[2019-08-09T16:05:33,730][TRACE][logstash.filters.grok    ] Grok compile {:field=>"message", :patterns=>["MY GROK THAT ISN'T NECESARY TO SHARE"]}
[2019-08-09T16:05:33,733][DEBUG][logstash.filters.grok    ] regexp: /message {:pattern=>"MY GROK THAT ISN'T NECESARY TO SHARE"}
[2019-08-09T16:05:33,839][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-08-09T16:05:34,189][DEBUG][logstash.outputs.elasticsearch] Found existing Elasticsearch template. Skipping template management {:name=>"logstash"}
[2019-08-09T16:05:34,406][DEBUG][logstash.filters.grok    ] replacement_pattern => (?<DATA:xxxxx>.*?)
[2019-08-09T16:05:34,411][DEBUG][logstash.filters.grok    ] Grok compiled OK {:pattern=>"MY GROK THAT ISN'T NECESARY TO SHARE", :expanded_pattern=>"MY GROK THAT ISN'T NECESARY TO SHARE"}
[2019-08-09T16:05:34,593][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-08-09T16:05:34,611][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x708c066 run>"}
[2019-08-09T16:05:35,186][TRACE][logstash.inputs.file     ] Registering file input {:path=>["/home/admin/envoirments/mytests/02/messages.csv*"]}
[2019-08-09T16:05:35,301][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-08-09T16:05:35,308][DEBUG][logstash.javapipeline    ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x708c066 run>"}
[2019-08-09T16:05:35,386][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-08-09T16:05:35,441][TRACE][logstash.agent           ] Converge results {:success=>true, :failed_actions=>[], :successful_actions=>["id: main, action_type: LogStash::PipelineAction::Create"]}
[2019-08-09T16:05:35,495][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-08-09T16:05:35,500][INFO ][filewatch.observingread  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-08-09T16:05:35,637][DEBUG][logstash.agent           ] Starting puma
[2019-08-09T16:05:35,651][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
[2019-08-09T16:05:35,684][TRACE][filewatch.sincedbcollection] open: reading from /dev/null
[2019-08-09T16:05:35,688][TRACE][filewatch.sincedbcollection] open: count of keys read: 0
[2019-08-09T16:05:35,829][TRACE][filewatch.discoverer     ] discover_files {"count"=>0}
[2019-08-09T16:05:35,852][DEBUG][logstash.api.service     ] [api-service] start
[2019-08-09T16:05:35,896][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-09T16:05:35,899][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-08-09T16:05:34,720][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled filter
 P[filter-grok{"match"=>{"message"=>"MY GROK THAT ISN'T NECESARY TO SHARE"}}|[str]pipeline:12:3:```
grok {
    match => { "message" => "MY GROK THAT ISN'T NECESARY TO SHARE"}
  }
```]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@af2cdd9f
[2019-08-09T16:05:34,718][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled filter
 P[filter-grok{"match"=>{"message"=>"MY GROK THAT ISN'T NECESARY TO SHARE"}}|[str]pipeline:12:3:```
grok {
    match => { "message" => "MY GROK THAT ISN'T NECESARY TO SHARE"}
  }
```]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@af2cdd9f
[2019-08-09T16:05:36,215][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-08-09T16:05:36,337][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled output
 P[output-elasticsearch{"hosts"=>["127.0.0.1:9200"], "index"=>["mytest_02"]}|[str]pipeline:18:3:```
elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => ["mytest_02"]
  }
```]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@3a1579c8
[2019-08-09T16:05:36,420][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled output
 P[output-elasticsearch{"hosts"=>["127.0.0.1:9200"], "index"=>["mytest_02"]}|[str]pipeline:18:3:```
elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => ["mytest_02"]
  }
```]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@3a1579c8
[2019-08-09T16:05:40,351][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-08-09T16:05:40,910][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-09T16:05:40,910][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-08-09T16:05:45,351][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-08-09T16:05:45,924][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-09T16:05:45,925][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-08-09T16:05:50,112][TRACE][filewatch.discoverer     ] discover_files {"count"=>0}
[2019-08-09T16:05:50,351][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-08-09T16:05:50,939][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-09T16:05:50,939][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-08-09T16:05:55,351][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-08-09T16:05:55,952][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-09T16:05:55,953][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

It is not finding any files that match the pattern. Is the path correct? Should 'envoirments' be 'environments'?

Yes, it is correct. It was originally a Norwegian word that I changed to make more sense to others with the same issue. But if I do ls -l <files_path>, I can see all the files I want to work with.

I'm thinking it could be some kind of error "because of Elasticsearch memory" or something like that, but I can't find much about how Elasticsearch manages memory.

I changed the /etc/elasticsearch/jvm.options file from this:

-Xms1g
-Xmx1g

To this:

-Xms4g
-Xmx4g
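
Assuming Elasticsearch was restarted afterwards, the effective heap size can be verified with something like:

curl -s 'http://127.0.0.1:9200/_nodes/jvm?pretty' | grep heap_max

heap_max_in_bytes should report roughly 4 GB after the change.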

But that still didn't solve the issue.

I don't think it has anything to do with elasticsearch. logstash cannot find any files that match the path option of the file input.

Reading these logs, the only interpretation I can really make is that logstash processed the .conf correctly.

[2019-08-14T09:50:07,403][DEBUG][logstash.api.service     ] [api-service] start
[2019-08-14T09:50:07,443][TRACE][filewatch.sincedbcollection] open: reading from /dev/null
[2019-08-14T09:50:07,455][TRACE][filewatch.sincedbcollection] open: count of keys read: 0
[2019-08-14T09:50:07,511][TRACE][filewatch.discoverer     ] discover_files {"count"=>0}
[2019-08-14T09:50:07,685][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled conditional
 [if ('_grokparsefailure'.include?event.getField('[tags]'))]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@9555b4b7
[2019-08-14T09:50:08,125][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-14T09:50:08,125][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-08-14T09:50:07,948][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled conditional
 [if ('_grokparsefailure'.include?event.getField('[tags]'))]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@9555b4b7
[2019-08-14T09:50:08,486][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled filter
 P[filter-drop{}|[str]pipeline:16:5:```
drop{}
```]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@28522e30
[2019-08-14T09:50:08,594][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled filter
 P[filter-drop{}|[str]pipeline:16:5:```
drop{}
```]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@28522e30
[2019-08-14T09:50:08,810][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled output
 P[output-elasticsearch{"hosts"=>["127.0.0.1:9200"], "index"=>["prueba_02"]}|[str]pipeline:21:3:```
elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => ["prueba_02"]
  }
```]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@d2b6e9b7
[2019-08-14T09:50:08,877][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-08-14T09:50:08,767][DEBUG][org.logstash.config.ir.CompiledPipeline] Compiled output
 P[output-elasticsearch{"hosts"=>["127.0.0.1:9200"], "index"=>["prueba_02"]}|[str]pipeline:21:3:```
elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => ["prueba_02"]
  }
```]
 into
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@d2b6e9b7
[2019-08-14T09:50:12,059][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-08-14T09:50:13,206][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-14T09:50:13,206][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-08-14T09:50:17,060][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-08-14T09:50:18,391][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-14T09:50:18,392][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-08-14T09:50:21,633][TRACE][filewatch.discoverer     ] discover_files {"count"=>0}
[2019-08-14T09:50:22,062][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-08-14T09:50:23,404][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-08-14T09:50:23,404][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-08-14T09:50:27,059][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.

So, the error should be in the log data (though there doesn't seem to be a grokparsefailure) or at some point that's hard to reach, because it fails right where the index should jump to Elasticsearch and from there to Kibana.
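
For reference, the compiled conditional and drop filter in that debug output look like they came from a filter section along these lines (a reconstruction using the usual idiom for dropping grok failures; the grok pattern is omitted):

filter {
  grok {
    match => { "message" => "..." }
  }
  if "_grokparsefailure" in [tags] {
    drop {}
  }
}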

Anyway, I just created a new .conf with no filter at all, and got no result. I'll keep trying. Maybe I should reinstall Logstash.

SOLVED!!!!

I have to apologize for this silly issue. When I first hit it, I did chmod 777 on all the files I wanted to work with, but that didn't solve the problem, so I thought it wasn't a permissions problem. Today I tried reinstalling Logstash and Elasticsearch, which didn't solve it either, so, while I was crying, I thought I could give 755 permissions to all the directories, like:

chmod 755 /home
chmod 755 /home/admin/
chmod 755 /home/admin/envoirments/
chmod 755 /home/admin/envoirments/tests/
chmod 755 /home/admin/envoirments/tests/03
chmod 755 /home/admin/envoirments/tests/03/*
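
For anyone hitting the same thing, a quick way to check whether a whole path is traversable is something like this (assuming Logstash runs as the logstash user):

namei -m /home/admin/envoirments/tests/03
sudo -u logstash ls -l /home/admin/envoirments/tests/03

namei -m prints the permission bits of every directory along the path, and the sudo test shows whether the service user can actually list the files.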

And this solved the issue, so my conclusion is:

If Logstash cannot read the files because of a permissions problem, the only hint seems to be the [org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] warning. I don't know if this is the best place for it, but I think it would be worth flagging this simple issue with a clearer message in the log.

Anyway, thank you Badger for your help, and I hope this is useful for someone.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.