Logs time 6 hours behind local time

Could we see a few lines from both the Elasticsearch log file that's being delivered as well as one from the Logstash log file, please?

Elasticsearch:

[2019-03-05T08:44:21,781][INFO ][o.e.d.DiscoveryModule ] [met-elk-exp6a] using discovery type [zen] and host providers [settings]
[2019-03-05T08:44:22,606][INFO ][o.e.n.Node ] [met-elk-exp6a] initialized
[2019-03-05T08:44:22,607][INFO ][o.e.n.Node ] [met-elk-exp6a] starting ...
[2019-03-05T08:44:22,839][INFO ][o.e.t.TransportService ] [met-elk-exp6a] publish_address {128.206.0.162:9300}, bound_addresses {[::]:9300}
[2019-03-05T08:44:22,941][INFO ][o.e.b.BootstrapChecks ] [met-elk-exp6a] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-03-05T08:44:26,003][INFO ][o.e.c.s.MasterService ] [met-elk-exp6a] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {met-elk-exp6a}{8lmPMbyUS6-0v6v87BNmOA}{NSWlpR3SQCafgv_LPxWWZQ}{128.206.0.162}{128.206.0.162:9300}{ml.machine_memory=12392050688, rack=hyperv, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2019-03-05T08:44:26,012][INFO ][o.e.c.s.ClusterApplierService] [met-elk-exp6a] new_master {met-elk-exp6a}{8lmPMbyUS6-0v6v87BNmOA}{NSWlpR3SQCafgv_LPxWWZQ}{128.206.0.162}{128.206.0.162:9300}{ml.machine_memory=12392050688, rack=hyperv, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {met-elk-exp6a}{8lmPMbyUS6-0v6v87BNmOA}{NSWlpR3SQCafgv_LPxWWZQ}{128.206.0.162}{128.206.0.162:9300}{ml.machine_memory=12392050688, rack=hyperv, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-03-05T08:44:26,066][INFO ][o.e.h.n.Netty4HttpServerTransport] [met-elk-exp6a] publish_address {128.206.0.162:9200}, bound_addresses {[::]:9200}
[2019-03-05T08:44:26,066][INFO ][o.e.n.Node ] [met-elk-exp6a] started
[2019-03-05T08:44:26,845][INFO ][o.e.c.s.ClusterSettings ] [met-elk-exp6a] updating [xpack.monitoring.collection.enabled] from [false] to [true]
[2019-03-05T08:44:27,354][WARN ][r.suppressed ] [met-elk-exp6a] path: /.kibana/doc/kql-telemetry%3Akql-telemetry, params: {index=.kibana, id=kql-telemetry:kql-telemetry, type=doc}
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
[2019-03-05T12:00:03,766][INFO ][o.e.c.m.MetaDataMappingService] [met-elk-exp6a] [winlogbeat-6.6.0-dc-mst-2019.03.05/aiNgaDhJQveJ3RDL-GkfPA] update_mapping [doc]

Logstash:

[2019-03-05T08:57:49,802][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-03-05T08:57:49,806][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2019-03-05T08:57:49,935][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::Elasticsearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", hosts=>[http://localhost:9200], sniffing=>false, manage_template=>false, id=>"2196aa69258f6adaaf9506d8988cc76ab153e658434074dcf2e424e0aca0d381", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_e3ed3e0d-5293-47ad-b780-bbc0dd1002c4", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>false, ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-03-05T08:57:49,951][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-03-05T08:57:49,959][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2019-03-05T09:59:18,545][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2019-03-05T09:59:23,635][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>50, "name"=>"[filebeat]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.1.8-java/lib/logstash/inputs/beats.rb:212:in run'"}, {"thread_id"=>47, "name"=>"[filebeat]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in read_batch'"}, {"thread_id"=>48, "name"=>"[filebeat]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in read_batch'"}, {"thread_id"=>49, "name"=>"[filebeat]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in read_batch'"}]}}
[2019-03-05T09:59:23,639][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2019-03-05T09:59:30,572][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>37, "name"=>"[exchange]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.1.8-java/lib/logstash/inputs/beats.rb:212:in run'"}, {"thread_id"=>34, "name"=>"[exchange]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in read_batch'"}, {"thread_id"=>35, "name"=>"[exchange]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in read_batch'"}, {"thread_id"=>36, "name"=>"[exchange]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:333:in read_batch'"}]}}
[2019-03-05T09:59:30,572][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2019-03-05T09:59:58,999][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}

Hmmm. If you modify the pipeline so that the timezone is explicitly set to UTC in the "date" block, does it still throw the errors?

Replace: {< if .convert_timezone >}"timezone": "{{ event.timezone }}",{< end >}

With: "timezone": "UTC"

So that's in Filebeat's dynamic creation of the pipeline? I couldn't get that to reload the pipelines; would they be overwritten, or would I have to delete them?

I edited the pipelines in Dev Tools and got Elasticsearch working. I used CST6CDT for the timezone, since the logs carry local time, not UTC.
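
In case it helps anyone else, the Dev Tools round trip looked roughly like this: list the pipelines, copy the body of the one you want, and PUT it back with the date processor's timezone changed. The pipeline ID and body below are a stripped-down illustration (the real pipeline also contains grok and other processors):

GET _ingest/pipeline

PUT _ingest/pipeline/filebeat-6.6.0-system-syslog-pipeline
{
  "description": "Pipeline for parsing Syslog messages.",
  "processors": [
    {
      "date": {
        "field": "system.syslog.timestamp",
        "target_field": "@timestamp",
        "formats": ["MMM  d HH:mm:ss", "MMM d HH:mm:ss"],
        "timezone": "CST6CDT"
      }
    }
  ]
}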

I've noticed that the pipelines refer to event.timezone (for Elasticsearch) and event.timezone (for Logstash), but neither field exists. "beat.timezone" does exist in the Elasticsearch events. But now I can't get anything to ingest from Logstash, and I'm not sure why: no errors, nothing at all.
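
One quick way to check which timezone field the events actually carry is a small query in Dev Tools (the index pattern is an assumption; point it at whatever you're writing to):

GET filebeat-*/_search
{
  "size": 1,
  "_source": ["beat.timezone", "event.timezone"]
}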

Or add_locale: ~ isn't working...
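
For reference, add_locale is enabled in filebeat.yml like this; on 6.x it's the processor that should be populating beat.timezone on each event (a minimal sketch of the stock config):

processors:
  - add_locale: ~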

A lot of the code that seems to be related to what we're seeing here landed in v6.6.1 via this PR: https://github.com/elastic/beats/pull/9797

I'm going to link this issue in that PR to see if the original author can have a look.

Additionally, I think I can confirm what you're seeing. What may have happened is that the PR referenced a field that was changed in 7.0, while the fix went into 6.6.1. I will need to confirm this with the original developer, but it would align with what we're seeing here.

Again, thanks for looking; this is just something I'm poking at while waiting for "the project" to start. 🙂

You're quite welcome. I think there could well be a legitimate bug here so I'm glad you brought it to our attention. Much appreciated!

Another question: since I had only just started Filebeat with these modules enabled and then discovered convert_timezone, the pipelines were already loaded without the date conversion. I wonder whether the pipelines need to be deleted before the changed code will be stored?

You can get the pipelines to update by running filebeat setup --pipelines. Otherwise, they are not updated if they exist in Elasticsearch already.
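
For example (the module list is an assumption; pass whichever modules you have enabled):

filebeat setup --pipelines --modules system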

By way of an update here: we have confirmed that what you're seeing is a legitimate bug and have written a patch for it that is currently under review. Please find it here:

Thanks again for bringing this to our attention. Much appreciated and apologies for the trouble.

