Logs time 6 hours behind local time

I just started investigating logs. I set up Filebeat on my Elastic cluster and enabled the elasticsearch and logstash modules. The data is there, but the log time is -6 hours from local time.

For example, @timestamp is February 23rd 2019, 05:41:30.460, event.created is February 23rd 2019, 11:41:31.059, and the log record starts with [2019-02-23T11:41:30,460].

Not coincidentally, I'm in the US/Central time zone; we are at a -6 offset right now.

Is this a bug or a configuration issue? If configuration, what component?

Thanks

Hi @rugenl,

this sounds like you are being affected by elastic/beats#9756. A fix was added in version 6.6 that adds the var.convert_timezone setting to the elasticsearch and logstash modules as well.

Like this?

- module: logstash
  # logs
  log:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    var.convert_timezone: true

Even at 6.6.1 I get: invalid config: yaml: line 16: did not find expected key

That is curious. I just tried to reproduce it, but I did not get the error you described:

  1. Download Filebeat 6.6.1
  2. run ./filebeat modules enable logstash
  3. edit modules.d/logstash.yml to contain:
    - module: logstash
      # logs
      log:
        enabled: true
    
        # Set custom paths for the log files. If left empty,
        # Filebeat will choose the paths depending on your OS.
        #var.paths:
    
        # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
        var.convert_timezone: true
    
  4. run ./filebeat test config

Can you double-check your version by running ./filebeat version?

Verified 6.6.1. filebeat test config doesn't give errors, but it fails when run.

> # filebeat version
> filebeat version 6.6.1 (amd64), libbeat 6.6.1 [928f5e3f35fe28c1bd73513ff1cc89406eb212a6 built 2019-02-13 16:12:26 +0000 UTC]
> # filebeat run
> Exiting: 1 error: invalid config: yaml: line 16: did not find expected key
> # filebeat test config
> Config OK

The var.convert_timezone setting works in the elasticsearch module, but it fails in the logstash module.

I found the problem, as shipped:

  # Slow logs
  slowlog:
   enabled: true

"enabled:" is only indented one space, when I copied the var. line from the first log, it was indented 2 spaces.

Oh, that's a difficult one to spot. Thanks for letting us know. It looks like this was already fixed in 10754 but missed the 6.6.1 release.

Does that option fix your original time offset problem?

It does for elasticsearch, but for some reason the logstash data isn't getting ingested. It was working before; I'm not sure what has changed. Other problems bumped this down my list...

Let me know when you return to this topic and need help figuring that out.

It looks like I'm getting different results on two different test stacks, both are elasticsearch 6.6.0, but different filebeat versions:

With filebeat 6.6.1 (updated from 6.6.0), the elasticsearch time is OK, but there is no logstash ingest.

With filebeat 6.6.0, the time is still -6 hours.

I think filebeat setup (probably automatic) did load the proper pipelines on both stacks.

You can check the pipeline on your 6.6.1 stack by querying

curl -XGET http://${ES_HOST}:${ES_PORT}/_ingest/pipeline/filebeat-6.6.1-logstash-log-\*

Do either filebeat or elasticsearch print anything on the console related to ingest errors?

Today's filebeat stats on the stack with filebeat 6.6.0: [screenshot]

I stopped filebeat, deleted the registry, upgraded filebeat to 6.6.1, restarted filebeat, and created a Kibana index pattern for filebeat-6.6.1-*. First problem: there doesn't seem to be an @timestamp field. It's mapped, but it doesn't exist in any data, so I switched to the event.created field.

The elasticsearch logs are there with the right time, no logstash data. I see pipeline stats on the nodes:

      "filebeat-6.6.1-logstash-log-pipeline-plain" : {
        "count" : 91,
        "time_in_millis" : 74,
        "current" : 0,
        "failed" : 0,
        "processors" : [
          {
            "grok" : {
              "count" : 91,
              "time_in_millis" : 0,
              "current" : 0,
              "failed" : 0
            }

All the count fields are the same across the processors, with different counts on each node of course.
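For reference, the per-pipeline processor stats above come from the nodes ingest stats endpoint; ES_HOST and ES_PORT are placeholders for your cluster:

```shell
curl -XGET "http://${ES_HOST}:${ES_PORT}/_nodes/stats/ingest?pretty"
```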

The curl get of the pipeline looks OK.

Got a little more time, I don't see any errors in any of the logs.

We are still in development but are heading toward production in a month or two. This isn't on the critical path; I was just researching the benefit of consolidating our logs instead of having to chase them across all the nodes in a cluster. It does look promising :slight_smile:

Thanks

If @timestamp is missing, something must be seriously wrong in the ingestion pipeline. Could you post such a malformed sample document for us to take a look at?
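If it helps, one way to pull such a document is an exists query against the Filebeat indices (ES_HOST, ES_PORT, and the index pattern are placeholders; adjust as needed):

```shell
curl -XGET "http://${ES_HOST}:${ES_PORT}/filebeat-6.6.1-*/_search?pretty" \
  -H 'Content-Type: application/json' -d '
{
  "size": 1,
  "query": {
    "bool": {
      "must_not": { "exists": { "field": "@timestamp" } }
    }
  }
}'
```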

#! Deprecation: Field expansion matches too many fields, got: 1122. This will be limited starting with version 7.0 of Elasticsearch. The limit will be detemined by the indices.query.bool.max_clause_count setting which is currently set to 1024. You should look at lowering the maximum number of fields targeted by a query or increase the above limit while being aware that this can negatively affect your clusters performance.
{
  "took" : 70,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1304,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-6.6.1-2019.03.04",
        "_type" : "doc",
        "_id" : "vOINSmkBRoWwD8qx5i6X",
        "_score" : 1.0,
        "_source" : {
          "offset" : 0,
          "log" : {
            "file" : {
              "path" : "/var/log/elasticsearch/elk-exp6.log"
            },
            "level" : "INFO"
          },
          "prospector" : {
            "type" : "log"
          },
          "source" : "/var/log/elasticsearch/elk-exp6.log",
          "message" : "creating index, cause [auto(bulk api)], templates [exp-shards, metricbeat-6.6.0], shards [1]/[0], mappings [doc]",
          "fileset" : {
            "module" : "elasticsearch",
            "name" : "server"
          },
          "input" : {
            "type" : "log"
          },
          "elasticsearch" : {
            "server" : {
              "component" : "o.e.c.m.MetaDataCreateIndexService"
            },
            "node" : {
              "name" : "met-elk-exp6a"
            },
            "index" : {
              "name" : "metricbeat-6.6.0-2019.03.04"
            }
          },
          "service" : {
            "name" : "elasticsearch"
          },
          "beat" : {
            "hostname" : "met-elk-exp6a",
            "timezone" : "-06:00",
            "name" : "met-elk-exp6a",
            "version" : "6.6.1"
          },
          "host" : {
            "os" : {
              "codename" : "Core",
              "name" : "CentOS Linux",
              "family" : "redhat",
              "version" : "7 (Core)",
              "platform" : "centos"
            },
            "containerized" : true,
            "name" : "met-elk-exp6a",
            "id" : "c90656fd991348499dd37b4c2f145caf",
            "architecture" : "x86_64"
          },
          "event" : {
            "created" : "2019-03-04T18:53:37.739Z",
            "dataset" : "elasticsearch.server"
          }
        }
      },

Unless the log formats changed for elasticsearch and logstash between 6.6.0 and 6.6.1, maybe there is something wrong with the date filter.

Comparing the pipelines, I think the "field" : "message" change is just ordering within the grok processor.

> $ diff 6.6.0.txt 6.6.1.txt 
> 2c2
> <   "filebeat-6.6.0-logstash-log-pipeline-plain" : {
> ---
> >   "filebeat-6.6.1-logstash-log-pipeline-plain" : {
> 15d14
> <           "field" : "message",
> 23c22,23
> <           ]
> ---
> >           ],
> >           "field" : "message"
> 38c38,39
> <         "rename" : {
> ---
> >         "date" : {
> >           "field" : "logstash.log.timestamp",
> 39a41,49
> >           "formats" : [
> >             "ISO8601"
> >           ],
> >           "timezone" : "{{ event.timezone }}",
> >           "ignore_failure" : true
> >         }
> >       },
> >       {
> >         "remove" : {

I updated all elastic components to 6.6.1 on my single-node stack, no change in the filebeat 6.6.1 results. Just eliminating the version mismatch where I can.

Does any additional information come to light when setting ignore_failure to false?

All that did was change the counts in the pipeline, so I took out the on_failure section. Now I'm getting the error below. No timezone?

[2019-03-05T10:20:58,964][DEBUG][o.e.a.b.TransportBulkAction] [met-elk-exp6a] failed to execute pipeline [filebeat-6.6.1-logstash-log-pipeline-plain] for document [filebeat-6.6.1-2019.03.05/doc/null]
org.elasticsearch.ElasticsearchException: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: unable to parse date [2019-03-05T10:20:34,053]
at org.elasticsearch.ingest.CompoundProcessor.newCompoundProcessorException(CompoundProcessor.java:195) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:134) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.Pipeline.execute(Pipeline.java:97) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.IngestService.innerExecute(IngestService.java:473) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.IngestService.access$100(IngestService.java:68) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.IngestService$4.doRun(IngestService.java:402) [elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.1.jar:6.6.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
Caused by: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: unable to parse date [2019-03-05T10:20:34,053]
... 11 more
Caused by: java.lang.IllegalArgumentException: unable to parse date [2019-03-05T10:20:34,053]
at org.elasticsearch.ingest.common.DateProcessor.execute(DateProcessor.java:97) ~[?:?]
at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:124) ~[elasticsearch-6.6.1.jar:6.6.1]
... 9 more
Caused by: java.lang.IllegalArgumentException: The datetime zone id '' is not recognised
at org.joda.time.DateTimeZone.forID(DateTimeZone.java:234) ~[joda-time-2.10.1.jar:2.10.1]
at org.elasticsearch.ingest.common.DateProcessor.newDateTimeZone(DateProcessor.java:69) ~[?:?]
at org.elasticsearch.ingest.common.DateProcessor.lambda$new$0(DateProcessor.java:64) ~[?:?]
at org.elasticsearch.ingest.common.DateProcessor.execute(DateProcessor.java:89) ~[?:?]
at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:124) ~[elasticsearch-6.6.1.jar:6.6.1]
... 9 more
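The empty zone id in the last frame suggests the pipeline's {{ event.timezone }} template expanded to an empty string, i.e. these documents carry no event.timezone field; the sample document earlier in the thread only has the offset under beat.timezone. As a hedged workaround (an assumption on my part, not a confirmed fix), you could PUT a modified copy of the pipeline whose date processor reads that field instead:

```json
{
  "date" : {
    "field" : "logstash.log.timestamp",
    "formats" : [ "ISO8601" ],
    "timezone" : "{{ beat.timezone }}",
    "ignore_failure" : true
  }
}
```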