I just started investigating log collection. I set up Filebeat on my Elastic cluster and enabled the elasticsearch and logstash modules. The data is there, but the log time is -6 hours from local.
For example, @timestamp is February 23rd 2019, 05:41:30.460, event.created is February 23rd 2019, 11:41:31.059, and the log record starts with [2019-02-23T11:41:30,460].
Not coincidentally, I'm in the US/Central time zone, so we are at a -6 offset right now.
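That six-hour shift is exactly what you'd see if the local timestamp in the log line were parsed as if it were UTC and then rendered back in local time; a quick stdlib sketch of the arithmetic (times taken from the values above):

```python
from datetime import datetime, timedelta, timezone

# Log line timestamp, written in local time (US/Central, UTC-6 on that date)
log_local = datetime(2019, 2, 23, 11, 41, 30)

# The ingest pipeline has no zone info in the line, so it treats it as UTC
parsed_as_utc = log_local.replace(tzinfo=timezone.utc)

# Kibana then renders @timestamp in the browser's local zone (UTC-6)
central = timezone(timedelta(hours=-6))
displayed = parsed_as_utc.astimezone(central)
print(displayed.strftime("%H:%M:%S"))  # 05:41:30 -- six hours early
```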
Is this a bug or a configuration issue? If configuration, what component?
This sounds like you are being affected by elastic/beats#9756. A fix was added in version 6.6 that adds the var.convert_timezone setting for the elasticsearch and logstash modules as well.
- module: logstash
  # logs
  log:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    var.convert_timezone: true
Even at 6.6.1 I get: invalid config: yaml: line 16: did not find expected key
That is curious. I just tried to reproduce it, but I did not get the error you described:
Download Filebeat 6.6.1
run ./filebeat modules enable logstash
edit modules.d/logstash.yml to contain:
- module: logstash
  # logs
  log:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    var.convert_timezone: true
run ./filebeat test config
Can you double-check your version by running ./filebeat version?
Verified 6.6.1. filebeat test config doesn't give errors, but it fails when run.
> # filebeat version
> filebeat version 6.6.1 (amd64), libbeat 6.6.1 [928f5e3f35fe28c1bd73513ff1cc89406eb212a6 built 2019-02-13 16:12:26 +0000 UTC]
> # filebeat run
> Exiting: 1 error: invalid config: yaml: line 16: did not find expected key
> # filebeat test config
> Config OK
It does for elasticsearch, but for some reason the logstash data isn't getting ingested. It was working before; I'm not sure what has changed. Other problems bumped this down my priority list....
Today's filebeat stats on the stack with filebeat 6.6.0
Stopped filebeat, deleted the registry, upgraded filebeat to 6.6.1, restarted filebeat, and created a Kibana index pattern for filebeat-6.6.1-*. First problem: there doesn't seem to be an @timestamp field. It's mapped, but doesn't exist in any data, so I switched to the event.created field.
The elasticsearch logs are there with the right time, no logstash data. I see pipeline stats on the nodes:
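For anyone following along, the per-node pipeline stats come from the node ingest stats API; something like:

```
GET _nodes/stats/ingest?filter_path=nodes.*.ingest.pipelines
```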
Got a little more time; I don't see any errors in any of the logs.
We are still in development but are heading toward production in a month or two. This isn't on the critical path; I was just researching the benefit of consolidating our logs instead of having to chase logs across all the nodes in a cluster. This does look promising.
If the @timestamp is missing, there must be something seriously wrong in the ingestion pipeline. Could you post such a malformed sample document for us to take a look at?
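An exists query should pull one up; something like this (index pattern taken from your earlier post):

```
GET filebeat-6.6.1-*/_search
{
  "size": 1,
  "query": {
    "bool": {
      "must_not": { "exists": { "field": "@timestamp" } }
    }
  }
}
```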
I updated all elastic components to 6.6.1 on my single-node stack, no change in the filebeat 6.6.1 results. Just eliminating the version mismatch where I can.
All that did was change the counts in the pipeline, so I took out the on_failure section. Now I'm getting the error below. No timezone?
[2019-03-05T10:20:58,964][DEBUG][o.e.a.b.TransportBulkAction] [met-elk-exp6a] failed to execute pipeline [filebeat-6.6.1-logstash-log-pipeline-plain] for document [filebeat-6.6.1-2019.03.05/doc/null]
org.elasticsearch.ElasticsearchException: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: unable to parse date [2019-03-05T10:20:34,053]
at org.elasticsearch.ingest.CompoundProcessor.newCompoundProcessorException(CompoundProcessor.java:195) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:134) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.Pipeline.execute(Pipeline.java:97) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.IngestService.innerExecute(IngestService.java:473) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.IngestService.access$100(IngestService.java:68) ~[elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.ingest.IngestService$4.doRun(IngestService.java:402) [elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:759) [elasticsearch-6.6.1.jar:6.6.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.6.1.jar:6.6.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
Caused by: java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: unable to parse date [2019-03-05T10:20:34,053]
... 11 more
Caused by: java.lang.IllegalArgumentException: unable to parse date [2019-03-05T10:20:34,053]
at org.elasticsearch.ingest.common.DateProcessor.execute(DateProcessor.java:97) ~[?:?]
at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:124) ~[elasticsearch-6.6.1.jar:6.6.1]
... 9 more
Caused by: java.lang.IllegalArgumentException: The datetime zone id '' is not recognised
at org.joda.time.DateTimeZone.forID(DateTimeZone.java:234) ~[joda-time-2.10.1.jar:2.10.1]
at org.elasticsearch.ingest.common.DateProcessor.newDateTimeZone(DateProcessor.java:69) ~[?:?]
at org.elasticsearch.ingest.common.DateProcessor.lambda$new$0(DateProcessor.java:64) ~[?:?]
at org.elasticsearch.ingest.common.DateProcessor.execute(DateProcessor.java:89) ~[?:?]
at org.elasticsearch.ingest.CompoundProcessor.execute(CompoundProcessor.java:124) ~[elasticsearch-6.6.1.jar:6.6.1]
... 9 more
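The empty zone id at the bottom of that trace suggests the module pipeline's date processor is templating its timezone from a field that is empty on these events. If the pipeline looks roughly like the sketch below (field names are my assumption, not pulled from your cluster), then var.convert_timezone: true is supposed to make Filebeat attach the host timezone to each event via the add_locale processor, and an empty value would mean that field never made it onto the event:

```json
{
  "date": {
    "field": "logstash.log.timestamp",
    "target_field": "@timestamp",
    "formats": ["yyyy-MM-dd'T'HH:mm:ss,SSS"],
    "timezone": "{{ beat.timezone }}"
  }
}
```

If beat.timezone is missing or empty, the {{ beat.timezone }} template renders to "", which would match "The datetime zone id '' is not recognised".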