I have this filter, which works very well except for mucking up the date in dissection. (What's in the ellipsis below, ..., is too long to include, and everything in it is working anyway.)
acme.date: November 7th 2018, 17:00:00.000
acme.time: 21:57:37,208
acme.version: 0
I don't know where the , 17:00:00.000 comes from, but I don't want it, and I would prefer that acme.date contain 2018-11-08 in the end. (acme.time and acme.version are perfect.)
If you are using a date filter to parse the acme.date field, it will create a UTC timestamp. That timestamp includes a time component, and the UTC conversion is likely why you see the offset: 2018-11-08T00:00:00 UTC, rendered in a browser timezone of UTC-7, displays as November 7th 2018, 17:00:00.000.
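For reference, a date filter stanza generally looks something like this (the match pattern below is a guess at your source format, not taken from your config):

filter {
  date {
    # Parse the dissected string; the pattern must match the raw text exactly.
    match  => [ "acme.date", "yyyy-MM-dd" ]
    # Write the parsed value back to acme.date instead of @timestamp.
    target => "acme.date"
  }
}

If no such filter is present, Elasticsearch's own dynamic date detection can map the field as a date on first index and produce the same UTC rendering in Kibana.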
I am not intentionally using (or not using) a date filter; the code is above. I'm new to this, and I find the Elastic documentation on this point well above entry level.
Can you check what the raw value is in Kibana rather than the indexed value? Go to the document and press "view single document", then press "JSON". What is the value in the field now, still the same or different?
Wait, I don't know what I was seeing that prompted my last response; maybe it was latent data from over the intervening holiday. I redid it. Here's what I'm really seeing (the Table tab results are as I said, but the JSON tab results are):
I did exactly that (several times) and the result is (unencouragingly) this page:
Index management
Update your Elasticsearch indices individually...   [ ] Include system indices
[ Manage 1 index ]  acme.date
No indices to show
Maybe I'm doing something wrong. When I select the index, I'm looking at
[x] Name                  Health   Status   etc....
[x] filebeat-2018.11.23   yellow   open     etc....
I played around with this, but could not get anything but "No indices to show" no matter what field I chose.
However, this must be done as a manual operation in Kibana, and I don't want my customer to have to perform this adjustment. What can I do, by configuration in Filebeat or Logstash, to avoid that? I need what comes out in Kibana to be as turn-key as it can be.
In Logstash you then set the field type to string/text rather than allowing it to be automatically picked up as a date; Kibana will then not change the results.
You can set the field type in the Logstash config, except that if you try to change the field type now, you will get an error because the field type is already set to date. So you would need to delete the old indexes and create a new one (if the data is test data and does not matter at the moment), setting the field type to string from the get-go.
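You can confirm what the field is currently typed as by asking Elasticsearch for the mapping directly; a sketch, with the index name taken from your listing and an assumed host:

curl -XGET 'http://localhost:9200/filebeat-2018.11.23/_mapping?pretty'

Look for the date property under acme in the output; if it shows "type": "date", the mapping is already locked in for that index.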
I'm in development; I can do anything I want to do (once I figure out how).
Where do I set the type of this field, seeing as I only create it in the dissect filter (see below) in the first place? (Filebeat sent it in as a subset of the message field originally; without my filter, acme.date doesn't exist.) Is there additional syntax I can decorate this code with that will accomplish it?
If you try this now without deleting, you will probably get something like "cannot index because field already has type date". Delete the index and it should then start to go in properly.
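For example, with the index name from your listing (adjust the host to wherever Elasticsearch is reachable):

curl -XDELETE 'http://localhost:9200/filebeat-2018.11.23'

Deleting the index throws away its data, which you said is fine while you are in development.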
If that does not work, then you can specify the field type in the Elasticsearch template before indexing:
See the top example on that page: it has a field called "created_at" of type date. You could do the same, but remove the HH:mm:ss portion and change it to the format you want.
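As a sketch of that, untested: the mapping-type level ("doc" below) and the exact template syntax depend on your Elasticsearch/Beats version, so treat the names as placeholders:

curl -XPUT 'http://localhost:9200/_template/acme' -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["filebeat-*"],
  "mappings": {
    "doc": {
      "properties": {
        "acme": {
          "properties": {
            "date": { "type": "date", "format": "yyyy-MM-dd" }
          }
        }
      }
    }
  }
}'

The format controls how Elasticsearch parses and stores the value; whether Kibana still renders it with a time depends on the index pattern's field formatting. Mapping it as "keyword" instead would keep it a plain string, per the earlier suggestion.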
To confirm, the first solution does not do the trick.
For the second, I'm not certain how, within the framework of deploying my ELK-stack container to production, to emit the PUT template. I guess I could just curl it at Elasticsearch from the Dockerfile, assuming Elasticsearch will be up to receive it, which I doubt.
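One configuration-only route I'm now looking at: the Logstash elasticsearch output has template options that install the template when Logstash starts, which would sidestep the race with Elasticsearch coming up. Untested, and the path and template name below are made up:

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    # Ship the mapping template from inside the container at startup,
    # so no manual PUT is needed.
    manage_template    => true
    template           => "/usr/share/logstash/templates/acme-template.json"
    template_name      => "acme"
    template_overwrite => true
  }
}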