I would like to know how to specify which date field is used in the date filter at the top right of the dashboard creation page.
I am using only one index, but the two date fields I have were added in the Index Patterns section using "Add Field", so no date field was assigned as the timestamp when the index was imported.
Currently, changing the date filter in the dashboard does nothing to the displayed data, so I am not sure why it allows me to interact with the date filter at all.
Where are you seeing the date filter? I've created an index pattern without a timestamp field and I cannot search in Discover, for example.
On the other hand, in Lens you can always create a chart (say a vertical bar chart) that has any time field as one axis; for the vertical bar chart that would be the horizontal axis, achieving the same experience as with the time filter.
In fact, now that I see it, the time filter is then populated, but I'm not sure if this is intended (Lens being smart about you using a time field to aggregate). @flash1293 may be able to help.
The time filter I am referring to is in the Dashboard editor, not Discover. Also, I don't have an "@timestamp" because my date field is not in a format that Elastic recognizes.
I have found a solution, but it involves some work outside of Kibana. I have created a staging table that transforms my data (which is .csv, by the way) into a format that is picked up as a date field, then Elastic is happy.
I know that in Power BI you can transform the data as it comes in (before what would be "indexing"). Maybe this is a feature that Elastic can consider adding in future versions.
What you described is Ingest Pipelines. You can actually define them in Kibana with the Stack Management app.
Additionally, if you are on 7.15, with the new Runtime Fields you don't need to reindex your data.
Runtime Fields are also available in the Index Patterns interface: in this screenshot, I'm doing the opposite of what you requested, that is, creating a new field that takes a date and generates a custom representation as a keyword field.
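For reference, a runtime field of type date can also be declared directly in the index mapping. This is only a sketch, assuming the raw field is called DATE and holds values like 2021101508130522 (the index name is made up, and depending on your mapping you may need `doc['DATE.keyword']` instead):

```json
PUT my-index/_mapping
{
  "runtime": {
    "newDate": {
      "type": "date",
      "script": {
        "source": "SimpleDateFormat f = new SimpleDateFormat('yyyyMMddHHmmss'); emit(f.parse(doc['DATE'].value.toString().substring(0, 14)).getTime());"
      }
    }
  }
}
```

A date runtime field's script has to `emit()` milliseconds since the epoch, which is why the parsed `Date` is converted with `getTime()`.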
I have not used Ingest Pipelines yet, but I will try that out shortly.
However, I have created a date field in the Index Patterns interface. The issue is that I have not found a way to use the new field as the "@timestamp". Elastic does not pick it up automatically as such, and I have not found a manual way to do it.
The incoming field is of the form "yyyyMMddHHmmss++", where "++" means hundredths of a second, for example 2021101508130522. So I created a new date field using:

```java
String dateString = doc['DATE'].value.toString().substring(0, 14);
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyyMMddHHmmss");
Date date = dateFormat.parse(dateString);
long l = date.getTime();
```

In other words, I trim off the last two characters (the "++") and then use an accepted date format. The field is successfully created, but now I need it to be the "@timestamp".
Is there a way of trimming off the last two characters using Ingest Pipelines so that Elastic picks it up as the "@timestamp"?
If you want Elasticsearch to automatically pick up the date, you have to specify the format you are using in the mapping. See here: Date field type | Elasticsearch Guide [7.15] | Elastic
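As a minimal sketch (the index name is illustrative), a custom date format is declared in the mapping like this:

```json
PUT my-index
{
  "mappings": {
    "properties": {
      "DATE": {
        "type": "date",
        "format": "yyyyMMddHHmmss"
      }
    }
  }
}
```

Note that this particular format string has no pattern for the trailing hundredths of a second, which is the sticking point discussed below.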
About @jsanz question above: Right, Lens does something special here - if a field is used in a date histogram, it's automatically bound to the date picker in the top right.
@flash1293 Thanks for the reply. I've already read through that and other docs and haven't found a way to specify a date format that includes hundredths of a second.
Ah got it, yeah, it seems like this is actually not possible according to DateTimeFormatter (Java Platform SE 8):
S means fraction of a second (milliseconds), but it won't help in your specific case. You can totally use ingest pipelines to trim off the last two digits by using a script processor, which allows you to use Painless to change the value of a field: Script processor | Elasticsearch Guide [7.15] | Elastic
Okay, so I have added the following to the ingest pipeline:

```json
"source": "ctx['newDate'] = ctx['DATE'].substring(0,14)"
```
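For context, the surrounding pipeline definition might look roughly like this (the pipeline name and description are made up, and `ctx['DATE']` must already be a string for `substring` to work):

```json
PUT _ingest/pipeline/trim-date
{
  "description": "Copy DATE without the trailing hundredths of a second",
  "processors": [
    {
      "script": {
        "source": "ctx['newDate'] = ctx['DATE'].substring(0, 14)"
      }
    }
  ]
}
```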
DATE is the one with split seconds (e.g. 2021101508130522), and newDate has the last two digits trimmed off (i.e. 20211015081305). Now, is there any way to get Elastic to recognize and use newDate as a date-type field? And if so, to use it as the timestamp?
You need to define the format for newDate in the mapping you are ingesting into. Then, in Kibana, when creating the index pattern, pick newDate as the default time field; this will bind it to the time picker in all places.
Alright, so when I import the data, I have the following under Mappings:
And under Ingest pipeline I have the following:
This successfully creates the field newDate with type "date".
At this point we are past the stage of selecting a default time field:
I need to know if there is a way of setting it under the "Mappings" or "Ingest pipeline" sections shown above. I have fiddled around with using a "date" processor in the Ingest pipeline, but with no luck so far.
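In case it helps, here is a hedged sketch of chaining the script processor with a date processor: the date processor parses newDate with the trimmed format and, by default, writes the result to @timestamp (the pipeline name is made up, and this assumes DATE arrives as a string):

```json
PUT _ingest/pipeline/trim-and-parse-date
{
  "processors": [
    {
      "script": {
        "source": "ctx['newDate'] = ctx['DATE'].substring(0, 14)"
      }
    },
    {
      "date": {
        "field": "newDate",
        "formats": ["yyyyMMddHHmmss"],
        "target_field": "@timestamp"
      }
    }
  ]
}
```

Before wiring this into ingestion, the `_ingest/pipeline/_simulate` API is a convenient way to check the processors against a sample document.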
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.