Thanks again for the help here. I just wanted to report back in case someone else runs into a similar issue and finds this helpful.
While my original question was a simple formatting issue, I ran into another problem: using day-month-year caused Elasticsearch to map the date fields as "text", which resulted in the following error from Kibana: "Visualize: Fielddata is disabled on text fields by default."
So I changed my queries of the Oracle database to output ISO 8601 instead. Since the time is already in UTC, I left out the time zone information. There may be a better way to do this, but here is how I ended up doing it:
Select statement for Oracle database (irrelevant fields omitted for the purpose of this issue):
SELECT TO_CHAR(table.create_date, 'YYYY-MM-DD') || 'T' || TO_CHAR(table.create_date, 'HH24:MI:SS') AS create_date FROM table
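For reference, the concatenation above just produces a plain ISO 8601 string with no zone designator. Here is the same construction sketched in Python, with a made-up sample date standing in for table.create_date:

```python
from datetime import datetime

# Hypothetical sample value standing in for table.create_date
create_date = datetime(2017, 3, 15, 9, 30, 0)

# Mirror of the Oracle concatenation: date part || 'T' || time part
iso = create_date.strftime("%Y-%m-%d") + "T" + create_date.strftime("%H:%M:%S")
print(iso)  # 2017-03-15T09:30:00
```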
And then in my LogStash configuration I had the following date filter to match the formatting:
filter {
  date {
    locale => "EN"
    timezone => "UTC"
    match => [ "create_date", "yyyy-MM-dd'T'HH:mm:ss" ]
    target => "@timestamp"
  }
}
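To sanity-check that the two formats line up, the Logstash pattern yyyy-MM-dd'T'HH:mm:ss corresponds to Python's %Y-%m-%dT%H:%M:%S, so a quick sketch of the parse (with a hypothetical sample string) looks like:

```python
from datetime import datetime, timezone

# Hypothetical sample value, matching the TO_CHAR output from the query above
raw = "2017-03-15T09:30:00"

# Equivalent of the Logstash pattern yyyy-MM-dd'T'HH:mm:ss; the string carries
# no zone info, so UTC is attached explicitly, as the timezone => "UTC" option does
parsed = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
print(parsed.isoformat())  # 2017-03-15T09:30:00+00:00
```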
If you need to use milliseconds (which I didn't), the number of fractional digits has to match exactly, or you'll get a parse error. I experimented with this initially but dropped it because seconds are granular enough for my purposes. If you do need them, for example from a TIMESTAMP column in Oracle (which defaults to a precision of 6 digits), you can use the following formats:
In your query:
"YYYY-MM-DD"T"HH24:MI:SSxFF6"
In LogStash (note: LogStash only keeps precision to 3 digits, but the pattern still needs to match exactly the same number of digits or parsing will fail):
"yyyy-MM-dd'T'HH:mm:ss.SSSSSS"
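The truncation behavior can be illustrated in Python too: all 6 fractional digits are parsed, but only the first 3 (the milliseconds) carry meaningful precision once Logstash stores the event. A sketch with a hypothetical 6-digit timestamp string:

```python
from datetime import datetime

# Hypothetical value with 6 fractional digits, the Oracle TIMESTAMP default
raw = "2017-03-15T09:30:00.123456"

# %f accepts the 6-digit fraction and stores it as microseconds
parsed = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S.%f")

# Keep only millisecond precision, which is all Logstash retains
millis = parsed.microsecond // 1000
print(millis)  # 123
```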