I'm consistently seeing Elasticsearch 6.1.1 crash when I run a "histogram" aggregation, but I can't find a bug report that matches my observation. The closest I can find are a couple filed against 6.0.0.
I'm wondering if anybody has experienced similar crashes.
The crash log only shows a closed connection, so it is probably logging the result of the crash rather than its cause.
java.nio.channels.ClosedChannelException: null
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source) ~[?:?]
[2018-11-19T21:33:14,842][DEBUG][o.e.a.a.i.g.TransportGetIndexAction] [es-access-000] no known master node, scheduling a retry
[2018-11-19T21:33:14,848][DEBUG][o.e.a.a.i.g.TransportGetIndexAction] [es-access-000] no known master node, scheduling a retry
[2018-11-19T21:33:14,975][INFO ][o.e.c.s.ClusterApplierService] [es-access-000] detected_master {es-access-001}{EHi7qn7XQXe2M_xpN-MZaQ}{GrLXhBHNTi-3EbTJvd1Ggg}{172.31.25.158}{172.31.25.158:9300}, reason: apply cluster state (from master [master {es-access-001}{EHi7qn7XQXe2M_xpN-MZaQ}{GrLXhBHNTi-3EbTJvd1Ggg}{172.31.25.158}{172.31.25.158:9300} committed version [162517]])
My aggregation is simply a "histogram" on a date field; using "date_histogram" instead seems fine.
"aggs": {
** "term": {**
** "histogram": {**
** "field": "When",**
** "interval": "1d"**
** }**
** }**
** }**
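For reference, this is the "date_histogram" version of the same aggregation, which does not crash for me. I suspect the plain "histogram" aggregation treats the date values as raw epoch milliseconds rather than calendar dates, which might explain why "date_histogram" behaves differently, but that's just a guess on my part.

"aggs": {
  "term": {
    "date_histogram": {
      "field": "When",
      "interval": "1d"
    }
  }
}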
This query is issued via Kibana.
By running "GET _cat/tasks?v" I saw a few tasks that had been running for more than several minutes.
Then Elasticsearch died and exited, and I had to restart the service to bring it back up.
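In case it helps anyone looking into this, my understanding is that the task management API gives a more detailed view of those stuck searches than _cat/tasks and can in principle cancel them; the task id in the second line below is just a placeholder, not one from my cluster.

GET _tasks?detailed=true&actions=*search*
POST _tasks/<task_id>/_cancel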
Any pointers? I'm hoping it's an already-resolved issue, which is why I've been looking through the release notes, but this particular crash seems different from the ones logged there. Any insights?
Thanks in advance.