Error with bulk index operation when displaying in Kibana

I have Elasticsearch 7.10.2 and Kibana 7.10.2 (technically OpenSearch) running. I import data into an existing index with a bulk index request, and Elasticsearch returns a 201. When I go to Kibana, however, the dashboards are broken: they show "Error", and "Bad Request" when I mouse over them. In the Kibana logs I see "message":"[search_phase_execution_exception]: all shards failed". Additionally, I see the following error in the browser:

Error: Bad Request
at Fetch._callee3$ (https://192.168.49.2:30020/36473/bundles/core/core.entry.js:6:59575)
at l (https://192.168.49.2:30020/36473/bundles/osd-ui-shared-deps/osd-ui-shared-deps.js:380:982149)
at Generator._invoke (https://192.168.49.2:30020/36473/bundles/osd-ui-shared-deps/osd-ui-shared-deps.js:380:981902)
at Generator.forEach.e. [as next] (https://192.168.49.2:30020/36473/bundles/osd-ui-shared-deps/osd-ui-shared-deps.js:380:982506)
at fetch_asyncGeneratorStep (https://192.168.49.2:30020/36473/bundles/core/core.entry.js:6:52678)
at _next (https://192.168.49.2:30020/36473/bundles/core/core.entry.js:6:52994)

I am unable to access the bulk-imported data from my dashboard, but other instances of this dashboard that did not use the bulk API to import data show their data fine. I also see the data in the Discover tab for the index, so Kibana is seeing the data in some form; the dashboard just isn't seeing it correctly. What could be the issue?
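For reference, the bulk import is roughly of this shape; the index name is my real one, but the document fields here are just placeholders:

curl -k -X POST "https://<elasticsearch-host>:9200/_bulk" \
  -H "Content-Type: application/x-ndjson" \
  --data-binary @events.ndjson

where events.ndjson alternates action and document lines (and ends with a newline), e.g.:

{ "index" : { "_index" : "titan.ium-event-2021.09.03" } }
{ "@timestamp" : "2021-09-03T10:00:00Z", "event_type" : "example" }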

You should look at upgrading; 7.14 is the latest :slight_smile:

What do your Elasticsearch logs show?

There are no errors in the Elasticsearch logs; the errors are only in Kibana (OpenSearch Dashboards). Unfortunately, I'm unable to upgrade because I have to use OpenSearch, which is forked off ES 7.10.2. Are there any bugs related to this that were fixed between 7.10.2 and 7.14?

Without further information it's hard to say what the issue is.

If there's nothing in your logs, then what is the state of the indices in the system?
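For example, the output of something along the lines of:

GET _cat/indices?v&h=index,health,status,pri,rep,docs.count
GET _cluster/health?level=indices

would show whether the indices are green and how many documents they hold.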

I'll figure out how to get the index state, but like I said, the error seems to be in Kibana's communication with Elasticsearch. Here are the errors from Kibana:
"type":"response","@timestamp":"2021-09-07T09:46:15Z","tags":,"pid":1,"method":"post","statusCode":400,"req":{"url":"/internal/search/opensearch","method":"post","headers":{"host":"192.168.49.2:30020","connection":"keep-alive","content-length":"757","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36","osd-version":"1.0.0","content-type":"application/json","accept":"/","origin":"https://192.168.49.2:30020","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"https://192.168.49.2:30020/app/dashboards","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9"},"remoteAddress":"172.17.0.1","userAgent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36","referer":"https://192.168.49.2:30020/app/dashboards"},"res":{"statusCode":400,"responseTime":1439,"contentLength":9},"message":"POST /internal/search/opensearch 400 1439ms - 9.0B"}
{"type":"log","@timestamp":"2021-09-07T09:46:17Z","tags":["error","opensearch","data"],"pid":1,"message":"[search_phase_execution_exception]: all shards failed"}
{"type":"response","@timestamp":"2021-09-07T09:46:16Z","tags":,"pid":1,"method":"post","statusCode":400,"req":{"url":"/internal/search/opensearch","method":"post","headers":{"host":"192.168.49.2:30020","connection":"keep-alive","content-length":"403","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36","osd-version":"1.0.0","content-type":"application/json","accept":"/","origin":"https://192.168.49.2:30020","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"https://192.168.49.2:30020/app/dashboards","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9"},"remoteAddress":"172.17.0.1","userAgent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36","referer":"https://192.168.49.2:30020/app/dashboards"},"res":{"statusCode":400,"responseTime":623,"contentLength":9},"message":"POST /internal/search/opensearch 400 623ms - 9.0B"}
{"type":"log","@timestamp":"2021-09-07T09:46:17Z","tags":["error","opensearch","data"],"pid":1,"message":"[search_phase_execution_exception]: all shards failed"}
{"type":"response","@timestamp":"2021-09-07T09:46:16Z","tags":,"pid":1,"method":"post","statusCode":400,"req":{"url":"/internal/search/opensearch","method":"post","headers":{"host":"192.168.49.2:30020","connection":"keep-alive","content-length":"402","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36","osd-version":"1.0.0","content-type":"application/json","accept":"/","origin":"https://192.168.49.2:30020","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"https://192.168.49.2:30020/app/dashboards","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9"},"remoteAddress":"172.17.0.1","userAgent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36","referer":"https://192.168.49.2:30020/app/dashboards"},"res":{"statusCode":400,"responseTime":475,"contentLength":9},"message":"POST /internal/search/opensearch 400 475ms - 9.0B"}

I'm not sure if this is what you meant by index state, but when I get the index statistics, this is what I have:
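For what it's worth, I pulled these with roughly the following (the host is a placeholder for my Elasticsearch endpoint):

curl -k "https://<elasticsearch-host>:9200/titan.ium-event-2021.09.03/_stats?pretty"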
"indices" : {
"titan.ium-event-2021.09.03" : {
"uuid" : "Dbgx5tW3TuWVP0UVTlMWvA",
"primaries" : {
"docs" : {
"count" : 3,
"deleted" : 0
},
"store" : {
"size_in_bytes" : 35233,
"reserved_in_bytes" : 0
},
"indexing" : {
"index_total" : 3,
"index_time_in_millis" : 7,
"index_current" : 0,
"index_failed" : 0,
"delete_total" : 0,
"delete_time_in_millis" : 0,
"delete_current" : 0,
"noop_update_total" : 0,
"is_throttled" : false,
"throttle_time_in_millis" : 0
},
"get" : {
"total" : 0,
"time_in_millis" : 0,
"exists_total" : 0,
"exists_time_in_millis" : 0,
"missing_total" : 0,
"missing_time_in_millis" : 0,
"current" : 0
},
"search" : {
"open_contexts" : 0,
"query_total" : 1,
"query_time_in_millis" : 116,
"query_current" : 0,
"fetch_total" : 1,
"fetch_time_in_millis" : 0,
"fetch_current" : 0,
"scroll_total" : 0,
"scroll_time_in_millis" : 0,
"scroll_current" : 0,
"suggest_total" : 0,
"suggest_time_in_millis" : 0,
"suggest_current" : 0
},
"merges" : {
"current" : 0,
"current_docs" : 0,
"current_size_in_bytes" : 0,
"total" : 0,
"total_time_in_millis" : 0,
"total_docs" : 0,
"total_size_in_bytes" : 0,
"total_stopped_time_in_millis" : 0,
"total_throttled_time_in_millis" : 0,
"total_auto_throttle_in_bytes" : 20971520
},
"refresh" : {
"total" : 7,
"total_time_in_millis" : 30,
"external_total" : 6,
"external_total_time_in_millis" : 30,
"listeners" : 0
},
"flush" : {
"total" : 1,
"periodic" : 0,
"total_time_in_millis" : 24
},
"warmer" : {
"current" : 0,
"total" : 5,
"total_time_in_millis" : 0
},
"query_cache" : {
"memory_size_in_bytes" : 0,
"total_count" : 0,
"hit_count" : 0,
"miss_count" : 0,
"cache_size" : 0,
"cache_count" : 0,
"evictions" : 0
},
"fielddata" : {
"memory_size_in_bytes" : 0,
"evictions" : 0
},
"completion" : {
"size_in_bytes" : 0
},
"segments" : {
"count" : 3,
"memory_in_bytes" : 16764,
"terms_memory_in_bytes" : 12960,
"stored_fields_memory_in_bytes" : 1464,
"term_vectors_memory_in_bytes" : 0,
"norms_memory_in_bytes" : 2112,
"points_memory_in_bytes" : 0,
"doc_values_memory_in_bytes" : 228,
"index_writer_memory_in_bytes" : 0,
"version_map_memory_in_bytes" : 0,
"fixed_bit_set_memory_in_bytes" : 0,
"max_unsafe_auto_id_timestamp" : -1,
"file_sizes" : { }
},
"translog" : {
"operations" : 2,
"size_in_bytes" : 1187,
"uncommitted_operations" : 2,
"uncommitted_size_in_bytes" : 1187,
"earliest_last_modified_age" : 37
},
"request_cache" : {
"memory_size_in_bytes" : 0,
"evictions" : 0,
"hit_count" : 0,
"miss_count" : 1
},
"recovery" : {
"current_as_source" : 0,
"current_as_target" : 0,
"throttle_time_in_millis" : 0
}
},
"total" : {
"docs" : {
"count" : 3,
"deleted" : 0
},
"store" : {
"size_in_bytes" : 35233,
"reserved_in_bytes" : 0
},
"indexing" : {
"index_total" : 3,
"index_time_in_millis" : 7,
"index_current" : 0,
"index_failed" : 0,
"delete_total" : 0,
"delete_time_in_millis" : 0,
"delete_current" : 0,
"noop_update_total" : 0,
"is_throttled" : false,
"throttle_time_in_millis" : 0
},
"get" : {
"total" : 0,
"time_in_millis" : 0,
"exists_total" : 0,
"exists_time_in_millis" : 0,
"missing_total" : 0,
"missing_time_in_millis" : 0,
"current" : 0
},
"search" : {
"open_contexts" : 0,
"query_total" : 1,
"query_time_in_millis" : 116,
"query_current" : 0,
"fetch_total" : 1,
"fetch_time_in_millis" : 0,
"fetch_current" : 0,
"scroll_total" : 0,
"scroll_time_in_millis" : 0,
"scroll_current" : 0,
"suggest_total" : 0,
"suggest_time_in_millis" : 0,
"suggest_current" : 0
},
"merges" : {
"current" : 0,
"current_docs" : 0,
"current_size_in_bytes" : 0,
"total" : 0,
"total_time_in_millis" : 0,
"total_docs" : 0,
"total_size_in_bytes" : 0,
"total_stopped_time_in_millis" : 0,
"total_throttled_time_in_millis" : 0,
"total_auto_throttle_in_bytes" : 20971520
},
"refresh" : {
"total" : 7,
"total_time_in_millis" : 30,
"external_total" : 6,
"external_total_time_in_millis" : 30,
"listeners" : 0
},
"flush" : {
"total" : 1,
"periodic" : 0,
"total_time_in_millis" : 24
},
"warmer" : {
"current" : 0,
"total" : 5,
"total_time_in_millis" : 0
},
"query_cache" : {
"memory_size_in_bytes" : 0,
"total_count" : 0,
"hit_count" : 0,
"miss_count" : 0,
"cache_size" : 0,
"cache_count" : 0,
"evictions" : 0
},
"fielddata" : {
"memory_size_in_bytes" : 0,
"evictions" : 0
},
"completion" : {
"size_in_bytes" : 0
},
"segments" : {
"count" : 3,
"memory_in_bytes" : 16764,
"terms_memory_in_bytes" : 12960,
"stored_fields_memory_in_bytes" : 1464,
"term_vectors_memory_in_bytes" : 0,
"norms_memory_in_bytes" : 2112,
"points_memory_in_bytes" : 0,
"doc_values_memory_in_bytes" : 228,
"index_writer_memory_in_bytes" : 0,
"version_map_memory_in_bytes" : 0,
"fixed_bit_set_memory_in_bytes" : 0,
"max_unsafe_auto_id_timestamp" : -1,
"file_sizes" : { }
},
"translog" : {
"operations" : 2,
"size_in_bytes" : 1187,
"uncommitted_operations" : 2,
"uncommitted_size_in_bytes" : 1187,
"earliest_last_modified_age" : 37
},
"request_cache" : {
"memory_size_in_bytes" : 0,
"evictions" : 0,
"hit_count" : 0,
"miss_count" : 1
},
"recovery" : {
"current_as_source" : 0,
"current_as_target" : 0,
"throttle_time_in_millis" : 0
}
}
}

Please format your code/logs/config using the </> button, or markdown-style backticks. It makes things easier to read, which helps us help you :slight_smile:

Just to be clear, you are using OpenSearch, which is not Elasticsearch. You would probably have better luck asking the developers of that project, sorry to say.

I will keep that in mind. Also, I figured out my issue: I was running Kibana and Elasticsearch but not Logstash, and in my setup the index templates the dashboards depend on are created by Logstash. Running Logstash cleared the error.
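In case anyone hits the same thing: my guess is that without the template the bulk-indexed documents got default dynamic mappings that the dashboard queries couldn't work with. A quick way to check whether the expected template exists and what mapping the index actually ended up with is something like the following (the template name here is a guess; substitute your own):

GET _template/titanium-event*
GET _index_template
GET titan.ium-event-2021.09.03/_mapping

Comparing that mapping against an instance where the dashboard works should show whether the template actually applied.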

Not sure I am following how that is the core problem, but glad you got it sorted! :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.