Kibana is not able to fetch data from kibana collector

The error "Kibana is not able to fetch data from kibana collector" is showing, as in the screenshot below. Elasticsearch is also crashing. I am running ES 6.5.4.

Output of http://localhost:9200/_cluster/stats:

{"_nodes":{"total":1,"successful":1,"failed":0},"cluster_name":"elasticsearch","cluster_uuid":"JlQW2XTaS6WNB3Av9qvjow","timestamp":1601560937455,"status":"red","indices":{"count":714,"shards":{"total":3545,"primaries":3545,"replication":0.0,"index":{"shards":{"min":1,"max":5,"avg":4.964985994397759},"primaries":{"min":1,"max":5,"avg":4.964985994397759},"replication":{"min":0.0,"max":0.0,"avg":0.0}}},"docs":{"count":26181250,"deleted":505087},"store":{"size_in_bytes":7757660809},"fielddata":{"memory_size_in_bytes":0,"evictions":0},"query_cache":{"memory_size_in_bytes":0,"total_count":0,"hit_count":0,"miss_count":0,"cache_size":0,"cache_count":0,"evictions":0},"completion":{"size_in_bytes":0},"segments":{"count":826,"memory_in_bytes":19268575,"terms_memory_in_bytes":10639340,"stored_fields_memory_in_bytes":1961960,"term_vectors_memory_in_bytes":0,"norms_memory_in_bytes":536192,"points_memory_in_bytes":4195587,"doc_values_memory_in_bytes":1935496,"index_writer_memory_in_bytes":0,"version_map_memory_in_bytes":0,"fixed_bit_set_memory_in_bytes":3341584,"max_unsafe_auto_id_timestamp":1601560378899,"file_sizes":{}}},"nodes":{"count":{"total":1,"data":1,"coordinating_only":0,"master":1,"ingest":1},"versions":["6.5.4"],"os":{"available_processors":64,"allocated_processors":64,"names":[{"name":"Windows Server 2016","count":1}],"mem":{"total_in_bytes":68163448832,"free_in_bytes":42088034304,"used_in_bytes":26075414528,"free_percent":62,"used_percent":38}},"process":{"cpu":{"percent":2},"open_file_descriptors":{"min":-1,"max":-1,"avg":0}},"jvm":{"max_uptime_in_millis":625664,"versions":[{"version":"1.8.0_144","vm_name":"Java HotSpot(TM) 64-Bit Server VM","vm_version":"25.144-b01","vm_vendor":"Oracle Corporation","count":1}],"mem":{"heap_used_in_bytes":856835264,"heap_max_in_bytes":1037959168},"threads":324},"fs":{"total_in_bytes":2898566049792,"free_in_bytes":2873972973568,"available_in_bytes":2873972973568},"plugins":[{"name":"ingest-attachment","version":"6.5.4","elasticsearch_version":"6.5.4","java_version":"1.8","description":"Ingest processor that uses Apache Tika to extract contents","classname":"org.elasticsearch.ingest.attachment.IngestAttachmentPlugin","extended_plugins":,"has_native_controller":false}],"network_types":{"transport_types":{"security4":1},"http_types":{"security4":1}}}}

ES error log:

[2020-10-01T19:02:17,421][ERROR][o.e.ExceptionsHelper ] [uUQH632] fatal error
at org.elasticsearch.ExceptionsHelper.lambda$maybeDieOnAnotherThread$2(ExceptionsHelper.java:264)
at java.util.Optional.ifPresent(Optional.java:159)
at org.elasticsearch.ExceptionsHelper.maybeDieOnAnotherThread(ExceptionsHelper.java:254)
at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:176)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
at ...

You have 3545 shards on a single node; that is far too many. You need to reduce that to a few hundred.

Thanks, but how do I do it? Is there any configuration for this? I am doing it on a Windows server.

I am doing it through the Elasticsearch C# .NET client (NEST) using the code below, where docIndex is created dynamically for each file. Please suggest.

Code:

string docIndex = "index_" + dm.Id;

var settings = new ConnectionSettings(new Uri(node))

            .InferMappingFor<DocAttachment>(m => m.IndexName(docIndex));

var client = new ElasticClient(settings);

var indexResponse = client.CreateIndex(docIndex, c => c

                              .Settings(s => s

                                .Analysis(a => a

                                  .Analyzers(ad => ad

                                    .Custom("windows_path_hierarchy_analyzer", ca => ca

                                      .Tokenizer("windows_path_hierarchy_tokenizer")

                                    )

                                  )

                                  .Tokenizers(t => t

                                    .PathHierarchy("windows_path_hierarchy_tokenizer", ph => ph

                                      .Delimiter('\\')

                                    )

                                  )

                                )

                              )



                              .Mappings(m => m

                                  .Map<DocAttachment>(mp => mp

                                  .AutoMap()

                                  .AllField(all => all

                                    .Enabled(false)

                                  )



                                  .Properties(ps => ps

                                    .Text(s => s

                                      .Name(n => n.Path)

                                      .Analyzer("windows_path_hierarchy_analyzer")

                                    )

                                    .Object<Attachment>(a => a

                                      .Name(n => n.Attachments)

                                      .AutoMap()

                                    )

                                  )

                                )

                              )

                            );

Look at the _shrink API to start.

I tried doing this but it gave an error. Please help me understand at what stage I can fix this, and what the other options are.

curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -H 'Content-Type: application/json' -d '{"index.number_of_shards" : "2","index.number_of_replicas" : "1"}'

Error:

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Can't update non dynamic settings [[index.number_of_shards]] for open indices [[mykase_af0c2213-2d3f-4c02-8646-b5b358029ebb/fG-XT4omTLijm4VeRe0_hA], [mykase_9b47d389-4298-4d68-972b-3d4b90d4d0a0/bFeuLmjDRwiTibM6lXVPVg], [mykase_7aa622e8-7824-4529-af7e-

You cannot modify the number of primary shards of an existing index.
Instead, as @warkolm wrote, you can use the Shrink API.

In case you did not find it, the documentation is the Shrink Index API page in the Elasticsearch reference.
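A minimal sketch of that flow in NEST 6.x terms (an assumption, mirroring the client code posted above; mykase_example and mykase_example_shrunk are hypothetical index names). On a single-node cluster all shards are already on the one node, so no relocation step is needed:

    var client = new ElasticClient(new ConnectionSettings(new Uri("http://localhost:9200")));

    // 1. Block writes on the source index; shrink requires a read-only source.
    client.UpdateIndexSettings("mykase_example", u => u
        .IndexSettings(s => s.BlocksWrite(true)));

    // 2. Shrink into a new index with a single primary shard
    //    (the target shard count must be a factor of the source's 5).
    client.ShrinkIndex("mykase_example", "mykase_example_shrunk", s => s
        .Settings(se => se.NumberOfShards(1)));

    // 3. Once the shrunken index is green, the source index can be deleted.
    // client.DeleteIndex("mykase_example");

Note that shrink operates on one source index at a time, so each index has to be shrunk (or reindexed) individually.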

I wish to reindex. How can I control the shards while creating the new index? Can I do something while creating it, so that I take care of it initially? Also, I have many indices, so do I need to shrink each index one by one? Please suggest what I should do now.

I am trying it this way, where mykase is the prefix of all indices:

http://localhost:9200/mykase_*/_shrink/archive_mykase_2oct2020

Can something be done in code so that I can define shards at index creation time?

string docIndex = "mykase_" + dm.Id;

             var settings = new ConnectionSettings(new Uri(node))
            .InferMappingFor<DocAttachment>(m => m.IndexName(docIndex));


            var client = new ElasticClient(settings);

var indexResponse = client.CreateIndex(docIndex, c => c
.Settings(s => s
.Analysis(a => a
.Analyzers(ad => ad
.Custom("windows_path_hierarchy_analyzer", ca => ca
.Tokenizer("windows_path_hierarchy_tokenizer")
)
)
.Tokenizers(t => t
.PathHierarchy("windows_path_hierarchy_tokenizer", ph => ph
.Delimiter('\')
)
)
)
)

                              .Mappings(m => m
                                  .Map<DocAttachment>(mp => mp
                                  .AutoMap()
                                  .AllField(all => all
                                    .Enabled(false)
                                  )

                                  .Properties(ps => ps
                                    .Text(s => s
                                      .Name(n => n.Path)
                                      .Analyzer("windows_path_hierarchy_analyzer")
                                    )
                                    .Object<Attachment>(a => a
                                      .Name(n => n.Attachments)
                                      .AutoMap()
                                    )
                                  )
                                )
                              )
                            );

Yes.

You can define the index setting index.number_of_shards when the index is created; see the Create Index API and the index modules documentation.
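For example, a minimal sketch of adding the shard settings to the CreateIndex call above (NEST 6.x; the values shown are assumptions, not recommendations — client and docIndex are the variables from your code):

    var indexResponse = client.CreateIndex(docIndex, c => c
        .Settings(s => s
            .NumberOfShards(1)   // one primary shard per index
            .NumberOfReplicas(0) // no replicas on a single-node cluster
            // ... your existing .Analysis(...) configuration goes here ...
        )
        // ... your existing .Mappings(...) configuration goes here ...
    );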

Note that it's probably better and easier to define an index template than to change your code, as you probably want to be able to change this configuration without having to ship your code again. See the index templates documentation.
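A minimal sketch of such a template with NEST 6.x (the template name mykase_defaults is hypothetical; mykase_* matches the prefix used above):

    var templateResponse = client.PutIndexTemplate("mykase_defaults", t => t
        .IndexPatterns("mykase_*") // applies to every new index matching the prefix
        .Settings(s => s
            .NumberOfShards(1)
            .NumberOfReplicas(0)
        )
    );

With the template in place, every newly created mykase_* index picks up these settings without any change to the CreateIndex call.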
