Could not index event to Elasticsearch

May I see the full logs, please? At least the logs from startup until you see that the node has started.

    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:326) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287) [netty-handler-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:600) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:554) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050) [netty-common-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.43.Final.jar:4.1.43.Final]
    at java.lang.Thread.run(Thread.java:830) [?:?]
[2020-09-30T19:27:46,162][DEBUG][o.e.a.s.TransportSearchAction] [node-1] All shards failed for phase: [query]
[2020-09-30T19:27:46,162][WARN ][r.suppressed             ] [node-1] path: /.kibana/_search, params: {rest_total_hits_as_int=true, size=20, index=.kibana, from=0, _source=ui-metric.count,namespace,type,references,migrationVersion,updated_at,count}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
    at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:534) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:305) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:563) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:384) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:219) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:284) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.5.1.jar:7.5.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:830) [?:?]
[2020-09-30T19:27:47,487][WARN ][r.suppressed             ] [node-1] path: /.kibana/_doc/space%3Adefault, params: {index=.kibana, id=space:default}
org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.kibana][_doc][space:default]: routing [null]]
    at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:224) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.start(TransportSingleShardAction.java:201) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:103) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:62) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:153) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:123) [x-pack-security-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:151) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:129) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:64) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:396) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.client.support.AbstractClient.get(AbstractClient.java:494) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.rest.action.document.RestGetAction.lambda$prepareRequest$0(RestGetAction.java:95) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:108) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.xpack.security.rest.SecurityRestFilter.handleRequest(SecurityRestFilter.java:69) [x-pack-security-7.5.1.jar:7.5.1]
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:222) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:295) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:322) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:372) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:301) [elasticsearch-7.5.1.jar:7.5.1]
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:69) [transport-netty4-client-7.5.1.jar:7.5.1]
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:31) [transport-netty4-client-7.5.1.jar:7.5.1]
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:58) [transport-netty4-client-7.5.1.jar:7.5.1]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]

The log is too large to post in full, but this is the log file from /var/log/elasticsearch after a restart.

I am unfamiliar with how to do this. Can you give me the command to do this?

{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [_cluster]",
        "resource.type" : "index_expression",
        "resource.id" : "_cluster",
        "index_uuid" : "_na_",
        "index" : "_cluster"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [_cluster]",
    "resource.type" : "index_expression",
    "resource.id" : "_cluster",
    "index_uuid" : "_na_",
    "index" : "_cluster"
  },
  "status" : 404
}

The command is in the docs I linked to.

When I run the command with curl, this is my output:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [_cluster]",
        "resource.type" : "index_expression",
        "resource.id" : "_cluster",
        "index_uuid" : "_na_",
        "index" : "_cluster"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [_cluster]",
    "resource.type" : "index_expression",
    "resource.id" : "_cluster",
    "index_uuid" : "_na_",
    "index" : "_cluster"
  },
  "status" : 404
}

What is the exact command you run?

I just spotted something in the logs you provided earlier. It appears you have hit the limit of 1,000 shards per node. If this is correct, I would recommend reducing the number of shards in the system significantly.
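For context, Elasticsearch 7.x refuses to allocate new shards once a cluster exceeds `cluster.max_shards_per_node` (default 1,000) times the number of data nodes, which is why indexing starts failing on a single-node cluster at exactly 1,000 shards. You can confirm the count from cluster health (hypothetical `localhost` endpoint; substitute your node's address):

```shell
# active_shards in the response shows the total; on a one-node 7.x
# cluster, 1000 means you are right at the default per-node limit.
curl -X GET "localhost:9200/_cluster/health?pretty"
```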

curl -X Get "XXX.XXX.XXX.XXX:9200 /_cluster/stats/=human&pretty"

Where the XXX's are the IP of the Elasticsearch server; localhost does not work and is not configured.
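For what it's worth, that command has a space before the path and uses `/=human&pretty` instead of a proper query string, which is likely why Elasticsearch fell back to treating `_cluster` as an index name and returned the 404 above. The intended form would be (XXX placeholders kept from the post above):

```shell
# Corrected form: no space before the path, and "?" starts the
# query string rather than "/=".
curl -X GET "XXX.XXX.XXX.XXX:9200/_cluster/stats?human&pretty"
```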

Awesome, yes, I did see that as well.
What is causing so many shards, and how do I reduce them?

You can delete data and make sure indices you are creating get reasonably large, e.g. by switching from daily to monthly or weekly indices.
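One way to consolidate, sketched with hypothetical Logstash-style daily index names (verify the monthly index is complete before deleting the daily ones):

```shell
# Reindex all of September's daily indices into a single monthly
# index; the daily indices can be deleted afterwards.
curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": { "index": "logstash-2020.09.*" },
  "dest":   { "index": "logstash-2020.09" }
}'
```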

Do you have a link for me on this?

dhoman@ElasticSearch:~$ sudo journalctl -u elasticsearch.service
-- Logs begin at Tue 2020-10-06 08:14:59 UTC, end at Tue 2020-10-06 21:11:35 UTC. --
Oct 06 20:31:51 ElasticSearch systemd[1]: elasticsearch.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Oct 06 20:40:25 ElasticSearch systemd[1]: Stopping Elasticsearch...
Oct 06 20:40:28 ElasticSearch systemd[1]: Stopped Elasticsearch.
-- Reboot --
Oct 06 20:48:26 ElasticSearch systemd[1]: Starting Elasticsearch...
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]: Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]: output:
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]: error:
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]: Unrecognized VM option 'UseConcMarkSweepGC'
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]: Error: Could not create the Java Virtual Machine.
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]: Error: A fatal exception has occurred. Program will exit.
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]:         at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:126)
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]:         at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:88)
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]:         at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]:         at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:137)
Oct 06 20:48:29 ElasticSearch systemd-entrypoint[1101]:         at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:95)
Oct 06 20:48:29 ElasticSearch systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Oct 06 20:48:29 ElasticSearch systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Oct 06 20:48:29 ElasticSearch systemd[1]: Failed to start Elasticsearch.
Oct 06 20:59:30 ElasticSearch systemd[1]: Starting Elasticsearch...
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]: Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]: output:
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]: error:
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]: Unrecognized VM option 'UseConcMarkSweepGC'
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]: Error: Could not create the Java Virtual Machine.
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]: Error: A fatal exception has occurred. Program will exit.
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]:         at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:126)
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]:         at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:88)
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]:         at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]:         at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:137)
Oct 06 20:59:32 ElasticSearch systemd-entrypoint[3356]:         at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:95)
Oct 06 20:59:32 ElasticSearch systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Oct 06 20:59:32 ElasticSearch systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Oct 06 20:59:32 ElasticSearch systemd[1]: Failed to start Elasticsearch.
Oct 06 21:06:05 ElasticSearch systemd[1]: Starting Elasticsearch...
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]: Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]: output:
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]: error:
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]: Unrecognized VM option 'UseConcMarkSweepGC'
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]: Error: Could not create the Java Virtual Machine.
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]: Error: A fatal exception has occurred. Program will exit.
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]:         at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:126)
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]:         at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:88)
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]:         at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]:         at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:137)
Oct 06 21:06:07 ElasticSearch systemd-entrypoint[3605]:         at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:95)
Oct 06 21:06:07 ElasticSearch systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Oct 06 21:06:07 ElasticSearch systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Oct 06 21:06:07 ElasticSearch systemd[1]: Failed to start Elasticsearch.

I restarted the Elasticsearch machine, and now Elasticsearch will not start; the log file is above.
Can you help me?
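The journalctl output points at the cause: Elasticsearch 7.9 bundles JDK 15, and the CMS garbage collector was removed from the JDK in version 14, so a leftover `-XX:+UseConcMarkSweepGC` line in `jvm.options` (typically from an older install) now aborts JVM startup. A minimal sketch of the fix, assuming the default deb/rpm path `/etc/elasticsearch/jvm.options` (adjust if yours differs):

```shell
# Back up jvm.options, then comment out the removed CMS flags.
sudo sed -i.bak 's/^-XX:+UseConcMarkSweepGC/## &/' /etc/elasticsearch/jvm.options
sudo sed -i 's/^-XX:CMSInitiatingOccupancyFraction/## &/' /etc/elasticsearch/jvm.options
sudo sed -i 's/^-XX:+UseCMSInitiatingOccupancyOnly/## &/' /etc/elasticsearch/jvm.options
sudo systemctl restart elasticsearch
```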

curl -v -GET '206.189.196.214:9200/_cat/nodes?v'
*   Trying 206.189.196.214...
* TCP_NODELAY set
* Connected to 206.189.196.214 (206.189.196.214) port 9200 (#0)
> GET /_cat/nodes?v HTTP/1.1
> Host: 206.189.196.214:9200
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/plain; charset=UTF-8
< content-length: 186
<
ip              heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
206.189.196.214           71          97  79    3.35    3.15     3.19 dilmrt    *      node-1
* Connection #0 to host 206.189.196.214 left intact
root@ElasticSearch:/home/dhoman#
curl -v -GET '206.189.196.214:9200/_cluster/stats'
*   Trying 206.189.196.214...
* TCP_NODELAY set
* Connected to 206.189.196.214 (206.189.196.214) port 9200 (#0)
> GET /_cluster/stats HTTP/1.1
> Host: 206.189.196.214:9200
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 3409
<
{"_nodes":{"total":1,"successful":1,"failed":0},"cluster_name":"elasticsearch","cluster_uuid":"HgSWRRDZR76gW2a6NJjANg","timestamp":1602525239743,"status":"green","indices":{"count":752,"shards":{"total":1000,"primaries":1000,"replication":0.0,"index":{"shards":{"min":1,"max":2,"avg":1.3297872340425532},"primaries":{"min":1,"max":2,"avg":1.3297872340425532},"replication":{"min":0.0,"max":0.0,"avg":0.0}}},"docs":{"count":45266855,"deleted":15},"store":{"size_in_bytes":78626507796,"reserved_in_bytes":0},"fielddata":{"memory_size_in_bytes":363728,"evictions":0},"query_cache":{"memory_size_in_bytes":18818,"total_count":10256,"hit_count":1495,"miss_count":8761,"cache_size":468,"cache_count":468,"evictions":0},"completion":{"size_in_bytes":0},"segments":{"count":8579,"memory_in_bytes":152088884,"terms_memory_in_bytes":121890976,"stored_fields_memory_in_bytes":20846616,"term_vectors_memory_in_bytes":0,"norms_memory_in_bytes":3168448,"points_memory_in_bytes":0,"doc_values_memory_in_bytes":6182844,"index_writer_memory_in_bytes":0,"version_map_memory_in_bytes":0,"fixed_bit_set_memory_in_bytes":1296,"max_unsafe_auto_id_timestamp":1598486401734,"file_sizes":{}},"mappings":{"field_types":[{"name":"boolean","count":558,"index_count":502},{"name":"date","count":6792,"index_count":752},{"name":"double","count":2485,"index_count":497},{"name":"flattened","count":2,"index_count":2},{"name":"geo_point","count":994,"index_count":497},{"name":"geo_shape","count":4,"index_count":4},{"name":"integer","count":1078,"index_count":503},{"name":"ip","count":3976,"index_count":497},{"name":"keyword","count":183524,"index_count":752},{"name":"long","count":25960,"index_count":501},{"name":"nested","count":16,"index_count":6},{"name":"object","count":28592,"index_count":752},{"name":"scaled_float","count":1,"index_count":1},{"name":"short","count":1,"index_count":1},{"name":"text","count":11243,"index_count":751}]},"analysis":{"char_filter_types":[],"tokenizer_types":[],"filter_types":[],"analyzer
_types":[],"built_in_char_filters":[],"built_in_tokenizers":[],"built_in_filters":[],"built_in_analyzers":[]}},"nodes":{"count":{"total":1,"coordinating_only":0,"data":1,"ingest":1,"master":1,"ml":1,"remote_cluster_client":1,"transform":1,"voting_only":0},"versions":["7.9.2"],"os":{"available_processors":4,"allocated_processors":4,"names":[{"name":"Linux","count":1}],"pretty_names":[{"pretty_name":"Ubuntu 18.04.4 LTS","count":1}],"mem":{"total_in_bytes":8363704320,"free_in_bytes":174018560,"used_in_bytes":8189685760,"free_percent":2,"used_percent":98}},"process":{"cpu":{"percent":0},"open_file_descriptors":{"min":7305,"max":7305,"avg":7305}},"jvm":{"max_uptime_in_millis":278101819,"versions":[{"version":"15","vm_name":"OpenJDK 64-Bit Server VM","vm_version":"15+36","vm_vendor":"AdoptOpenJDK","bundled_jdk":true,"using_bundled_jdk":true,"count":1}],"mem":{"heap_used_in_bytes":5232470016,"heap_max_in_bytes":6442450944},"threads":106},"fs":{"total_in_bytes":166318571520,"free_in_bytes":61382205440,"available_in_bytes":61365428224},"plugins":[],"network_types":{"transport_types":{"security4":1},"http_types":{"security4":1}},"discovery_types":{"zen":1},"packaging_types":[{"flavor":"default","t
* Connection #0 to host 206.189.196.214 left intact

Any suggestions on what I can do to reduce the shard count significantly?
Can you guide me in the right direction? Thank you.

I would recommend you look at these docs on the topic.
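In the meantime, a quick way to see which indices are tiny enough to be worth consolidating is the cat indices API, sorted by store size (hypothetical `localhost` endpoint; your node answers on its external IP instead):

```shell
# Lists every index with primary/replica counts, doc count, and size,
# smallest first; with 752 indices across 1000 shards, expect many
# near the top that hold very little data.
curl -X GET "localhost:9200/_cat/indices?v&s=store.size&h=index,pri,rep,docs.count,store.size"
```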

Thank you, thank you!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.