ELK - Snapshot timeout

Hi team,

I'm trying to create a snapshot of the cluster, but I'm getting timeout errors:

PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "compress": true,
    "location": "/etc/elasticsearch/snapshots/"
  }
}

I am getting this error:

{
  "statusCode": 504,
  "error": "Gateway Time-out",
  "message": "Client request timeout"
}

Kibana is behind an Nginx proxy, so just in case I tried to run the request directly against the (master) node, but I am getting the same timeout error:

curl -XPUT 10.2.18.10:9200/_snapshot/newbackup -H 'Content-Type: application/json' -d '{
  "type": "fs",
  "settings": {
    "location": "/etc/elasticsearch/snapshots",
    "compress": true,
    "chunk_size": "10m"
  }
}'

error:

error":{"root_cause":[{"type":"process_cluster_event_timeout_exception","reason":"failed to process cluster event (put_repository [newbackup]) within 30s"}],"type":"process_cluster_event_timeout_exception","reason":"failed to process cluster event (put_repository [newbackup]) within 30s"},"status":503

I have 4 Elasticsearch nodes in total, and each node's elasticsearch.yml includes "path.repo: /etc/elasticsearch/snapshots".

I verified this with "curl 10.2.18.10:9200/_nodes/settings"

Could it be related to the number of shards?

GET /_cluster/health

{
  "cluster_name" : "xx-zz-elk",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 4,
  "number_of_data_nodes" : 4,
  "active_primary_shards" : 1552,
  "active_shards" : 3027,
  "relocating_shards" : 0,
  "initializing_shards" : 2,
  "unassigned_shards" : 70,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 2,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 64623,
  "active_shards_percent_as_number" : 97.67666989351403
}

I tried to check Kibana's logs in "/var/log", but there are no logs for Kibana, and "journalctl" does not give much info either. On the other hand, I do not see many tasks running when I check "GET _tasks".

As I mentioned, Kibana is behind an Nginx proxy, but other commands (usually GET requests) run without any issues. And if the proxy were the problem, running the command directly on the Elasticsearch node should work, shouldn't it?

Thanks in advance.

Which version is that?
Is the dir /etc/elasticsearch/snapshots accessible from all nodes?
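For an fs repository, that location needs to be a shared filesystem (an NFS mount, for example) that is mounted at the same path on every node; a directory that merely exists locally on each node is not enough. As a quick sanity check you could run something like this on each node (path taken from your config, the test file name is just an example):

ls -ld /etc/elasticsearch/snapshots
sudo -u elasticsearch touch /etc/elasticsearch/snapshots/write-test && echo writable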

Some unrelated comment:

  "number_of_data_nodes" : 4,
  "active_shards" : 3027,

That's a lot of shards!

What is the output of:

GET /
GET /_cat/nodes?v
GET /_cat/health?v
GET /_cat/indices?v

If some outputs are too big, please share them on gist.github.com and link them here.


Hello @dadoonet ,

Thanks for your answer.

The stack is 6.5.0. I know this version is already EOL, but I need to test something with it before upgrading.

The snapshots folder seems to be accessible:

wxr-sr-x  2 root elasticsearch  4096 Sep 16 01:54 snapshots/

The path "path.repo" is setup in every elasticsearch.yml node. Also, the same folder is setup in every node.

GET /

{
  "name" : "aa-yy-01",
  "cluster_name" : "xx-zz-elk",
  "cluster_uuid" : "xxLAyyi7RpmkuK-yzzBYaa",
  "version" : {
    "number" : "6.5.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "816e6f6",
    "build_date" : "2018-11-09T18:58:36.352602Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

GET /_cat/nodes?v

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "request [/_cat/nodes] contains unrecognized parameter: [v/]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "request [/_cat/nodes] contains unrecognized parameter: [v/]"
  },
  "status": 400
}

GET /_cat/health?v

epoch      timestamp cluster    status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1631802612 14:30:12  xx-zz-elk green           4         4   3099 1552    0    0        0             0                  -                100.0%

GET /_cat/indices?v

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   winlogbeat-6.5.0-2021.08.25     dkbWMUDXTTek6ovElo0-LA   5   1        710            0      886kb        431.3kb
green  open   logstash-2021.05.22             FpbBcNj2QZ-DnX_MV4qRfg   5   1        646            0      2.1mb       1019.2kb
green  open   logstash-2021.08.09             yqNPYq-4R5GbpD59tOR7Jw   5   1        874            0      1.6mb        898.2kb
green  open   logstash-2021.05.16             HZC73MvPT5qUuNKPJ9GCtw   5   1        628            0      1.7mb          844kb
green  open   logstash-2021.08.08             FFRhN6QqRy6C53tJBu6n0Q   5   1        546            0      1.6mb        871.9kb
green  open   logstash-2021.05.03             qhSGv_22SLSBTLMATluqAg   5   1        624            0      1.8mb        975.2kb
green  open   logstash-2021.05.26             x8XQw-_YS-G-scXi8ANlZA   5   1        616            0      1.4mb        735.7kb
green  open   logstash-2021.06.20             2ObxNNquRv-t0-msmDAjig   5   1        610            0        2mb            1mb
green  open   .monitoring-es-6-2021.09.14     vVgGjbA_QI2Oz2cJbYYE-w   1   1    1527503        33836      1.9gb        975.8mb
green  open   winlogbeat-6.5.0-2021.09.11     4z6GPUPsRoGMgr6665fsDQ   5   1       3966            0       10mb          5.1mb
green  open   logstash-2021.06.16             ZY8LggOvS1-eYUqXO1no6A   5   1        662            0      1.9mb       1000.4kb
green  open   winlogbeat-6.5.0-2021.09.10     Px3UaCS9TgiLO3VskFq-2w   5   1       4029            0     10.2mb          5.1mb
green  open   logstash-2021.07.04             8SMYdCoAS5ijnr9gU8ahEQ   5   1        598            0      1.6mb        865.1kb
green  open   logstash-2021.08.01             Q5L9SZ8gTEGzDJxzvYMTaw   5   1        626            0        2mb        961.6kb
green  open   winlogbeat-6.5.0-2021.07.30     zhKfSWvoQMq9GnKSK_TiiQ   5   1        301            0    671.9kb        324.2kb
green  open   .monitoring-es-6-2021.09.16     a5Ln8ZMBSMq8Dqs98jqv_A   1   1     610171        13033    446.8mb        246.1mb
green  open   winlogbeat-6.5.0-2021.07.16     zZ-7cMD1RuOq4D_IRrVPpw   5   1        344            0    806.3kb          369kb
green  open   logstash-2021.06.02             lU6-IvpiSdyigns1Y6H48A   5   1        644            0      1.9mb        934.3kb
green  open   winlogbeat-6.5.0-2021.09.15     bIBDt4XMSq6YYdK6dfr0bA   5   1       3971            0     10.1mb          4.8mb
green  open   ldmetricbeat_backup             kKE5ZBCaR2ONFmG0N5KX6A   5   1          0            0      2.5kb          1.2kb
green  open   winlogbeat-6.5.0-2021.06.15     JtoNS3mnTMK7NgO9AtwaCw   5   1        305            0      688kb        330.8kb
green  open   s-az-md-dv1-02-2021.09.01       6ReETDbxQVuy0qaFhWkaqw   5   1        316            0    813.6kb        406.7kb
green  open   winlogbeat-6.5.0-2021.06.01     FMZzPA5URXabCsd6UP8rnA   5   1        328            0    747.2kb        346.9kb
green  open   winlogbeat-6.5.0-2021.05.10     MmnNMonuRvO_tehNVz2Cng   5   1        313            0    706.7kb          341kb
green  open   winlogbeat-6.5.0-2021.07.25     RKZz4Ji0R0azfy9DQonMoQ   5   1        306            0    725.6kb        362.8kb
green  open   winlogbeat-6.5.0-2021.09.01     Ipo9pnrJRR2VLC8Qiw_Frw   5   1       3869            0      9.6mb          4.8mb
green  open   winlogbeat-6.5.0-2021.06.27     KIViKLofT4SaM2jOwuLozw   5   1        304            0      712kb        331.3kb
green  open   winlogbeat-6.5.0-2021.07.17     AChEIeupTsm2SqzwrQQi_w   5   1        488            0    977.5kb        465.6kb
green  open   winlogbeat-6.5.0-2021.07.10     i4DHabcdTF2PunZzAYDAGg   5   1        316            0    723.8kb        361.9kb
green  open   s-az-md-dv1-02-2021.09.10       QSCJPLcKSRCgOhrJ0dFNbg   5   1       1425            0      2.5mb          1.1mb
green  open   winlogbeat-6.5.0-2021.09.05     xc9r1Wg8SE6P__Q7Q_47cA   5   1       3861            0     10.3mb          5.1mb
green  open   logstash-2021.07.24             j6-xDzLwSpuqlMvxnXqqig   5   1        594            0      1.9mb          953kb
green  open   winlogbeat-6.5.0-2021.07.14     Yrkh6gEGQiKMTv52d_VLIQ   5   1        308            0      700kb        330.5kb
green  open   logstash-2021.05.27             wXURj57DRRenedHci_JkUg   5   1        630            0      1.8mb        859.1kb
green  open   .monitoring-kibana-6-2021.09.16 3rAgFjr7SHG2xAFKJA7lfQ   1   1       5345            0      4.9mb          2.4mb
green  open   mislogs-2021.09.01              jF9wezDDRWivRDfQOSiVZw   5   1        316            0    739.8kb        348.8kb
green  open   winlogbeat-6.5.0-2021.06.17     MiZkh1qhQjGDKo_tCPFlJg   5   1        331            0    779.6kb        369.8kb
green  open   logstash-2021.06.01             9upJpzHqQ5yv0eoUA0g3Bg   5   1        656            0      2.3mb          1.1mb
green  open   winlogbeat-6.5.0-2021.06.10     oKTexqCRQQuxWEMulrangw   5   1        316            0    710.4kb        344.6kb
green  open   winlogbeat-6.5.0-2021.06.14     FAEtHOFISZaWu7oq8V9iAQ   5   1        322            0    726.9kb        350.8kb
green  open   winlogbeat-6.5.0-2021.07.04     jhqbxTqxQ26JFd8iiiYbWg   5   1        299            0    694.2kb        323.9kb
green  open   logstash-2021.05.07             8NR_l5cSSLqWXxCkKe1odA   5   1        620            0      1.6mb        813.3kb
green  open   winlogbeat-6.5.0-2021.05.27     e5EGOlsCSaeo_OCHN08Ijg   5   1        315            0    737.2kb          345kb
green  open   logstash-2021.05.12             Vf7sD-IdS1C7JEhdycOJhQ   5   1        642            0      1.8mb            1mb
green  open   logstash-2021.07.31             3urQSQhSTISroVtBozfWfg   5   1        614            0      1.4mb        758.9kb

I am unable to attach more output here.

If you need more output from the last command, I can share it on gist.

Update: I upgraded the stack to 7.14.1

OK, the timeout issue seems to be related to the number of indices and shards. I deleted most of the indices and now it seems to be working better; I reduced the shard count from 3027 to 104. However, I have 2 questions here:

  1. I am creating a couple of indices every day with Filebeat and Metricbeat, so every day I see more shards. My cluster is 4 Elasticsearch nodes, 1 Kibana and 1 Logstash. How can I limit the number of shards for such a configuration if I create 2 new indices every day?

  2. I tried to create the snapshot and I do not get any error. The snapshot is supposed to be stored locally in the /etc/elasticsearch/snapshots folder on the Elasticsearch nodes.

When I run this command:

PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/etc/elasticsearch/snapshots/"
  }
}

I get this message:

#! Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.14/security-minimal-setup.html to enable security.
{
  "acknowledged" : true
}

However, I cannot see any snapshot created inside the /etc/elasticsearch/snapshots folder.

Also, when querying the snapshots, I cannot see anything:

GET /_cat/snapshots

message:

#! Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.14/security-minimal-setup.html to enable security.

Any idea about this?

Also, could you give me a hand with Logstash monitoring in Kibana?

Before the upgrade I could monitor Logstash in Kibana; now it does not appear in Kibana's monitoring.

This is the X-Pack config in my logstash.yml:

# path.plugins: []
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
xpack.management.elasticsearch.hosts: ["http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200"]
#xpack.monitoring.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.url: ["http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200"]
#xpack.management.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

Regarding errors, I cannot see anything relevant in the logs, only a message that the address is already in use, yet logs from Filebeat and Metricbeat still seem to reach Elasticsearch correctly.

Filebeat and Metricbeat are running on the same server as Logstash.

netstat output:

/etc/logstash# netstat -an | grep 5044
tcp        0      0 127.0.0.1:58174         127.0.0.1:5044          ESTABLISHED
tcp6       0      0 :::5044                 :::*                    LISTEN
tcp6       0      0 ::1:55858               ::1:5044                ESTABLISHED
tcp6       0      0 ::1:5044                ::1:55858               ESTABLISHED
tcp6       0      0 127.0.0.1:5044          127.0.0.1:58174         ESTABLISHED

logstash logs:

  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats port=>5044, id=>"7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_abf92cab-fe1c-41c0-8b73-f362a8d18366", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>8>
  Error: Address already in use
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:455)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:447)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:829)
[2021-09-20T01:19:36,586][INFO ][org.logstash.beats.Server][main][7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28] Starting server on port: 5044
[2021-09-20T01:19:42,696][ERROR][logstash.javapipeline    ][main][7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats port=>5044, id=>"7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_abf92cab-fe1c-41c0-8b73-f362a8d18366", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>8>
  Error: Address already in use
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:455)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:447)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:829)
[2021-09-20T01:19:43,699][INFO ][org.logstash.beats.Server][main][7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28] Starting server on port: 5044
[2021-09-20T01:19:49,771][ERROR][logstash.javapipeline    ][main][7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main

I am not using any username and password in Elasticsearch, just an Nginx proxy in front of Kibana that asks for a username and password.

Great!

Yes.

Have a look at the index templates you have. With the new version, the default number of shards for everything coming from Beats agents should be 1 primary and 1 replica.
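If your Beats templates are still creating 5 primaries, you can override that with an index template. A minimal sketch, assuming the composable index template API available in 7.x (the template name and index patterns are only examples, adjust them to your indices):

PUT _index_template/beats-one-shard
{
  "index_patterns": ["filebeat-*", "metricbeat-*", "winlogbeat-*"],
  "priority": 200,
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    }
  }
}

New indices matching those patterns would then be created with a single primary shard; existing indices are not changed.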

You also had a lot of "old indices" which you might want to remove at some point.

I'd recommend using ILM (see ILM: Manage the index lifecycle | Elasticsearch Guide [8.11] | Elastic) so that old indices can be archived or deleted automatically at some point. Also look at Index management in Kibana | Elasticsearch Guide [8.11] | Elastic.
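As a rough sketch of what an ILM delete policy could look like (the policy name and the 30-day retention are just placeholders, not a recommendation):

PUT _ilm/policy/cleanup-old-logs
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

You would then reference the policy from your index templates via the index.lifecycle.name setting so that new indices pick it up automatically.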

You can also have a look at data streams (Set up a data stream | Elasticsearch Guide [8.11] | Elastic) which are now used by default by elastic agents.
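For instance, a data stream is created from an index template that declares it; indexing the first document auto-creates the stream (the names below are hypothetical):

PUT _index_template/my-logs
{
  "index_patterns": ["my-logs-*"],
  "data_stream": {},
  "template": {
    "settings": {
      "number_of_shards": 1
    }
  }
}

POST my-logs-app/_doc
{
  "@timestamp": "2021-09-20T10:00:00Z",
  "message": "first document; this request auto-creates the data stream"
}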

PUT /_snapshot/my_backup

This command creates a repository that can be used for snapshots; it does not create a snapshot itself.

To create a snapshot, run the commands you have here: Create a snapshot | Elasticsearch Guide [7.14] | Elastic
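For example, assuming your repository is registered as my_backup, something like this should take a snapshot and let you check it (the snapshot name snapshot_1 is up to you):

PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true

GET /_snapshot/my_backup/_all

GET /_cat/snapshots/my_backup?v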

Easier still: use the snapshot management interface you have in Kibana :slight_smile:


Thanks for the answer, I will have a look this week.

Any idea why Logstash monitoring is not displayed in Kibana? I attached the logs in my previous comment.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.