Update: I upgraded the stack to 7.14.1.
OK, the timeout issue seems to be related to the number of indices and shards. I deleted most of the indices and now the cluster seems to be working better: I reduced the shard count from 3027 to 104. However, I have two questions here:
-
I am creating a couple of indices every day with Filebeat and Metricbeat, so every day I see more shards. My cluster is 4 Elasticsearch nodes, 1 Kibana and 1 Logstash. How can I limit the number of shards in such a configuration if I create 2 new indices every day?
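From what I have read, the usual way to cap this is an ILM policy that deletes old indices (each daily index that goes away takes its shards with it). Is something like this the right idea? The policy name and the 30d age are placeholders I made up, and I understand the policy would still have to be referenced from the Filebeat/Metricbeat index templates:
PUT _ilm/policy/beats-cleanup
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}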
-
I tried to create a snapshot and I do not get any error. The snapshot is supposed to be stored locally in the /etc/elasticsearch/snapshots folder on the Elasticsearch nodes.
When I run this command:
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/etc/elasticsearch/snapshots/"
  }
}
I get this message:
#! Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.14/security-minimal-setup.html to enable security.
{
  "acknowledged" : true
}
However, I cannot see any snapshot created inside that folder (/etc/elasticsearch/snapshots).
Also, when I query the snapshots I cannot see anything:
GET /_cat/snapshots
The only thing I get back is the same warning:
#! Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.14/security-minimal-setup.html to enable security.
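One thing I am not sure about: if I read the docs right, registering the repository only returns "acknowledged" and does not write a snapshot by itself; the snapshot has to be created with a separate call. The docs also say the location must be whitelisted under path.repo in elasticsearch.yml on every node, be writable by the elasticsearch user, and (for a multi-node cluster like mine) sit on a shared filesystem that all 4 nodes can reach. So I guess the missing steps would be something like this, where snapshot_1 is just a placeholder name:
path.repo: ["/etc/elasticsearch/snapshots"]
(in elasticsearch.yml on each node, followed by a restart), then:
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
GET /_snapshot/my_backup/_all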
Any idea about this?
Also, could you give me a hand with Logstash monitoring in Kibana?
Before the upgrade I could monitor Logstash in Kibana; now it does not appear under Kibana's Stack Monitoring.
This is the X-Pack config in my logstash.yml:
# path.plugins: []
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
xpack.management.elasticsearch.hosts: ["http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200"]
#xpack.monitoring.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.url: ["http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200"]
#xpack.management.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s
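One thing I notice while pasting this: the only hosts line I have uncommented is xpack.management.elasticsearch.hosts, which belongs to the centralized pipeline management settings, while what I actually enabled is monitoring. If I read the monitoring-logstash docs page correctly, the monitoring block should look something like this instead (same four node addresses; credentials left commented out since I have no security enabled):
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200", "http://10.xx.18.yy:9200"]
Could that mix-up be the reason Logstash no longer shows up in monitoring?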
Regarding the errors, I cannot see anything else in the logs, just the "Address already in use" message below, yet logs from Filebeat and Metricbeat seem to reach Elasticsearch correctly.
Filebeat and Metricbeat are running on the same server as Logstash.
netstat output:
/etc/logstash# netstat -an | grep 5044
tcp 0 0 127.0.0.1:58174 127.0.0.1:5044 ESTABLISHED
tcp6 0 0 :::5044 :::* LISTEN
tcp6 0 0 ::1:55858 ::1:5044 ESTABLISHED
tcp6 0 0 ::1:5044 ::1:55858 ESTABLISHED
tcp6 0 0 127.0.0.1:5044 127.0.0.1:58174 ESTABLISHED
Logstash logs:
Pipeline_id:main
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_abf92cab-fe1c-41c0-8b73-f362a8d18366", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, add_hostname=>false, ssl_verify_mode=>"none", ssl_peer_metadata=>false, include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>8>
Error: Address already in use
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:455)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:447)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:227)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:134)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:562)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1334)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:506)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:491)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:973)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:260)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:356)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:164)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:472)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:500)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(io/netty/util/internal/ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:829)
[2021-09-20T01:19:36,586][INFO ][org.logstash.beats.Server][main][7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28] Starting server on port: 5044
[2021-09-20T01:19:42,696][ERROR][logstash.javapipeline ][main][7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
(the same Plugin line, "Error: Address already in use" and identical stack trace repeat here)
[2021-09-20T01:19:43,699][INFO ][org.logstash.beats.Server][main][7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28] Starting server on port: 5044
[2021-09-20T01:19:49,771][ERROR][logstash.javapipeline ][main][7229c3a66cd58b359b8a161ac7fc6463350bfaa1ccd0f1c3f1c461870a1c7f28] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
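Since the netstat output above shows that port 5044 is already in LISTEN, my guess is that the beats input is being declared twice, for example if two files under /etc/logstash/conf.d both contained something like this (hypothetical sketch, not my actual config):
input {
  beats {
    port => 5044
  }
}
Then the first declaration would bind the port and every further one would fail with exactly this BindException and keep restarting. Does that theory make sense?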
I am not using any username and password in Elasticsearch, just an nginx proxy in front of Kibana that asks for a username and password.