[2018-05-13T05:42:01,336][WARN ][r.suppressed ] path: /.kibana/_search, params: {size=10000, index=.kibana, from=0}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:274) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:132) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:243) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:107) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$4(InitialSearchPhase.java:205) ~[elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:184) [elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) [elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.1.1.jar:6.1.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.1.1.jar:6.1.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Will someone please help me out? I am unable to resolve this issue.
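For anyone hitting the same error: "all shards failed" on `/.kibana/_search` usually means the shards of the `.kibana` index are unassigned (cluster health red). A quick way to check from the shell; the endpoint `http://localhost:9200` is an assumption, adjust to your setup:

```shell
# ES endpoint is an assumption; change it if your cluster listens elsewhere.
ES="${ES:-http://localhost:9200}"

# Overall cluster health: "red" means at least one primary shard is unassigned.
curl -s --max-time 5 "$ES/_cluster/health?pretty" || echo "cluster not reachable"

# Per-shard state of the .kibana index: look for UNASSIGNED entries.
curl -s --max-time 5 "$ES/_cat/shards/.kibana?v" || echo "cluster not reachable"

# Ask Elasticsearch to explain why a shard is unassigned (available since 5.0).
curl -s --max-time 5 "$ES/_cluster/allocation/explain?pretty" || echo "cluster not reachable"
```

If `_cat/shards` shows the `.kibana` shard as UNASSIGNED, the allocation explain output names the blocking reason (disk watermark, allocation rules, failed node, etc.).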
Hi @anuj6664,
Are your Elasticsearch and Kibana services both running?
And which versions and which OS are you using?
Thanks & Regards,
Krunal.
@Krunal_kalaria
Yes, both are running: Elasticsearch 6.1.1 and Kibana 6.1.1, on Ubuntu 16.04.
What is the total memory on the machine, and how much of it have you allocated in the Elasticsearch jvm.options?
Also, try this command:
sudo swapoff -a
Thanks & Regards,
Krunal.
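As a gentler alternative to turning swap off system-wide, Elasticsearch can pin its heap in RAM. A sketch of the relevant setting, assuming the process is allowed to lock memory (e.g. `LimitMEMLOCK=infinity` in the systemd unit, or a `memlock unlimited` entry in /etc/security/limits.conf):

```yaml
# elasticsearch.yml: lock the JVM heap so the OS never swaps it out.
bootstrap.memory_lock: true
```

After a restart, `GET _nodes?filter_path=**.mlockall` should report `true`; if it reports `false`, the memlock limit still needs raising.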
@Krunal_kalaria 5 GB for Elasticsearch and 1 GB for Logstash; I am using the 8 GB server only for the ELK stack.
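For reference, the sizing rule quoted in the elasticsearch.yml comments themselves is to give the heap about half of the machine's RAM (and, as a general JVM rule, stay below the ~31 GB compressed-oops threshold), leaving the rest for the OS page cache and other processes. On an 8 GB box that also runs Logstash and Kibana, 5 GB of heap is above that line. A small sketch of the arithmetic:

```python
def recommended_heap_gb(total_ram_gb: float) -> float:
    """Half of RAM, capped just below the ~31 GB compressed-oops limit."""
    return min(total_ram_gb / 2, 31.0)

# 8 GB server running the whole ELK stack:
print(recommended_heap_gb(8))   # 4.0 -> -Xms4g / -Xmx4g, not 5g
```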
Can you share your elasticsearch.yml and kibana.yml files, and also your jvm.options file?
@Krunal_kalaria
# ======================== Elasticsearch Configuration =========================
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
#xpack.security.enabled: false
#cluster.name: my-application
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
#node.name: node-1
# Add custom attributes to the node:
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /mnt/xvdv/elk-data
# Path to log files:
path.logs: /mnt/xvdv/elk-logs
# ----------------------------------- Memory -----------------------------------
#bootstrap.memory_lock: true
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# Elasticsearch performs poorly when the system is swapping the memory.
# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
#network.host: xxx.xxx.x.x
# Set a custom port for HTTP:
http.port: 9200
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#discovery.zen.minimum_master_nodes:
# For more information, consult the zen discovery module documentation.
# ---------------------------------- Gateway -----------------------------------
# Block initial recovery after a full cluster restart until N nodes are started:
#gateway.recover_after_nodes: 3
# For more information, consult the gateway module documentation.
# ---------------------------------- Various -----------------------------------
# Require explicit names when deleting indices:
#action.destructive_requires_name: true
@Krunal_kalaria
kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
server.name: "kibana.xxx.xx"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana"
elasticsearch.password: "xxxxxxxxx"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
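Since kibana.yml sets `elasticsearch.username` and `elasticsearch.password`, it may also be worth confirming that those credentials can actually read the `.kibana` index, since that is exactly the search that fails in the log. A sketch; the host is an assumption and `xxxxxxxxx` is the redacted password from the file, so substitute the real one:

```shell
# Credentials are the placeholders from the pasted kibana.yml; replace the password.
curl -s --max-time 5 -u kibana:xxxxxxxxx \
  "http://localhost:9200/.kibana/_search?size=1&pretty" || echo "request failed"
```

A 401 here points at authentication rather than shard allocation as the cause.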
@Krunal_kalaria
jvm.options
## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
## -Xms4g
## -Xmx4g
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms5g
-Xmx5g

################################################################
## Expert settings
################################################################
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
################################################################

## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## optimizations

# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch

## basic

# force the server VM
-server

# explicitly set the stack size
-Xss1m

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
-Djna.nosys=true

# turn off a JDK optimization that throws away stack traces for common
# exceptions because stack traces are important for debugging
-XX:-OmitStackTraceInFastThrow

# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0

# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=/heap/dump/path

## GC logging

#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${loggc}

# By default, the GC log file will not rotate.
# By uncommenting the lines below, the GC log file
# will be rotated every 128MB at most 32 times.
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=32
-XX:GCLogFileSize=128M
Hey, I have looked at your three files and they look good, but can you check server.host in kibana.yml? You have it set to 0.0.0.0; try commenting that line out so it falls back to the default of localhost. If you are only accessing Kibana from localhost, no other change is needed. Try this; it may work for you.
Thanks & Regards,
Krunal.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.