Yes. The particular task you quoted is starting some shards so that they can be searched, which seems important to me, but it shouldn't really take very long.
After some time the shards get assigned and the status changes to green.
I feel that it takes too much time. I switched from my old node (which has an older Elasticsearch version) to Docker, and I didn't have that problem on the old node.
Nor do I. It should not take nearly this long to start this cluster. Some tasks have been waiting longer than 6 minutes to complete.
Are you using local disks for storage, or are you using something network-attached?
Can you share all the logs that the node writes from when it starts until when the cluster is fully green? Use https://gist.github.com/ as it'll be too large for a forum post.
Now I started only Elasticsearch and Kibana, without Logstash, and the cluster state update task didn't show up again.
But it still takes around 8 minutes for the cluster health to change to "yellow" (yellow because I have set replicas to 1 on some indices, but I only have one node).
Is that normal?
Here are my logs from the startup:
Do you think it has something to do with insufficient resources? Or maybe the size or number of shards? But I had the same shard sizes on my old node and it wasn't such a big problem there. Querying also feels like it takes a bit longer than before.
No, it's definitely not normal for this to take 8 minutes. I was hoping for some kind of logged message indicating what was running slowly but I see only silence for those 8 minutes.
Could you add logger.org.elasticsearch.cluster.service: TRACE to the elasticsearch.yml file and restart it? That should start to tell us a bit more about what's happening in that missing time.
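If you're running the official Docker image, a rough sketch of doing that (the container name es01 is just a placeholder; the config path assumes that image's layout):

# Append the trace logger inside the container, then restart it
docker exec es01 bash -c "echo 'logger.org.elasticsearch.cluster.service: TRACE' >> /usr/share/elasticsearch/config/elasticsearch.yml"
docker restart es01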
Could you also get some thread dumps by calling GET _nodes/hot_threads?threads=9999 a few times during those 8 minutes?
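Something like this should collect them, assuming the node is reachable on localhost:9200 without authentication:

# Grab a hot-threads dump every 30 seconds for roughly 8 minutes while the node starts up
for i in $(seq 1 16); do
  echo "==== $(date) ====" >> hot_threads.txt
  curl -s "localhost:9200/_nodes/hot_threads?threads=9999" >> hot_threads.txt
  sleep 30
done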
The only slightly strange thing I noticed in your log was that the JVM options contain -Xms1g, -Xmx1g, -Xmx10g and -Xms10g. It's picking up the larger, 10g values, but you probably want to get rid of the 1g ones to avoid confusion in future.
Yes, I noticed that too and was a bit confused. I set it in my docker-compose.yml as an environment variable, but I see it using 10g of memory.
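For reference, one way to confirm which settings actually took effect (assuming the node listens on localhost:9200 without security) is the nodes info API:

# Print the JVM heap limits and the raw input arguments the node was started with
curl -s "localhost:9200/_nodes/jvm?filter_path=nodes.*.jvm.mem,nodes.*.jvm.input_arguments&pretty"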
Logs with logger.org.elasticsearch.cluster.service: TRACE are in startup.log.
I put the GET _nodes/hot_threads?threads=9999 responses in hot_threads.txt.
Thanks. From what I can see this looks like a normal startup: the node is busy starting up all the shards, reading data from disk etc. Can you tell us a bit more about the cluster, i.e. how many indices and shards you have, and how much data there is in total?
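For example, the cat APIs give a quick overview (assuming localhost:9200):

# List all indices with doc counts and on-disk size, biggest first
curl -s "localhost:9200/_cat/indices?v&s=store.size:desc"
# Count the total number of shards in the cluster
curl -s "localhost:9200/_cat/shards" | wc -l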
I actually switched from an SSD to an HDD. I have made some space and will switch back to an SSD to see if that changes anything. I will test it next week and update here.
I have around 125GB of data.
Here is my output for the indices info:
I actually plan to reindex from weekly indices to monthly indices, because I have a lot of small indices. Is that a good idea?
That could well explain the difference. Starting Elasticsearch can be quite I/O intensive, so I'm not surprised that a spinning disk is slower.
Yes, these indices are mostly very small, and fewer larger indices would be preferable. I also think you should consider reducing the number of shards in each index from 5 to 1.
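For the weekly-to-monthly consolidation, something along these lines could work (a sketch assuming localhost:9200; the target name logstash-2018-09 and the listed source indices are just examples to adapt to your naming scheme):

# Create a monthly target index with a single primary shard and no replicas
curl -s -X PUT "localhost:9200/logstash-2018-09" -H 'Content-Type: application/json' -d '
{
  "settings": { "index.number_of_shards": 1, "index.number_of_replicas": 0 }
}'

# Copy the weekly indices into the monthly one
curl -s -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d '
{
  "source": { "index": ["logstash-2018.35", "logstash-2018.36"] },
  "dest": { "index": "logstash-2018-09" }
}'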
The SSD changed a lot. It now takes about 2-3 minutes for the node to be ready. That is more reasonable now, right?
I tried to shrink my indices but ran into problems.
The new index created by the shrink operation is marked as red.
GET /_cluster/allocation/explain gave the following response:
{
"index" : "logstash-2018.35_shrinked",
"shard" : 0,
"primary" : true,
"current_state" : "unassigned",
"unassigned_info" : {
"reason" : "ALLOCATION_FAILED",
"at" : "2019-04-08T15:59:48.615Z",
"failed_allocation_attempts" : 1,
"details" : "failed shard on node [MEqShQSoRoqabapHo385Fw]: failed to create shard, failure IndexNotFoundException[no such index]",
"last_allocation_status" : "no"
},
"can_allocate" : "no",
"allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
"node_allocation_decisions" : [
{
"node_id" : "MEqShQSoRoqabapHo385Fw",
"node_name" : "MEqShQS",
"transport_address" : "172.18.0.2:9300",
"node_decision" : "no",
"weight_ranking" : 1,
"deciders" : [
{
"decider" : "max_retry",
"decision" : "NO",
"explanation" : "shard has exceeded the maximum number of retries [1] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2019-04-08T15:59:48.615Z], failed_attempts[1], delayed=false, details[failed shard on node [MEqShQSoRoqabapHo385Fw]: failed to create shard, failure IndexNotFoundException[no such index]], allocation_status[deciders_no]]]"
}
]
}
]
}
This is what I found in the logs:
A FileSystemException was thrown.
I already retried the allocation with POST /_cluster/reroute?retry_failed=true. Same result.
There should also be enough storage left... I am not sure what the problem is.
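For reference, the allocation cat API shows disk usage per node (assuming localhost:9200):

# Per-node shard count, disk used and disk available
curl -s "localhost:9200/_cat/allocation?v"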
Sounds like something went wrong reading this file, or maybe writing it, but I've never seen this before. Shrinking relies on hard-linking the index files, but most file systems support this just fine.
Try deleting the shrunk index logstash-2018.35_shrinked and shrinking it again, and if it fails for the same reason then it'd be good if you could find the full stack trace containing that FileSystemException and share it here.
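In case it's useful, the whole sequence could look roughly like this (assuming localhost:9200; using logstash-2018.35 as the source index is a guess based on the target's name, so adjust as needed):

# Remove the red shrink target
curl -s -X DELETE "localhost:9200/logstash-2018.35_shrinked"

# Block writes on the source index so it can be shrunk
curl -s -X PUT "localhost:9200/logstash-2018.35/_settings" -H 'Content-Type: application/json' -d '
{ "settings": { "index.blocks.write": true } }'

# Shrink into a fresh single-shard index
curl -s -X POST "localhost:9200/logstash-2018.35/_shrink/logstash-2018.35_shrinked" -H 'Content-Type: application/json' -d '
{ "settings": { "index.number_of_shards": 1, "index.number_of_replicas": 0 } }'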
Thanks, at least that tells us a bit more. Does this file exist: /usr/share/elasticsearch/data/nodes/0/indices/iI_tnAUtSri2X3P-4sm18w/0/index/_7.cfs? Is it readable (e.g. try sha1sum /usr/share/elasticsearch/data/nodes/0/indices/iI_tnAUtSri2X3P-4sm18w/0/index/_7.cfs)? Does dmesg show any kernel messages about filesystem problems?
Oh, hang on: the logs you shared above are dated 2019-04-08T16:07:56,775, i.e. from a few days ago. There should be more recent messages from your later attempt at the shrink.
Yes, that is the old one, but the ones from today are the same:
[2019-04-11T12:31:04,944][WARN ][o.e.i.c.IndicesClusterStateService] [-BmHCz1] [[logstash-2018.35_shrinked6][0]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [logstash-2018.35_shrinked6][0]: Recovery failed on {-BmHCz1}{-BmHCz1qSZ2qqCUw33JjUw}{NWDdRyRmR6Kd7_6SNmQIUA}{172.18.0.2}{172.18.0.2:9300}
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$12(IndexShard.java:2455) [elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.7.1.jar:6.7.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:835) [?:?]
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed to recover from local shards
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromLocalShards$3(StoreRecovery.java:140) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:310) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromLocalShards(StoreRecovery.java:124) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.index.shard.IndexShard.recoverFromLocalShards(IndexShard.java:1725) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$12(IndexShard.java:2450) ~[elasticsearch-6.7.1.jar:6.7.1]
... 4 more
Caused by: java.nio.file.FileSystemException: /usr/share/elasticsearch/data/nodes/0/indices/IrvCj5v7T1OKKdj-a1HHjg/0/index/_7.cfs: Invalid argument
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:100) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182) ~[?:?]
at java.nio.channels.FileChannel.open(FileChannel.java:292) ~[?:?]
at java.nio.channels.FileChannel.open(FileChannel.java:345) ~[?:?]
at org.apache.lucene.util.IOUtils.fsync(IOUtils.java:460) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.store.FSDirectory.fsync(FSDirectory.java:332) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.store.FSDirectory.sync(FSDirectory.java:287) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.store.FilterDirectory.sync(FilterDirectory.java:84) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.store.LockValidatingDirectoryWrapper.sync(LockValidatingDirectoryWrapper.java:68) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4824) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3297) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3465) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3430) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
at org.elasticsearch.index.shard.StoreRecovery.addIndices(StoreRecovery.java:189) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromLocalShards$3(StoreRecovery.java:132) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:310) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromLocalShards(StoreRecovery.java:124) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.index.shard.IndexShard.recoverFromLocalShards(IndexShard.java:1725) ~[elasticsearch-6.7.1.jar:6.7.1]
at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$12(IndexShard.java:2450) ~[elasticsearch-6.7.1.jar:6.7.1]
... 4 more
Does it exist? What does stat /usr/share/elasticsearch/data/nodes/0/indices/IrvCj5v7T1OKKdj-a1HHjg/0/index/_7.cfs say? How about sha1sum /usr/share/elasticsearch/data/nodes/0/indices/IrvCj5v7T1OKKdj-a1HHjg/0/index/_7.cfs?