Hi all, I have a problem with Elasticsearch: the Java process keeps stopping. Please see the logs below.

org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter$TermsWriter.write(BlockTreeTermsWriter.java:869) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:343) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:164) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:230) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:115) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4443) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4083) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:624) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.elasticsearch.index.engine.ElasticsearchConcurrentMergeScheduler.doMerge(ElasticsearchConcurrentMergeScheduler.java:99) ~[elasticsearch-6.2.2.jar:6.2.2]
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:661) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
[2018-03-10T18:07:46,696][INFO ][o.e.n.Node ] [] initializing ...
[2018-03-10T18:07:46,818][INFO ][o.e.e.NodeEnvironment ] [9hb_Gdj] using [1] data paths, mounts [[/var (/dev/md126)]], net usable_space [1.6tb], net total_space [1.7tb], types [ext4]
[2018-03-10T18:07:46,819][INFO ][o.e.e.NodeEnvironment ] [9hb_Gdj] heap size [989.8mb], compressed ordinary object pointers [true]
[2018-03-10T18:07:46,948][INFO ][o.e.n.Node ] node name [9hb_Gdj] derived from node ID [9hb_GdjPTH-qN-aMrqivkg]; set [node.name] to override
[2018-03-10T18:07:46,948][INFO ][o.e.n.Node ] version[6.2.2], pid[32615], build[10b1edd/2018-02-16T19:01:30.685723Z], OS[Linux/4.9.0-6-amd64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_161/25.161-b12]
[2018-03-10T18:07:46,948][INFO ][o.e.n.Node ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.DNj7es11, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [aggs-matrix-stats]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [analysis-common]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [ingest-common]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [lang-expression]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [lang-mustache]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [lang-painless]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [mapper-extras]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [parent-join]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [percolator]
[2018-03-10T18:07:47,571][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [rank-eval]
[2018-03-10T18:07:47,572][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [reindex]
[2018-03-10T18:07:47,572][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [repository-url]
[2018-03-10T18:07:47,572][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [transport-netty4]
[2018-03-10T18:07:47,572][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [tribe]
[2018-03-10T18:07:47,572][INFO ][o.e.p.PluginsService ] [9hb_Gdj] no plugins loaded
[2018-03-10T18:07:50,027][INFO ][o.e.d.DiscoveryModule ] [9hb_Gdj] using discovery type [zen]
[2018-03-10T18:07:50,521][INFO ][o.e.n.Node ] initialized
[2018-03-10T18:07:50,521][INFO ][o.e.n.Node ] [9hb_Gdj] starting ...
[2018-03-10T18:07:50,656][INFO ][o.e.t.TransportService ] [9hb_Gdj] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2018-03-10T18:07:53,827][INFO ][o.e.c.s.MasterService ] [9hb_Gdj] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {9hb_Gdj}{9hb_GdjPTH-qN-aMrqivkg}{A0pnHA65QCOxGr9sd_d3Mg}{localhost}{127.0.0.1:9300}
[2018-03-10T18:07:53,832][INFO ][o.e.c.s.ClusterApplierService] [9hb_Gdj] new_master {9hb_Gdj}{9hb_GdjPTH-qN-aMrqivkg}{A0pnHA65QCOxGr9sd_d3Mg}{localhost}{127.0.0.1:9300}, reason: apply cluster state (from master [master {9hb_Gdj}{9hb_GdjPTH-qN-aMrqivkg}{A0pnHA65QCOxGr9sd_d3Mg}{localhost}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-03-10T18:07:53,850][INFO ][o.e.h.n.Netty4HttpServerTransport] [9hb_Gdj] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2018-03-10T18:07:53,850][INFO ][o.e.n.Node ] [9hb_Gdj] started
[2018-03-10T18:07:55,284][INFO ][o.e.g.GatewayService ] [9hb_Gdj] recovered [19] indices into cluster_state
[2018-03-10T18:08:15,566][INFO ][o.e.c.r.a.AllocationService] [9hb_Gdj] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2015.05.20][3], [logstash-2015.05.20][2], [logstash-2015.05.20][4]] ...]).

Please format your code, logs, or configuration files using the </> icon, as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.

Also, move your title's content into the body and add a more explicit title.

Please update your post.

Hi Dadoonet

Thanks for your reply, but I'm new at this and I don't know how to structure it, because I can't tell where each line starts and where it stops.

We have more than 23 GB of memory and 1.7 TB of disk, running Debian Linux 8.

Just use markdown formatting, as I explained, to make all your logs look like code.

Looking at this, it's working, but to get it working I had to disable the Filebeat index. My implementation is for Squid in transparent mode.

I need to increase the Java memory to fit my implementation.

[2018-03-11T04:57:23,783][WARN ][o.e.b.JNANatives ] Unable to lock JVM Memory: error=12, reason=Não foi possível alocar memória
[2018-03-11T04:57:23,789][WARN ][o.e.b.JNANatives ] This can result in part of the JVM being swapped out.
[2018-03-11T04:57:23,790][WARN ][o.e.b.JNANatives ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2018-03-11T04:57:23,790][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

[2018-03-11T04:57:23,790][WARN ][o.e.b.JNANatives ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2018-03-11T04:57:23,935][INFO ][o.e.n.Node ] [] initializing ...
[2018-03-11T04:57:24,073][INFO ][o.e.e.NodeEnvironment ] [9hb_Gdj] using [1] data paths, mounts [[/var (/dev/md126)]], net usable_space [1.6tb], net total_space [1.7tb], types [ext4]

[2018-03-11T04:57:24,073][INFO ][o.e.e.NodeEnvironment ] [9hb_Gdj] heap size [989.8mb], compressed ordinary object pointers [true]
[2018-03-11T04:57:24,194][INFO ][o.e.n.Node ] node name [9hb_Gdj] derived from node ID [9hb_GdjPTH-qN-aMrqivkg]; set [node.name] to override

[2018-03-11T04:57:24,194][INFO ][o.e.n.Node ] version[6.2.2], pid[3009], build[10b1edd/2018-02-16T19:01:30.685723Z], OS[Linux/4.9.0-6-amd64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_161/25.161-b12]

[2018-03-11T04:57:24,194][INFO ][o.e.n.Node ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.yJqDcFtp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch]

[2018-03-11T04:57:24,782][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [aggs-matrix-stats]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [analysis-common]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [ingest-common]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [lang-expression]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [lang-mustache]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [lang-painless]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [mapper-extras]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [parent-join]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [percolator]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [rank-eval]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [reindex]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [repository-url]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [transport-netty4]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService ] [9hb_Gdj] loaded module [tribe]
[2018-03-11T04:57:24,784][INFO ][o.e.p.PluginsService ] [9hb_Gdj] no plugins loaded
[2018-03-11T04:57:27,236][INFO ][o.e.d.DiscoveryModule ] [9hb_Gdj] using discovery type [zen]
[2018-03-11T04:57:27,732][INFO ][o.e.n.Node ] initialized
[2018-03-11T04:57:27,732][INFO ][o.e.n.Node ] [9hb_Gdj] starting ...
[2018-03-11T04:57:27,862][INFO ][o.e.t.TransportService ] [9hb_Gdj] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2018-03-11T04:57:27,906][WARN ][o.e.b.BootstrapChecks ] [9hb_Gdj] memory locking requested for elasticsearch process but memory is not locked

[2018-03-11T04:57:30,956][INFO ][o.e.c.s.MasterService ] [9hb_Gdj] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {9hb_Gdj}{9hb_GdjPTH-qN-aMrqivkg}{ZNNXw_2bSO6hTRwwWeIlEg}{localhost}{127.0.0.1:9300}

[2018-03-11T04:57:30,961][INFO ][o.e.c.s.ClusterApplierService] [9hb_Gdj] new_master {9hb_Gdj}{9hb_GdjPTH-qN-aMrqivkg}{ZNNXw_2bSO6hTRwwWeIlEg}{localhost}{127.0.0.1:9300}, reason: apply cluster state (from master [master {9hb_Gdj}{9hb_GdjPTH-qN-aMrqivkg}{ZNNXw_2bSO6hTRwwWeIlEg}{localhost}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])

[2018-03-11T04:57:30,981][INFO ][o.e.h.n.Netty4HttpServerTransport] [9hb_Gdj] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}

[2018-03-11T04:57:30,982][INFO ][o.e.n.Node ] [9hb_Gdj] started
[2018-03-11T04:57:32,507][INFO ][o.e.g.GatewayService ] [9hb_Gdj] recovered [19] indices into cluster_state
[2018-03-11T04:57:59,050][INFO ][o.e.c.r.a.AllocationService] [9hb_Gdj] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2015.05.20][0]] ...]).

[2018-03-11T04:58:08,287][INFO ][o.e.c.m.MetaDataCreateIndexService] [9hb_Gdj] [logstash-2018.03.11] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]

[2018-03-11T04:58:09,568][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] create_mapping [doc]
[2018-03-11T04:59:37,168][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] update_mapping [doc]
[2018-03-11T04:59:37,404][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] update_mapping [doc]
[2018-03-11T04:59:37,495][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] update_mapping [doc]
[2018-03-11T07:23:37,898][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] update_mapping [doc]

Please format your logs.

Hi Dadoonet, as I said, I don't know how to format it; I'm new to this challenge. I guess it won't take you long, so could you take one of my posted logs, format it, and show me, just this first time?

I would appreciate it.

What is unclear when I say to use:

```
Insert your logs here
```

?

Thanks for the support.

"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>16}


[2018-03-11T04:57:23,783][WARN ][o.e.b.JNANatives         ] Unable to lock JVM Memory: error=12, reason=Não foi possível alocar memória
[2018-03-11T04:57:23,789][WARN ][o.e.b.JNANatives         ] This can result in part of the JVM being swapped out.
[2018-03-11T04:57:23,790][WARN ][o.e.b.JNANatives         ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2018-03-11T04:57:23,790][WARN ][o.e.b.JNANatives         ] These can be adjusted by modifying /etc/security/limits.conf, for example: 
	# allow user 'elasticsearch' mlockall
			elasticsearch soft memlock unlimited
			elasticsearch hard memlock unlimited


[2018-03-11T04:57:23,790][WARN ][o.e.b.JNANatives         ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2018-03-11T04:57:23,935][INFO ][o.e.n.Node               ] [] initializing ...
[2018-03-11T04:57:24,073][INFO ][o.e.e.NodeEnvironment    ] [9hb_Gdj] using [1] data paths, mounts [[/var (/dev/md126)]], net usable_space [1.6tb], net total_space [1.7tb], types [ext4]

[2018-03-11T04:57:24,073][INFO ][o.e.e.NodeEnvironment    ] [9hb_Gdj] heap size [989.8mb], compressed ordinary object pointers [true]
[2018-03-11T04:57:24,194][INFO ][o.e.n.Node               ] node name [9hb_Gdj] derived from node ID [9hb_GdjPTH-qN-aMrqivkg]; set [node.name] to override

[2018-03-11T04:57:24,194][INFO ][o.e.n.Node               ] version[6.2.2], pid[3009], build[10b1edd/2018-02-16T19:01:30.685723Z], OS[Linux/4.9.0-6-amd64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_161/25.161-b12]

[2018-03-11T04:57:24,194][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.yJqDcFtp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch]

[2018-03-11T04:57:24,782][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [aggs-matrix-stats]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [analysis-common]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [ingest-common]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [lang-expression]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [lang-mustache]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [lang-painless]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [mapper-extras]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [parent-join]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [percolator]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [rank-eval]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [reindex]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [repository-url]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [transport-netty4]
[2018-03-11T04:57:24,783][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] loaded module [tribe]
[2018-03-11T04:57:24,784][INFO ][o.e.p.PluginsService     ] [9hb_Gdj] no plugins loaded
[2018-03-11T04:57:27,236][INFO ][o.e.d.DiscoveryModule    ] [9hb_Gdj] using discovery type [zen]
[2018-03-11T04:57:27,732][INFO ][o.e.n.Node               ] initialized
[2018-03-11T04:57:27,732][INFO ][o.e.n.Node               ] [9hb_Gdj] starting ...
[2018-03-11T04:57:27,862][INFO ][o.e.t.TransportService   ] [9hb_Gdj] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2018-03-11T04:57:27,906][WARN ][o.e.b.BootstrapChecks    ] [9hb_Gdj] memory locking requested for elasticsearch process but memory is not locked

[2018-03-11T04:57:30,956][INFO ][o.e.c.s.MasterService    ] [9hb_Gdj] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {9hb_Gdj}{9hb_GdjPTH-qN-aMrqivkg}{ZNNXw_2bSO6hTRwwWeIlEg}{localhost}{127.0.0.1:9300}

[2018-03-11T04:57:30,961][INFO ][o.e.c.s.ClusterApplierService] [9hb_Gdj] new_master {9hb_Gdj}{9hb_GdjPTH-qN-aMrqivkg}{ZNNXw_2bSO6hTRwwWeIlEg}{localhost}{127.0.0.1:9300}, reason: apply cluster state (from master [master {9hb_Gdj}{9hb_GdjPTH-qN-aMrqivkg}{ZNNXw_2bSO6hTRwwWeIlEg}{localhost}{127.0.0.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])

[2018-03-11T04:57:30,981][INFO ][o.e.h.n.Netty4HttpServerTransport] [9hb_Gdj] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}

[2018-03-11T04:57:30,982][INFO ][o.e.n.Node               ] [9hb_Gdj] started
[2018-03-11T04:57:32,507][INFO ][o.e.g.GatewayService     ] [9hb_Gdj] recovered [19] indices into cluster_state
[2018-03-11T04:57:59,050][INFO ][o.e.c.r.a.AllocationService] [9hb_Gdj] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2015.05.20][0]] ...]).

[2018-03-11T04:58:08,287][INFO ][o.e.c.m.MetaDataCreateIndexService] [9hb_Gdj] [logstash-2018.03.11] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]

[2018-03-11T04:58:09,568][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] create_mapping [doc]
[2018-03-11T04:59:37,168][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] update_mapping [doc]
[2018-03-11T04:59:37,404][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] update_mapping [doc]
[2018-03-11T04:59:37,495][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] update_mapping [doc]
[2018-03-11T07:23:37,898][INFO ][o.e.c.m.MetaDataMappingService] [9hb_Gdj] [logstash-2018.03.11/FmEPa6S9R12uUrb2Ke_h1g] update_mapping [doc]

It's a Java memory error: java.lang.OutOfMemoryError: Java heap space

java.lang.OutOfMemoryError: Java heap space
	at org.apache.lucene.util.packed.PackedLongValues$Builder.<init>(PackedLongValues.java:185) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.util.packed.DeltaPackedLongValues$Builder.<init>(DeltaPackedLongValues.java:59) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.util.packed.PackedLongValues.deltaPackedBuilder(PackedLongValues.java:55) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.util.packed.PackedLongValues.deltaPackedBuilder(PackedLongValues.java:60) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.index.SortedSetDocValuesWriter.<init>(SortedSetDocValuesWriter.java:70) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.index.DefaultIndexingChain.indexDocValue(DefaultIndexingChain.java:562) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:466) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:392) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:240) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1729) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1464) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
	at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:1070) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:1012) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:878) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:738) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:707) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:673) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:548) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequest(TransportShardBulkAction.java:140) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:236) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:123) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:110) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:72) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:1034) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:1012) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:103) ~[elasticsearch-6.2.2.jar:6.2.2]
	at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:238) ~[elasticsearch-6.2.2.jar:6.2.2]
[2018-03-10T14:18:19,314][INFO ][o.e.n.Node               ] [] initializing ...
[2018-03-10T14:18:19,442][INFO ][o.e.e.NodeEnvironment    ] [9hb_Gdj] using [1] data paths, mounts [[/var (/dev/md126)]], net usable_space [1.6tb], net total_space [1.7tb], types [ext4]
[2018-03-10T14:18:19,442][INFO ][o.e.e.NodeEnvironment    ] [9hb_Gdj] heap size [989.8mb], compressed ordinary object pointers [true]
[2018-03-10T14:18:19,568][INFO ][o.e.n.Node               ] node name [9hb_Gdj] derived from node ID [9hb_GdjPTH-qN-aMrqivkg]; set [node.name] to override
[2018-03-10T14:18:19,568][INFO ][o.e.n.Node               ] version[6.2.2], pid[19121], build[10b1edd/2018-02-16T19:01:30.685723Z], OS[Linux/4.9.0-6-amd64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_161/25.161-b12]
[2018-03-10T14:18:19,568][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.pfVLbZJk, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch]

Sounds like you are starting with only 1 GB of heap: -Xms1g, -Xmx1g

Maybe give it more memory?

How much data do you have? Are you running a query when this happens?
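
For example, you could raise the heap in `/etc/elasticsearch/jvm.options`. A minimal sketch, assuming the machine really has ~23 GB of RAM; the 8g figure is an assumption, following the usual rule of thumb of giving Elasticsearch about half the available RAM, with Xms equal to Xmx:

```
# /etc/elasticsearch/jvm.options (sketch; adjust sizes to your hardware)
# Keep the minimum and maximum heap equal to avoid resize pauses.
-Xms8g
-Xmx8g
```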

The project I have is this: Squid in transparent mode sends logs to Logstash, and Elasticsearch receives them and feeds Kibana, based on this project: https://reticent.net.nz/visualising-kibana-squid-logs/

I have 23 GB of memory, with 15 GB allocated to the Squid cache, and 1.7 TB of disk for cache.

I need to increase the Elasticsearch memory; either it is stopping Java, or Java is stopping Elasticsearch.

Not sure I'm fully understanding.

Anyway, you need to stop the node, change the jvm.options settings, and start the node again.
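
On a Debian package install, that would typically look like this (a sketch; it assumes the stock systemd service and config paths):

```
sudo systemctl stop elasticsearch
sudo nano /etc/elasticsearch/jvm.options   # raise -Xms and -Xmx here
sudo systemctl start elasticsearch
```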

Hi Dadoonet

Please check this: I have adjusted the memory to increase Elasticsearch, but I am getting the error below, "Unable to lock JVM Memory". I have configured the limits.conf file to include both the soft and hard lines, but I am still getting this error.

The second scenario is about those six WARN lines you can see. Why is it causing exceptions on the HTTP traffic?

I edited your posts to remove the unneeded citations you have been adding.
Could you do the same for the last 2 posts?

I'd recommend reading this whole chapter: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html
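
One note on the "Unable to lock JVM Memory" warning: when Elasticsearch runs under systemd (the default for the Debian package), editing `/etc/security/limits.conf` is not enough, because systemd services do not read PAM limits. A sketch of the usual systemd override, assuming the standard service name:

```
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

Then reload and restart (`sudo systemctl daemon-reload && sudo systemctl restart elasticsearch`). You can verify with `GET _nodes?filter_path=**.mlockall`, which should report `true`.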

Should I install Java 8 or Java 9?

Both work.
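
If in doubt, you can check which JVM Elasticsearch will pick up; a quick sketch:

```
java -version   # the JVM on the PATH; if JAVA_HOME is set, Elasticsearch uses that instead
```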

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.