Logstash can't index event to Elasticsearch

Hello everyone,
First of all, I want to introduce my problem: what I want to do and how I have everything installed.
I have the ELK Stack installed with Docker, all of it running on an Ubuntu VM virtualized with VMware.
On the other hand, I have an Ubuntu server from which I am going to collect the logs of some servers with Filebeat.
Filebeat already works: it collects the logs and connects to Logstash.
The ELK stack should work too, but when I look at the Logstash logs in Docker I get this error.
When I check the Elasticsearch or Kibana logs, they work properly.
I have already looked for information on similar topics, but I cannot find anyone with the same error.

In Kibana I only see this, and I don't know if it comes from Filebeat or where it comes from.

On top of that, I have already checked that all the indices are open, so I do not understand where the error could be.
Here are my configuration files and the error.

docker compose logs -f logstash

[2021-02-12T09:54:40,760][WARN ][logstash.outputs.elasticsearch][main][4adfb87a563351eeacd0d5f84a3d4889120060933a3dd82a5ba02ab713b550c3] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x8202d25>], :response=>{"index"=>{"_index"=>"logstash", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"index_closed_exception", "reason"=>"closed", "index_uuid"=>"jjnQtLBjS9qYgWjlBL9lhw", "index"=>"logstash-2021.02.10-000001"}}}}

pipeline/config.yml

input {
        beats {
                port => 5044
        }

        tcp {
                port => 5000
        }
}

## Add your filters / logstash plugins configuration here

output {
        elasticsearch {
                hosts => ["elasticsearch:9200"]
                user => "elastic"
                password => "changeme"
                ecs_compatibility => disabled
        }

}



Thank you very much.

That means the index is not writable.

What is the output from GET /_cat/indices?v&s=status:asc in the Kibana Console?
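
For reference, if an index does show up as closed in that list, reopening it should just be a matter of running this in the Console (index name taken from the error above, adjust it to whatever is actually closed):

POST /logstash-2021.02.10-000001/_open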

Thanks for your response.

That's the output.

health status index                             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-kibana-7-2021.02.12   H9-cACwYSC-_qcYnzAT38w   1   0       5496            0        1mb            1mb
green  open   .triggered_watches                9ntWQexAROa38XVK-KK3CQ   1   0          0          360    186.7kb        186.7kb
green  open   .monitoring-kibana-7-2021.02.17   R0yYiOFHRfSbS13_zOXiPQ   1   0         32            0    106.4kb        106.4kb
green  open   .monitoring-kibana-7-2021.02.16   XG-Z0UX0Qymor9MVFx-v3A   1   0       5508            0      1.1mb          1.1mb
green  open   .monitoring-kibana-7-2021.02.11   yyS8DqJ6Rw-qg2pZQit9Hw   1   0       5682            0        1mb            1mb
green  open   .monitoring-kibana-7-2021.02.10   2kNfHS6OTqOifgCdCG-YlQ   1   0       3100            0    634.5kb        634.5kb
yellow open   logstash1                         me9gXINORDuM4G31wDnhsQ   1   1          0            0       208b           208b
green  open   .apm-custom-link                  DRv4gRoAT0yfslR2uQm0OA   1   0          0            0       208b           208b
green  open   .kibana_task_manager_1            8UR3kom0RxutXVA7hqDyiw   1   0          6         1336    322.3kb        322.3kb
yellow open   logstash-2021.02.10-000001        jjnQtLBjS9qYgWjlBL9lhw   1   1     159794            0     43.1mb         43.1mb
green  open   .monitoring-alerts-7              _3WH6F4qQueU4GpcQW7u0Q   1   0          7          455    107.1kb        107.1kb
green  open   .watches                          dyidscj1Q5-su5BvRc_vJA   1   0          6         2750      1.4mb          1.4mb
green  open   .monitoring-logstash-7-2021.02.10 cajGc_drQB-_KJggJB-q0Q   1   0       4846            0      776kb          776kb
green  open   .monitoring-logstash-7-2021.02.12 sWsSs-DWRieBotWqeU5Vgg   1   0      14068            0      1.4mb          1.4mb
yellow open   test                              RDgizmoJQzSQR2ksTnK3_A   3   2          0            0       624b           624b
green  open   .apm-agent-configuration          QCStlZOtTmy1H6YTBhrCIw   1   0          0            0       208b           208b
green  open   .monitoring-logstash-7-2021.02.17 e3Hvnmj-Tx-4mrMqxOLFUQ   1   0        160            0    183.5kb        183.5kb
green  open   .kibana_1                         1iW0eMAOTjK8y5JBUR1WIw   1   0         79           34      2.1mb          2.1mb
green  open   .monitoring-logstash-7-2021.02.16 xkc-aOBvRd2BtkgxKLqnEQ   1   0      24170            0      2.1mb          2.1mb
green  open   .security-7                       61149E68RFOUiKheU7WmPA   1   0         55            0    210.4kb        210.4kb
green  open   .monitoring-es-7-2021.02.16       W9O0QFQfScq43RIkM0tTMg   1   0      80296       105990     48.2mb         48.2mb
green  open   .monitoring-es-7-2021.02.17       iQH7PgXzQJismlW5aPjAGQ   1   0        649          581      1.1mb          1.1mb
green  open   .async-search                     _rFKM08WTW2nbGikyrPHgw   1   0          7            0    162.9kb        162.9kb
green  open   .monitoring-es-7-2021.02.10       868GmJqSSzy-WrHfxTD6eA   1   0      29886        12906     15.6mb         15.6mb
green  open   .monitoring-es-7-2021.02.11       WiV5akMRQvmawgnb4f6GKA   1   0      59720        59598     35.8mb         35.8mb
green  open   .kibana-event-log-7.10.2-000001   EK_B9xvRSVaGBDwflXvdIg   1   0          7            0       38kb           38kb
green  open   .monitoring-es-7-2021.02.12       jmcYdbL2Tj-tfh7_NHR1sQ   1   0      65295         3762     32.9mb         32.9mb

My indices are logstash1 and logstash-2021.02.10-000001.

Cheers.

Both of those are open, so it's odd it would be complaining about it. Do you still see it in the log?
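
One other thing that might be worth checking, since your output writes through the logstash rollover alias rather than to an index directly: in the Kibana Console,

GET /_cat/aliases/logstash?v

should show which backing index the alias currently resolves to, and whether it is marked as the write index.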

Yes, when I run this:
docker logs -f "my logstash"

[2021-02-12T09:54:36,732][WARN ][logstash.outputs.elasticsearch][main][4adfb87a563351eeacd0d5f84a3d4889120060933a3dd82a5ba02ab713b550c3] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x4b907cc5>], :response=>{"index"=>{"_index"=>"logstash", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"index_closed_exception", "reason"=>"closed", "index_uuid"=>"jjnQtLBjS9qYgWjlBL9lhw", "index"=>"logstash-2021.02.10-000001"}}}}

This is the output, and it appears repeated many times.

But if I wait more than about 30 seconds, this appears.

[2021-02-16T08:07:25,676][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2021-02-16T08:07:27,334][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2021-02-16T08:07:58,555][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.10.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[2021-02-16T08:07:59,800][WARN ]
[2021-02-16T08:08:00,679][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
[2021-02-16T08:08:00,962][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
[2021-02-16T08:08:01,023][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2021-02-16T08:08:01,025][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-02-16T08:08:03,681][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2021-02-16T08:08:03,731][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-02-16T08:08:03,824][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x69ce3428 run>"}
[2021-02-16T08:08:03,834][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x28456688 run>"}
[2021-02-16T08:08:03,831][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-02-16T08:41:30,719][INFO ][org.logstash.beats.BeatsHandler][main][874b08719fbf5e54e9130fcc65d8060ef10482454569146825a4eff69f0f0822] [local: 172.18.0.3:5044, remote: 192.168.14.19:59341] Handling exception: io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: -1 (caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: -1)
[2021-02-16T08:41:30,727][WARN ][io.netty.channel.DefaultChannelPipeline][main][874b08719fbf5e54e9130fcc65d8060ef10482454569146825a4eff69f0f0822] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: -1
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:61) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:370) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: -1
	at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.0.12.jar:?]
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.0.12.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
{:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>2}
[2021-02-16T12:14:40,838][ERROR][logstash.outputs.elasticsearch][main][4adfb87a563351eeacd0d5f84a3d4889120060933a3dd82a5ba02ab713b550c3] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
[2021-02-16T12:14:43,643][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
[2021-02-16T12:19:03,232][WARN ][logstash.outputs.elasticsearch][main][4adfb87a563351eeacd0d5f84a3d4889120060933a3dd82a5ba02ab713b550c3] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://elastic:xxxxxx@elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2021-02-16T12:19:03,349][ERROR][logstash.outputs.elasticsearch][main][4adfb87a563351eeacd0d5f84a3d4889120060933a3dd82a5ba02ab713b550c3] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2021-02-16T12:19:06,923][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash][e25002d4133dd2b3a77cec256442389c808ff2646bf98b7e5e7f891fecd77877] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://elastic:xxxxxx@elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2021-02-16T12:19:07,480][ERROR][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash][e25002d4133dd2b3a77cec256442389c808ff2646bf98b7e5e7f891fecd77877] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2021-02-16T12:19:08,678][ERROR][logstash.outputs.elasticsearch][main]
[2021-02-17T06:52:34,690][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2021-02-17T06:52:34,745][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-02-17T06:52:34,827][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x75d4012 run>"}
[2021-02-17T06:52:34,826][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x403069b4 run>"}
[2021-02-17T06:52:34,858][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-02-17T06:52:35,939][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2021-02-17T06:52:35,979][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2021-02-17T06:52:36,255][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-02-17T06:52:36,293][INFO ][logstash.inputs.tcp      ][main][40848b26c4d562852d9b5a7e2ee0ec7b64d9958d1d7b7fe8ba56895dc3804264] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
[2021-02-17T06:52:36,332][INFO ][org.logstash.beats.Server][main][874b08719fbf5e54e9130fcc65d8060ef10482454569146825a4eff69f0f0822] Starting server on port: 5044
[2021-02-17T06:52:36,355][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2021-02-17T06:52:36,788][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

I don't know what this means. Is it OK?

Cheers

That seems more relevant.

And the lines at the end, such as Pipelines running and Successfully started Logstash API endpoint, mean Logstash has started.

So unless you have more of that error after 2021-02-17T06:52:36, it's not an issue.
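
For example, something along these lines should show only occurrences of it after the restart (substitute your own container name; --since also accepts relative values like 1h):

docker logs --since "2021-02-17T06:52:36" <your logstash container> 2>&1 | grep "Could not index event"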


Thanks for your reply!

It has helped me; I am grateful.

Cheers
