Logstash is freaking out!

So I left the house for a little bit, and when I came back I saw this on the screen. I am not sure what happened.

Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2019-01-04T20:52:35,914][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-01-04T20:52:36,134][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2019-01-04T20:52:47,235][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-01-04T20:52:48,298][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-01-04T20:52:49,079][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-01-04T20:52:49,300][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-01-04T20:52:49,306][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-01-04T20:52:49,469][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2019-01-04T20:52:49,682][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-01-04T20:52:49,776][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-01-04T20:52:50,519][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2110e6cf run>"}
[2019-01-04T20:52:50,755][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-04T20:52:50,872][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-01-04T20:52:52,105][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-04T20:53:59,021][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2019-01-04T20:53:59,068][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2019-01-04T20:53:59,594][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-01-04T20:56:05,667][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[INDEXNAME][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[INDEXNAME][0]] containing [31] requests]"})

Full Error: https://pastebin.com/jQjzmqQn

I have tried the following:

  • Restarting
  • I can connect to localhost:9200 and everything works just fine. It returns:

    {
      "name" : "azn9SLJ",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "OIklYobCT6CW9Q9HhbwQDA",
      "version" : {
        "number" : "6.5.4",
        "build_flavor" : "default",
        "build_type" : "deb",
        "build_hash" : "d2ef93d",
        "build_date" : "2018-12-17T21:17:40.758843Z",
        "build_snapshot" : false,
        "lucene_version" : "7.5.0",
        "minimum_wire_compatibility_version" : "5.6.0",
        "minimum_index_compatibility_version" : "5.0.0"
      },
      "tagline" : "You Know, for Search"
    }

Kibana says "Kibana server is not ready yet".

Any help would be great.

Has your Elasticsearch server recovered yet? This is the behavior you see when ES is not ready to ingest.
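
If it keeps happening, you can also ask ES directly why a shard is not active. A minimal check, assuming the default single-node setup (both APIs exist in 6.x):

# Explains the first unassigned shard it finds
curl -X GET "localhost:9200/_cluster/allocation/explain?pretty"
# Lists every shard and its current state
curl -X GET "localhost:9200/_cat/shards?v"

The second one should show what state the INDEXNAME primary from your 503 was in.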

What's the output of this command?

curl -X GET "localhost:9200/_cluster/health?pretty"
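
On a healthy single node that usually comes back with "status" : "yellow" or "green". A trimmed, illustrative example (the values below are not from your cluster):

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "number_of_nodes" : 1,
  "active_primary_shards" : 5,
  "unassigned_shards" : 5
}

A "red" status would line up with the unavailable_shards_exception in your log.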

Here is my output.

Everything seems to be fine now. It's on its 19th file. I just wish I could speed it up. It's taking 3 days to ingest 19 files that are roughly 300 MB CSV files.

You can get ingest stats like this:

 curl -X GET 'localhost:9200/_nodes/stats/ingest?pretty'
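
Note those stats only cover Elasticsearch ingest pipelines; if Logstash is doing all the parsing, the work shows up under the indexing stats instead. Assuming defaults, something like:

 curl -X GET 'localhost:9200/_nodes/stats/indices/indexing?pretty'

Watch index_total and index_time_in_millis grow between two calls to estimate your actual throughput.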

Maybe you need to build an ES cluster with more nodes to make it go faster.
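
Before that, it is often worth tuning the single node for a bulk load. A rough sketch, assuming you can change the index settings while loading (INDEXNAME is the placeholder from your log; put the defaults back once the backlog is done):

# Replicas are pointless on a one-node cluster, and a longer refresh
# interval means less work per bulk request
curl -X PUT "localhost:9200/INDEXNAME/_settings" -H 'Content-Type: application/json' -d'
{
  "index" : {
    "number_of_replicas" : 0,
    "refresh_interval" : "30s"
  }
}
'

On the Logstash side, raising pipeline.batch.size (and pipeline.workers, if you have spare cores) in logstash.yml makes for bigger bulk requests, which usually helps with large CSV loads.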

That would be awesome if I could, but currently I am running on a Dell 2950 server and it's the only one I have.

