Unable to push logs to Elasticsearch after upgrade

I have upgraded Elasticsearch to version 5.6.9, while Logstash is still at 2.4.1. Since the upgrade, Logstash is unable to push logs to Elasticsearch. Below are the errors I see in logstash.log:

{:timestamp=>"2018-07-05T13:10:17.121000+0000", :message=>"Cannot get new connection from pool.", :class=>"Elasticsearch::Transport::Transport::Error", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/base.rb:249:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/http/manticore.rb:67:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/sniffer.rb:32:in hosts'", "org/jruby/ext/timeout/Timeout.java:147:intimeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/sniffer.rb:31:in hosts'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/base.rb:79:inreload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:instart_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:instart_sniffing!'", "org/jruby/RubyKernel.java:1479:in loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:instart_sniffing!'"], :level=>:error}

{:timestamp=>"2018-07-05T13:10:17.614000+0000", :message=>"Cannot get new connection from pool.", :class=>"Elasticsearch::Transport::Transport::Error", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/base.rb:249:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/http/manticore.rb:67:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/client.rb:128:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.1.0/lib/elasticsearch/api/actions/bulk.rb:93:inbulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:inbulk'", "org/jruby/ext/thread/Mutex.java:149:in synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:inbulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:172:in safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:101:insubmit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:86:in retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:29:inmulti_receive'", "org/jruby/RubyArray.java:1653:in each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:28:inmulti_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.1-java/lib/logstash/output_delegator.rb:130:in worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.1-java/lib/logstash/output_delegator.rb:114:inmulti_receive'",

It says that Elasticsearch is down, but I can see Elasticsearch is up and running:

```
[cloud_1IF-1IF_FACTORY root@LOG-0-1 logstash]# curl localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "logstash",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 6,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 6,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}
```
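For what it's worth, yellow status with 50% of shards active is expected on a single-node cluster: the unassigned shards are replicas with no second node to live on, so that by itself does not explain the errors. Since both backtraces run through the output plugin's sniffer (`start_sniffing!` → `sniff!` → `reload_connections!`), it may also be worth checking what the sniffer actually sees. A couple of quick checks, assuming the default HTTP port 9200:

```
# Confirm the unassigned shards are just replicas (expected with one node)
curl 'localhost:9200/_cat/shards?v'

# Nodes info endpoint the client sniffs to discover hosts; the address
# format here changed between major Elasticsearch versions, which old
# client sniffers can trip over
curl 'localhost:9200/_nodes/http?pretty'
```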

What does your Logstash config look like?

```
  grok {
    patterns_dir => "/etc/logstash/patterns"
    match => { "message" => "(?m)%{TANUKI_LOG}" }
    add_tag => [ "parsed" ]
  }

  date {
    match => [ "tanuki_timestamp", "yyyy/MM/dd HH:mm:ss" ]
  }

  if "_grokparsefailure" not in [tags] {
    mutate {
      replace => [ "message", "%{tanuki_message}" ]
      remove_field => [ "tanuki_message", "tanuki_timestamp" ]
    }
  }
  }
}
mutate {
  uppercase => [ "level" ]
}
}
}
```

END TEMPLATE '/etc/puppet/modules/ntc_profile_centrallog/templates/filter.apache.erb'

START TEMPLATE: '/etc/puppet/modules/ntc_profile_centrallog/templates/filter.syslog.erb'

```
filter {
  if [type] == "syslog" and "parsed" not in [tags] {
    grok {
      patterns_dir => "/etc/logstash/patterns"

      # Use special hostname pattern which allows underscores in hostnames, to match switches
      #
      # Some example syslog messages:
      #
      # Normal: Sep 8 15:26:51 BSC-0-1 crmd[3467]: notice: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
      #
      # HP DSW: Sep 8 14:37:37 corasat-4IF-HMMS_LOC1-DSW-1 %%10ARP/5/ARP_DUPLICATE_IPADDR_DETECT(l): -DevIP=172.20.0.1; Detected an IP address conflict.
      #
      # Cisco ASW: Sep 8 14:40:40 asw-1 3329: corasat-4IF-HMMS_LOC1-ASW-1: Sep 8 14:36:52.996: %SYS-5-CONFIG_I: Configured from 172.20.0.42 by snmp

      match => { "message" => "(?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:syslog_timestamp}) (?:.*%{NTC_SYSLOG_HOSTNAME_SWITCH:syslog_hostname}:?|%{NTC_SYSLOG_HOSTNAME:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?:) %{GREEDYDATA:syslog_message}" }
      add_tag => [ "parsed" ]
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }

    syslog_pri { }

    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
    }

    if "_grokparsefailure" not in [tags] {
      mutate {
        replace => [ "host", "%{syslog_hostname}" ]
        replace => [ "message", "%{syslog_message}" ]
      }
    }

    # PLATO-2325: because of a bug in JRuby disabling this.
    # Once upgraded to Logstash 5 we can try to re-enable this.
    # dns {
    #   reverse => [ "host" ]
    #   action => "replace"
    #   hit_cache_size => 20
    #   failed_cache_size => 20
    # }

    # PLATO-2325 Manually parsing the hostnames of the switches.
    # REMOVE WHEN RE-ENABLING DNS ABOVE
    mutate {
      gsub => [
        "host", "172.20.0.3", "ASW-1",
        "host", "172.20.0.3", "ASW-2",
        "host", "172.20.0.1", "DSW-1",
        "host", "172.20.0.2", "DSW-2",
        "host", "172.19.1.12", "DSW-1.dmavlan",
        "host", "172.19.1.13", "DSW-2.dmavlan"
      ]
    }

    if "_grokparsefailure" not in [tags] {
      mutate {
        # Remove cruft from switches hostnames
        add_field => [ "level", "%{syslog_severity}" ]
        remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
      }
    }

    mutate {
      gsub => [
        "level", "debug", "DEBUG",
        "level", "informational", "INFO",
        "level", "notice", "NOTICE",
        "level", "warning", "WARN",
        "level", "error", "ERROR",
        "level", "alert", "ALERT",
        "level", "critical", "CRIT",
        "level", "emergency", "EMERG"
      ]
    }
  }
}
```

END TEMPLATE '/etc/puppet/modules/ntc_profile_centrallog/templates/filter.syslog.erb'

START TEMPLATE: '/etc/puppet/modules/ntc_profile_centrallog/templates/filter.removerawmessage.erb'

```
filter {
  if "_grokparsefailure" not in [tags] and [raw_message] {
    mutate {
      remove_field => [ "raw_message" ]
    }
  }
}
```

END TEMPLATE '/etc/puppet/modules/ntc_profile_centrallog/templates/filter.removerawmessage.erb'

```
output {
  elasticsearch {
    hosts => [ "localhost" ]
    sniffing => true
  }
}
```

Actually, we have many rules specified; the Logstash output to Elasticsearch plugin is configured as below:

```
output {
  elasticsearch {
    hosts => [ "localhost" ]
    sniffing => true
  }
}
```
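Given that the first backtrace dies inside `start_sniffing!`, one way to narrow it down (a sketch, assuming it is acceptable to point the output at a single node directly) would be to turn sniffing off temporarily and see whether plain bulk requests go through:

```
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    # Disabled only to isolate the sniffer, which is where the
    # backtrace fails; this is a diagnostic step, not a fix
    sniffing => false
  }
}
```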

Please format your code, logs, or configuration files using the `</>` icon, as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

If you are not using markdown format, use the `</>` icon in the editor toolbar.

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
Please update your post.

Thanks, I have done the same.

I'm pretty sure you have to specify the Elasticsearch port as well as the hostname. That would be the first thing I would try.
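For example (assuming Elasticsearch is listening on the default port 9200):

```
output {
  elasticsearch {
    # host:port instead of a bare hostname
    hosts => [ "localhost:9200" ]
    sniffing => true
  }
}
```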

I did try that, and the problem still persists.

Also, to add on: my Elasticsearch cluster is healthy enough and is serving requests as well.

```
curl -X GET http://localhost:9200/_cluster/health
{"cluster_name":"logstash","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":10,"active_shards":10,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":10,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":50.0}
```

Do you still have the same errors in logstash.log?

Yes, I still see the same errors in logstash.log.

No, it's not correct. For example, you have:

```
output {
elasticsearch {
hosts => [ "localhost" ]
sniffing => true
}
}
```

While I'm expecting something like:

```
output {
  elasticsearch {
    hosts => [ "localhost" ]
    sniffing => true
  }
}
```

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.