After upgrading Elasticsearch to 6.8.10, it's not receiving data from Logstash (5.6.16)

Hello,

I am new to using the ELK stack.

I recently upgraded Logstash from 5.6.13 to 5.6.16, the Elasticsearch nodes from 5.6.16 to 6.8.10, and Kibana from 5.6.16 to 6.8.10.

I did not upgrade Logstash and Filebeat to version 6.8.10, since they are backward compatible.

Since the upgrade, Elasticsearch has not been receiving any data from Logstash. Below are the configuration files I have in place.

**input-beats-logstash**
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => [ "10.199.202.51:9200", "10.199.202.52:9200", "10.199.202.53:9200" ]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
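
As a basic sanity check on this pipeline, I can confirm that the Beats listener is actually up and that the Elasticsearch nodes answer from the Logstash host. A rough sketch, assuming ss and curl are available there:

# Confirm Logstash is listening for Beats connections on port 5044
ss -tnlp | grep 5044

# Confirm each Elasticsearch node responds from the Logstash host
curl -s http://10.199.202.51:9200
curl -s http://10.199.202.52:9200
curl -s http://10.199.202.53:9200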

**input-syslog-logstash**
input {
  tcp {
    port => 1514
    type => syslog
  }
  udp {
    port => 1514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => [ "dfsyd1ela01:9200", "dfsyd1ela02:9200", "dfsyd1ela03:9200" ]
  }
  stdout { codec => rubydebug }
}
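
With the stdout output in place, a quick way to push a test event through this syslog pipeline is to send one by hand from the Logstash host. A rough example, assuming netcat (nc) is installed and the pipeline is listening locally on 1514:

# Send a hand-crafted syslog-style line over UDP to the local pipeline
echo "$(date '+%b %d %H:%M:%S') testhost testprog[123]: test message from nc" | nc -u -w1 127.0.0.1 1514

If the event makes it into Logstash, the rubydebug codec should print it on the console.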

**output_elasticsearch_syd1**

output {
  elasticsearch {
    hosts => [ "10.199.202.51:9200", "10.199.202.52:9200", "10.199.202.53:9200" ]
  }
}
Logstash debug output:

root@dfsyd1log01:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash -f logstash-syslog.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
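
The run above only printed the startup line, so as a next step I can validate the pipeline configuration and re-run with more verbose logging. A sketch using the standard Logstash 5.6 command-line flags:

# Check the configuration for syntax errors and exit
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-syslog.conf --config.test_and_exit

# Re-run with debug-level logging so output errors show up in the Logstash logs
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-syslog.conf --log.level=debug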

Can someone please help me resolve this issue?

In Elasticsearch 6.x an index can only support a single document type, as document types are being deprecated. I see you are specifying a document type in your output, and this could cause indexing errors if it takes different values or conflicts with an index template.
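
To illustrate the single-type restriction, here is a rough, hypothetical example against one of the nodes (the host and index name are placeholders): once a 6.x index holds documents of one type, indexing a document under a second type is rejected.

# The first document creates the index with type "doc"
curl -XPUT 'http://10.199.202.51:9200/filebeat-2020.07.01/doc/1' -H 'Content-Type: application/json' -d '{"message":"first type"}'

# A second, different type in the same index is rejected by Elasticsearch 6.x
curl -XPUT 'http://10.199.202.51:9200/filebeat-2020.07.01/log/2' -H 'Content-Type: application/json' -d '{"message":"second type"}'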

Thanks, Chris, for your suggestion.

I have now removed the document_type setting from the file. Please see below for the latest entries.

Is there a way I can test ingestion into the Elasticsearch nodes, or can you suggest a better way of verifying that data is being ingested into Elasticsearch?

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => [ "10.199.202.51:9200", "10.199.202.52:9200", "10.199.202.53:9200" ]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
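
To partially answer my own question about verifying the ingest, one quick check (assuming curl is available and the index names follow the pattern above) is to ask Elasticsearch whether the Beats indices are being created and are growing:

# List the Beats indices and their document counts
curl -s 'http://10.199.202.51:9200/_cat/indices/*beat*?v'

# Count documents in today's index (the index name here is an assumption based on the pattern above)
curl -s 'http://10.199.202.51:9200/filebeat-2020.07.01/_count'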

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.