Initially, data sent directly to Elasticsearch was visible in Kibana Discover. However, after disabling direct Elasticsearch output and routing the data through Logstash, the data is no longer appearing in Discover.
Pipeline configuration:

input {
  beats {
    port => 5044
  }
}

output {
  if [agent][type] == "metricbeat" {
    elasticsearch {
      hosts => [""]
      data_stream => true
      data_stream_type => "metrics" # Use "logs" for log data
      data_stream_dataset => "metricbeat"
      data_stream_namespace => "default"
      user => ""
      password => ""
      ssl => true
      ssl_certificate_verification => true
      cacert => "/etc/logstash/certs/http_ca.crt"
    }
  }
}
This is a mapping error; it means that you are receiving documents where the value of the system.uptime field conflicts with the mapping defined in the template.
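One way to confirm this kind of conflict, assuming the events are landing in the metrics-metricbeat-default data stream created by the Logstash output above, is to check how the field is currently mapped in the backing indices:

GET metrics-metricbeat-default/_mapping/field/system.uptime*

If the type returned there differs from what the Metricbeat index template expects for the system.uptime fields, documents carrying conflicting values can be rejected during indexing.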
I'm using the same name for the data stream; please refer to the image below.
I checked the data again after a few days — the system.uptime metrics are getting indexed now. However, the system.filesystem data is currently being ignored.
Previously, the system.filesystem data was being indexed properly, and I had even created a dashboard based on it. But now, it's no longer working.
From what you shared, you are not using the same data stream. In your Logstash output you have it configured to write into a data stream named metrics-metricbeat-default, and this naming pattern is not used by any Elastic tool.
When you use Metricbeat to send data directly to Elasticsearch, it writes into a data stream named metricbeat-<version>, which is what you had before as metricbeat-8.17.4. When you use Elastic Agent, it writes into a data stream per metric type, such as metrics-system.load-default.
You need to change the elasticsearch output in your Logstash configuration so that it writes into the same data stream Metricbeat used when it was sending data directly.
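A minimal sketch of such an output, assuming the goal is to write into the same metricbeat-<version> data stream that the Beat uses when shipping directly (the hosts, credentials, and certificate path below are placeholders carried over from the original config):

output {
  if [agent][type] == "metricbeat" {
    elasticsearch {
      hosts => [""]
      # Events from the beats input carry the Beat's name and version
      # in @metadata, so this resolves to e.g. metricbeat-8.17.4
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      action => "create" # data streams only accept create operations
      user => ""
      password => ""
      ssl => true
      ssl_certificate_verification => true
      cacert => "/etc/logstash/certs/http_ca.crt"
    }
  }
}

Writing to the same data stream Metricbeat was using before means the index templates and any dashboards built on that data keep working as they did with the direct output.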