I have an issue with Metricbeat. When I send data from Metricbeat to Kafka, Kafka to Logstash, and Logstash to Elasticsearch,
I can display the data in Kibana, but I do not see the graphical dashboards in Kibana, I only see the logs.
When I send the data directly to Elasticsearch, I do see the graphics.
So i have some questions.
Is it possible to display the graphics even if we use Kafka? If so, can you refer me to a tutorial or a guide so I can do it myself?
When you are sending the events from Logstash → Elasticsearch, are you writing to the same index as when you are sending from Metricbeat → Elasticsearch?
First, what version of the Stack are you on? This is important, as there are some configuration differences.
What I usually suggest is a progression:
Step 1
metricbeat -> Elasticsearch
Works! This is good you said it already works.
Complete
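As a sketch of Step 1 only (hosts and credentials below are placeholders, not taken from this thread), the Metricbeat output section might look roughly like this:

```yaml
# metricbeat.yml -- minimal Elasticsearch output sketch; adjust hosts/credentials to your cluster
output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "changeme"
```

Running `metricbeat setup -e` while pointed at Elasticsearch at this stage loads the index templates and the Kibana dashboards that the graphics depend on.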
Step 2
Next get this working...
Metricbeat -> Logstash -> Elasticsearch
There are several steps to get this to work, and you need a proper Logstash config, especially if you are using Metricbeat modules. If you tell us what version you are on, I can share a sample config.
Get that to work....
Only After you get Step 2 to work proceed to Step 3.
Step 3
Metricbeat -> Kafka -> Logstash -> Elasticsearch
You will need to make sure Beats is correctly writing to Kafka and that Logstash is pulling the right messages from the topic.
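For the Beats side of Step 3, a minimal Kafka output sketch (the broker address and topic name are examples, not values from this thread):

```yaml
# metricbeat.yml -- Kafka output sketch; broker address and topic are placeholders
output.kafka:
  hosts: ["localhost:9092"]
  topic: "metricbeat"
```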
I see people try to set it all up at once and spend a lot of frustrating time... this is just my experience and advice.
Hello @stephenb ,
I am using version 8.3.2 of Metricbeat.
Metricbeat is correctly writing to Kafka; I can see the logs coming in on the console as a consumer. Logstash sends everything on to Kibana, and I can see these logs in Observability → Logs → Stream.
I want to display proper graphics and maybe add some filters to display the logs in a better way.
I do not send the logs directly from Metricbeat to Elasticsearch. I send them to Kafka, and afterwards Logstash takes them from Kafka and sends the logs to Elasticsearch. My configuration is based on what is on the website, nothing more.
Did you index your Metricbeat events into a logs index/data stream? The Logs Stream UI defaults to either filebeat-* or logs-*, so either you changed this setting or you are indexing into the wrong index.
And based on the same screenshot, it seems that the JSON is not parsed correctly. Did you use the JSON codec in Logstash when consuming the events from Kafka?
It may be worthwhile to share your Logstash pipeline configuration.
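To illustrate the JSON-codec point, a Kafka input in Logstash usually needs something like the following; the broker, topic, and hosts are placeholders:

```conf
# logstash.conf -- sketch: consume Beats events from Kafka and decode the JSON
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["metricbeat"]
    codec => json   # Beats writes events to Kafka as JSON; decode them back into fields
  }
}
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    data_stream => true
  }
}
```

Without the `json` codec, each Kafka message arrives as one opaque string in the `message` field, which would match the unparsed events described above.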
@stephenb
I already had everything set up on my VMware host. I already had a Kafka cluster running.
I am going to test what you said.
One more question: do I have to set up a conf file for each topic? I send Metricbeat data on the metricbeat topic and Filebeat data on the test topic. Do you have a suggestion or a link to share?
Thank you for your answers @hendry.lim, they are really helpful.
Regarding the user: when I used the logstash_system user to ingest data, it gave me a "not authorized" error. Since I am just testing, I use the elastic user.
Hello, I configured Logstash to send the logs to my ELK cluster; in this case I removed the Kafka cluster, so Metricbeat communicates directly with Logstash.
I put the index in the Logstash conf (output plugin) as shown in your answer.
The dashboards still do not receive the data. I suspect that Kibana is pointing to another index.
Do you have a document that explains clearly what to do, or maybe a tutorial? I really need to use those dashboards.
Thx
These are the steps / configuration that work for me and have worked for others.
Also, your Logstash config is not correct. You are not looking closely at the Elasticsearch output. You need to write to the data stream so it uses the correct mapping and then rolls over with ILM. When you added the date to the index name, it wasn't doing that correctly.
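The data-stream point can be sketched like this (hosts are placeholders):

```conf
# Elasticsearch output sketch: let the data stream name the index
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    # Do NOT set index => "metricbeat-%{+YYYY.MM.dd}" here; on 8.x, writing to
    # the data stream applies the correct mappings and handles ILM rollover.
    data_stream => true
  }
}
```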
I have done what you said.
When I send the data directly from Metricbeat to Elasticsearch, I have all the graphics. When I include Logstash, I change the Logstash output to what you said and send the data; I get some statistics, but not all.
{
  "took": 16,
  "timed_out": false,
  "_shards": {
    "total": 2,
    "successful": 1,
    "skipped": 1,
    "failed": 1,
    "failures": [
      {
        "shard": 0,
        "index": "metricbeat-1",
        "node": "xD2iV5ovToG9T0iiO4pIqQ",
        "reason": {
          "type": "illegal_argument_exception",
          "reason": "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [system.network.name] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
        }
      }
    ]
  },
  "hits": {
    "total": 0,
    "max_score": 0,
    "hits": []
  }
}
My other question: do we have to go through that process for each Beat?
That is not the correct index, and thus the mapping and fields are not correct, the data types are not correct, and so the dashboards and graphics will not work.
This means that you did not use the Logstash config I provided above, or you changed it. I provided the exact config that will work; if you changed it, you will need to show us what you are running, because whatever you are running / changing is not working.
You need to clean up and follow the steps again AND use the Logstash conf I provided... I am giving you a working solution that has been used by many.
If you use the Logstash config EXACTLY as I have it above, it will work AFTER you clean up and run through the steps again.
Cleanup
Run filebeat setup -e while Filebeat is configured to point to Elasticsearch
Run Filebeat and see the data loaded properly
Stop Filebeat
Point the Filebeat output to Logstash
Start Logstash with the EXACT conf that I provided
I want to thank you @stephenb. I can now receive all the metrics with Logstash. The next step is to configure Kafka.
I have another question: do I have to do this for each server I want to monitor?
Do I have to configure them first to send data to Elasticsearch and then to Logstash, or is it already OK for all the others?