Hello All,
I am using Filebeat version 5.5 to send logs to a Kafka topic, but I am not able to send the output of the top command.
Is there any way to send top command output to Kafka?
Thank You
Not sure what you mean by the TOP command logs. Are you interested in CPU and memory metrics? Best would be to just use Metricbeat in that case.
Yes, exactly. I want to store CPU, memory, and disk utilization metrics in Kafka.
I'd recommend using Metricbeat with the Kafka output.
Thanks for quick reply.
I want to collect system resource utilization data, such as which processes consume CPU and memory.
I also want to know how much total CPU, memory, and disk is utilized and how much is free.
Can you please help with this, @tudor?
This is what Metricbeat's system module does. Give it a try.
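For reference, a minimal metricbeat.yml sketch for Metricbeat 5.x that enables the system module and ships events to Kafka might look like the following. The broker address and topic name here are placeholders, not values from this thread:

```yaml
# Sketch only -- broker host and topic are placeholders.
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "process", "filesystem"]
    period: 10s

output.kafka:
  hosts: ["localhost:9092"]   # placeholder Kafka broker
  topic: "metricbeat"         # placeholder topic name
```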
Thanks for reply.
I gave it a try and got the output I needed, but I didn't get output for disk space usage (how much disk is used and how much is free). It gave me output for disk I/O only.
The filesystem metricset should include that information. Look for system.filesystem.available, for example.
Thanks @tudor, my issue is solved.
All values are shown in bytes, so is there any way to get the values in MB or GB?
If you are using Kibana and you loaded the provided index patterns using the import_dashboards tool, then the fields containing bytes will be rendered as KB/MB/GB/TB depending on the value. Kibana uses http://numeraljs.com/.
https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-sample-dashboards.html
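If you are consuming the raw JSON from Kafka rather than viewing it in Kibana, you can do the same conversion yourself. A small Python sketch of the idea (the function name is mine, not from any Beats or Kibana API):

```python
def human_bytes(n: float) -> str:
    """Format a byte count as KB/MB/GB/TB, roughly as Kibana renders byte fields."""
    for unit in ("bytes", "KB", "MB", "GB", "TB"):
        if n < 1024 or unit == "TB":
            # Whole bytes need no decimals; larger units get two.
            return f"{int(n)} bytes" if unit == "bytes" else f"{n:.2f} {unit}"
        n /= 1024


# For example, the system.memory.total value from the sample event in this thread:
print(human_bytes(1908027392))  # -> 1.78 GB
```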
Thanks @andrewkroh I got it.
I configured 3 metricsets (load, memory & fsstat) and got output for all 3 in 3 separate events.
My supervisor said the output is too long and asked me to find out how to minimize it.
[
{
"@timestamp": "2017-07-31T10:22:30.825Z",
"beat": {
"hostname": "kafkatest",
"name": "kafkatest",
"version": "5.5.1"
},
"metricset": {
"module": "system",
"name": "memory",
"rtt": 699
},
"system": {
"memory": {
"actual": {
"free": 1677877248,
"used": {
"bytes": 230150144,
"pct": 0.1206
}
},
"free": 1040068608,
"swap": {
"free": 1073737728,
"total": 1073737728,
"used": {
"bytes": 0,
"pct": 0
}
},
"total": 1908027392,
"used": {
"bytes": 867958784,
"pct": 0.4549
}
}
},
"type": "metricsets"
},
{
"@timestamp": "2017-07-31T10:22:30.825Z",
"beat": {
"hostname": "kafkatest",
"name": "kafkatest",
"version": "5.5.1"
},
"metricset": {
"module": "system",
"name": "fsstat",
"rtt": 936
},
"system": {
"fsstat": {
"count": 30,
"total_files": 2640058,
"total_size": {
"free": 21922209792,
"total": 24112676864,
"used": 2190467072
}
}
},
"type": "metricsets"
},
{
"@timestamp": "2017-07-31T10:22:30.826Z",
"beat": {
"hostname": "kafkatest",
"name": "kafkatest",
"version": "5.5.1"
},
"metricset": {
"module": "system",
"name": "load",
"rtt": 812
},
"system": {
"load": {
"1": 0,
"5": 0.01,
"15": 0.05,
"norm": {
"1": 0,
"5": 0.0006,
"15": 0.0031
}
}
},
"type": "metricsets"
}
]
Can this output be shortened?
You can use the drop_fields processor to remove unwanted fields. See the docs here.
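As a sketch, a processors section in metricbeat.yml could look like this (the field name matches the sample events in this thread):

```yaml
# Drop the beat.* fields from every event before it is sent to the output.
processors:
  - drop_fields:
      fields: ["beat"]
```

Note that, per the Beats documentation, the @timestamp and type fields cannot be dropped with drop_fields even if they are listed.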
The drop_fields processor will only exclude module data.
Here is an example of the kind of output I want:
[
{
"@timestamp": "2017-07-31T10:22:30.825Z",
"beat": {
"hostname": "kafkatest",
"name": "kafkatest",
"version": "5.5.1"
},
"metricset": {
"module": "system",
"name": "memory",
"rtt": 699
},
.
.
.
"metricset": {
"module": "system",
"name": "fsstat",
"rtt": 936
},
.
.
.
"metricset": {
"module": "system",
"name": "load",
"rtt": 812
},
.
.
.
]
I just want to exclude these lines, which are repeated for each metricset event:
{
"@timestamp": "2017-07-31T10:22:30.826Z",
"beat": {
"hostname": "kafkatest",
"name": "kafkatest",
"version": "5.5.1"
},
This topic was automatically closed after 21 days. New replies are no longer allowed.