Hi,
is there a way to make the JSON output (e.g. via the file output) less redundant?
I am looking for a windows module configuration that writes all performance counters of one category (e.g. "memory") into a single JSON record (currently I get 25 records).
It is possible to drop specific fields from the JSON objects (or conversely include only specific fields). Check out the processor documentation.
processors:
  - drop_fields:
      fields:
        - metricset
        - beat
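Conversely, if you only want to keep specific fields, there is an include_fields processor. A minimal sketch, assuming the counter values live under windows.perfmon (adjust the path to whatever your events actually contain):

processors:
  - include_fields:
      fields:
        - windows.perfmon

Note that @timestamp and type are always exported, even if they are not listed.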
I am interested in all 25 performance counters of the "memory" category, but I need them in one JSON line/record, e.g.:
{
  "@timestamp": "2016-05-23T08:05:34.853Z",
  "beat": { ... },
  "metricset": {
    "module": "windows",
    "name": "perfmon",
    "rtt": 115
  },
  "type": "metricsets",
  "windows": {
    "perfmon": {
      "memory": {
        "perfCounter1": 0,
        "perfCounter2": 23.123,
        ...
        "perfCounter24": 12,
        "perfCounter25": 999.12
      }
    }
  }
}
How can I configure that in windows.yml?
What you described was our first approach: put all counters into one event. That works as long as the instance is unique, as with '\Processor Information(_Total)\% Processor Time'. But what about a query that returns multiple results, like the wildcard query '\PhysicalDisk(*)\Disk Writes/sec'? There you have a 1:n relationship, which is why we decided to put every result into its own event. That way each result gets a unique identifier via the instance_label config. Why do you need them all in one JSON record?
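For reference, a counter entry in windows.yml using instance_label looks roughly like this (a sketch; the exact option names may differ between Metricbeat versions):

- module: windows
  metricsets: ["perfmon"]
  period: 10s
  perfmon.counters:
    - instance_label: "physical_disk.name"
      measurement_label: "physical_disk.writes_per_sec"
      query: '\PhysicalDisk(*)\Disk Writes/sec'

With the wildcard query, every matching disk instance becomes its own event, and instance_label is what identifies it.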
Thx for reply!
I think in the case of wildcard queries, accumulation could also be done per instance key (e.g. as a config option).
Currently I am working on a solution to collect performance counters via Metricbeat into JSON files.
Logstash parses these files from time to time and writes the data into a CrateDB instance.
That way, reading the performance counters is decoupled from storing them in the DB: if Logstash or CrateDB is down, there is no data loss.
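For context, the file output in metricbeat.yml that produces these JSON files looks roughly like this (paths and rotation settings are just placeholders):

output:
  file:
    path: "C:/metricbeat/output"
    filename: metricbeat.json
    rotate_every_kb: 10000
    number_of_files: 7

Each event is written as one JSON document per line, which is why the 25 counters end up as 25 separate records in the file.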
For this use case I would like to write the timestamp and all counter values into the DB with one single insert operation. That's why I need all counters within one single event. Currently it is hard for me to figure out in Logstash which events belong together.
Another aspect is that the JSON files are growing quite rapidly because there is so much overhead information.
One approach would be aggregation within one event (as mentioned).
Maybe another approach would be to use compressed (zipped) JSON or BSON (binary JSON)?