Multiple server resource metrics stats

Hi All,

I am trying to collect resource metrics from 5 machines that form a cluster, so that I can build a single consolidated dashboard in Kibana.
Five separate config files have been created, one per machine, and each is started through its own Logstash instance (one command-line invocation per Logstash instance, i.e. 5 instances).

Problem statement: when all five instances are started together, the data from the 5th Logstash instance does not show up, and only 4 sincedb files get created.

Version: 5

Please suggest how to start multiple instances and fetch the data in real time.
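
For context, each instance is started from its own command line, along these lines (the config file names and data directories here are placeholders, not my actual ones):

bin\logstash -f appserver.conf --path.data C:/logstash-data/appserver
bin\logstash -f dbserver.conf --path.data C:/logstash-data/dbserver

...and so on, with a distinct --path.data for each of the five instances. One note in case it is relevant: Logstash 5 locks its data directory, so two instances sharing the same path.data will not both start, which would match the symptom of one instance producing no data and only 4 sincedb files appearing.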

Regards,
Prateek

Why do it that way?

What does your config on the 5th instance look like?

I have to collect metrics from all the machines. As per my understanding, Logstash runs a single pipeline, so either I have to apply a logical condition (if tags == XYZ) in a single config file, or start that many individual Logstash command-line instances.
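The conditional form I have in mind would look roughly like this ("XYZ" is just a placeholder tag, not a real value from my setup):

filter {
  if "XYZ" in [tags] {
    # filters that should run only for events carrying this tag
  }
}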
Please suggest if I am missing anything.

It's not a specific error in the config files, as they work fine when I run a single Logstash config file at a time and post data; I am able to see the data in Elasticsearch and Kibana accordingly.

Please find below a sample config file for one of the machines:

input {
  # Tail the perfmon CSV exports for this machine
  file {
    path => "C:/PerfLogs/PerfmonLogs/*/DataCollector01.csv"
    start_position => "beginning"
  }
}

filter {
  # Map the CSV columns onto named fields and stamp the source machine
  csv {
    columns => ["LogTime", "APPPOOLWASTotal_CurrentWorkerProcesses", "APPPOOLWASTotal_MaximumWorkerProcesses", "APPPOOLWASTotal_Recent_WorkerProcessFailures", "APPPOOLWASTotalTotal_WorkerProcessFailures", "LogicalDiskTotal_DiskTime", "LogicalDiskTotal_IdleTime", "MemoryAvailable_MBytes", "MemoryCommitted_Bytes", "MemoryPage_Readssec", "MemoryPage_Writessec", "PhysicalDiskTotal_DiskTime", "PhysicalDiskTotal_IdleTime", "PhysicalDiskTotal_AvgDiskQueueLength", "ProcessorTotal_IdleTime", "ProcessorTotal_ProcessorTime"]
    add_field => { "MachineName" => "AppServer" }
    separator => ","
  }

  # Cast the numeric counters from strings to floats
  mutate {
    convert => {
      "APPPOOLWASTotal_CurrentWorkerProcesses" => "float"
      "APPPOOLWASTotal_MaximumWorkerProcesses" => "float"
      "APPPOOLWASTotal_Recent_WorkerProcessFailures" => "float"
      "APPPOOLWASTotalTotal_WorkerProcessFailures" => "float"
      "LogicalDiskTotal_DiskTime" => "float"
      "LogicalDiskTotal_IdleTime" => "float"
      "MemoryAvailable_MBytes" => "float"
      "MemoryCommitted_Bytes" => "float"
      "MemoryPage_Readssec" => "float"
      "MemoryPage_Writessec" => "float"
      "PhysicalDiskTotal_DiskTime" => "float"
      "PhysicalDiskTotal_IdleTime" => "float"
      "PhysicalDiskTotal_AvgDiskQueueLength" => "float"
      "ProcessorTotal_IdleTime" => "float"
      "ProcessorTotal_ProcessorTime" => "float"
    }
  }

  # Pull the leading timestamp out of the raw line...
  grok {
    match => { "message" => "%{DATESTAMP:timestamp}" }
  }

  # ...and parse it into the event's @timestamp
  date {
    match => [ "timestamp", "MM/dd/yy HH:mm:ss.SSS" ]
  }
}

output {
  # Writes to localhost:9200 by default since no hosts are listed
  elasticsearch {
    action => "index"
    index => "resourceindex-%{+YYYY.MM.dd}"
  }
}

Please suggest if I am missing anything.

Regards,
Prateek

So why not use conditionals? It'll be a lot more efficient.
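
Since the CSV layout looks identical on every machine, only the per-machine bits need conditionals; the shared csv/mutate/date filters can run for every event. A rough sketch (the type values below are made up):

filter {
  # Per-machine differences only; the shared filters stay outside the if blocks
  if [type] == "appserver" {
    mutate { add_field => { "MachineName" => "AppServer" } }
  } else if [type] == "dbserver" {
    mutate { add_field => { "MachineName" => "DbServer" } }
  }
}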

But how do I add different input file locations in the "input {}" section?
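
One way, assuming all five CSV locations are readable from the single machine running Logstash, is one file block per location, each marked with its own type so the conditionals above can route it (the second path and both type values are hypothetical):

input {
  file {
    path => "C:/PerfLogs/PerfmonLogs/*/DataCollector01.csv"
    start_position => "beginning"
    type => "appserver"
  }
  file {
    path => "C:/PerfLogs/PerfmonLogs_DBServer/*/DataCollector01.csv"
    start_position => "beginning"
    type => "dbserver"
  }
  # ...one more file block for each remaining machine
}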
