Send perfmon CSV file information to Graphite as TSD via Logstash

We have a requirement to gather performance counter information in a dump file and then send that information to Graphite.

We do not have the option of sending the information in real time. I have tried the PowerShell solution and it works, but sadly we cannot use it.

I have tried the approach of sending the performance counter logs to Elasticsearch via Logstash and then using Grafana to create graphs.
Has anyone used the Graphite output plugin in Logstash to do the same? I would appreciate some pointers.

I am working in a Windows environment and have used a VM to host the Graphite database.

You want to send CSV files to Logstash and then to Graphite? How does Elasticsearch fit into this scenario?

We do not have the option of sending the information in real time.

So... you need the data to be buffered somewhere, or what do you mean by real time not being an option?

Sorry if my post was not clear. Here are the steps for generating data.
Step 1: Generate performance counter logs via logman and have the files generated as CSV.
Step 2: The generated files are copied manually onto another server, which has Logstash installed. We will use a log forwarder, e.g. nxlog, to forward the CSV files to Logstash, and Logstash needs to send the metrics data to Graphite.
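For the receiving end of step 2, a tcp input is probably the simplest thing for a log forwarder like nxlog (om_tcp) to point at. This is only a rough sketch; the port number and codec settings are placeholders to adjust to your setup:

input {
  tcp {
    # Arbitrary port; make it match whatever nxlog's om_tcp output sends to
    port  => 5140
    codec => plain { charset => "UTF-8" }   # adjust if logman writes a different encoding
  }
}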

Okay, sounds reasonable except for the part about manually copying CSV files. Don't use a human to do a computer's job.

I was able to send the data to Graphite. However, I have scenarios where, say, I am measuring Process(*)\% Processor Time. I do not know how many columns there will be; how can I take care of that scenario? Also, I would like to specify how the key should be defined.

I'm afraid I don't understand either of those two questions. Perhaps you can give an example?

My CSV file, which is perfmon output, has a structure like this:
"(PDH-CSV 4.0) (Central Standard Time)(360)",\ALLEGROT3-PC\Process(csrss#1)% Processor Time","\ALLEGROT3-PC\Process(wininit)% Processor Time","\ALLEGROT3-PC\Process(winlogon#1)% Processor Time",\ALLEGROT3-PC\Process(csrss#1)% Privileged Time","\ALLEGROT3-PC\Process(wininit)% Privileged Time","\ALLEGROT3-PC\Process(winlogon#1)% Privileged Time"

The first column has the timestamp, followed by % Processor Time and % Privileged Time for the processes running on the machine. This means the number of columns will vary in the CSV file.
What I want as the end result is to send the following metrics to Graphite, with the following key names:

client.ALLEGROT3-PC.Process.csrss#1.PercentProcessorTime
client.ALLEGROT3-PC.Process.wininit.PercentProcessorTime
client.ALLEGROT3-PC.Process.winlogon#1.PercentProcessorTime
client.ALLEGROT3-PC.Process.csrss#1.PercentPrivilegeTime
client.ALLEGROT3-PC.Process.wininit.PercentPrivilegeTime
client.ALLEGROT3-PC.Process.winlogon#1.PercentPrivilegeTime

i.e. client.machinename.metriccategory.processname.metricname

Any ideas on how I can create these keys from the first row in the CSV file and then pass them on to Graphite?

-Madhu

Use the csv filter to extract each column into fields with whatever names you like; then you should be able to use the metrics option of the graphite output to pick the metric names and connect each metric to the field containing its value. Something like this:

filter {
  csv {
    columns => [..., "csrss_cpu_time_pct", ...]
  }
}
output {
  graphite {
    ...
    metrics => {
      "client.%{host}.Process.csrss#1.PercentProcessorTime" => "%{csrss_cpu_time_pct}"
      ...
    }
  }
}
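One thing the example above doesn't cover: the graphite output stamps each metric with the event's timestamp, so unless you parse the time out of the first CSV column, every data point will carry the time Logstash processed the line rather than the time perfmon sampled it. Assuming the first column was named "timestamp" by the csv filter, a date filter along these lines should work (the pattern is a guess based on typical PDH-CSV output, so double-check it against your files):

filter {
  date {
    # e.g. "03/18/2015 10:30:00.123" -- adjust the pattern to match your locale
    match    => ["timestamp", "MM/dd/yyyy HH:mm:ss.SSS"]
    timezone => "America/Chicago"   # the header says Central Standard Time
  }
}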

I can have a variable number of columns, and the names can also change, since the number of processes running on a machine can vary each time. Hence I cannot explicitly specify the column names.
I would have to write some code to parse the first row and extract the fields from it.

Yes, a ruby filter could be helpful here. With the currently available stock plugins you can't parse the first line of the file and "remember" those columns.
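To sketch what such a ruby filter might look like: stash the header row in an instance variable when it comes through, then use it to turn each later row into Graphite-style fields. This is only a rough illustration with several assumptions baked in: it uses the newer event.get/event.set API, it relies on the header line arriving before the data lines (so run with a single pipeline worker), the key-naming is my own guess at your client.machinename.metriccategory.processname.metricname scheme, the Graphite host is a placeholder, and backslash escaping inside the config string may need adjusting for your Logstash version:

filter {
  ruby {
    init => 'require "csv"; @columns = nil'
    code => '
      line = event.get("message").to_s
      if line.include?("PDH-CSV")
        # Header row: remember the counter paths for the rows that follow
        @columns = CSV.parse_line(line)
        event.cancel
      elsif @columns
        values = CSV.parse_line(line)
        @columns.each_with_index do |col, i|
          next if i == 0 || values[i].nil?          # column 0 is the timestamp
          parts = col.split("\\")                   # \\HOST\Process(name)\% Processor Time
          next unless parts.length >= 5
          host     = parts[2]
          object   = parts[3][/^[^(]+/]             # "Process"
          instance = parts[3][/\((.+)\)/, 1]        # "csrss#1"
          metric   = parts[4].gsub("%", "Percent").gsub(" ", "")
          next if object.nil? || instance.nil?
          event.set("client.#{host}.#{object}.#{instance}.#{metric}", values[i].to_f)
        end
        event.set("perf_timestamp", values[0])      # keep the sample time for a date filter
      end
    '
  }
}
output {
  graphite {
    host               => "graphite.example"        # placeholder for your carbon host
    port               => 2003
    fields_are_metrics => true                      # send matching event fields as metrics
    include_metrics    => ["^client\."]             # only the keys built above
  }
}

The %{field}-style approach from the earlier example doesn't help here because the field names themselves are dynamic, which is why this sketch switches the output to fields_are_metrics instead of an explicit metrics hash.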

You could also use a conditional with a regexp match to try to figure out which columns are present and select a csv filter based on that.
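A rough sketch of that idea, in case it helps; both the field count in the regexp and the column names are pure assumptions for a known three-process capture, so this really only works when the possible layouts are known in advance:

filter {
  if [message] =~ /^(?:"[^"]*",){6}"[^"]*"$/ {
    # Exactly 7 quoted fields: timestamp + CPU and privileged time for three processes
    csv {
      columns => ["timestamp",
                  "csrss_cpu_pct", "wininit_cpu_pct", "winlogon_cpu_pct",
                  "csrss_priv_pct", "wininit_priv_pct", "winlogon_priv_pct"]
    }
  } else {
    # Unknown layout: fall back to auto-generated column1..columnN field names
    csv { }
  }
}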