Logstash shipping results

Hi,
I'm trying to ship LocustIO results with Logstash. They look like the following:

10:09:31  Name            # reqs  # fails    Avg  Min  Max  |  Median  req/s
10:09:31  --------------------------------------------------------------------------------------------------------------------------------------------
10:09:31  GET /              683  0(0.00%)   222  212  392  |     220   9.10
10:09:31  POST /modules        5  0(0.00%)   555  539  567  |     550   0.00
10:09:31  GET /ports          62  0(0.00%)   548  536  572  |     550   0.10
10:09:31  GET /ports/        250  0(0.00%)   215  212  283  |     210   4.60
10:09:31  --------------------------------------------------------------------------------------------------------------------------------------------
10:09:31  Total             1000  0(0.00%)                               13.80

10:09:33  Name            # reqs  50%  66%  75%  80%  90%  95%  98%  99%  100%
10:09:33  --------------------------------------------------------------------------------------------------------------------------------------------
10:09:33  GET /              683  220  220  220  220  220  330  330  340   392
10:09:33  POST /modules        5  550  570  570  570  570  570  570  570   567
10:09:33  GET /ports          62  550  550  560  560  560  560  570  570   572
10:09:33  GET /ports/        250  210  220  220  220  220  220  220  230   283

I'm trying to find the best way to do this. I have the following config set up for Logstash:
input {
  file {
    path => [ "C:\Program Files (x86)\Jenkins\jobs\Locust2\builds\*\log" ]
    start_position => "beginning"
  }
}

filter {
  csv { columns => ["Name", "# reqs", "# fails", "Avg", "Min", "Max", "Median", "req/s"] }
}

output {
  stdout { }
  elasticsearch {
    template => "C:\Program Files\Logstash\logstash-2.3.4\bin\locust-mapping.json"
    template_name => "locust"
    template_overwrite => "false"
    hosts => ["localhost:9200"]
    user => "admin"
    password => "password"
    index => "locust-%{+YYYY.MM.dd}"
  }
}

And the following mapping:

{
  "mappings": {
    "properties": {
      "Name": {
        "type": "long"
      },
      "# reqs": {
        "type": "integer"
      },
      "# fails": {
        "type": "integer"
      },
      "Avg": {
        "type": "integer"
      },
      "Min": {
        "type": "integer"
      },
      "Max": {
        "type": "integer"
      },
      "Median": {
        "type": "integer"
      },
      "req/s": {
        "type": "integer"
      }
    }
  },
  "settings": {
    "index.refresh_interval": "5s"
  },
  "template": "locust-*"
}

Please let me know any suggestions you may have.

Is this working as it is and you just want improvement suggestions, or is something not working?

It's not working as it is. No metrics are being shipped.

Logstash is probably tailing the input file. While testing you might want to set sincedb_path => "nul" to make sure Logstash starts from the top of the input file each time. See the file input's documentation. You might also need to adjust its ignore_older option.
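A minimal sketch of the file input with those options (the ignore_older value is illustrative; adjust it to your files' age):

input {
  file {
    path => [ "C:/Program Files (x86)/Jenkins/jobs/Locust2/builds/*/log" ]
    start_position => "beginning"
    sincedb_path => "nul"      # "nul" on Windows; use "/dev/null" elsewhere
    # ignore_older => 86400    # seconds; raise this if the log files are older
  }
}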

Thanks for the suggestion! I'll give it a try.

It's still not shipping to ES. These are the results from the debug log:

{:timestamp=>"2016-08-16T12:55:59.076000-0500", :message=>"Pushing flush onto pipeline", :level=>:debug, :file=>"/Program Files/Logstash/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb", :line=>"458", :method=>"flush"}
{:timestamp=>"2016-08-16T12:55:59.338000-0500", :message=>"Flushing buffer at interval", :instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x3000e682 @operations_mutex=#Mutex:0x3ec70ef2, @max_size=500, @operations_lock=#Java::JavaUtilConcurrentLocks::ReentrantLock:0x3af88df2, @submit_proc=#<Proc:0x4879c942@c:/Program Files/Logstash/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:57>, @logger=#<Cabin::Channel:0x4ecffe21 @metrics=#<Cabin::Metrics:0x30a66d17 @metrics_lock=#Mutex:0x1fd5fb86, @metrics={}, @channel=#<Cabin::Channel:0x4ecffe21 ...>>, @subscriber_lock=#Mutex:0x19e002e7, @level=:debug, @subscribers={12644=>#<Cabin::Subscriber:0x2ba2c7c3 @output=#<Cabin::Outputs::IO:0x1b9b00fa @io=#<File:locust.log>, @lock=#Mutex:0x2f5ce54>, @options={}>}, @data={}>, @last_flush=2016-08-16 12:55:58 -0500, @flush_interval=1, @stopping=#Concurrent::AtomicBoolean:0x5404c0bf, @buffer=[], @flush_thread=#<Thread:0x69a59923 run>>", :interval=>1, :level=>:debug, :file=>"/Program Files/Logstash/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}
{:timestamp=>"2016-08-16T12:56:00.344000-0500", :message=>"Flushing buffer at interval", :instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x3000e682 @operations_mutex=#Mutex:0x3ec70ef2, @max_size=500, @operations_lock=#Java::JavaUtilConcurrentLocks::ReentrantLock:0x3af88df2, @submit_proc=#<Proc:0x4879c942@c:/Program Files/Logstash/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:57>, @logger=#<Cabin::Channel:0x4ecffe21 @metrics=#<Cabin::Metrics:0x30a66d17 @metrics_lock=#Mutex:0x1fd5fb86, @metrics={}, @channel=#<Cabin::Channel:0x4ecffe21 ...>>, @subscriber_lock=#Mutex:0x19e002e7, @level=:debug, @subscribers={12644=>#<Cabin::Subscriber:0x2ba2c7c3 @output=#<Cabin::Outputs::IO:0x1b9b00fa @io=#<File:locust.log>, @lock=#Mutex:0x2f5ce54>, @options={}>}, @data={}>, @last_flush=2016-08-16 12:55:59 -0500, @flush_interval=1, @stopping=#Concurrent::AtomicBoolean:0x5404c0bf, @buffer=[], @flush_thread=#<Thread:0x69a59923 run>>", :interval=>1, :level=>:debug, :file=>"/Program Files/Logstash/logstash-2.3.4/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}

"Pushing flush onto pipeline" messages are not interesting.

A few additional things to look into:

  • Use forward slashes instead of backslashes in the filename pattern.
  • Look for log messages about "discover" in the log. They'll tell you whether the filename pattern expands to the correct file(s).
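For instance, the same path pattern with forward slashes:

path => [ "C:/Program Files (x86)/Jenkins/jobs/Locust2/builds/*/log" ]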

Changing the backslashes to forward slashes surfaced an error: it didn't like the template location. After commenting that out, it built the index. However, the metrics aren't being shipped because they aren't comma-separated in the logs.

I'm trying this instead:
input {
  file {
    path => [ "C:/Program Files (x86)/Jenkins/jobs/Locust2/builds/*/log" ]
    start_position => "beginning"
    sincedb_path => "nul"
  }
}

filter {
  grok {
    match => ["message", "(?<Name>.{68})(?<# reqs>.{7})(?<# fails>.{13})(?<Avg>.{8})(?<Min>.{9})(?<Max>.{7})(?<req/s>.{7})"]
  }
}
mutate {
  strip => {
    "Name",
    "# reqs",
    "# fails",
    "Avg"
    "Min",
    "Max",
    "Median"
    "req/s"
  }
}
However, I get this error:
reason=>"Expected one of #, input, filter, output at line 14, column 5 (byte 323) after ",

Your braces are off. You're closing the filter block after the grok filter instead of after the mutate filter.
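In other words, both filters belong inside the one filter block. A sketch of the corrected layout (grok pattern omitted; note that strip takes an array, not a hash):

filter {
  grok {
    match => ["message", "..."]   # your fixed-width pattern here
  }
  mutate {
    strip => ["Name", "# reqs", "# fails", "Avg", "Min", "Max", "Median", "req/s"]
  }
}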

The following almost works; however, the values aren't coming through as numbers. Is there a way to configure that with these settings?

filter {
  grok {
    match => ["message", "(?<Name>.{64})(?<reqs>.{10})(?<fails>.{13})(?<Avg>.{8})(?<Min>.{8})(?<Max>.{8})(?<Median>.{9})(?<req\s>.{5})"]
  }

  mutate {
    strip => ["Name", "reqs", "fails", "Avg", "Min", "Max", "Median", "req\s"]
  }
}

I tried this, but it doesn't seem to read the spaces.

filter {
  grok {
    match => ["message", "(?<Name>.{64})(?<reqs>[0-9].{10})(?<fails>.{13})(?<Avg>[0-9].{8})(?<Min>[0-9].{8})(?<Max>[0-9].{8})(?<Median>[0-9].{9})(?<req\s>[0-9].{5})"]
  }

  mutate {
    strip => ["Name", "reqs", "fails", "Avg", "Min", "Max", "Median", "req\s"]
  }
}

If you format your configuration as code with the </> button, important details in your regular expressions won't get stripped off.

Anyway, you can use the mutate filter's convert option to turn strings into numbers.
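For example, assuming the field names from your grok pattern, something like:

filter {
  mutate {
    convert => {
      "reqs"   => "integer"
      "fails"  => "integer"
      "Avg"    => "integer"
      "Min"    => "integer"
      "Max"    => "integer"
      "Median" => "integer"
      "req\s"  => "float"    # requests/second has decimals, e.g. 9.10
    }
  }
}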