Http input plugin - multiple log entries grouped into one in Kibana

Hi

I am fairly new to Logstash, so excuse me in advance.

I am using the http input plugin to send logs from an application to Logstash. They are sent on a schedule, 5-20 logs at a time, and each log entry is sent with its own (curl) request. However, when I look in Kibana all the log entries have been bundled into one event, with comma-separated values, e.g.:

@timestamp: July 21st 2016, 14:45:13.249
extIDs: 26, 27, 28, 29, 30, 31, 32, 33, 34, 35
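
Each entry is sent with its own request, roughly like the following (host, credentials and payload here are only illustrative, not the exact script):

curl -u "***:***" \
  -H "Content-Type: application/json" \
  -X POST "http://localhost:31311/" \
  -d '{"extID": 26, "logmessage": "..."}'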

Here's the configuration:
input {
  http {
    #host => "127.0.0.1"
    user => "***"
    password => "***"
    port => 31311
    tags => "***"
  }
}

I'm thinking Logstash might group them together because they all get the same timestamp, although I find that somewhat strange. Does anyone know how I could solve this?

It sounds like you are doing a single curl request to the Logstash http input? Each request is treated as a single "message" and pushed through the pipeline to Elasticsearch. You could either break the requests apart, or do some further processing in Logstash to try to break up the original message. Another option would be to use Filebeat to ship a log file, line by line, to Logstash.
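
As a sketch of the second option, assuming the joined values end up in a single field (I'm calling it extIDs here), the split filter can break one event into several, one event per value:

filter {
  split {
    field => "extIDs"
    terminator => ","
  }
}

That said, fixing whatever is bundling the entries in the first place is the cleaner solution.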

That's the strange part: I am making one request per log entry that I want to stash, but they are grouped together anyway. The requests to Logstash are scheduled to run every minute. When the schedule runs it makes, let's say, five requests one after another to Logstash, and all of those logs are grouped. If I limit the system to only send one log (make one request) every minute, everything works fine, but I fear that would be too slow, with a growing backlog of 'un-stashed' logs.

There is also a delay of five minutes or so between when the requests are made to Logstash and when the entries show up in Kibana. My growing theory is that Logstash simply groups together logs that arrive very close to each other, although I do not understand why it would do that.

I'm thinking that maybe this could be prevented by forcing a custom document_id onto each entry, an id built from something in the original log entry. I don't really know how to do that, though.
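
Something along these lines is what I have in mind, assuming each entry carries a unique extID field (the field name and hash method are just guesses on my part):

filter {
  fingerprint {
    source => "extID"
    target => "[@metadata][doc_id]"
    method => "MURMUR3"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    document_id => "%{[@metadata][doc_id]}"
  }
}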

Can you post the details on your curl request / script?

It appears that the reason for this was the filter configuration for another logging input to Logstash. The server is configured to read all .conf files in a directory. One of them is the file I posted above, for the http input plugin; another reads a text file with logs from a different application. In that second conf file the following filter was present:

filter {
  multiline {
    pattern => "^(%{TIMESTAMP_ISO8601})"
    negate => true
    what => "previous"
  }
  grok {
    match => { "message" => "^%{TIMESTAMP_ISO8601:logtime}%[...]}
  }
  if "_grokparsefailure" not in [tags] {
    mutate {
      replace => { "message" => "%{logmessage}" }
      remove_field => [ "logmessage" ]
    }
  }
}

I simply added an if-statement around everything in the filter part, and now everything works fine. It would, however, be very helpful if someone could explain why this filter affected the input from my application.
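
The guard I added looks roughly like this (the tag name is just a placeholder for whatever distinguishes the file input in my setup):

filter {
  if "other_app" in [tags] {
    multiline {
      pattern => "^(%{TIMESTAMP_ISO8601})"
      negate => true
      what => "previous"
    }
    # ... rest of the original filter, unchanged ...
  }
}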

Thank you!