This configuration seems to be the simplest, but it does not combine this example into one log message:
[beat-logstash-some-name-832-2015.11.28] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:566)
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:133)
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:77)
at org.elasticsearch.action.admin.indices.delete.TransportDeleteIndexAction.checkBlock(TransportDeleteIndexAction.java:75)
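For a stack trace like this, the multiline joining is usually best configured on the Filebeat side rather than in Logstash. A rough sketch of the relevant Filebeat settings (the path is an assumption; adjust it to your log file, and note that `negate: true` with `match: after` means any line that does NOT start with `[` gets appended to the previous line):

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/your-app.log   # assumption: replace with your actual log path
  multiline.pattern: '^\['    # lines starting with [ begin a new event
  multiline.negate: true
  multiline.match: after      # non-matching lines are appended to the previous event
```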
To narrow down the scope of your problem, I just tested your configuration without Logstash in the mix. Doing that, I was able to get multiline messages into Elasticsearch as expected.
Do you mind testing without Logstash in the mix as well on your end and confirming? To do this, comment out:
# output.logstash
# hosts: ["localhost:5044"]
And make sure that output.elasticsearch is configured correctly.
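For example, the relevant part of filebeat.yml would then look something like this (the Elasticsearch host is an assumption; adjust it to your environment):

```yaml
#output.logstash:
#  hosts: ["localhost:5044"]

output.elasticsearch:
  hosts: ["localhost:9200"]  # assumption: change to your Elasticsearch host:port
```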
If this "solves" the issue, then could you post your Logstash pipeline configuration as well?
There seem to be a few syntax errors in the Logstash pipeline configuration you posted. I've cleaned them up and re-posted the configuration below. See my comments starting with ###
input {
  beats {
    port => 5044 ### port takes a number, not an array
    # codec => multiline {
    #   pattern => "^\["
    #   what => "previous"
    # }
  }
}
# } ### You probably meant to delete this line?
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime}" } ### There was a } missing in the grok pattern
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "filebeat-testing"
  }
}
In the future, you can run ./bin/logstash -t -f /path/to/your/pipeline.conf to check the syntax of your pipeline configuration.
I tried sending the sample multiline message using filebeat + the (syntactically valid) logstash pipeline configuration. I'm getting a grok parse failure error. So I would suggest commenting out the filter section until you've got the multiline parsing and ingestion working as expected.
To easily test what documents will be indexed into Elasticsearch, without actually indexing them, you can temporarily comment out the elasticsearch output section and instead add a stdout output section that looks like this:
output {
  stdout {
    codec => rubydebug
  }
}
You can use this to debug any issues until you get the event structure just the way you want it. After that, you can remove (or comment out) the stdout output and put the elasticsearch output back.
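If you want to iterate even faster, you can also take Filebeat out of the loop temporarily and feed sample lines to Logstash via the stdin input. A minimal test pipeline might look like this (the file name is an assumption; run it with ./bin/logstash -f test.conf and paste your sample multiline message into the terminal):

```
input {
  stdin { }
}

output {
  stdout {
    codec => rubydebug
  }
}
```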