Stuart,
No problem. The configs I pasted for you were actually my old setup.
I've since converted to Filebeat modules + ingest pipelines to eliminate Logstash for the most part. You can do basically the same as above; it just requires some slight modifications for your purposes:
Filebeat output config:
output.elasticsearch:
  hosts: ["<eshost>:9200", "<eshost>:9200", "<eshost>:9200", "<eshost>:9200"]
  indices:
    - index: "syslog-%{[beat.version]}-%{+YYYY.MM.dd}"
      pipeline: filebeat-6.6.0-system-syslog-pipeline
      when.contains:
        fileset.name: "syslog"
    - index: "syslog-auth-%{[beat.version]}-%{+YYYY.MM.dd}"
      pipeline: filebeat-6.6.0-system-auth-pipeline
      when.contains:
        fileset.name: "auth"
    - index: "nginx-%{[beat.version]}-%{+YYYY.MM.dd}"
      pipeline: filebeat-6.6.0-nginx-access-default
      when.contains:
        event.dataset: "nginx.access"
    - index: "nginx-error-%{[beat.version]}-%{+YYYY.MM.dd}"
      pipeline: logs_pipeline
      when.contains:
        event.dataset: "nginx.error"
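One thing to keep in mind: the filebeat-6.6.0-* module pipelines referenced above have to be loaded into Elasticsearch before events arrive. Filebeat can do that for you (this assumes you have the system and nginx modules enabled):

filebeat setup --pipelines --modules system,nginx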
A trick from the docs you can use is a pipeline -> pipeline workflow. Look at the nginx error index entry above: it points to logs_pipeline, which I currently have configured as follows.
logs_pipeline (as returned by GET _ingest/pipeline/logs_pipeline):
{
  "logs_pipeline" : {
    "description" : "A pipeline of pipelines for log files",
    "version" : 1,
    "processors" : [
      {
        "dot_expander" : {
          "field" : "event.dataset"
        }
      },
      {
        "pipeline" : {
          "if" : "ctx.event?.dataset == 'system.auth'",
          "name" : "filebeat-6.6.0-system-auth-pipeline"
        }
      },
      {
        "pipeline" : {
          "if" : "ctx.event?.dataset == 'system.syslog'",
          "name" : "filebeat-6.6.0-system-syslog-pipeline"
        }
      },
      {
        "pipeline" : {
          "if" : "ctx.event?.dataset == 'nginx.access'",
          "name" : "filebeat-6.6.0-nginx-access-default"
        }
      },
      {
        "pipeline" : {
          "if" : "ctx.event?.dataset == 'nginx.error'",
          "name" : "filebeat-6.6.0-nginx-error-pipeline"
        }
      }
    ],
    "on_failure" : [
      {
        "set" : {
          "field" : "error.message",
          "value" : "{{ _ingest.on_failure_message }}"
        }
      }
    ]
  }
}
Nginx error pipeline:
{
  "filebeat-6.6.0-nginx-error-pipeline" : {
    "description" : "Pipeline for parsing the Nginx error logs",
    "processors" : [
      {
        "grok" : {
          "ignore_missing" : true,
          "field" : "message",
          "patterns" : [
            """%{DATA:nginx.error.time} \[%{DATA:nginx.error.level}\] %{NUMBER:nginx.error.pid}#%{NUMBER:nginx.error.tid}: (\*%{NUMBER:nginx.error.connection_id} )?%{GREEDYDATA:nginx.error.message}"""
          ]
        }
      },
      {
        "remove" : {
          "field" : "message"
        }
      },
      {
        "rename" : {
          "field" : "@timestamp",
          "target_field" : "read_timestamp"
        }
      },
      {
        "date" : {
          "field" : "nginx.error.time",
          "target_field" : "@timestamp",
          "formats" : [
            "YYYY/MM/dd H:m:s"
          ]
        }
      },
      {
        "set" : {
          "field" : "pipeline_processor",
          "value" : "filebeat-6.6.0-nginx-error-pipeline"
        }
      },
      {
        "remove" : {
          "field" : "nginx.error.time"
        }
      }
    ],
    "on_failure" : [
      {
        "set" : {
          "field" : "error.message",
          "value" : "{{ _ingest.on_failure_message }}"
        }
      }
    ]
  }
}
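If you want to test the whole chain without indexing anything, the _simulate API is handy. A minimal sketch (the sample log line and timestamp here are made up):

POST _ingest/pipeline/logs_pipeline/_simulate
{
  "docs" : [
    {
      "_source" : {
        "@timestamp" : "2019-02-14T10:15:35.000Z",
        "event.dataset" : "nginx.error",
        "message" : "2019/02/14 10:15:30 [error] 1234#1234: *5 connect() failed (111: Connection refused)"
      }
    }
  ]
}

Note the flat "event.dataset" key in the sample doc; the dot_expander processor at the top of logs_pipeline expands it into an object before the if conditions check ctx.event?.dataset.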
I added this set processor to help with debugging/testing, so I'd know whether a pipeline was actually hit:
{
  "set" : {
    "field" : "pipeline_processor",
    "value" : "filebeat-6.6.0-nginx-error-pipeline"
  }
}
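With that in place, you can quickly check which documents actually went through a given pipeline, e.g. (index pattern assumed to match the output config above):

GET nginx-error-*/_search
{
  "query" : {
    "match" : {
      "pipeline_processor" : "filebeat-6.6.0-nginx-error-pipeline"
    }
  }
}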
The key to getting these to work is ensuring that your index templates + field mappings are set up correctly. How to do that properly is outside the scope of this post, though.
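That said, just to show the shape of what I mean, here's a bare-bones 6.x template sketch (the pattern and fields are placeholders, not my real template; note that 6.x mappings still need the doc type):

PUT _template/nginx-error
{
  "index_patterns" : ["nginx-error-*"],
  "mappings" : {
    "doc" : {
      "properties" : {
        "nginx" : {
          "properties" : {
            "error" : {
              "properties" : {
                "level" : { "type" : "keyword" },
                "pid" : { "type" : "long" }
              }
            }
          }
        }
      }
    }
  }
}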
You can also do some basic field enrichment by using a processor plugin for Filebeat.
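For example, in filebeat.yml (both of these processors ship with Filebeat; whether they're useful depends on your data):

processors:
  - add_host_metadata: ~
  - add_locale:
      format: offset

add_host_metadata tags every event with host fields, and add_locale records the timezone offset, which helps when your date processors have to deal with local timestamps.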
I hope this helps.