Problem with pipeline, grok and dashboards

Good day!
I have the following task: I need to add an extra field with the client IP address parsed from the nginx logs.
I set up this pipeline: filebeat -> logstash (beats pipeline, grok) -> Elasticsearch.
The grok pattern itself works fine... But after passing through logstash, some fields are missing from the documents, and some of the Filebeat-Nginx dashboards don't work.
Please help. How can I add the fields the dashboards need?
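For reference, the kind of filter I added looks roughly like this (a simplified sketch, assuming the default nginx combined log format):

filter {
  grok {
    # Pull the client address from the front of a combined-format access line
    match => { "message" => "^%{IPORHOST:clientip} %{GREEDYDATA:rest}" }
  }
}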
Sorry, my English is not very good.
Best regards, roonick

Are you using Kibana to build your dashboards? If you are adding new fields, be sure to refresh your index patterns before building dashboards on them.

I loaded the dashboards from filebeat with "filebeat setup", and now I want these dashboards to work after the logs pass through logstash. Is that the wrong way?

Can you check the Discover page: are those extra fields visible there?
If not, can you please share a log example and your grok filter configuration?

If you can see those fields in Discover but can't find them in the dashboards, then creating visualizations is a different topic.

I think it's not a grok problem, but a problem between filebeat, logstash and Elasticsearch. I turned grok filtering off and got the same result - the needed fields were still missing (~170-180 lines of JSON per document).
If I configure the filebeat output to go directly to Elasticsearch, there are extra fields in the JSON that the dashboards need (~310 lines of JSON). For example, "response": {"status_code": 200, "body": {"bytes": 1450}} (and not only this one, it's one of many).
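When filebeat writes straight to Elasticsearch, the extra structure looks roughly like this (a simplified sketch; the exact field set depends on the filebeat version):

"http": {
  "response": { "status_code": 200, "body": { "bytes": 1450 } }
},
"source": { "address": "203.0.113.10", "ip": "203.0.113.10" }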
Probably these fields are not being parsed out of the "event.original" field in logstash.
Most likely I need to make logstash parse this data, but I don't know how.

What does your logstash conf look like?

If you want to use the nginx ingest pipeline in Elasticsearch that was set up by filebeat, you will need to set it in the Elasticsearch output section of your logstash conf.

The conf below checks whether filebeat has recorded a pipeline (such as the nginx one) in its metadata and, if so, makes sure it gets executed.

Obviously you can add more to this conf if you want.

################################################
# beats->logstash->es default config.
################################################
input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "http://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}" 
      user => "elastic"
      password => "secret"
    }
  } else {
    elasticsearch {
      hosts => "http://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      user => "elastic"
      password => "secret"
    }
  }
}
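One note: if I remember correctly, filebeat populates [@metadata][pipeline] automatically only in version 7.0 and later; on older versions the conditional above will always fall through to the else branch and the ingest pipeline will never run.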

My logstash pipeline conf:

input {
  beats {
    port => 5045
  }
}

output {
  elasticsearch {
    hosts => ["http://10.10.10.6:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    pipeline => "%{[@metadata][pipeline]}"
  }
}

As you can see, I had already tried changing the conf to use "pipeline =>". After that I saw the "right" index name on the documents, but the fields needed for the dashboards were still missing...

Exactly what filebeat setup command did you run?

I ask because a lot of folks just run setup with the --dashboards flag and don't realize that other assets need to be loaded too. In fact, those are the most important ones.

You should just run

filebeat setup -e
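For comparison, here is roughly what the individual flags load in 7.x (a sketch; check filebeat setup --help on your version):

# Loads only the Kibana dashboards:
filebeat setup --dashboards

# Loads the ingest pipelines for specific modules; needed when the output
# is Logstash, since pipelines are otherwise loaded only when filebeat
# talks to Elasticsearch directly:
filebeat setup --pipelines --modules nginx

# With no asset flags, sets up the index template, ILM policy and
# dashboards; -e just logs to stderr so you can watch for errors:
filebeat setup -e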

My suggestion is to always get filebeat shipping directly to Elasticsearch up and running first... and ONLY after that works... then introduce Logstash.

So in short

Make this architecture work first

Filebeat -> Elasticsearch

Then get this architecture to work

Filebeat -> Logstash -> Elasticsearch

Here are the steps I would follow. This example happens to be metricbeat, but the same pattern applies to filebeat.
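A rough sketch of the sequence (assuming the nginx module; adjust hosts and paths for your setup):

# 1. Point the filebeat output at Elasticsearch (comment out the Logstash output for now)
# 2. Enable the module and load all the assets:
filebeat modules enable nginx
filebeat setup -e

# 3. Run filebeat and confirm the Filebeat-Nginx dashboards populate:
filebeat -e

# 4. Only then switch the output back to Logstash, and keep the
#    pipeline => "%{[@metadata][pipeline]}" setting shown above.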
