What is the easiest way to create indexes for different nginx logs in logstash

Hello. I have multiple nginx logs in a folder that Filebeat sends to Logstash, and I want to create an index for each of them depending on the log name.

My logstash.conf looks like this:

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{IP:remote_address} - \[%{HTTPDATE:timestamp}\] %{HOSTNAME:host} %{WORD:http_method} %{URIPATHPARAM:request} %{NOTSPACE:http_version} %{NUMBER:request_time} %{DATA:upstream_response_time} %{POSINT:request_length} %{POSINT:status} %{POSINT:bytes_sent} %{DATA:http_referer} \[%{GREEDYDATA:http_user_agent}\]" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    hosts => "es01:9200"
    ssl_enabled => true
    cacert => "/usr/share/logstash/config/certs/ca/ca.crt"
    user => "elastic"
    password => "password"
    index => "%.log"
  }
  stdout { codec => rubydebug }
}

What is the best way to send logs to different indexes depending on the log file name? Thx

Filebeat will include the source filename in [log][file][path]. You can parse that using a dissect or grok filter and reference the result in the index option of the elasticsearch output using a sprintf reference.
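A minimal sketch of that idea, assuming the logs live under /var/log/nginx/ and the field name [@metadata][log_name] is my own invention:

filter {
  dissect {
    # Assumes every path looks like /var/log/nginx/<name>.log;
    # the captured name goes into @metadata so it is not indexed itself.
    mapping => { "[log][file][path]" => "/var/log/nginx/%{[@metadata][log_name]}.log" }
  }
}

output {
  elasticsearch {
    # keep your existing hosts/ssl/user/password options here
    index => "nginx-%{[@metadata][log_name]}-%{+YYYY.MM.dd}"
  }
}

dissect only works if every path matches the mapping exactly; if the paths vary, a grok filter is more flexible.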

Putting every file into its own index may create too many indexes/shards and negatively impact performance. Check the documentation on index and shard sizing.


Ohh, thank you so much, I will check it rn :pray:

Hello, is it possible to get something like [log][file][path] in logstash.conf, but for the id of a filestream-type input?

I don't think there is an option to add the filestream id as metadata.

Sad to hear; looks like I should regex the path field instead. Thank you for answering :pray:
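For reference, extracting the name from the path with grok could look like this — the pattern and the [@metadata][log_name] field are assumptions, not something Logstash defines:

filter {
  grok {
    # GREEDYDATA swallows the directory part up to the last slash;
    # DATA captures the base filename before the .log extension.
    match => { "[log][file][path]" => "%{GREEDYDATA}/%{DATA:[@metadata][log_name]}\.log" }
  }
}

The result can then be used in the elasticsearch output's index option the same way, e.g. index => "nginx-%{[@metadata][log_name]}".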