Creating index depending on source

I am new to the Elastic Stack. I am using version 6.5.4.

The data flow is: Filebeat -> Logstash -> Elasticsearch -> Kibana

Filebeat is configured on the SIT and UAT environments, shipping to a common log server.

I would like to create separate indices for SIT and UAT.

I have written 30-elasticsearch-output.conf in Logstash as below:
output {
  if [@fields][env] == "SIT" {
    elasticsearch {
      hosts => ["******:9200"]
      manage_template => false
      index => "sit-%{+YYYY.MM.dd}"
    }
  } else if [@fields][env] == "UAT" {
    elasticsearch {
      hosts => ["******:9200"]
      manage_template => false
      index => "uat-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["******:9200"]
      manage_template => false
      index => "my-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}

Logstash stdout (rubydebug) output, in part:

"message" => "04-Feb-2019 12:59:03.954 INFO [Thread-6] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["ajp-nio-9999"]",
"beat" => {
"name" => "deala0",
"hostname" => "deala0",
"version" => "6.5.4"
},
"fields" => {
"env" => "SIT"
}
}

Events always go to the else branch, so only the my-* index is created.
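Note that in the rubydebug output above the value appears under "fields", not "@fields", so a conditional referencing that path should match (a sketch based on the output config above; the hosts value is redacted as in the original):

  if [fields][env] == "SIT" {
    elasticsearch {
      hosts => ["******:9200"]
      manage_template => false
      index => "sit-%{+YYYY.MM.dd}"
    }
  }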

It would be helpful if anyone could advise on this.

Thanks

Hi @Kanthasamyraja,
To achieve this I have done the following.

In Filebeat I set some fields like

- type: log
  paths:
    - /var/log/sensu/*.log
  encoding: plain
  fields:
    log_prefix: dc
    log_idx: sensu-logs
  fields_under_root: false

In Logstash I create @metadata fields based on these.

# Adding @metadata needed for index sharding to Filebeat logs
mutate {
  copy => {
   "[fields][log_prefix]" => "[@metadata][log_prefix]"
   "[fields][log_idx]" => "[@metadata][index]"
  }
}

And my Elasticsearch output in Logstash can then be just:

  elasticsearch {
        hosts => ["10.1.1.1:9200", "10.1.1.2:9200", "10.1.1.3:9200"]
        index => "%{[@metadata][log_prefix]}-%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }

You can drop one of the @metadata fields if you just need the one "switch".
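For example, keeping only the log_idx field, the output's index option would reduce to (a sketch based on the config above):

  index => "%{[@metadata][index]}-%{+YYYY.MM.dd}"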

Hope that helps,
AB

It has worked. Thanks @A_B

I also applied the same, but I am getting %{[@metadata][log_prefix]}-%{[@metadata][index]}-2019.02.28 as the index name.

I am using Filebeat version 6.5.4.

Any help?

I am using the Logstash configuration below.

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
filter {
  if [fileset][module] == "mysql" {
    if [fileset][name] == "error" {
      grok {
        match => { "message" => ["%{LOCALDATETIME:[mysql][error][timestamp]} (\[%{DATA:[mysql][error][level]}\] )?%{GREEDYDATA:[mysql][error][message]}",
          "%{TIMESTAMP_ISO8601:[mysql][error][timestamp]} %{NUMBER:[mysql][error][thread_id]} \[%{DATA:[mysql][error][level]}\] %{GREEDYDATA:[mysql][error][message1]}",
          "%{GREEDYDATA:[mysql][error][message2]}"] }
        pattern_definitions => {
          "LOCALDATETIME" => "[0-9]+ %{TIME}"
        }
        remove_field => "message"
      }
      mutate {
        rename => { "[mysql][error][message1]" => "[mysql][error][message]" }
      }
      mutate {
        rename => { "[mysql][error][message2]" => "[mysql][error][message]" }
      }
      date {
        match => [ "[mysql][error][timestamp]", "ISO8601", "YYMMdd H:m:s" ]
        remove_field => "[mysql][error][time]"
      }
    }
    else if [fileset][name] == "slowlog" {
      grok {
        match => { "message" => ["^# User@Host: %{USER:[mysql][slowlog][user]}(\[[^\]]+\])? @ %{HOSTNAME:[mysql][slowlog][host]} \[(%{IP:[mysql][slowlog][ip]})?\](\sId:\s %{NUMBER:[mysql][slowlog][id]})?\n# Query_time: %{NUMBER:[mysql][slowlog][query_time][sec]}\s* Lock_time: %{NUMBER:[mysql][slowlog][lock_time][sec]}\s* Rows_sent: %{NUMBER:[mysql][slowlog][rows_sent]}\s* Rows_examined: %{NUMBER:[mysql][slowlog][rows_examined]}\n(SET timestamp=%{NUMBER:[mysql][slowlog][timestamp]};\n)?%{GREEDYMULTILINE:[mysql][slowlog][query]}"] }
        pattern_definitions => {
          "GREEDYMULTILINE" => "(.|\n)*"
        }
        remove_field => "message"
      }
      date {
        match => [ "[mysql][slowlog][timestamp]", "UNIX" ]
      }
      mutate {
        copy => {
          "[fields][log_prefix]" => "[@metadata][log_prefix]"
          "[fields][log_idx]" => "[@metadata][index]"
        }
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][log_prefix]}-%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

And this is my filebeat.yml:

# Paths that should be crawled and fetched. Glob based paths.
paths:
  - /home/centos/*.log
document_type: db_log
fields:
  log_prefix: dc
  log_idx: sensu-logs
fields_under_root: true

It is very difficult to read your config when it is not properly formatted. Could you please select the config parts and click </> in the toolbar?

If you set fields_under_root: true, then you will not have a field called [fields][log_prefix]; it will just be [log_prefix].
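With fields_under_root: true as in the filebeat.yml above, the mutate copy would need to reference the top-level fields instead (a sketch, untested):

  mutate {
    copy => {
      "[log_prefix]" => "[@metadata][log_prefix]"
      "[log_idx]" => "[@metadata][index]"
    }
  }

Alternatively, leave fields_under_root at its default of false and keep the [fields][...] paths from the earlier example.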

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.