Is it possible to create multiple indices with multiple folder names?

I am monitoring log files in different folders. I have 3 folders: folder1, folder2, folder3. Is it possible to create a different index named after each folder using Logstash?

Yes, it is possible. Use gsub to extract the folder name, then put it into [@metadata][dir]. It would be best if you provided some samples of the directory names.
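For example, a minimal sketch, assuming the path is in [log][file][path] and you want the folder directly above the file; the [@metadata][dir] field name matches the output below:

filter {
  # Copy the full path, then strip everything except the parent folder name.
  # The pattern is an assumption -- adjust it to your real directory layout.
  mutate { copy => { "[log][file][path]" => "[@metadata][dir]" } }
  # "\1" is the regex capture, i.e. the folder name.
  mutate { gsub => [ "[@metadata][dir]", "^.*/([^/]+)/[^/]+$", "\1" ] }
}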

output {
   elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "indexname_%{[@metadata][dir]}"
   } 
}

Another possibility is to use conditionals (ifs), which is stricter and more usable in some cases.

output {
 if [@metadata][dir] == "dir1" {
   elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "indexname_%{[@metadata][dir]}"
   }
 }
 else if [@metadata][dir] == "dir2" {
   elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "indexname_%{[@metadata][dir]}"
   }
 }
 else if [@metadata][dir] == "dir3" {
   elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "indexname_%{[@metadata][dir]}"
   }
 }
}

You can use grok to extract a directory name from a path. Depending on your version and ECS compatibility, the path of the file will likely be in [path] or, more recently, [log][file][path].
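For example, a sketch of such a grok, capturing the directory one level above the file name into [@metadata][dir] (assuming the newer [log][file][path] field; swap in [path] on older versions):

filter {
  # GREEDYDATA eats the leading directories; DATA captures the last folder
  # before the file name.
  grok { match => { "[log][file][path]" => "%{GREEDYDATA}/%{DATA:[@metadata][dir]}/%{GREEDYDATA}" } }
}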


Hi buddy, thanks for your reply.

My directory structure:

/apps/app1/logs/SystemOut.log
/apps/app2/logs/SystemOut.log
/apps/app3/logs/SystemOut.log

I tried with the following pipeline, but it's not working.

input {
  file {
    path => "/folder1/logs/server1/SystemOut.log"
    start_position => "beginning"
  }
}

filter {
  if [path] == "/folder1/logs/server1/SystemOut.log" {
    grok {
      match => { "message" => [
        "[%{DATA:timestamp1}]%{SPACE}%{WORD:value1}%{SPACE}%{WORD:value2}%{SPACE}%{WORD:value3}%{SPACE}%{TIMESTAMP_ISO8601:timestamp2}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{WORD:value4}:%{NUMBER:value5}%{SPACE}-%{SPACE}%{GREEDYDATA:value6}"
      ] }
    }
  }
}

output {
  if [path] == "/folder1/logs/server1/SystemOut.log" {
    opensearch {
      hosts => ["https://url:443"]
      index => "dev-logs"
      user => "in"
      password => "dm3"
    }
  }
}

OpenSearch/OpenDistro are AWS-run products and differ from the original Elasticsearch and Kibana products that Elastic builds and maintains. You may need to contact them directly for further assistance.



Or else, is it possible to add a tag or a new field with the folder name?

This has been working on my simplified version with files as output. Try something like this:

input {
  file {
    path => "/apps/app*/logs/SystemOut.log"
    start_position => "beginning"
    sincedb_path => "/dev/null" # for testing only; use a real file in prod to keep records
    mode => "tail"
  }
}

filter {

  grok { match => { "[log][file][path]" => "%{GREEDYDATA:dir}/%{DATA:[@metadata][dest]}/%{DATA}/%{GREEDYDATA}" } }

}

output {
  stdout { codec => rubydebug { metadata => true } } # for testing, to see the results; later remove or comment out

  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "indexname_%{[@metadata][dest]}"
  }
}

Thanks rios, let me try this and update here.
One more question on this:
Is it possible to add a tag or extract a new field with the folder name?
If you give an example for this, I will try that one also.

Yes, but I don't see the point of a tag, because every tag will be recorded in the index, unless you want a single index with an extra tag or field to know the source, for example the app1 directory.
Save the space in big data :slight_smile: use @metadata:

      mutate {
        add_tag => [ "%{[@metadata][dest]}" ]
      }
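If you do route on the tag, the output conditional could look like this (a sketch; "app1" is just an assumed example of one of your folder names):

output {
  # Send events tagged with a given folder name to their own index.
  if "app1" in [tags] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "indexname_app1"
    }
  }
}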

Maybe I have asked the wrong question. Right now I have one field named "path" (/apps/folder1/logs). My question is: can we add that "folder1" name in a new field?

mutate {
  add_field => {
    "%{[@metadata][dest]}" => "%{[@metadata][dest]}"        # v1: field named after the folder itself
    "source_%{[@metadata][dest]}" => "%{[@metadata][dest]}" # v2: prefixed field name
  }
}

Or do not use the temporary [@metadata][dest]; in the grok pattern, just write to a real field: %{DATA:source}.
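That is, the same grok as before, but writing straight into a persisted field (a sketch based on the earlier pattern and the /apps/app1/logs/SystemOut.log layout):

filter {
  # "source" is stored with the document, e.g. source => "app1".
  grok { match => { "[log][file][path]" => "%{GREEDYDATA}/%{DATA:source}/%{DATA}/%{GREEDYDATA}" } }
}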


Let me try rios, thanks mate.

Also, in some rare cases you can use mutate on the path instead, to avoid grok.
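For example, a sketch with mutate only (assuming the /apps/app1/logs/SystemOut.log layout; the leading slash makes the first split element an empty string, so app1 sits at index 2):

filter {
  # Split the path on "/" and pick the folder by position.
  mutate { copy  => { "[log][file][path]" => "[@metadata][path]" } }
  mutate { split => { "[@metadata][path]" => "/" } }
  mutate { add_field => { "source" => "%{[@metadata][path][2]}" } }
}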

May I try it like this?

input {
  file {
    path => "/folder1/logs/server1/SystemOut.log"
    start_position => "beginning"
    sincedb_path => "nul"
  }
}

filter {
  if [path] == "/folder1/logs/server1/SystemOut.log" {
    mutate { copy => { "[log][file][path]" => "[@metadata][path]" } }
    mutate { split => { "[@metadata][path]" => "/" } }
    mutate { add_field => { "folder1" => "%{[@metadata][path][4]}" } }
  }
}

output {
  opensearch {
    hosts => ["https://url:443"]
    index => "dev-logs"
    user => ""
    password => ""
  }
}

This will not work because the field is [log][file][path], not just path:
if [log][file][path] == "/folder1/logs/server1/SystemOut.log"
But... since you have changed the position in the path, you should use this:

input {
  file {
    path => "/folder1/logs/server1/SystemOut.log"
    start_position => "beginning"
    sincedb_path => "nul"
  }
}

filter {

  grok { match => { "[log][file][path]" => "%{GREEDYDATA:dir}/%{DATA:[@metadata][dest]}/%{GREEDYDATA}" } }

  mutate { add_field => { "folder1" => "%{[@metadata][dest]}" } }

}

output {
  opensearch {
    hosts => ["https://url:443"]
    index => "dev-logs"
  }
}

Thanks buddy, it's working fine.

Good. :+1:

  1. "NUL" on Windows or "/dev/null" on Linux it to avoid tracking. Set a file, to keep record which lines has been read otherwise you might have duplicated documents. Set something like this:
    sincedb_path => "/path/file.db"

  2. Remove dir from grok, it's useless, I left from old code.
    grok { match => { "[log][file][path]" => "%{GREEDYDATA}/%{DATA:[@metadata][dest]}/%{GREEDYDATA}" } }

  3. Debugger not need, remove the line stdout {codec => rubydebug...
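Putting the three points together, the final pipeline would look something like this (host, path, and index are the placeholders from the earlier posts):

input {
  file {
    path => "/folder1/logs/server1/SystemOut.log"
    start_position => "beginning"
    sincedb_path => "/path/file.db" # persistent, so lines are not re-read after a restart
  }
}

filter {
  grok { match => { "[log][file][path]" => "%{GREEDYDATA}/%{DATA:[@metadata][dest]}/%{GREEDYDATA}" } }
  mutate { add_field => { "folder1" => "%{[@metadata][dest]}" } }
}

output {
  opensearch {
    hosts => ["https://url:443"]
    index => "dev-logs"
  }
}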


Sure rios, let me do it like this.
