Now I want to ingest two files instead of one with the same Logstash config file and direct the output to two different indices. The resulting Logstash config file would look something like below:
I think the easiest way to achieve what you want is to create a new field that holds the name of the file currently being read, and use that file name in the index.
input {
...
}
filter {
  grok {
    match => {
      # Take the value between the last slash and the extension
      # and put it in the field "filename"
      "path" => "^%{GREEDYDATA}/%{DATA:filename}[.]%{WORD}$"
      # In grok, GREEDYDATA matches as much as it can,
      # so here it consumes everything up to the last /
    }
  }
}
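As a concrete illustration (the path below is hypothetical, not from the question), the grok filter would behave roughly like this:

```
# Hypothetical event before the filter:
#   path => "/some/dir/orders.csv"
# After the grok filter:
#   filename => "orders"
# So the elasticsearch output below would write to an index like:
#   demo-csv-orders-2024.05.01
```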
output {
  file {
    path => "/opt/gtal/ital/elasticsearch/logs/mqa/rubydebug.txt"
    codec => rubydebug
  }
  elasticsearch {
    hosts => [ "xxxxxxx:43045", "xxxxxxxx:43045", "xxxxxxxx:43045", "xxxxxxxx:43045" ]
    user => "elastic"
    password => "xxxxxxxx"
    # Adding the filename to the index
    index => "demo-csv-%{[filename]}-%{+YYYY.MM.dd}"
    doc_as_upsert => true
    action => "update"
    document_id => "%{my_fingerprint}"
  }
}
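For completeness, the input section could read both files with a single file input by passing an array of paths (the paths here are placeholders you would replace with your own):

```
input {
  file {
    # Hypothetical paths; point these at your two CSV files
    path => [ "/path/to/first.csv", "/path/to/second.csv" ]
    start_position => "beginning"
  }
}
```

Since each event carries its own `path`, the grok filter extracts a different `filename` per source file, and the single elasticsearch output routes the documents to two different indices automatically.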