Hi, I want to extract the folder and file name out of the log.file.path field we have in Kibana. So, for example:
C:\Program Files (x86)\xxx SST\DispatcherAPP\logs\dispatcher.log
In this file path, I want to create a field that shows just DispatcherApp\dispatcher.log.
And I have multiple files under multiple prospectors in filebeat.yml, so the field I add needs to work for all the different folder and file names.
You can use a scripted field and Painless's substring method to achieve this, but I'd recommend doing it at ingest time instead, either in Filebeat, Logstash, or an Elasticsearch ingest pipeline processor, since then the work is done only once. If you do it with a scripted field, it will be re-evaluated every single time you query the data.
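For the ingest pipeline route, here's a minimal sketch using the grok and set processors. The pipeline name shorten-log-path and the field names appdir, file, and short_path are placeholders I've made up, and it assumes the log always sits in a logs subfolder of the app directory, as in your example (each backslash in the pattern is escaped twice: once for JSON, once for the regex):

```json
PUT _ingest/pipeline/shorten-log-path
{
  "description": "Sketch: derive appdir\\file.log from log.file.path",
  "processors": [
    {
      "grok": {
        "field": "log.file.path",
        "patterns": ["%{GREEDYDATA}\\\\%{DATA:appdir}\\\\logs\\\\%{DATA:file}\\.log"]
      }
    },
    {
      "set": {
        "field": "short_path",
        "value": "{{appdir}}\\{{file}}.log"
      }
    }
  ]
}
```

You'd then reference the pipeline from your index settings or from the Elasticsearch output in filebeat.yml.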
How can it be done from filebeat or logstash?
This is the guide on what Filebeat can do in regards to data processing:
https://www.elastic.co/guide/en/beats/filebeat/7.5/filtering-and-enhancing-data.html
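If you want to do it inside Filebeat itself, one option is the script processor. This is only a sketch: short_path is a field name I've made up, it assumes Windows backslash separators, and it takes the app folder (third segment from the end) plus the file name to match your DispatcherApp\dispatcher.log example, skipping the intermediate logs folder:

```yaml
processors:
  - script:
      lang: javascript
      id: shorten_log_path
      source: |
        function process(event) {
            var path = event.Get("log.file.path");
            if (!path) return;
            var parts = path.split("\\");
            // need at least app folder, logs folder, and file name
            if (parts.length < 3) return;
            // app folder + "\" + file name, skipping the logs folder in between
            event.Put("short_path", parts[parts.length - 3] + "\\" + parts[parts.length - 1]);
        }
```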
As for Logstash, look at the Grok and Dissect filters: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html and https://www.elastic.co/guide/en/logstash/current/plugins-filters-dissect.html
For more info on them, the forums for those two products will be of more help.
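For your exact example, a Logstash grok filter could look something like this sketch. The field names appdir, file, and short_path are made up, it assumes Filebeat ships the path as [log][file][path], and it assumes the log always sits in a logs subfolder of the app directory, as in your sample path:

```
filter {
  grok {
    # greedy prefix, then the app folder, the fixed "logs" folder, and the file name
    match => { "[log][file][path]" => "%{GREEDYDATA}\\%{DATA:appdir}\\logs\\%{DATA:file}\.log" }
    # add_field only runs when the match succeeds
    add_field => { "short_path" => "%{appdir}\%{file}.log" }
  }
}
```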
I have data being processed as the documentation describes. But I get an error when I write an if statement on the log file path to apply a grok filter. Something like this:
if [fields][tags] == "obapp-dotnet" {
  grok {
    break_on_match => false
    # Windows paths use backslashes, so match on \\ (not /); Filebeat 7.x ships
    # the path as [log][file][path], not [path]
    match => { "[log][file][path]" => "%{GREEDYDATA}\\%{GREEDYDATA:filename}\.log" }
  }
  # the first grok block has to be closed before the conditional, the comparison
  # operator is == (not =), and the field to test is the extracted [filename]
  # (which holds "dispatcher", without the extension)
  if [filename] == "dispatcher" {
    grok {
      match => {
        "message" => "%{DATESTAMP:timestamp}%{SPACE}%{NONNEGINT:code}%{GREEDYDATA}%{LOGLEVEL}%{SPACE}%{NONNEGINT:anum}%{SPACE}%{GREEDYDATA:logmessage}"
      }
    }
  } else {
    # match is only valid inside a filter, so the else branch needs its own grok block
    grok {
      match => {
        "message" => "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{GREEDYDATA}%{SPACE}%{LOGLEVEL:logLevel}%{SPACE}%{GREEDYDATA}%{SPACE}%{JAVACLASS:javaClass}"
      }
    }
  }
}