Can ELK meet my need: make the subdirectory name an index key?

I am researching Logstash+Elasticsearch+Kibana to make sure they meet my needs. Below is the usage scenario:
I want to use them to search my Hadoop application logs in a web UI. Since a Hadoop application can be executed on any machine, checking the application log is always cumbersome. A new application creates a directory under /home/log/hadoop/logs/userlogs/ on the machine it is running on, containing three log files: syslog, syserr, and sysout.
Currently, to check the log of an application, the user gets the applicationId (for example 123123123-223-1) and then goes to the directory /home/log/hadoop/logs/userlogs/123123123-223-1/ on each machine to check the log files. Of course, only the machine the application ran on contains its logs.

So, after deploying Logstash+Elasticsearch+Kibana, I wonder if a user can query the logs for a specific applicationId like this:
For example, the user inputs the applicationId 123123123-223-1 as the query condition, and Elasticsearch returns the log content for this application, that is, the content of the files under /home/log/hadoop/logs/userlogs/123123123-223-1, which may be located on any machine in my cluster. In other words, can the subdirectory name (the applicationId) be an index key, so that users can search just their own application logs?

Yes. You can use a grok filter to extract a portion of the file path and use it, e.g., in the name of the ES index or just as a field value in each event.
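As a minimal sketch, assuming the logs are collected with the file input and each event carries its source path in a `path` field (the field name can differ between Logstash versions), a pipeline along these lines could pull the applicationId out of the directory name. The `application_id` field name, index pattern, and `hosts` value are illustrative, not prescribed:

```
input {
  file {
    # Pick up all log files under every application directory
    path => "/home/log/hadoop/logs/userlogs/*/*"
    start_position => "beginning"
  }
}

filter {
  grok {
    # Extract the subdirectory name (the applicationId) from the file path
    match => { "path" => "/userlogs/%{DATA:application_id}/" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Option 1: put the applicationId in the index name ...
    index => "hadoop-logs-%{application_id}"
    # ... or option 2: keep a single index and let users filter on the
    # application_id field in Kibana instead.
  }
}
```

With either option, a user can then search on the applicationId in Kibana (for example, `application_id:123123123-223-1`) without knowing which machine the application ran on.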
