The service_name, app_name, vnf_id, instance_id, and component_name fields are used in the Logstash configuration to build the output directory path.
The logstash.conf output section is as below:

output {
  if [service_name] == "ops" and [component_name] == "yz" {
    file {
      path => "/var/log/store/%{vnf_id}/%{instance_id}/%{service_name}/%{app_name}_%{+yyyy-MM-dd-HH}.log.gz"
      codec => line { format => "%{message}, %{vnf_id}, %{instance_id}, %{component_name}, %{app_name}" }
      gzip => true
    }
  }
}
Two log directories are created under /var/log/store/:

# cd /var/log/store/
# ls -l
%{vnf_id}/
ABC/

The ABC/ directory (the actual vnf_id value) only appears some time after Filebeat has started and been restarted. Can you please help us understand why the junk %{vnf_id}/ directory is being created?
Let me explain what we are doing as part of the process.
-> During system boot, filebeat.yml is copied with an empty vnf_id and other dummy placeholder parameters.
-> The filebeat service is disabled and masked at boot until configuration is completed.
-> Once the system is up, a script we have written takes the vnf_id, instance_id, and component_name as input, fills them into filebeat.yml, then unmasks and restarts the service.
-> During this process, Logstash is already active and waiting for input from Filebeat.
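As a sketch, the boot-time placeholder configuration might look something like this (the field names match those above, but the exact layout of your filebeat.yml and the port are assumptions):

```yaml
# filebeat.yml fragment as copied at boot: fields are still empty dummies
fields:
  vnf_id: ""            # filled in later by the configuration script
  instance_id: ""
  component_name: ""
fields_under_root: true  # expose the fields at the top level of each event

output.logstash:
  hosts: ["LOGSTASH_IP:5044"]   # real Logstash IP is substituted later
```

Any events shipped while the fields are still empty would arrive at Logstash without a usable vnf_id.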
Is this parameter not being sent properly from Filebeat, so that Logstash writes to the junk directory?
If yes, how do we handle this issue? Some of the logs are going to the junk directory and the rest to the actual one, which makes it difficult to integrate the logs.
When vnf_id is not set on an event that Logstash processes, where do you want the logs to go? The same question applies to the other attributes that are part of the path you're specifying (e.g., instance_id, service_name, app_name).
So it is Filebeat that is sending events without vnf_id filled in. Can you please suggest how we can overcome this issue?
As I mentioned before, Filebeat starts with a default configuration in which vnf_id is empty and the Logstash server IP is not yet configured.
Later during system boot we configure the proper Logstash IP in Filebeat, fill vnf_id with a value such as "ONEDS-1", and restart the filebeat service.
If you're looking to figure out how to handle events with missing fields in Logstash (perhaps by specifying a default for the field if it is missing using the Mutate Filter Plugin's coerce directive), you would add something like this to your filters for each field:
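For example (a sketch; "UNKNOWN" is an illustrative default, not a value from your configuration):

```
filter {
  mutate {
    # coerce assigns a default value when the field exists but is null
    coerce => { "vnf_id" => "UNKNOWN" }
  }
}
```

Note that coerce only applies when the field exists with a null value; if the field can be missing from the event entirely, a conditional with add_field (e.g., `if ![vnf_id] { mutate { add_field => { "vnf_id" => "UNKNOWN" } } }`) is the safer pattern.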
If, however, you're looking to prevent Filebeat from sending events until it has been configured, that question may be out of scope for the Logstash forums and would better be addressed in the Filebeat forum.
In principle, I would suggest looking at your service manager configuration for filebeat (e.g., systemd, upstart, initd) and changing the conditions on which it starts, so that it does not start at system startup; instead, have your configuration helper start the service after it has been fully configured.
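With systemd, a minimal sketch of that idea could be a drop-in like the following (the drop-in path and the flag file name are assumptions, not part of any existing setup):

```ini
# /etc/systemd/system/filebeat.service.d/override.conf (hypothetical drop-in)
[Unit]
# Only allow filebeat to start once the configuration helper
# has written the real vnf_id/instance_id values and created this flag file
ConditionPathExists=/etc/filebeat/.configured
```

The configuration helper would then touch /etc/filebeat/.configured after filling in filebeat.yml and run `systemctl start filebeat`, guaranteeing no events are shipped with empty fields.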