Hi ELK team,
I have successfully installed Logstash on RHEL, but the issue is that when we run "sudo initctl start logstash", it does not read the config files placed under the config.d folder. Alternatively, how do we run the "logstash -f simple.conf" command on RHEL to start Logstash and run a config file at the same time?
Thanks in advance. Any reference on this would be a great help!
What config.d directory are you talking about, /etc/logstash/conf.d? With what arguments is Logstash started (check with ps aux | grep logstash)? What's in logstash.yml?
Hi @magnusbaeck
I tried placing the config file under the /etc/logstash/conf.d directory
and changed the log level to debug.
Logstash is running fine, but it is not able to create an index from the server log input.
The following lines are from the Logstash log file:
Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x3c05e288@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 sleep>"}
_globbed_files: /home/nakn/logfile/server.log: glob is: []
We are trying to send server log data from the file system and error log data from a database, so we placed config files in "/etc/logstash/conf.d/". For the file system we use the file input plugin, and for the database we use the jdbc input plugin. When we start Logstash with "sudo /usr/bin/systemctl start logstash" it works, but the data in the created indexes is not correct: the config file that should read from the database is instead reading from the file system paths specified in the other config files.
Why is it behaving like that? Do we need to make any changes in logstash.yml or pipelines.yml for our scenario to work properly?
All configuration files in /etc/logstash/conf.d will be concatenated. If you want to have isolated event streams you need to use conditionals or switch to using multiple pipelines.
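To illustrate the conditionals approach: since every file in conf.d is concatenated into one pipeline, each input can be tagged so the outputs can tell the event streams apart. A minimal sketch (the paths, tag names, and index names here are hypothetical placeholders, not your actual settings):

```conf
input {
  file {
    path => "/home/nakn/logfile/server.log"
    tags => ["serverlog"]          # mark events from the file input
  }
  jdbc {
    # ... your jdbc connection and statement settings ...
    tags => ["errorlog"]           # mark events from the database input
  }
}

output {
  if "serverlog" in [tags] {
    elasticsearch { index => "serverlog-%{+YYYY.MM.dd}" }
  } else if "errorlog" in [tags] {
    elasticsearch { index => "errorlog-%{+YYYY.MM.dd}" }
  }
}
```

With this pattern you can keep everything in the single concatenated pipeline; the alternative is to split the inputs into separate pipelines entirely.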
This is an extremely frequently asked question, so please excuse my brevity.
So according to that documentation, we need to create different pipelines with unique IDs, and in the path we need to specify which config file to select, right?
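Right, that is the multiple-pipelines approach: each entry in /etc/logstash/pipelines.yml gets its own pipeline.id and its own path.config pointing at one config file, so the inputs never mix. A sketch (the ids and filenames below are hypothetical, substitute your own):

```yaml
# /etc/logstash/pipelines.yml
- pipeline.id: serverlog
  path.config: "/etc/logstash/conf.d/serverlog.conf"
- pipeline.id: errorlog
  path.config: "/etc/logstash/conf.d/errorlog.conf"
```

Note that once pipelines.yml lists explicit paths, only the files named there are loaded, so each pipeline sees only its own input, filter, and output blocks.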