I have my ES cluster on 3 nodes (one node per VM). I have enabled the slow logs on all nodes, and they are written to the path /var/es/slowlogs.log. What I did is write a Logstash script to parse the log entries that take > 15 ms.
Scenario:
I need to run the Logstash script as a cron job so that it gives me all the logs at a prescribed time. For that I installed Logstash on NODE1 and set the input path to the local slowlog file.
What I want is: how can I get all the slow logs present on all the VMs in one shot, instead of running the same script on every VM? I.e., if I run the script on node 1, it should collect the parsed data from all three nodes, something like path => "/var/es/all node logs.log", so that I get all the parsed data from all the logs on all VMs.
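For reference, a minimal sketch of such a slowlog pipeline, assuming the slowlog path from above. The grok pattern and the field name took_ms are illustrative assumptions and need adjusting to your actual slowlog line format:

```
input {
  file {
    path => "/var/es/slowlogs.log"
  }
}

filter {
  # Illustrative pattern: pull the query time out of a slowlog line
  # containing something like "took[23.4ms]". Adjust to your format.
  grok {
    match => { "message" => "took\[%{NUMBER:took_ms:float}ms\]" }
  }
  # Keep only entries slower than 15 ms.
  if [took_ms] and [took_ms] <= 15 {
    drop { }
  }
}

output {
  stdout { codec => rubydebug }
}
```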
I need to run the Logstash script as a cron job so that it gives me all the logs at a prescribed time.
I suggest you run Logstash all the time and process the logs continuously. You'll get better results and it's easier to set up.
How can I get all the slow logs present on all the VMs in one shot, instead of running the same script on every VM?
Logstash's file input can only read files on locally mounted file systems, i.e. you'd have to make the log files on the other machines available via e.g. NFS. That isn't recommended. I suggest you either
use Filebeat to ship the log files to a single Logstash server, or
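A sketch of the Filebeat approach. The hostname and port are assumptions, and newer Filebeat versions use the filestream input type instead of log:

```
# filebeat.yml on each ES node (sketch)
filebeat.inputs:
  - type: log
    paths:
      - /var/es/slowlogs.log

# Ship to the single Logstash instance on NODE1 (host/port assumed)
output.logstash:
  hosts: ["node1:5044"]
```

The Logstash side then listens with a beats input instead of a file input:

```
input {
  beats {
    port => 5044
  }
}
```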
Thanks @magnusbaeck!
[quote="magnusbaeck, post:2, topic:85641"]
I suggest you run Logstash all the time and process the logs continuously
[/quote]
How can I run Logstash all the time?
Problem:
If I run the Logstash script once, it produces a .sincedb* file; if I run it again, I have to delete the .sincedb* file first, and only then does it work correctly.
Once Filebeat is configured properly and enabled, it will automatically transfer the logs to the single Logstash server, right? Or do I have to run it manually?
If I run the Logstash script once, it produces a .sincedb* file; if I run it again, I have to delete the .sincedb* file first, and only then does it work correctly.
That is only necessary if you want to process the exact same file again, which should be a highly unusual use case.
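If you really do need to re-read the same file on every run, the file input lets you point sincedb at a throwaway location instead of deleting it by hand. A sketch, only appropriate for one-off reprocessing rather than continuous tailing:

```
input {
  file {
    path => "/var/es/slowlogs.log"
    # Read from the top and discard read-position state between runs,
    # so each invocation reprocesses the whole file.
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
```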
Once Filebeat is configured properly and enabled, it will automatically transfer the logs to the single Logstash server, right?
But this happens every time I run the same config file again.
How can I run Logstash all the time? Normally we run the config file manually each time, like this: logstash -f config.conf
But this happens every time I run the same config file again.
You mean Logstash is processing the same file over and over again? If so, it sounds like Logstash is having problems saving the sincedb file, or the file you're reading is being updated in an unusual way. Are you modifying the source log file in any way between runs?
How can I run Logstash all the time?
Run it as a service. This is described in the documentation.
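On Linux, the official Logstash packages already ship a service definition, so usually enabling it is enough (sudo systemctl enable --now logstash). A minimal hand-rolled systemd unit would look roughly like this; the paths are assumptions based on a typical package install:

```
# /etc/systemd/system/logstash.service (sketch)
[Unit]
Description=Logstash
After=network.target

[Service]
ExecStart=/usr/share/logstash/bin/logstash --path.settings /etc/logstash
Restart=always
User=logstash

[Install]
WantedBy=multi-user.target
```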
I have a Logstash config file. When I run it for the first time, it works fine and produces a sincedb file. If I run the same config file again, it shows no output and just sits there with a blinking cursor, but if I delete the sincedb file and run it again, it works fine.
Do I have to delete the sincedb file every time I run the same Logstash config file?
The documentation doesn't mention how to run it as a service on Windows.
I can help you run it as a service on Windows, as that's what I've done:
It says to use PowerShell to run nssm, but you can simply launch it from the command line: nssm install MyServiceName and then configure it.
Here's a small recap of what I did on my PC:
* Open a shell
* Go to where nssm.exe is
* Type: nssm install Logstash
* A GUI will appear:
  * On the Application tab:
    * Path: C:\elk\logstash\bin\logstash.bat (in my case)
    * Startup directory: C:\elk\logstash\bin
    * Arguments: -f c:\elk\logstash\config\logstash.conf -r
      * If you have multiple .conf files, you can use *.conf to include them all
      * -r makes Logstash watch the conf files for changes, so restarting it to apply them is no longer necessary
  * On the Details tab:
    * Display name: Logstash
    * Description: Logstash Service
    * Startup type: Automatic
  * On the Dependencies tab (optional):
    * Set: elasticsearch-service-x64 (or whatever the Elasticsearch service name is)
* You can now click Install Service
* If you need to modify the service later, type in a shell: nssm modify Logstash
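The GUI steps above can also be scripted with nssm's set command, which is handy for repeatable setups. This uses the same paths as the recap; adjust them to your install:

```
nssm install Logstash C:\elk\logstash\bin\logstash.bat
nssm set Logstash AppDirectory C:\elk\logstash\bin
nssm set Logstash AppParameters "-f c:\elk\logstash\config\logstash.conf -r"
nssm set Logstash DisplayName Logstash
nssm set Logstash Description "Logstash Service"
nssm set Logstash Start SERVICE_AUTO_START
nssm set Logstash DependOnService elasticsearch-service-x64
```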