I have configured a syslog server and the ELK stack on a single Windows machine. I'm receiving logs via port 514 without issues, and the syslog server creates a new text file every day.
Logstash's output config creates a new index every day, named by date. There is another config file that checks for Palo Alto logs, parses the fields as CSV, and sends them to Elasticsearch as 4 indices (traffic, threat, system, config).
New Logstash indices have not been created for the last 2 days, as per the Logstash config file. The indices from the Palo Alto config file are not growing either.
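For reference, the daily-index output section looks something like this (the host and index name are illustrative, not my exact values):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # one index per calendar day, e.g. syslog-2019.06.14
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
```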
The following is the output from the Logstash logs:
[logstash.inputs.syslog ] syslog listener died {:protocol=>:udp, :address=>"0.0.0.0:514", :exception=>#<Errno::EADDRINUSE: Address already in use - bind - Address already in use: bind>
Since the syslogd service was also listening on 514, I changed the input port for Logstash to 1514, but the issue persists.
I did change the port, after which around 1 MB of data got sent, but it has been pretty much stuck since then. It is not generating new logs either, yet the Logstash service status shows it as running.
I should also mention that I have not made any changes to the configuration files (Logstash and Elasticsearch) since my initial configuration. Even so, indices were not being created daily as my log files were; it was erratic, with an index created only once every 2 or 3 days.
Please advise me on what I could do to troubleshoot this further.
This is the last snippet from the latest log. Still, new indices are not being created in Elasticsearch (a new syslog file has been pushed since yesterday), and the existing indices are not growing.
Where exactly should I check this output? On the command line or in a config file? I have a Windows-based setup, so please let me know where exactly I should run the suggested command.
On the same server, there is a syslog server listening and collecting the logs. There has been no problem receiving the logs; they arrive every day, continuously, without anything missing.
Well, on the same server, both syslog and Logstash are listening for the same data (syslog from the firewall). Both were working fine on port 514 for a week, after which I started seeing the following error in the Logstash logs and changed its port to 1514:
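(In case it helps anyone else: on Windows you can check which process is holding the port from a Command Prompt; the PID below is just an example of what you would look up.)

```
netstat -ano | findstr ":514"
REM note the PID in the last column, then identify the process:
tasklist /FI "PID eq 1234"
```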
[logstash.inputs.syslog ] syslog listener died {:protocol=>:udp, :address=>"0.0.0.0:514", :exception=>#<Errno::EADDRINUSE: Address already in use - bind - Address already in use: bind>
You cannot have two programs listening on the same port. If syslog listens on 514 then Logstash has to listen on some other port. 1514 is fine, but that requires that something actually be sending data to that port.
You can probably configure the syslog server that listens on 514 to forward a copy of every message it receives to port 1514, but how to do that is a syslog question, not a Logstash question.
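For example, a minimal syslog input on the new port would be a sketch like this:

```
input {
  syslog {
    # must not collide with the syslog daemon already bound to 514
    port => 1514
  }
}
```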
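As an illustration only: if the syslog server were rsyslog (on Windows your product may differ, so check its own forwarding options), duplicating every message to the new port would look like:

```
# rsyslog.conf -- forward a copy of all messages to localhost:1514 over UDP
*.* @127.0.0.1:1514
```

(A single `@` means UDP; `@@` would forward over TCP.)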
Is there a way to configure Logstash to keep reading a folder on the same server, i.e. the folder where my syslog files get added? Can you help me with a sample input for that, because I'm not sure which plugin to use.
For the first 10 days that Logstash was working, it was listening on the same port 514 as syslog. How was that possible?
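A minimal sketch using the file input plugin, assuming the syslog files land under C:\syslog\ (the path, file pattern, and sincedb location are placeholders for your actual setup):

```
input {
  file {
    # forward slashes work for Windows paths in Logstash
    path => "C:/syslog/*.txt"
    # read existing files from the start on first discovery
    start_position => "beginning"
    # sincedb tracks the read position so files are not re-read on restart
    sincedb_path => "C:/logstash/sincedb"
  }
}
```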