Logstash not pushing logs to Elasticsearch

I have configured a syslog server and the ELK stack on a single Windows machine. I'm receiving logs on port 514 without issues, and the syslog server creates a new text file every day.

The Logstash output configuration creates a new index each day, named by date. A second config file watches for Palo Alto logs, parses the fields as CSV, and sends them to Elasticsearch as four indices (traffic, threat, system, config).

No new Logstash indices have been created for the last two days, and the indices from the Palo Alto config file have stopped growing as well.

The following is from the Logstash logs.
[logstash.inputs.syslog ] syslog listener died {:protocol=>:udp, :address=>"", :exception=>#<Errno::EADDRINUSE: Address already in use - bind - Address already in use: bind>

Since the syslogd service was also listening on 514, I changed the Logstash input port to 1514, but the issue persists.

If you are still getting a message saying it cannot bind to port 514 then you did not change the input port.

I did change the port, after which around 1 MB of data was sent. But it has been pretty much stuck since then, and it is not generating new logs either, even though the Logstash service status is running.
I should also mention that I have made no changes to the configuration files (Logstash and Elasticsearch) since my initial setup. Even so, indices were not being created daily as my log files were; it was erratic, with an index created only once every two or three days.
Please advise me as to what I could do further to troubleshoot this.

What do you see in the logstash log?

This is the last snippet from the latest log. New indices are still not being created in Elasticsearch (a new syslog file has been written since yesterday), and the existing indices are not growing.

[2019-07-17T12:47:23,851][INFO ][logstash.inputs.syslog ] Starting syslog udp listener {:address=>""}
[2019-07-17T12:47:23,773][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>""}
[2019-07-17T12:47:23,851][INFO ][logstash.inputs.syslog ] Starting syslog tcp listener {:address=>""}
[2019-07-17T12:47:24,226][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"", :receive_buffer_bytes=>"65536", :queue_size=>"2000"}
[2019-07-17T12:47:24,710][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-07-17T12:47:24,726][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>}
[2019-07-17T12:47:24,960][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

What does the complete configuration look like? Do you have monitoring enabled? If so, do you see events flowing to the elasticsearch output?

You could add a stdout output and see if any events flow to it.

output { stdout { codec => rubydebug } }

Where exactly should I check this output? On the command line or in a config file? I have a Windows-based setup, so please let me know exactly where I should run the suggested command.

Also, how do I enable monitoring?

By monitoring I am referring to X-Pack monitoring.
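If you want to try it, monitoring is switched on in logstash.yml. A minimal sketch, assuming Elasticsearch runs on the same machine on the default port (adjust the URL if not):

    # logstash.yml -- the Elasticsearch URL below is an assumption
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]

After restarting Logstash, the Monitoring page in Kibana should show event rates for the pipeline, which tells you whether events are reaching the output at all.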

I do not run logstash on Windows so I am not sure where the stdout of a Windows service is written to (I assume you are running as a service).
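One way around that is to stop the Windows service temporarily and run Logstash in the foreground from a console, where stdout is simply the console window. A sketch, where both the install directory and the config file name are assumptions you would adapt:

    cd "C:\logstash"
    bin\logstash.bat -f config\your-pipeline.conf

With the rubydebug stdout output added to that pipeline, incoming events should print directly in the console.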

This is the latest log from Logstash. Please help me, as I'm still seeing no progress (no new indices created, and existing indices not growing).

Any help would be much appreciated.

[2019-07-22T16:09:44,748][INFO ][logstash.inputs.syslog ] Starting syslog udp listener {:address=>""}
[2019-07-22T16:09:44,763][INFO ][logstash.inputs.syslog ] Starting syslog tcp listener {:address=>""}
[2019-07-22T16:09:44,810][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>""}
[2019-07-22T16:09:44,842][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-07-22T16:09:44,826][INFO ][logstash.inputs.http ] Starting http input listener {:address=>"", :ssl=>"false"}
[2019-07-22T16:09:44,826][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-07-22T16:09:44,810][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>}
[2019-07-22T16:09:44,810][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"", :ssl_enable=>"false"}
[2019-07-22T16:09:44,904][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"", :receive_buffer_bytes=>"65536", :queue_size=>"2000"}
[2019-07-22T16:09:45,658][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

The following is the shard output for the indices created so far; the primary shards have started, but the replicas are unassigned.


index               shard prirep state      docs store ip node
logstash-2019.07.13 0     p      STARTED    1001 1.4mb    ELKTEST
logstash-2019.07.13 0     r      UNASSIGNED

What is sending data to logstash?

The firewall is sending syslog messages to Logstash.

On the same server, there is a syslog server listening and collecting the logs. It has received the logs every day continuously without missing any.

So the firewall is writing to syslog on port 514? What is sending messages to port 1514?

Well, on the same server, both syslog and Logstash were listening for the same data (syslog from the firewall). Both worked fine on port 514 for a week, after which I started seeing the following error in the Logstash logs and changed its port to 1514.

[logstash.inputs.syslog ] syslog listener died {:protocol=>:udp, :address=>"", :exception=>#<Errno::EADDRINUSE: Address already in use - bind - Address already in use: bind>

You cannot have two programs listening on the same port. If syslog listens on 514 then logstash would have to listen on some other port. Now 1514 is OK, but that requires that something be writing data to that port.

You can probably configure your syslog that listens on 514 to forward a copy of every message it receives to port 1514, but how to do that is a syslog question, not a logstash question.
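For illustration only, this is rsyslog syntax, not necessarily what your Windows syslog server uses (its forwarding option is likely a checkbox in its settings UI). Duplicating every message to UDP port 1514 on the same host would look roughly like:

    # rsyslog.conf sketch -- a single @ means forward over UDP
    *.* @127.0.0.1:1514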


Thanks, that is helpful!

I have 2 queries.

  1. Is there a way to configure Logstash to keep reading a folder on the same server, i.e. the folder where my syslog files are written? Could you help me with a sample input for that, because I'm not sure which plugin to use.
  2. For the first 10 days that Logstash was working, it was listening on the same port 514 as syslog. How was that possible?

You could use a file input to read whatever files syslog is logging to.
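A minimal sketch of such a file input; the path, and whether you want a wildcard for the daily files, are assumptions you would adapt to your setup (note the forward slashes, which the file input expects even on Windows):

    input {
      file {
        path => "C:/Program Files (x86)/Syslogd/Logs/SyslogCatchAll-*.txt"
        start_position => "beginning"
        sincedb_path => "C:/logstash/sincedb"
      }
    }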

I cannot think of a way logstash and syslog could both have been listening.

Still no change. The config test returns OK, but no index has been created in Elasticsearch. This is the input part of my config:

        path => "C:\Program Files (x86)\Syslogd\Logs\SyslogCatchAll-2019-07-25.txt"
        ignore_older => 0

You cannot use backslashes in the path option of a file input on Windows. Change them to forward slashes.

ignore_older => 0 says to ignore any files more than zero seconds old, which means it ignores all files. Remove it.
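With both fixes, the path line would read something like this (the wildcard is a suggestion so that each day's new file is also picked up, rather than hard-coding a single date):

        path => "C:/Program Files (x86)/Syslogd/Logs/SyslogCatchAll-*.txt"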


Are you adding a tag in the file filter?