Connection Refused

Hi all,

I need to create a solution that amends syslog messages so that they are RFC3164 compliant and ultimately sends them on to a QRadar instance at a customer's site.

To test this I have created two Linux VMs, both of which can ping each other. I'm using Filebeat with Logstash on both, and each has the logstash user created and added to the logstash group.

The Sender server [192.168.1.2] grabs a log file which has aggregated logs from switches and servers and should send them on to the receiver [192.168.1.3].

My Sender server Filebeat config:

filebeat.inputs:
- type: filestream
  id: filestream-id
  paths:
    - /tmp/logserverlog/logfilelogac.log

output.logstash:
  hosts: ["192.168.1.2:5044"]

My Sender server Logstash pipeline config:

input {
  beats {
    port => "5044"
    type => "syslog"
    ssl => false
  }
}

output {
  syslog {
    host => "192.168.1.3"
    port => 5044
    protocol => "tcp"
    rfc => "rfc3164"
  }
}


My Receiver server's [192.168.1.3] Filebeat config:

filebeat.inputs:

- type: syslog
  format: rfc3164
  protocol.tcp:
  host: "192.168.1.2:5044"

#- type: tcp
#  format: rfc3164
#  log_errors: true
#  add_error_key: true
#  max_message_size: 90MiB
#  host: "192.168.1.2:5044"

output.logstash:
  hosts: ["localhost:5045"]

My Receiver server's Logstash pipeline:

input {
  beats {
    port => "5045"
    type => "syslog"
    ssl => false
  }
}

output {
  file {
    path => "/tmp/testfile.log"
  }
}

I am getting this ECONNREFUSED (Connection refused) error and was wondering if you could offer some advice on what I am missing, please:

[2022-09-01T08:32:42,596][WARN ][logstash.outputs.syslog  ][main][f8c5578f97a0c19fc8375501f2f65281904c8955e4c362fdcc98018a7cd3059d] syslog tcp output exception: closing, reconnecting and resending event {:host=>"192.168.1.3", :port=>5044, :exception=>#**<Errno::ECONNREFUSED: Connection refused - connect(2) for "192.168.1.3" port 5044>**, :backtrace=>["org/jruby/ext/socket/RubyTCPSocket.java:134:in `initialize'", "org/jruby/RubyIO.java:876:in `new'", "/usr/share/logstash/vendor/local_gems/54adea01/logstash-output-syslog-3.0.5/lib/logstash/outputs/syslog.rb:209:in `connect'", "/usr/share/logstash/vendor/local_gems/54adea01/logstash-output-syslog-3.0.5/lib/logstash/outputs/syslog.rb:177:in `publish'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-codec-plain-3.1.0/lib/logstash/codecs/plain.rb:59:in `encode'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:48:in `block in encode'", "org/logstash/instrument/metrics/AbstractSimpleMetricExt.java:65:in `time'", "org/logstash/instrument/metrics/AbstractNamespacedMetricExt.java:64:in `time'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:47:in `encode'", "/usr/share/logstash/vendor/local_gems/54adea01/logstash-output-syslog-3.0.5/lib/logstash/outputs/syslog.rb:147:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:105:in `block in multi_receive'", "org/jruby/RubyArray.java:1821:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:105:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:143:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:300:in `block in start_workers'"], :event=>#<LogStash::Event:0x1195d12d>}

Hello,

From your Logstash server, what's the output of:

echo > /dev/tcp/192.168.1.3/5044 && echo "OK"
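
If that fails, it means nothing is listening on port 5044 on 192.168.1.3, which would explain the ECONNREFUSED. As a rough sketch (assuming plain TCP with no TLS, and not necessarily your final setup), a receiver-side Logstash pipeline that would accept the sender's syslog output on that port could be as simple as:

input {
  tcp {
    # listen on the port the sender's syslog output targets
    port => 5044
  }
}

output {
  file {
    path => "/tmp/testfile.log"
  }
}

Note that a beats input speaks the Beats/Lumberjack protocol, so pointing a raw syslog TCP stream at it will typically fail with framing errors rather than accept the data.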

Many thanks for replying. I realised that I didn't really need Filebeat on the receiving server and managed to get a connection working. So of course, the answer to your question is "OK", as the connection is now working.

Thank you for your help.

I'm now hitting an "invalid frame type" error on the receiving server, but slowly I'm getting there.

Actually, a better question would be: do I need the other instance of Filebeat?

I'm thinking the flow should go Filebeat -> Logstash -> ........... -> Filebeat -> Logstash.

Am I right?

Hi,

If I understand this correctly, you have a syslog server which receives logs from a variety of sources, and you want them to be RFC3164 compliant in order to feed an external QRadar?

Hi Grumo,

Yes, that is correct. As I do not have access to the QRadar server, I want to use my second (receiving) server as a stand-in.

My intention is to take a log file, alter it so that it is all RFC3164 compliant and have it sent to the receiving server.

I may be over-complicating my process.

You might be overcomplicating it.

I don't understand what you mean by a stand-in server?

Hi Grumo,

Yes, I was overcomplicating it. I had created a server to receive the messages so that I could see what was being sent and compare it to what was arriving. This was the stand-in server, which was meant to represent the QRadar that I have no access to.

As it happens, I didn't need the receiving server to have Filebeat running. I just configured the pipeline.conf file to receive via TCP, like so:

input {
  tcp {
    port => "5047"
  }
}

This worked a treat and my receiving server started picking up the logs from my sending server.
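
For reference, on the sender side the existing syslog output just needs to point at the same port, along these lines:

output {
  syslog {
    host => "192.168.1.3"
    # port updated to match the receiver's tcp input
    port => 5047
    protocol => "tcp"
    rfc => "rfc3164"
  }
}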

Thank you for your help Grumo. Now to try to decipher the mess of different logs and convert them all to RFC3164. If you have any advice on that I'd be grateful.

Altogether, my aggregated log file is a mix of 8 or 9 different log formats.


Hey!

Well, I have a few questions to begin with, because I don't understand why you're having this problem.

As I understand it, you're having a hard time trying to sort out a log file on a log server.

-> How do you collect & aggregate those logs into one file?

-> Would you be able to filter the logs depending on their source and then handle each of them separately?

To make things easier, you should try to collect logs with syslog and filter them into separate log files:

  • server.log (a special random format)
  • switch.log (another special random format)

Now you want to process them, either using Filebeat and Logstash, or just a Logstash instance that picks up the file on the same server (I need to understand your architecture in order to give the best advice).

But you could listen on different ports for servers or switches from Logstash, or parse the name of the file Filebeat picked up and then know whether it's from /var/log/server.log or /var/log/switch.log, as sketched below.
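
For instance (just a rough sketch, assuming Filebeat's default [log][file][path] field and hypothetical file names), you could tag each event by the file it came from and branch on that tag later:

filter {
  # hypothetical routing: tag each event by the file Filebeat read it from
  if [log][file][path] == "/var/log/server.log" {
    mutate { add_tag => ["server"] }
  } else if [log][file][path] == "/var/log/switch.log" {
    mutate { add_tag => ["switch"] }
  }
}

Each tag can then get its own grok patterns and, if needed, its own output.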

You have a lot of possible solutions; I just need to know whether you have spare compute power on the server that picks up the logs, or whether you have any network constraints.

I've given you some ideas here, but it would be helpful to see the overall idea of your project.

Thanks Grumo,

I've come in late to this project, so much of the architecture cannot be changed easily, and I'm trying to put something in place that can fix this problem. The problem is this:

A customer is complaining that some of the log entries appear on QRadar to have come from the log server and not the original sources. I can see that Wazuh appears to be adding its own header to each line, in much the same way as I've discovered Logstash does.

The setup is like so:

There is one server that is being used to aggregate syslogs and other logs from lots of different servers, switches and firewalls.

These are being aggregated by Wazuh and OSSEC into one log file, so each line in the log file is different. The customer is using QRadar, which I do not have access to, nor do I have the ability to create a testbed with QRadar. I have been told that the customer wants everything in RFC3164 format.

I could not work out an easy way to resolve this problem directly in Wazuh, but after asking around, I was told Logstash can do it by means of grok etc. So what I propose is that I put Logstash in between the Wazuh server and the QRadar server, so it can 'fix' each log line.

Some examples of how the log lines look:


2022 Sep 02 11:24:38 FIREWALL->xxx.xxx.xx.x Sep  2 11:24:38 FIREWALL 1,2022/09/02 11:24:38, MESSAGE INFORMATION

2022 Sep 02 11:24:36 LOGSERVER->xxx.xxx.xx.x 2022-09-02T11:24:36Z SERVER2 process[pid]: MESSAGE INFORMATION  **These are the ones specifically that appear like they're coming from the Logserver**

2022 Sep 02 11:24:36 (FTPServer) xxx.xxx.xx.x->/var/log/syslog Sep  2 11:24:35 FTPServer process[pid]: MESSAGE INFORMATION

So you can see, I have my work cut out for me.

Oh you're in a very specific use case.

Try to break the problem into smaller operations and find ways of doing them via the Logstash / Elastic ecosystem.
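
For example (a rough, untested sketch): in your first two samples the aggregator prefix looks like "YYYY Mon DD HH:MM:SS location->ip ", so one small operation could be a grok that strips that prefix and keeps only the original line; wazuh_location and original_message are just placeholder field names.

filter {
  # strip the aggregator prefix, e.g. "2022 Sep 02 11:24:38 FIREWALL->xxx.xxx.xx.x "
  grok {
    match => {
      "message" => "^%{YEAR} %{MONTH} %{MONTHDAY} %{TIME} %{NOTSPACE:wazuh_location} %{GREEDYDATA:original_message}"
    }
    tag_on_failure => ["_no_wazuh_prefix"]
  }
  if "_no_wazuh_prefix" not in [tags] {
    mutate { replace => { "message" => "%{original_message}" } }
  }
}

The (FTPServer) sample has a space inside the location field, so it would need its own pattern; expect roughly one grok per source format. The syslog output can then rebuild the RFC3164 header from event fields such as sourcehost and appname, which is where the per-format parsing pays off.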

Remember to keep it simple :wink:

Good luck
