For the past few weeks I've been setting up logging for my local network infrastructure. I have all the pieces working, but I would like to move from the tarball installs to the Docker images to ease upgrading and maintenance. I'm currently forwarding pfSense logs to the ELK stack.
Input Configuration:
input {
  syslog {
    port => 5514
    grok_pattern => "<%{POSINT:priority}>%{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:message}"
  }
}
Output Configuration:
output {
  rabbitmq {
    host => ["rabbitmq.localdomain"]
    exchange => "logging_syslog_input"
    exchange_type => "topic"
    persistent => false
    user => "ingest"
    password => "abc123"
  }
}
On the bare-metal Logstash instances, the above input listens and forwards messages to RabbitMQ without problems.
Since switching to the Docker instance, the syslog input no longer receives anything from the syslog clients sending messages to Logstash. However, I can open a telnet connection and send messages that make it through all the pipelines in Logstash.
Further updates:
I set up a second Logstash container with the above configuration on my laptop and started testing against it with a Logstash instance using the configuration below, to see if I could get messages through:
input {
  generator {
    count => 1500
  }
}
output {
  syslog {
    host => "<hostname>"
    port => 5514
  }
}
Here <hostname> has been, in turn: the existing pipeline's IP/hostname (worked just fine), localhost, and the IP of my laptop. When set to localhost or my laptop's IP, no messages get through, even though I can still telnet to localhost port 5514 and send messages that way.
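One way to separate the two transports from the shell (assuming netcat is installed; the port, host, and priority value here are just examples) is to hand-craft a syslog line and send it over UDP, which telnet never exercises:

```shell
# A minimal RFC 3164-style syslog line: <PRI>TIMESTAMP HOST TAG: MSG
MSG='<14>Oct 11 22:14:15 laptop test: hello via UDP'

# UDP is the default transport for most syslog senders; this is
# fire-and-forget, so it reports no error even if nothing is listening.
echo "$MSG" | nc -u -w1 127.0.0.1 5514
```

If a telnet (TCP) session reaches the pipeline but this UDP send does not, the problem is on the UDP path rather than in the Logstash configuration.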
Container settings:
logstash.yml:
config.reload.automatic: true
http.host: "0.0.0.0"
http.port: "9600"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: [ "http://<es_hostname>:9200" ]
jvm.options:
## JVM configuration
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms512m
-Xmx512m
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## Locale
# Set the locale language
#-Duser.language=en
# Set the locale country
#-Duser.country=US
# Set the locale variant, if any
#-Duser.variant=
## basic
# set the I/O temp directory
#-Djava.io.tmpdir=$HOME
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
#-Djna.nosys=true
# Turn on JRuby invokedynamic
-Djruby.compile.invokedynamic=true
# Force Compilation
-Djruby.jit.threshold=0
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof
## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}
# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom
pipelines.yml:
- pipeline.id: syslog_listener
  path.config: "/usr/share/logstash/pipeline/syslog_listener/"
Docker run command:
docker run -v /ELK/listener/:/usr/share/logstash/config/ -v /ELK/logstash_pipelines/:/usr/share/logstash/pipeline/ -p 5514:5514 -p 5044:5044 -p 9600:9600 docker.elastic.co/logstash/logstash:6.4.2
Final update:
It turns out that syslog sends messages via UDP by default, and I wasn't publishing the UDP port on the container.
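For anyone hitting the same wall: Docker's `-p` flag maps only TCP unless `/udp` is appended, so the port has to be published twice. A sketch of the corrected run command, assuming the same volume paths and image tag as above:

```shell
# Publish 5514 for both TCP and UDP; without the /udp mapping,
# UDP syslog datagrams never reach the container.
docker run \
  -v /ELK/listener/:/usr/share/logstash/config/ \
  -v /ELK/logstash_pipelines/:/usr/share/logstash/pipeline/ \
  -p 5514:5514 \
  -p 5514:5514/udp \
  -p 5044:5044 \
  -p 9600:9600 \
  docker.elastic.co/logstash/logstash:6.4.2
```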