Morning, I am having an issue with my localhost server populating Kibana. I have been trying for two days now, stepping through the config files and double-checking everything. The localhost OS is Fedora 24. Would someone be able to help me out with this?
Thanks
T
Can you post your configuration file?
Which one, buddy?
T
The Logstash configuration file, the one you created to index your logs.
input {
  udp {
    host => "localhost"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}
# This is an empty filter block. You can later add other filters here to further process
# your log lines.
filter { }
# This output block will send all events of type "rsyslog" to Elasticsearch at the configured
# host and port, into daily indices of the pattern "rsyslog-YYYY.MM.DD".
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
}
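To sanity-check an input like the one above, you can hand-craft a JSON datagram and fire it at the UDP port Logstash is listening on, then look for the event downstream. A minimal sketch (the port 10514 matches the input block above; the event fields themselves are made up for illustration):

```python
import json
import socket

# Build a fake event as JSON, matching the codec => "json" setting of
# the UDP input above. All field names here are illustrative.
event = {
    "message": "test event from udp sender",
    "severity": "info",
    "program": "udp-test",
}
payload = json.dumps(event).encode("utf-8")

# Fire a single datagram at the Logstash UDP input (localhost:10514).
# UDP is connectionless, so this succeeds even if nothing is listening;
# the point is to then check whether Logstash indexed the event.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("localhost", 10514))
sock.close()

print(f"sent {len(payload)} bytes")
```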
These are the conf files in the /etc/logstash/conf.d/ directory:
02-beats-input.conf logstash.conf
10-syslog-filter.conf logstash.conf.json
30-elasticsearch-output.conf logstash.conf.old
beats-dashboards-1.1.0 logstash.conf.old.v1
filebeat-index-template.json out.conf
filter.conf
ls -alF
total 44
drwxrwxr-x. 3 root root 4096 Oct 27 08:57 ./
drwxr-xr-x. 3 root root 20 Oct 20 10:15 ../
-rw-r--r--. 1 logstash logstash 193 Oct 24 11:29 02-beats-input.conf
-rw-r--r--. 1 logstash logstash 456 Oct 24 11:12 10-syslog-filter.conf
-rw-r--r--. 1 logstash logstash 210 Oct 24 11:22 30-elasticsearch-output.conf
drwxr-xr-x. 5 root root 157 Jan 28 2016 beats-dashboards-1.1.0/
-rw-r--r--. 1 root root 991 Oct 27 06:36 filebeat-index-template.json
-rw-r--r--. 1 root root 160 Oct 27 07:49 filter.conf
-rw-r--r--. 1 logstash logstash 803 Oct 27 07:42 logstash.conf
-rw-r--r--. 1 logstash logstash 793 Oct 24 07:49 logstash.conf.json
-rw-r--r--. 1 root root 347 Oct 21 05:09 logstash.conf.old
-rw-r--r--. 1 root root 348 Oct 24 07:53 logstash.conf.old.v1
-rw-r--r--. 1 root root 186 Oct 27 08:57 out.conf
[root@localhost etc]# cat rsyslog.conf
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html

#### MODULES ####

# The imjournal module below is now used as a message source instead of imuxsock.
$ModLoad imjournal # provides access to the systemd journal
$ModLoad imklog # provides kernel logging support (previously done by rklogd)
$ModLoad immark # provides --MARK-- message capability

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

# By default, all system logs are read from journald through the
# imjournal module. To read messages from the syslog socket, the
# imuxsock module has to be loaded and a path to the socket specified.
$ModLoad imuxsock

# The default path to the syslog socket provided by journald:
$SystemLogSocketName /run/systemd/journal/syslog

#### GLOBAL DIRECTIVES ####

# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

# File to store the position in the journal
$IMJournalStateFile imjournal.state

# If there is no saved state yet, don't read in the whole bulk of messages.
# This means some of the older messages won't be collected by rsyslog,
# but it also prevents a potential huge spike in resource utilization.
$IMJournalIgnorePreviousMessages on

#### RULES ####

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages

# The authpriv file has restricted access.
authpriv.* /var/log/secure

# Log all the mail messages in one place.
mail.* -/var/log/maillog

# Log cron stuff
cron.* /var/log/cron

# Everybody gets emergency messages
*.emerg :omusrmsg:*

# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler

# Save boot messages also to boot.log
local7.* /var/log/boot.log

# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###

### Templates for Security Engineering
$template TmplAuth, "/var/log/rsyslog_custom/%HOSTNAME%/%PROGRAMNAME%.log"
$template TmplMsg, "/var/log/rsyslog_custom/%HOSTNAME%/%PROGRAMNAME%.log"
#authpriv.* ?TmplAuth
#*.info;mail.none;authpriv.none;cron.none ?TmplMsg
You posted a lot of things.
Your problem is that Logstash is not sending logs to Kibana. If that is the problem, let's figure out what the real cause is.
Add the line below to the output {} section in the Logstash configuration file:
stdout { codec => rubydebug }
Run the configuration file using /opt/logstash/bin -f /path/to/conf/file and watch the terminal to see whether the logs received on that UDP port are being indexed.
Secondly, if you are receiving logs and able to index them, then the problem may be in the Elasticsearch server, which would be why Kibana is not able to fetch logs.
Try the first step; if everything is OK, we'll figure out the second one.
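For the second half of that check, you can ask Elasticsearch directly over its REST API whether any indices have been created at all. A rough sketch using only the standard library (assumes Elasticsearch is on localhost:9200 as in the output block; it simply returns an empty list if the server is unreachable):

```python
import json
import urllib.request
import urllib.error

def list_indices(host="localhost", port=9200):
    """Ask Elasticsearch for its index names; return [] if unreachable."""
    # _cat/indices?format=json is a standard Elasticsearch cat API endpoint.
    url = f"http://{host}:{port}/_cat/indices?format=json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return [row["index"] for row in json.load(resp)]
    except (urllib.error.URLError, OSError, ValueError):
        return []

if __name__ == "__main__":
    indices = list_indices()
    if indices:
        print("indices:", ", ".join(indices))
    else:
        print("no indices found (or Elasticsearch unreachable)")
```

If Logstash is indexing correctly, you would expect to see daily index names here; if the list is empty while rubydebug shows events flowing, the problem is on the Elasticsearch side.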
Roger - thanks / trying it now
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
    stdout { codec => rubydebug }
  }
}
Should it look like this, sir?
/opt/logstash/bin -f /path/to/conf/file
bash: /opt/logstash/bin: Is a directory
trying to get the local logs to work. ..ugh
Output section is right... the command is /opt/logstash/bin/logstash -f /path/to/conf/file
Sorry, I left out "logstash". Try running with this command:
/opt/logstash/bin/logstash -f /path/to/conf/file
No config files found: /path/to/conf/file
Can you make sure this path is a logstash config file? {:level=>:error}
this is the error that was thrown
I think you wrote the input, filter, and output configuration in separate files. Do one thing: I saw that you want to parse syslogs, so create a single conf file with input, filter, and output together, say syslog.conf under /etc/logstash/conf.d/.
Then run the conf file with the command below:
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf
That is the command we use to run a Logstash config manually.
"No config files found": that is because you wrote separate configuration files for input, output, and filter.
Try the above method and let me know if there is any problem.
Example file: I did this to parse syslogs.
input {
  tcp {
    port => 514
    type => syslog
  }
  udp {
    port => 514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      target => "syslog_timestamp"
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
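To exercise a config like that one end to end, you can emit a line in the classic BSD-syslog shape at the UDP input and watch the rubydebug output for the parsed fields. A sketch that builds such a line and checks it against a hand-written simplification of the grok pattern above (the regex here is an illustrative stand-in, not the actual grok library; port 514 matches the example input):

```python
import re
import socket
import time

# Classic BSD-syslog body: "MMM dd HH:mm:ss hostname program[pid]: message"
line = time.strftime("%b %d %H:%M:%S") + " myhost testprog[123]: hello from test"

# Hand-written simplification of the grok pattern in the filter above.
SYSLOG_RE = re.compile(
    r"(?P<ts>\w{3} [ \d]\d \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<prog>[\w-]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)"
)

m = SYSLOG_RE.match(line)
print("parsed fields:", m.groupdict() if m else None)

# Send it to the UDP input from the example. Sending to port 514 needs
# no privileges; only *listening* on ports below 1024 requires root.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(line.encode("utf-8"), ("localhost", 514))
sock.close()
```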
And one more thing: what I saw in your input configuration is that you are using type => "rsyslog".
But I did not find any rsyslog input plugin in Logstash: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
If you want to parse syslogs, then use the syslog handling shown in the configuration above.
If you want to use a syslog input, use UDP port 514, because that is the default syslog port.
[root@localhost conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf
Settings: Default pipeline workers: 1
Could not start TCP server: Address in use {:host=>"0.0.0.0", :port=>514, :level=>:error}
Pipeline aborted due to error {:exception=>"Errno::EADDRINUSE", :backtrace=>["org/jruby/ext/socket/RubyTCPServer.java:118:in `initialize'", "org/jruby/RubyIO.java:871:in `new'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-3.0.6/lib/logstash/inputs/tcp.rb:244:in `new_server_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-3.0.6/lib/logstash/inputs/tcp.rb:79:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:330:in `start_inputs'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:329:in `start_inputs'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:180:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
stopping pipeline {:id=>"main"}
I just got this error
The error is saying that some process is already running on port 514. Find the PID of the process on 514 using:
netstat -anp | grep 514
and kill the process using:
kill -9 PID
then try the Logstash command again.
[root@localhost conf.d]# netstat -anp|grep 514
tcp 0 0 0.0.0.0:514 0.0.0.0:* LISTEN 4375/rsyslogd
tcp6 0 0 :::514 :::* LISTEN 4375/rsyslogd
udp 0 0 0.0.0.0:514 0.0.0.0:* 4375/rsyslogd
udp6 0 0 :::514 :::* 4375/rsyslogd
unix 3 [ ] STREAM CONNECTED 35146 4898/gnome-settings
unix 2 [ ] DGRAM 55144 5110/dbus-daemon
If I kill rsyslogd, won't that stop all syslog functions?