Example configs for sending nginx, tomcat, postgresql logs (from server A) to ELK server (server B)

Hello,

I just installed an ELK server (on server B) and want to send the log files for Tomcat (catalina.out), nginx (access.log, error.log) and PostgreSQL (from server A) using Filebeat.

I tested it with syslog and logs in a similar format, and that works.

Then I tried just adding new files like the Tomcat logs: I receive the data on the ELK server, but the format is not correct and the filtering is not good.

Do you have any example config files for Filebeat AND Logstash to handle these new entries?

thx a lot

G

Please show your Logstash configuration and some examples of the events that Logstash is receiving. Please use a stdout { codec => rubydebug } output so we can see exactly what the events look like.
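
For reference, that output can sit next to the existing elasticsearch output; the rubydebug codec pretty-prints every event Logstash emits:

output {
  # keep the elasticsearch output as-is, and add:
  stdout { codec => rubydebug }
}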

logstash.conf is:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

and filebeat.yml is:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
        - /var/log/postgresql/postgresql-9.4-main.log
        - /var/log/nginx/access.log
        - /var/log/tomcat8/catalina.out
        # - /var/log/*.log

      input_type: log

      document_type: syslog

  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["backup01.antalios.com:5044"]
    bulk_max_size: 1024

    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

#logging:
#  files:
#    rotateeverybytes: 10485760 # = 10MB

logging:
  level: warning
  to_files: true
  files:
    path: /var/log/filebeat
    name: beat.log
    keepfiles: 7
    rotateeverybytes: 10485760 # 10 MB
  level: debug
  selectors: ["*"]

Actually, since I added the logs from Catalina, Postgres, etc., Logstash crashed, on top of the bad formatting of the logs:
{:timestamp=>"2016-07-05T18:50:48.407000+0200", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}

Otherwise it is OK, but the logs are not parsed correctly.

thx

Well, you haven't defined any filters for anything except syslog, so it shouldn't be surprising that the other logs aren't parsed. So what do the other events look like? See my previous question.
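
In other words, each log format needs its own branch in the filter section, keyed on the event's type. A skeleton (the non-syslog type names are placeholders):

filter {
  if [type] == "syslog" {
    # your existing syslog grok/syslog_pri/date filters
  } else if [type] == "tomcat" {
    # tomcat-specific parsing
  } else if [type] == "nginx-access" {
    # nginx-specific parsing
  }
}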

I modified logstash.conf by adding your lines; is it correct?
root@backup01:/etc/logstash/conf.d# more logstash.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout { codec => rubydebug }
}

I restarted the Logstash process, but nothing shows up in the log.

I modified logstash.conf by adding your lines; is it correct?

Yes. So what do the events that end up in the Logstash logs look like?

{
  "message" => "---------- FETCHDATASERVICE LISTE nb_msg_recues--------***** 10",
  "@version" => "1",
  "@timestamp" => "2016-07-06T07:50:50.382Z",
  "beat" => {
    "hostname" => "yoda",
    "name" => "yoda"
  },
  "source" => "/var/log/tomcat8/catalina.out",
  "count" => 1,
  "fields" => nil,
  "offset" => 477072,
  "type" => "syslog",
  "input_type" => "log",
  "host" => "yoda",
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ],
  "syslog_severity_code" => 5,
  "syslog_facility_code" => 1,
  "syslog_facility" => "user-level",
  "syslog_severity" => "notice"
}
{
  "message" => "Jul 5 17:01:02 sd-50775 /usr/bin/filebeat[16586]: registrar.go:146: Write registry file: /var/lib/filebeat/registry",
  "@version" => "1",
  "@timestamp" => "2016-07-05T15:01:02.000Z",
  "source" => "/var/log/syslog",
  "offset" => 9101529542,
  "type" => "syslog",
  "input_type" => "log",
  "count" => 1,
  "fields" => nil,
  "beat" => {
    "hostname" => "sd-50775",
    "name" => "sd-50775"
  },
  "host" => "sd-50775",
  "tags" => [
    [0] "beats_input_codec_plain_applied"
  ],
  "syslog_timestamp" => "Jul 5 17:01:02",
  "syslog_hostname" => "sd-50775",
  "syslog_program" => "/usr/bin/filebeat",
  "syslog_pid" => "16586",
  "syslog_message" => "registrar.go:146: Write registry file: /var/lib/filebeat/registry",
  "received_at" => "2016-07-06T07:50:55.099Z",
  "received_from" => "sd-50775",
  "syslog_severity_code" => 5,
  "syslog_facility_code" => 1,
  "syslog_facility" => "user-level",
  "syslog_severity" => "notice"
}
{
  "message" => "///////////FETCHDATASERVICE : tout le message-----------*--------- RestroomRawAmqpMessage{deviceIdentifier='121', satisfactionCounter=SatisfactionCounter{identifier='sc_121_2016-07-05T15:52:52Z', deviceIdentifier='121', cycleIdentifier='121_2016-07-05T15:52:52Z', veryGood=317, good=273, bad=274, veryBad=272, begin=2016-07-05T15:52:52Z, end=2016-07-05T15:52:52Z}, peopleCounter=PeopleCounter{identifier='pc_121_2016-07-05T15:52:52Z', deviceIdentifier='121', cycleIdentifier='c_121_2016-07-05T15:52:52Z', inputCount=316, outputCount=270, begin=2016-07-05T15:52:52Z, end=2016-07-05T15:52:52Z}, serialNumberButton='00001648a8a1', cleaning=Cleaning{identifier='cl_121_2016-07-05T15:52:52Z', badgeIdentifier='00001648a8a1', deviceIdentifier='121', cycleIdentifier='121_2016-07-05T15:52:52Z', begin=Tue Jul 05 17:22:52 CEST 2016, end=Tue Jul 05 17:52:52 CEST 2016}, remainingTime=0, cleaningDuration=1800, gps=com.antalios.dal.localisation.GpsLocation@31971b0, isCleaningStarted=false, isCleaningEnded=false}",
  "@version" => "1",
  "@timestamp" => "2016-07-06T07:50:50.382Z",
  "input_type" => "log",
  "count" => 1,
  "offset" => 477188,
  "type" => "syslog",
  "beat" => {
    "hostname" => "yoda",
    "name" => "yoda"
  },
  "source" => "/var/log/tomcat8/catalina.out",
  "fields" => nil,
  "host" => "yoda",
  "tags" => [
    [0] "beats_input_codec_plain_applied",
    [1] "_grokparsefailure"
  ],
  "syslog_severity_code" => 5,
  "syslog_facility_code" => 1,
  "syslog_facility" => "user-level",
  "syslog_severity" => "notice"
}

{
"message" => "Jul 5 16:48:21 sd-79181 /usr/bin/filebeat[10795]: publish.go:109: Publish: {#012 "@timestamp": "2016-07-05T14:48:10.268Z",#012 "beat": {#012 "hostname": "yoda",#012 "name": "yoda"#012 },#012 "count": 1,#012 "fields": null,#012 "input_type": "log",#012 "message": "Jul 5 16:16:38 sd-79181 /usr/bin/filebeat[10795]: publish.go:109: Publish: {#012 \"@timestamp\": \"2016-07-05T14:16:26.023Z\",#012 \"beat\": {#012 \"hostname\": \"yoda\",#012 \"name\": \"yoda\"#012 },#012 \"count\": 1,#012 \"fields\": null,#012 \"input_type\": \"log\",#012 \"message\": \"Jul 5 15:55:36 sd-79181 /usr/bin/filebeat[10795]: publish.go:109: Publish: {#012 \\\"@timestamp\\\": \\\"2016-07-05T13:55:30.318Z\\\",#012 \\\"beat\\\": {#012 \\\"hostname\\\": \\\"yoda\\\",#012 \\\"name\\\": \\\"yoda\\\"#012 },#012 \\\"count\\\": 1,#012 \\\"fields\\\": null,#012 \\\"input_type\\\": \\\"log\\\",#012 \\\"message\\\": \\\"Jul 5 15:38:17 sd-79181 /usr/bin/filebeat[10795]: publish.go:109: Publish: {#012 \\\\\\\"@timestamp\\\\\\\": \\\\\\\"2016-07-05T13:38:05.450Z\\\\\\\",#012 \\\\\\\"beat\\\\\\\": {#012 \\\\\\\"hostname\\\\\\\": \\\\\\\"yoda\\\\\\\",#012 \\\\\\\"name\\\\\\\": \\\\\\\"yoda\\\\\\\"#012 },#012 \\\\\\\"count\\\\\\\": 1,#012 \\\\\\\"fields\\\\\\\": null,#012 \\\\\\\"input_type\\\\\\\": \\\\\\\"log\\\\\\\",#012 \\\\\\\"message\\\\\\\": \\\\\\\"Jul 5 15:24:36 sd-79181 /usr/bin/filebeat[10795]: publish.go:109: Publish: {#012 \\\\\\\\\\\\\\\"@timestamp\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\"2016-07-05T13:24:26.477Z\\\\\\\\\\\\\\\",#012 \\\\\\\\\\\\\\\"beat\\\\\\\\\\\\\\\": {#012 \\\\\\\\\\\\\\\"hostname\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\"yoda\\\\\\\\\\\\\\\",#012 \\\\\\\\\\\\\\\"name\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\"yoda\\\\\\\\\\\\\\\"#012 },#012 \\\\\\\\\\\\\\\"count\\\\\\\\\\\\\\\": 1,#012 \\\\\\\\\\\\\\\"fields\\\\\\\\\\\\\\\": null,#012 \\\\\\\\\\\\\\\"input_type\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\"log\\\\\\\\\\\\\\\",#012 \\\\\\\\\\\\\\\"message\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\"Jul 5 15:14:10 sd-79181 /usr/bin/filebeat[10795]: publish.go:109: Publish: {#012 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@timestamp\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"2016-07-05T13:13:56.810Z\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",#012 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"beat\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\": {#012 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"hostname\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"yoda\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",#012 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"name\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"yoda\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"#012 },#012 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"count\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\": 1,#012 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"fields\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\": null,#012 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"input_type\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"log\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\",#012 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"message\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\": \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"Jul 5 15:06:28 sd-79181 /usr/bin/filebeat[10795]: publish.go:109: Publish: {#012 \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"@timestamp\

The log files are so big: more than 20 GB in one day, with so little useful info...

Start by changing your Filebeat configuration so that it doesn't send all logs with the "syslog" type; otherwise you can't tell at the Logstash end what kind of events they are and how they should be parsed. Split your single prospector into multiple prospectors that monitor different files and have different document types, as in the sketch below.
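
A minimal sketch of such a split, in the same Filebeat 1.x syntax as the config above (the non-syslog document_type values are made-up names; they just have to match whatever your Logstash filters test for):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      input_type: log
      document_type: syslog
    -
      paths:
        - /var/log/nginx/access.log
      input_type: log
      document_type: nginx-access
    -
      paths:
        - /var/log/tomcat8/catalina.out
      input_type: log
      document_type: tomcat
      # catalina.out entries (e.g. stack traces) can span several lines,
      # so Filebeat's multiline settings may also be needed here
    -
      paths:
        - /var/log/postgresql/postgresql-9.4-main.log
      input_type: log
      document_type: postgresql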

Thx, do you have example files for Filebeat and Logstash?
thx

Best is to check the guide here; see the note that each "-" starts a new prospector: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration.html

hello,

I already checked the docs, but it seems incredible not to find any examples for common log files produced by Apache, nginx, Tomcat, etc.
It would be much easier to start from an example and customize it.

https://www.elastic.co/guide/en/logstash/current/config-examples.html contains an Apache example that probably works with Nginx and Tomcat too.
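
On the Logstash side, a sketch of a matching filter for the nginx access log (assuming the nginx-access document_type from the Filebeat sketch above; nginx's default "combined" access log format is the same as Apache's, so the stock COMBINEDAPACHELOG grok pattern normally applies):

filter {
  if [type] == "nginx-access" {
    grok {
      # parse the combined-format access log line into named fields
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      # use the request timestamp extracted by COMBINEDAPACHELOG
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}

catalina.out and the PostgreSQL log don't have a single fixed format, so those branches need grok patterns written against what your events actually look like.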

This topic was automatically closed after 21 days. New replies are no longer allowed.