Filebeat logs to Logstash to Kibana

I am using the ELK stack and Filebeat, version 6 and above. I have configured everything but am struggling to see the Filebeat logs in Kibana. If I set the output to Elasticsearch, I can see the logs, but when I set it to Logstash in filebeat.yml, I cannot see the syslog entries in Kibana. This is my first attempt. I have ensured all ports are open. Please advise.

When I run:

curl http://localhost:9200/filebeat-*/_count?pretty
{
  "count" : 267888,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  }
}
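
To see where events are actually landing, it can help to list all indices; if the Logstash pipeline leaves its elasticsearch output at the defaults, events shipped through Logstash appear under logstash-* rather than filebeat-*:

curl http://localhost:9200/_cat/indices?v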

and

root@ip-xxxxxx:/usr/share/filebeat/scripts# ./import_dashboards -dir /etc/kibana/filebeat
Initialize the Elasticsearch 6.1.1 loader
Elasticsearch URL http://127.0.0.1:9200
For Elasticsearch version >= 6.0.0, the Kibana dashboards need to be imported via the Kibana API.
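
On 6.x the old import_dashboards flow is replaced by the setup command, which goes through the Kibana API. A minimal sketch, assuming Kibana is reachable on localhost:5601:

./filebeat setup --dashboards -E setup.kibana.host=localhost:5601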

Please advise

My filebeat.yml:

filebeat:
  prospectors:
    - input_type: log
      paths:
        - /var/log/syslog
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

setup.dashboards.enabled: true

output:
  logstash:
    hosts: ["0.0.0.0:5044"]
    bulk_max_size: 1024
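
Note that 0.0.0.0 is a listen address, not a destination; the Logstash output would normally point at the host actually running Logstash. A sketch, where localhost is an assumption:

output:
  logstash:
    hosts: ["localhost:5044"]
    bulk_max_size: 1024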

beats.conf
input {
  beats {
    port => "5044"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
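
Because this elasticsearch output is left at its defaults, events go to logstash-*, not filebeat-*, so the earlier curl against filebeat-* can look healthy while Logstash-shipped events sit elsewhere. The documented Beats-to-Logstash recipe sets the index explicitly; a sketch (the hosts value is an assumption, and note the field is @metadata, while the Logstash log later in this thread interpolates a misspelled @metadeta, which would produce a literal, unparsed index name):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}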

Hi @firdaus,

Could you please share the logs you get from Filebeat? The Logstash logs could help here as well.

Best regards

Hi,

I found in the forums that the dashboards need to be pushed so that the logs can be viewed in the Kibana dashboard. I followed that, but I get the error below when I run:

root@ip-xxxxxx:/usr/share/filebeat/bin# ./filebeat setup -E setup.elasticsearch.output=enabled "setup.dashboards.directory=/usr/share/filebeat/kibana"
Exiting: Template loading requested but the Elasticsearch output is not configured/enabled
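
The setup command loads the index template and dashboards through Elasticsearch and Kibana, so it refuses to run while only the Logstash output is enabled. A common workaround, sketched here on the assumption that Elasticsearch is on localhost:9200, is to override the outputs for the setup run only:

./filebeat setup -c /etc/filebeat/filebeat.yml \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["localhost:9200"]'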

Please advise.

These are the logs.

Filebeat:

root@ip-xxxxxx:/usr/share# tail /var/log/filebeat/filebeat
2018-01-17T09:48:12Z INFO Stopping filebeat
2018-01-17T09:48:12Z INFO Stopping Crawler
2018-01-17T09:48:12Z INFO Stopping 0 prospectors
2018-01-17T09:48:12Z INFO Dynamic config reloader stopped
2018-01-17T09:48:12Z INFO Crawler stopped
2018-01-17T09:48:12Z INFO Stopping Registrar
2018-01-17T09:48:12Z INFO Ending Registrar
2018-01-17T09:48:12Z INFO Total non-zero values: beat.info.uptime.ms=1193939 beat.memstats.gc_next=4194304 beat.memstats.memory_alloc=1333528 beat.memstats.memory_total=9273104 filebeat.harvester.open_files=0 filebeat.harvester.running=0 libbeat.config.module.running=0 libbeat.config.reloads=1 libbeat.output.type=elasticsearch libbeat.pipeline.clients=0 libbeat.pipeline.events.active=0 registrar.states.current=1 registrar.writes=1
2018-01-17T09:48:12Z INFO Uptime: 19m53.939785564s
2018-01-17T09:48:12Z INFO filebeat stopped.

Logstash logs:

root@ip-172-31-0-150:/usr/share# tail /var/log/logstash/logstash.log
{:timestamp=>"2018-01-16T11:54:58.065000+0000", :message=>#<LogStash::PipelineReporter::Snapshot:0x5276e99 @data={:events_filtered=>1914, :events_consumed=>1914, :worker_count=>2, :inflight_count=>21, :worker_states=>[{:status=>"sleep", :alive=>true, :index=>0, :inflight_count=>12}, {:status=>"sleep", :alive=>true, :index=>1, :inflight_count=>9}], :output_info=>[{:type=>"elasticsearch", :config=>{"hosts"=>"localhost", "index"=>"influxCSVData", "document_type"=>"influxCSV_data_document_type"}, :is_multi_worker=>true, :events_received=>1914, :workers=><Java::JavaUtilConcurrent::CopyOnWriteArrayList:419037665 [<LogStash::Outputs::ElasticSearch hosts=>["localhost"], index=>"influxCSVData", document_type=>"influxCSV_data_document_type", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, action=>"index", path=>"/", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5>, <LogStash::Outputs::ElasticSearch hosts=>["localhost"], index=>"influxCSVData", document_type=>"influxCSV_data_document_type", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, action=>"index", path=>"/", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5>]>, :busy_workers=>2}, {:type=>"stdout", :config=>{}, :is_multi_worker=>false, :events_received=>1893, :workers=><Java::JavaUtilConcurrent::CopyOnWriteArrayList:-535213015 [<LogStash::Outputs::Stdout codec=><LogStash::Codecs::Line charset=>"UTF-8", delimiter=>"\n">, workers=>1>]>, :busy_workers=>0}, {:type=>"elasticsearch", :config=>{"hosts"=>["13.59.198.44:9200"], "index"=>"%{[@metadeta][beat]}-%{+YYYY.MM.dd}", "document_type"=>"%{[@metadata][type]}"}, :is_multi_worker=>true, :events_received=>1893, :workers=><Java::JavaUtilConcurrent::CopyOnWriteArrayList:1703971105 [<LogStash::Outputs::ElasticSearch hosts=>["13.59.198.44:9200"], index=>"%{[@metadeta][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, action=>"index", path=>"/", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5>, <LogStash::Outputs::ElasticSearch hosts=>["13.59.198.44:9200"], index=>"%{[@metadeta][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, action=>"index", path=>"/", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5>]>, :busy_workers

And when I ran the command below, I got:

root@ip-xxxxxxx:/usr/share/filebeat/bin# ./filebeat setup -c /etc/filebeat/filebeat.yml -E "setup.dashboards.directory=/usr/share/filebeat/kibana"
filebeat2018/01/17 10:21:42.928649 beat.go:635: CRIT Exiting: error unpacking config data: more then one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')
Exiting: error unpacking config data: more then one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')
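
This error means filebeat.yml has more than one output configured under the output namespace (for example both elasticsearch and logstash). Filebeat allows exactly one enabled output; a sketch of the fix, with the unused output commented out and localhost an assumption:

output.logstash:
  hosts: ["localhost:5044"]
#output.elasticsearch:
#  hosts: ["localhost:9200"]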

It looks like you have several outputs configured. Could you please paste the output of: filebeat export config. Please paste it as preformatted text, so it doesn't lose indentation.
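
For reference, a sketch of that command against the config file used earlier in this thread:

/usr/share/filebeat/bin/filebeat export config -c /etc/filebeat/filebeat.yml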

Thank you so much for your help! For some reason my system crashed and I have to start again. Now that I am starting over, what versions of ELK and Filebeat do you suggest?
Awaiting your reply.

Hi,

I would encourage you to use the latest release of each component.
Looking forward to your configurations!

Hi,

I have used ELK version 2 and was successful in fetching logs. I have one more query:
Can we monitor InfluxDB data in ELK? If yes, please advise a link for it. I have searched everywhere and could not find InfluxDB as an input plugin. I would be very thankful to you. I had raised a separate query as well, but nobody responded.
