I am not getting the logs in Kibana

Hi guys,

==> /var/log/logstash/logstash-plain.log <==
elk_1 | [2018-04-23T10:04:15,350][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/opt/logstash/modules/fb_apache/configuration"}
elk_1 | [2018-04-23T10:04:15,357][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/opt/logstash/modules/netflow/configuration"}
elk_1 | [2018-04-23T10:04:15,362][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
elk_1 | [2018-04-23T10:04:15,363][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
elk_1 | [2018-04-23T10:04:15,398][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"2bcb9d7e-6c80-42ae-b635-9a306043e2bc", :path=>"/opt/logstash/data/uuid"}
elk_1 | [2018-04-23T10:04:16,390][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
elk_1 | [2018-04-23T10:04:16,393][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
elk_1 | [2018-04-23T10:04:16,573][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
elk_1 | [2018-04-23T10:04:16,574][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
elk_1 | [2018-04-23T10:04:16,781][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
elk_1 | [2018-04-23T10:04:17,328][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
elk_1 | [2018-04-23T10:04:17,413][INFO ][logstash.pipeline ] Pipeline main started
elk_1 | [2018-04-23T10:04:17,517][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"0.0.0.0:5000"}
elk_1 | [2018-04-23T10:04:17,525][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
elk_1 | [2018-04-23T10:04:17,553][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"0.0.0.0:5000", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
elk_1 | [2018-04-23T10:04:17,562][INFO ][org.logstash.beats.Server] Starting server on port: 5044

Logstash config file:
input {
  udp {
    type => "json-docker"
    port => 5000
    codec => json
  }
}

filter {
  if [docker][name] =~ "goofy_wing" or [docker][image] =~ "twitterapp" {
    grok {
      break_on_match => false
      # "ms" is a placeholder capture name; the original name was lost in formatting
      match => [ "message", "(?<ms>(?<=\s\sMS:\s)([\S]*))" ]
      tag_on_failure => []
    }
  }
}

filter {
  if [docker][name] =~ "goofy_wing" or [docker][image] =~ "twitterapp" {
    grok {
      break_on_match => false
      # "api" is a placeholder capture name; the original name was lost in formatting
      match => [ "message", "(?<api>(?<=\s|\sAPI:\s)([\S]*))" ]
      tag_on_failure => []
    }
  }
}
Output:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

Please help me.

Hi,

Does hitting http://localhost:9200/_cat/indices?v show an index named "logstash-2018.04.23" (with %{+YYYY.MM.dd} the date uses dots)? If yes, it will have a docs.count value against it; make sure there are some documents inside that index. If everything looks fine there, then check Kibana. It might be an issue with the index pattern you have created in Kibana.

Also, you don't need multiple filter blocks in your configuration file; both grok patterns can go into a single filter, as shown in the sketch below.
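A minimal sketch of the merged filter, based on your snippet. The capture names ms and api are placeholders (the real names were stripped by the forum formatting), so substitute whatever field names you actually use:

filter {
  if [docker][name] =~ "goofy_wing" or [docker][image] =~ "twitterapp" {
    grok {
      # with break_on_match => false, grok tries every pattern in the list
      break_on_match => false
      match => {
        "message" => [
          "(?<ms>(?<=\s\sMS:\s)([\S]*))",
          "(?<api>(?<=\s|\sAPI:\s)([\S]*))"
        ]
      }
      tag_on_failure => []
    }
  }
}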

Hi @MariumHassan, thanks for the reply.
When I hit http://localhost:9200/_cat/indices?v I get this:

health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana XzSeONyKTZuJFM3tSISlsA   1   1          1            0      3.2kb          3.2kb

In my output file I set index => "logstash-%{+YYYY.MM.dd}", so why does it only show .kibana?
I think the .kibana index is the default one. Can you tell me how to set up my index?

.kibana is the index used by Kibana itself. It stores visualizations and other saved objects, not the logs you send. You should also have a "logstash-" index with a date suffix in Elasticsearch, created by:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

If it does not exist, that means Elasticsearch is not receiving data, and you need to check your configuration file. If the index is there with data, then it is more likely a Kibana issue.
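One quick way to check whether events are reaching Logstash at all is to temporarily add a stdout output alongside the elasticsearch one (a debugging sketch only, not something to keep in production):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
  # temporary: dump every event to the Logstash log so you can see
  # whether anything is arriving on UDP port 5000 at all
  stdout {
    codec => rubydebug
  }
}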

Also, make sure your grok patterns are right, using the Grok Debugger.
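Note that tag_on_failure => [] in your filters suppresses the _grokparsefailure tag, which hides failed matches. While debugging, you could drop that setting so failures are tagged again. A sketch, again with the placeholder capture name ms:

filter {
  grok {
    break_on_match => false
    match => [ "message", "(?<ms>(?<=\s\sMS:\s)([\S]*))" ]
    # leaving out tag_on_failure keeps the default ["_grokparsefailure"],
    # so failed matches show up as a tag on the events in Kibana
  }
}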

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.