Not creating a logstash index

Hello, I have configured a Logstash input to receive syslog on port 5000. Below is some of the output from starting Logstash.

I cannot create an index pattern in Kibana for logstash-* because Kibana can't find any indices that match that pattern.

```
[2018-08-15T09:56:43,339][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-15T09:56:44,609][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-15T09:56:44,649][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-15T09:56:45,189][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-15T09:56:45,371][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-15T09:56:45,382][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-15T09:56:45,430][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-15T09:56:45,492][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-15T09:56:45,628][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-15T09:56:46,865][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4772fd67 run>"}
[2018-08-15T09:56:46,963][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:5000"}
[2018-08-15T09:56:46,975][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:5000"}
[2018-08-15T09:56:47,145][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-15T09:56:47,798][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
```

That is all normal. Are you sure it is receiving any data? Can you add

```
output { stdout { codec => rubydebug } }
```

and see if any events are logged to stdout...
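For reference, a minimal pipeline with both outputs might look like the sketch below. The port and the Elasticsearch host are assumptions taken from the startup log above; adjust them to your setup.

```
input {
  syslog {
    port => 5000        # matches the tcp/udp listeners in the log above
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
  stdout {
    codec => rubydebug  # print each event so you can confirm data is arriving
  }
}
```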

OK, will do. Will I see the stdout output if I'm running Logstash under systemd, or only when running it from the command line?

Sorry all, my bad. The issue was that I hadn't allowed port 5000 through the firewall.
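For anyone who hits the same thing: a quick way to generate a test event is to fire one syslog message at the listener yourself. Here is a minimal Python sketch (host and port are assumptions matching this thread; note that because UDP is connectionless, a successful send only proves the packet left your machine — check the rubydebug output or Kibana to confirm it actually arrived):

```python
import socket

def send_test_syslog(host="localhost", port=5000):
    """Send one RFC 3164-style syslog message over UDP; return bytes sent."""
    # <134> = PRI for facility local0 (16) * 8 + severity informational (6)
    message = b"<134>Aug 15 10:00:00 myhost myapp: logstash connectivity test"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # sendto succeeds even with no listener -- this only verifies the
        # message was sent, not that Logstash (or the firewall) accepted it
        return sock.sendto(message, (host, port))
    finally:
        sock.close()

if __name__ == "__main__":
    print(send_test_syslog())
```

If the event still never shows up, that points at something between the sender and Logstash (a firewall rule, as in this case) rather than at the pipeline itself.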

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.