Index not appearing in Kibana via Logstash

Hello once again...

Being new to ELK, I cannot see the index "truck" in Kibana.
There is no error in Logstash. I don't know if the data is reaching Elasticsearch, and if it is, I don't know why it's not showing up in Kibana.

Please help!

Steps tried:
Dev Tools --> GET _cat/indices = no "truck" index found

http://localhost:9200/_cat/indices?v = no "truck" index found

Kibana --> Index Management --> Create index pattern --> "truck" not found


Logstash config

input {
    file {
        path => "Z:/GTF_Hack/Trucks.csv"
        start_position => "beginning"
    }
}
filter {
    csv {
        separator => ","
        columns => ["data","freq","period","parked","light","collision"]
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "truck"
        document_type => "t1"
    }
}

Logstash cmd output

Z:\GTF_Hack>logstash -f logstash_temp.conf
Sending Logstash logs to Z:/logstash-7.6.2/logstash-7.6.2/logs which is now configured via log4j2.properties
[2020-08-08T21:34:31,644][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-08-08T21:34:31,801][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.6.2"}
[2020-08-08T21:34:35,623][INFO ][org.reflections.Reflections] Reflections took 47 ms to scan 1 urls, producing 20 keys and 40 values
[2020-08-08T21:34:36,918][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"truck", id=>"721ebf606c2a52b2ad720e7e36bd7ea9b3aef1ba0ebbf8ca1ab8648f98aa62f5", hosts=>[//localhost:9200], document_type=>"t1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_6af9bfea-a853-4ce3-8692-0f14f09a60b0", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2020-08-08T21:34:37,966][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2020-08-08T21:34:38,447][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2020-08-08T21:34:38,564][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-08-08T21:34:38,576][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-08-08T21:34:38,681][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2020-08-08T21:34:38,770][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-08-08T21:34:38,926][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-08-08T21:34:38,965][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["Z:/GTF_Hack/logstash_temp.conf"], :thread=>"#<Thread:0x3f6f2fef run>"}
[2020-08-08T21:34:38,994][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-08-08T21:34:41,655][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"Z:/logstash-7.6.2/logstash-7.6.2/data/plugins/inputs/file/.sincedb_cbb297be2f426c80c677f034e937fe3c", :path=>["Z:/GTF_Hack/Trucks.csv"]}
[2020-08-08T21:34:41,746][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-08-08T21:34:41,888][INFO ][filewatch.observingtail  ][main] START, creating Discoverer, Watch with file and sincedb collections
[2020-08-08T21:34:41,899][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-08-08T21:34:42,823][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

It appears that the truck index does not exist. Has Logstash created any documents?
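For example, you could check in Kibana Dev Tools (these are standard Elasticsearch APIs, nothing specific to your setup):

GET truck/_count
GET truck/_search?size=5

The first returns the document count and the second shows a few sample documents, if the index exists at all.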

I don't know where I can find the documents.

I think you're having trouble getting Logstash set up. I see a testcases index - are you able to view its data? That's unrelated to the Logstash setup but might be useful to verify.

Try this.

Also, you can start Logstash with the -r switch and leave it running; each time you edit and save the conf file, it will reload and run the pipeline again, which makes for a much faster dev/test cycle.
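For example:

logstash -r -f logstash_temp.conf

(-r is the short form of --config.reload.automatic.)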

My suspicion is that the sincedb thinks this file has already been read... once read, Logstash will never read it again.
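If you want to force a re-read without changing the config, you can also stop Logstash and delete the sincedb file; your log above already shows where it was generated:

del "Z:\logstash-7.6.2\logstash-7.6.2\data\plugins\inputs\file\.sincedb_cbb297be2f426c80c677f034e937fe3c"

On the next run, Logstash will treat the file as unread.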

input {
    file {
        path => "Z:/GTF_Hack/Trucks.csv"
        start_position => "beginning"
        # Re-read the file on each run. Logstash remembers whether it has
        # already read a file even if start_position is set to "beginning",
        # so point the sincedb at a null device to disable that memory.
        # On Windows use "NUL"; on Linux/macOS use "/dev/null".
        sincedb_path => "NUL"
    }
}
filter {
    csv {
        separator => ","
        columns => ["data","freq","period","parked","light","collision"]
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "truck"
        # Elasticsearch 7.x deprecates document types, so do not set this
        # document_type => "t1"
    }
    # Also print each event to the console
    stdout { codec => rubydebug }
}

I was able to see the testcases index when I ran it the first time. But after I restarted my laptop and started Elasticsearch, Kibana, and Logstash again through cmd, I can only see the index, not the data. Its health is yellow.

Hi @Umang_Mahant

Did you try to run the configuration I sent you above?

What version of Elasticsearch are you running?
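You can check with a request to the root endpoint, e.g. in Dev Tools:

GET /

The response includes a version.number field.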

If it is version 7.x or above, you should not set the document type, as it is deprecated:

# document_type => "t1"

The cluster health is yellow because the testcases index has a replica, which requires another node, but you only have one node... yellow is OK for testing.
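If you want that index to report green on your single node, one option is to drop its replica count to zero (this uses the standard index settings API, with your testcases index as the example):

PUT testcases/_settings
{
  "index": { "number_of_replicas": 0 }
}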
