Logstash doesn't send data to Elasticsearch

I parsed a simple XML file. The first time, Logstash parsed the file and sent the data to Elasticsearch. I then deleted the index and tried to parse the file again, but Logstash has not sent any data since I deleted that first index.

Here is my config file:

input {
  file {
    path => "/home/safaa/Documents/nessus/validate.xml"
    start_position => "beginning"
    codec => multiline {
      pattern => "^<\?xmldata .*\>"
      negate => true
      what => "previous"
    }
  }
}

filter {
  xml {
    store_xml => false
    source => "message"
    xpath => [
      "/xmldata/head1/id/text()", "id",
      "/xmldata/head1/date/text()", "date",
      "/xmldata/head1/key1/text()", "key1"
    ]
  }

  date {
    match => [ "date", "dd-MM-yyyy HH:mm:ss" ]
    timezone => "Europe/Amsterdam"
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "logstash-xml"
    hosts => ["127.0.0.1:9200"]
    document_id => "%{[id]}"
    document_type => "xmlfiles"
  }
}

The XML file:

<xmldata>
 <head1>
  <key1>Value1</key1>
  <key2>Value2</key2>
  <id>0001</id>
  <date>01-01-2016 09:00:00</date>
 </head1>
 <head2>
  <key3>Value3</key3>
 </head2>
</xmldata>

The Logstash logs:

[2018-10-19T10:18:56,885][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[2018-10-19T10:18:56,946][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-10-19T10:18:56,993][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-10-19T10:18:58,202][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_99a3e33a61dc7e95f10a1def06b56338", :path=>["/home/safaa/Documents/nessus/validate.xml"]}
[2018-10-19T10:18:58,264][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6b1d09b3 run>"}
[2018-10-19T10:18:58,426][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-10-19T10:18:58,553][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2018-10-19T10:18:59,062][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

The Elasticsearch logs:

[2018-10-19T10:18:00,632][INFO ][o.e.p.PluginsService     ] [BqyunZ6] loaded module [x-pack-upgrade]
[2018-10-19T10:18:00,632][INFO ][o.e.p.PluginsService     ] [BqyunZ6] loaded module [x-pack-watcher]
[2018-10-19T10:18:00,633][INFO ][o.e.p.PluginsService     ] [BqyunZ6] no plugins loaded
[2018-10-19T10:18:05,410][INFO ][o.e.x.s.a.s.FileRolesStore] [BqyunZ6] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2018-10-19T10:18:06,288][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/15813] [Main.cc@109] controller (64 bit): Version 6.4.2 (Build 660eefe6f2ea55) Copyright (c) 2018 Elasticsearch BV
[2018-10-19T10:18:06,787][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2018-10-19T10:18:07,161][INFO ][o.e.d.DiscoveryModule    ] [BqyunZ6] using discovery type [zen]
[2018-10-19T10:18:08,305][INFO ][o.e.n.Node               ] [BqyunZ6] initialized
[2018-10-19T10:18:08,307][INFO ][o.e.n.Node               ] [BqyunZ6] starting ...
[2018-10-19T10:18:08,541][INFO ][o.e.t.TransportService   ] [BqyunZ6] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2018-10-19T10:18:11,686][INFO ][o.e.c.s.MasterService    ] [BqyunZ6] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {BqyunZ6}{BqyunZ6SQ-Sl1KTH_JCs1Q}{6FeROFyGSEya5ot1AYbhbg}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4125638656, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2018-10-19T10:18:11,695][INFO ][o.e.c.s.ClusterApplierService] [BqyunZ6] new_master {BqyunZ6}{BqyunZ6SQ-Sl1KTH_JCs1Q}{6FeROFyGSEya5ot1AYbhbg}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4125638656, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {BqyunZ6}{BqyunZ6SQ-Sl1KTH_JCs1Q}{6FeROFyGSEya5ot1AYbhbg}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=4125638656, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
[2018-10-19T10:18:11,784][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [BqyunZ6] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-10-19T10:18:11,785][INFO ][o.e.n.Node               ] [BqyunZ6] started
[2018-10-19T10:18:12,386][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [BqyunZ6] Failed to clear cache for realms [[]]
[2018-10-19T10:18:12,465][INFO ][o.e.l.LicenseService     ] [BqyunZ6] license [f2f7d3aa-fdfc-408f-aae8-15d95982a157] mode [basic] - valid
[2018-10-19T10:18:12,481][INFO ][o.e.g.GatewayService     ] [BqyunZ6] recovered [1] indices into cluster_state
[2018-10-19T10:18:12,759][INFO ][o.e.c.r.a.AllocationService] [BqyunZ6] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.kibana][0]] ...]).

My Elastic Stack version: 6.4.2

The Logstash file input uses a sincedb file to keep track of which files have been processed. If this is still in place, the same file will not be reprocessed even if the index is removed.
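
One option is to give the input an explicit sincedb_path, so the state file is easy to find and delete (with Logstash stopped) whenever the file should be reprocessed. A minimal sketch, reusing the path from the config above; the sincedb filename here is just an example:

input {
  file {
    path => "/home/safaa/Documents/nessus/validate.xml"
    start_position => "beginning"
    # Explicit sincedb location. Delete this file (while Logstash is
    # stopped) to make the input read validate.xml from the beginning again.
    sincedb_path => "/var/lib/logstash/validate-xml.sincedb"
  }
}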

Thank you so much @Christian_Dahlqvist. I added sincedb_path => "/dev/null" to the file input so that the sincedb file is reset each time I delete an index.
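
For reference, the file input with that change would look roughly like this (a sketch, reusing the paths and codec from the config above):

input {
  file {
    path => "/home/safaa/Documents/nessus/validate.xml"
    start_position => "beginning"
    # Write sincedb state to /dev/null, i.e. discard it instead of
    # persisting it across runs.
    sincedb_path => "/dev/null"
    codec => multiline {
      pattern => "^<\?xmldata .*\>"
      negate => true
      what => "previous"
    }
  }
}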

That will cause the files to be reread whenever you restart Logstash, not when you delete an index.
