I am trying to create an index per quarter, with each document unique, in Elasticsearch via Logstash. The data comes from multiple XML files. There seems to be no error in the configuration file and the pipeline starts, but I am unable to see the index created in Elasticsearch.
Now the index is created, but the XML is not split into multiple documents; instead a single document is created and the entire XML is copied into the `message` field.
The first problem is your multiline codec. It produces a single event that only contains
```
"message" => "<?xml version=\"1.0\"?>",
```
There is no `<?xml...` line at the end of the file to trigger the rest of the file being flushed to the pipeline. If you add `auto_flush_interval => 1` then it will get flushed after a one-second timeout.
Personally I would ingest the entire file as a single event by using a pattern that does not match plus a timeout.
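A minimal sketch of that approach, assuming the XML files live under a hypothetical path like `/path/to/*.xml`: the pattern is one that should never match a real line, so the codec accumulates every line into one event, and the timeout flushes it.

```
input {
  file {
    path => "/path/to/*.xml"        # hypothetical path, adjust as needed
    start_position => "beginning"
    sincedb_path => "/dev/null"     # re-read files on every run; useful while testing
    codec => multiline {
      # A pattern that never matches any line, so no line ever starts a new event
      pattern => "^<NeverMatchThis>"
      negate => true
      what => "previous"
      # Without this the last (only) event would never be flushed
      auto_flush_interval => 2
    }
  }
}
```

With this, each file arrives in the pipeline as a single event whose `message` field holds the whole XML document, which an `xml` filter can then parse.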
Thanks @Badger. The output in the Logstash console looks fine, but when I view the index in Kibana, only the last document (1 document) of the last file in the list is visible. Could you help with that?
Also, can you tell me why the part of the code below does not work:
It does not work because element names are case sensitive. `bill` cannot be used to refer to `BILL`, which is what your document has. Also, you probably want `force_array => false` on that.
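For illustration, a hedged sketch of an `xml` filter along those lines (the `theXML` target name is hypothetical; your element names must match the document's casing exactly):

```
filter {
  xml {
    source => "message"
    target => "theXML"        # hypothetical target field for the parsed tree
    # Without this, every child element is wrapped in an array,
    # so single values end up as one-element arrays
    force_array => false
  }
}
```

With a target like this, fields are referenced with the document's casing, e.g. `[theXML][BILL]`, not `[theXML][bill]`.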