Logstash is tailing the file. Set sincedb_path to "/dev/null" if you want Logstash to unconditionally process a file from the top. The file input documentation explains how this works.
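For example, a minimal file input could look like this (the path is a placeholder, adjust to your setup):

```
input {
  file {
    path => "/path/to/data.csv"     # hypothetical path to your CSV file
    start_position => "beginning"   # read new files from the top rather than the end
    sincedb_path => "/dev/null"     # don't persist the read position, so the file is reprocessed on every run
  }
}
```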
Thanks a lot, Magnus. It works now.
My next question: when we import data from a CSV file, how do we map a schema? All the fields end up stored as strings in Elasticsearch.
By comparison, the jdbc plugin automatically picks up the database column types.
Is there a predefined method for this case, or is there somewhere in the config file where we can specify the data type of each field?
The keyword here is "mappings". Either set the mappings when you create the index, or use an index template to automatically apply a set of mappings to all new indexes that match a particular pattern.
Absent explicit mappings, ES will attempt to guess what data type each field has.
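As a sketch, an index template could look like this (the template name, index pattern, type name, and field names are all placeholders; this uses the legacy `_template` API and mapping types from pre-6.x Elasticsearch):

```
PUT /_template/csv_template
{
  "template": "csv-*",
  "mappings": {
    "csv": {
      "properties": {
        "name":  { "type": "string", "index": "not_analyzed" },
        "price": { "type": "float" },
        "date":  { "type": "date" }
      }
    }
  }
}
```

Any index whose name matches `csv-*` would then get these mappings automatically, and fields not listed would still fall back to dynamic type detection.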