# fields in the message are separated by the \u0001 (SOH) control character
grok {
  match => { "message" => "%{DATA:product}\u0001%{DATA:startdate}\u0001%{DATA:amount}\u0001%{DATA:style}\u0001%{DATA:order}\u0001%{DATA:browers}\u0001%{DATA:enddate}" }
}
Under Filebeat's directory, I have different .txt files to harvest; for example, the file named JGXX.txt matches format 1, YGXX.txt matches format 2, and so on. Would you mind telling me how to deal with this?
3. I have the SQL file that creates the tables. Is there an existing tool that can convert the SQL into a mapping or template that I can load into Elasticsearch?
Under Filebeat's directory, I have different .txt files to harvest; for example, the file named JGXX.txt matches format 1, YGXX.txt matches format 2, and so on. Would you mind telling me how to deal with this?
I can think of a couple of options.
On the Filebeat side, use different prospectors and set a field or a tag to indicate what kind of file it is, then use Logstash conditionals to process them differently.
If the filename itself indicates the file type, you can use Logstash conditionals that inspect the field containing the path of the source file.
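A minimal sketch of the first option, assuming each Filebeat prospector attaches a custom field (the name logformat and the values jgxx/ygxx here are only illustrative) via its fields option, with fields_under_root: true so the field sits at the top level of the event. Logstash can then branch on it:

filter {
  # branch on the custom field set by the Filebeat prospector
  if [logformat] == "jgxx" {
    grok {
      # format 1: \u0001-delimited transaction record
      match => { "message" => "%{DATA:product}\u0001%{DATA:startdate}\u0001%{DATA:amount}\u0001%{DATA:style}\u0001%{DATA:order}\u0001%{DATA:browers}\u0001%{DATA:enddate}" }
    }
  } else if [logformat] == "ygxx" {
    grok {
      # format 2: substitute the pattern that matches YGXX.txt lines
      match => { "message" => "%{GREEDYDATA:raw}" }
    }
  }
}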
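And a sketch of the second option: with Filebeat versions that still use prospectors, the path of the harvested file arrives in the source field (newer releases put it under [log][file][path] instead), so the conditional can match on the filename directly:

filter {
  if [source] =~ /JGXX\.txt$/ {
    grok {
      match => { "message" => "%{DATA:product}\u0001%{DATA:startdate}\u0001%{DATA:amount}\u0001%{DATA:style}\u0001%{DATA:order}\u0001%{DATA:browers}\u0001%{DATA:enddate}" }
    }
  } else if [source] =~ /YGXX\.txt$/ {
    # replace with the grok pattern for format 2; the tag is just a placeholder
    mutate { add_tag => ["format_2"] }
  }
}

The first approach keeps the file layout out of the Logstash configuration, while the second needs no extra Filebeat settings; either works.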
Thank you for your reply. I will try both methods.
I am building a system that can analyze transaction data (database export files). Is there any example I can follow? I am brand new to this architecture (Filebeat, Logstash, Elasticsearch, Kibana).
Thanks again for your kind reply.