I run the ELK Stack (on one machine) with Filebeat (on a second machine) as the log shipper.
I've got a .txt file with 4228 rows, where each row is one log entry in the following form:
Jul 4 13:56:17 vMMR mmr-core[29839]: GtsAwegAOMTbez_1562241377271986.mt npdbProfiling-end: pid[29839] table[npdbcz] operation[SELECT] duration[6.27 ms] error sql[SELECT carrier,validity,now()>validity as now_valid FROM npdbcz WHERE range IN ('606339842','60633984','6063398','606339','60633','6063') ORDER BY now_valid DESC,validity DESC]
How can I send the .txt file with these logs to Elasticsearch and see them in Kibana? Which configuration files should I edit, and how? Is it possible to send each row in the file as one log message in Kibana?
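A minimal sketch of the kind of Logstash pipeline config this needs (the file path, host, and index name here are assumptions to adapt to your setup); the file input emits one event per line, so each row arrives as a separate log message:

input {
  file {
    path => "/path/to/logs.txt"       # assumed location of the .txt file
    start_position => "beginning"     # read existing lines, not only new ones
    sincedb_path => "/dev/null"       # forget the read position so re-runs re-read the file
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]       # assumed single-node Elasticsearch
    index => "my_index"               # created automatically on the first write
  }
}

With that running, creating an index pattern for my_index in Kibana makes the rows show up as individual documents in Discover.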
Now it gets further, but for each row in the .txt file it shows this WARN:
[WARN ] 2019-07-10 15:55:52.020 [[main]>worker0] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"ssh_auth-2019.07", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x64e46377>], :response=>{"index"=>{"_index"=>"ssh_auth-2019.07", "_type"=>"_doc", "_id"=>"aYKZ3GsBKh5qbaH2ItLV", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}
The index where I want to send it, "ssh_auth-2019.07", is already created and connected with Filebeat. Could the output part of the Logstash config file create a new index for the .txt file?
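One likely cause: Filebeat maps [host] as an object (host.name, host.os, ...) in ssh_auth-2019.07, while events from the file input carry host as a plain string, so the two mappings clash. Pointing the output at a fresh index sidesteps this, since the elasticsearch output creates the index on the first event. Alternatively, a mutate filter can rename the conflicting field before indexing (a sketch; the new field name is an assumption):

filter {
  mutate {
    # rename the plain-string host so it no longer collides
    # with the host object already mapped in the index
    rename => { "host" => "host_name" }
  }
}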
OK, I don't know if it is the right solution, but I somehow created a new index (my_index), which I found under Index Patterns in Kibana. Then, without any filters, I changed the output index to my_index.
After I ran sudo bin/logstash -f /etc/logstash/conf.d/loadfile.conf, I ended up with the same output as at the beginning:
[INFO ] 2019-07-10 12:49:43.061 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601}
Although only this output showed up, I can see the rows from my file as logs in the Kibana interface, in the Discover tab.
%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:hostname} %{DATA:type} %{SPACE} %{DATA:file_id} %{DATA:file_name} %{DATA:syslog_pid} .*table\s*\[%{WORD:table}\] .*operation\s*\[%{WORD:operation}\] %{GREEDYDATA:rest}
{
  "rest": "duration[6.27 ms] error [] sql[SELECT carrier,validity,now()>validity as now_valid FROM npdbcz WHERE `range` IN ('606339842','60633984','6063398','606339','60633','6063') ORDER BY now_valid DESC,validity",
  "syslog_pid": "pid[29839]",
  "file_name": "npdbProfiling-end:",
  "type": "mmr-core[29839]:",
  "hostname": "vMMR",
  "syslog_timestamp": "Jul 4 13:56:17",
  "file_id": "GtsAwegAOMTbez_1562241377271986.mt",
  "operation": "SELECT",
  "table": "npdbcz"
}
I have a problem getting duration parsed. When I try to skip the word duration, as I did before with operation or table, grok shows me an error that it doesn't match.
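One way that should work is to match the literal word and capture only the number (a sketch; it assumes the value is always a number followed by "ms" in square brackets), replacing the final %{GREEDYDATA:rest} in the pattern above with:

.*duration\s*\[%{NUMBER:duration:float} ms\] %{GREEDYDATA:rest}

The :float suffix stores duration as a number rather than a string, which makes it usable in Kibana aggregations.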