I am sending syslog messages from a Barracuda spam firewall directly into an Elasticsearch cluster.
The index is populated, but the data is not parsed correctly: all of the log attributes end up in the message field. Please see the attached sample of the data I would like normalized.
The short answer is that you can solve this in the index mapping by using runtime fields, which parse the message and expose the fields you want at query time. That is not the best way, though, since every one of those fields is re-parsed for every record each time the data is queried.
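As a rough sketch of what a runtime field looks like (the index name, field name, and grok pattern here are illustrative placeholders, not the actual Barracuda log format), you define it in the mapping with a Painless script that emits the parsed value:

```json
PUT barracuda-logs/_mapping
{
  "runtime": {
    "src_ip": {
      "type": "keyword",
      "script": {
        "source": "String ip = grok('%{IP:ip}').extract(params._source.message)?.ip; if (ip != null) emit(ip);"
      }
    }
  }
}
```

The field then appears in search results like any mapped field, but the script runs on every hit, which is why this approach does not scale well for high-volume logs.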
The best way is to have an ingest pipeline do the parsing, so the data is indexed already parsed and never needs to be touched again.
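For example (a sketch only; the pipeline name and grok pattern are placeholders, not a real Barracuda pattern), an ingest pipeline with a grok processor is created like this:

```json
PUT _ingest/pipeline/barracuda-syslog
{
  "description": "Parse Barracuda syslog messages (pattern is illustrative)",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:host} %{GREEDYDATA:event}"]
      }
    }
  ]
}
```

You can dry-run the pipeline against a sample document with `POST _ingest/pipeline/barracuda-syslog/_simulate` before wiring it into your ingest path.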
I am not able to parse the Barracuda log correctly. If you already have a pipeline for it, could you please share it with me? Another question: how does the ingest pipeline know which index to run on? Is this executed automatically, or are there additional configuration steps?
I am assuming you are using Beats to send the data. There is a setting in the Elasticsearch output where you can set the pipeline ID. That tells Beats to send the data to Elasticsearch and run that pipeline, which will parse the data.
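For example, in Filebeat's configuration (assuming the ingest pipeline is named barracuda-syslog; the hosts and credentials below are placeholders):

```yaml
output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "filebeat_writer"   # placeholder credentials
  password: "changeme"
  pipeline: "barracuda-syslog"  # run this ingest pipeline on every event
```

With this in place the pipeline runs automatically on every document Filebeat ships, regardless of which index it lands in.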
The above query requires a username and password to be able to write to the index. In addition, the index name changes every 24 hours, meaning I would have to run the command manually every day.
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "action [indices:admin/create] is unauthorized for user [anonymous_user]"
      }
    ],
    "type" : "security_exception",
    "reason" : "action [indices:admin/create] is unauthorized for user [anonymous_user]",
    "caused_by" : {
      "type" : "illegal_state_exception",
      "reason" : "There are no external requests known to support wildcards that don't support replacing their indices"
    }
  },
  "status" : 403
}
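Both issues can be handled without a daily manual step. The 403 above means the request was sent without credentials, so Elasticsearch treated it as anonymous_user; pass credentials with the request (for example `curl -u user:password ...`) or configure them in the Beats output. For the rotating index name, an index template matching the daily pattern applies the same settings to every new index automatically, including a default ingest pipeline. A sketch, with placeholder names:

```json
PUT _index_template/barracuda-template
{
  "index_patterns": ["barracuda-*"],
  "template": {
    "settings": {
      "index.default_pipeline": "barracuda-syslog"
    }
  }
}
```

With `index.default_pipeline` set in the template, every daily `barracuda-*` index created from now on runs the pipeline on ingest, so nothing has to be re-run by hand each day.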