Morning all!
I have a question: if I have a JSON data source like:
{
"log_url" => "http://127.0.0.1/log.txt",
"key" => "22op3dfe",
"raw_msg" => "404.19 – Denied by filtering rule",
"MD5" => "2c5cddf13ab55a1d4eca955dfa32d245",
"syntax" => "text",
"@version" => "1",
"SHA256" => "766be5c99ba674f985ce844add4bc5ec423e90811fbceer5ec84efa3cf1624f4",
"user" => "user",
"URL" => "http://127.0.0.1",
"YaraRule" => [
[0] "no_match"
],
"expire" => "0",
"size" => 107,
"source" => "localhost",
"Msg" => "404 OK",
"filename" => "log.txt",
"@timestamp" => 2020-01-07T13:59:04.000Z
}
and all of this is processed and sent to index1, but I want to split off:
"MD5" => "2c5cddf13ab55a1d4eca955dfa32d245"
and
"@timestamp" => 2020-01-07T13:59:04.000Z
into index2,
does this mean I have to:
A) Run the ingest process twice on the same data source: in pass one, drop MD5 and @timestamp and push the remaining fields to index1, then rerun the ingest and drop everything except MD5 and @timestamp into index2.
B) Use IF statements on the output for fields, so if the field name is MD5, then route to index2.
C) Configure a complex conf that processes the JSON source, pushes MD5 and @timestamp into @metadata, and then, at the end, reads the metadata and pushes it to index2.
D) Do some other uber process that I'm not aware of yet to accomplish this much more easily, leaving time to drink my warm coffee before it goes stone cold.
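For context, the kind of single-pipeline config I imagine for option B would look something like this (untested sketch: the hosts, index names, and whitelist are placeholders I made up, and depending on the Logstash version / ECS settings the clone name may land in [tags] rather than [type]):

```
filter {
  # Duplicate each event; the copy is (in classic mode) marked with type => "hash_only"
  clone {
    clones => ["hash_only"]
  }
  # On the copy only, strip everything except MD5 (prune leaves @timestamp alone;
  # "type" is kept so the output conditional below still works)
  if [type] == "hash_only" {
    prune {
      whitelist_names => ["^MD5$", "^type$"]
    }
  }
}
output {
  if [type] == "hash_only" {
    elasticsearch {
      hosts => ["localhost:9200"]   # placeholder
      index => "index2"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]   # placeholder
      index => "index1"
    }
  }
}
```

No idea if that's the idiomatic way or if there's something simpler, hence the question.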
If this is something that can or needs to be done with Filebeat, for example, please let me know; I'm hoping it can all be done in Logstash.
Thanks!