OK, if I understand correctly, Logstash will not transmit data to Elasticsearch but will create a file. Then, thanks to the bulk API, Elasticsearch will read the file and index all the documents in it. If I'm right, what will the update API do?
OK, the use case where I did this was parsing SiteMinder trace logs, where every line has a correlation id and one piece of information about the request. I needed to gather all the information about one request into a single document. I did this with a bulk update using doc_as_upsert, one update for each input line.
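For illustration, a bulk file for that kind of update contains alternating action and document lines, where each `doc` is merged into the document with that id (and creates it if it does not exist yet, thanks to `doc_as_upsert`). The index name, ids, and field names below are invented for the example:

```
{ "update": { "_index": "requests", "_id": "abc123" } }
{ "doc": { "client_ip": "10.0.0.1" }, "doc_as_upsert": true }
{ "update": { "_index": "requests", "_id": "abc123" } }
{ "doc": { "user": "jdoe" }, "doc_as_upsert": true }
```

Two input lines that share the correlation id `abc123` end up merged into a single document.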
So, provided that you can use the 'ID appelSVI' as the document id, what you could do is something like
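A minimal sketch of that idea, if you let Logstash send the updates directly rather than going through a file: the `document_id`, `action`, and `doc_as_upsert` options are real options of the elasticsearch output plugin, but the host, index name, and field reference here are assumptions for the example (the file-plus-curl variant discussed in this thread posts the same action/doc pairs to `_bulk` instead):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "svi-calls"
    # Use the correlation id as the document id so every
    # event for the same call updates the same document.
    document_id => "%{[ID appelSVI]}"
    action => "update"
    doc_as_upsert => true
  }
}
```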
Yes, that's better, but now I have the same problem with the URI parameter. I tried writing it as "Uri", as in the Invoke-WebRequest doc, but the result is the same. I also tried putting backslashes instead of slashes in the path, but the result is still the same.
This is really turning into a PowerShell question rather than a logstash question, and I am not able to test it. With curl, the reason you use --data-binary rather than -d is to tell it not to strip the newlines from the file. I do not know what the equivalent is in PowerShell.
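For reference, the curl version being described would look something like this; the file name is made up, and the Content-Type header is what the bulk endpoint expects for newline-delimited JSON:

```
# --data-binary keeps the newlines intact, which the _bulk
# endpoint needs to separate action and document lines.
curl -s -H "Content-Type: application/x-ndjson" \
  -XPOST "http://localhost:9200/_bulk" \
  --data-binary "@bulk.txt"
```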
The JSON looks OK, except you should not have the blank lines.
I found a solution. The Get-Content cmdlet has a parameter called -Delimiter. By default its value is the newline character (\n), which means Get-Content returns each line separately. I read in the Get-Content doc that if the delimiter the user sets does not appear in the file, Get-Content returns the entire file as a single undelimited object, and that is what we want. I set the delimiter to "?!@" to be certain that nothing in the file would match this string, and it works.
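Putting that together, a PowerShell sketch of the whole call might look like this. The file path and URL are assumptions; note that on PowerShell 3 and later, Get-Content also has a -Raw switch that returns the whole file as a single string, which is a more direct alternative to the "?!@" delimiter trick:

```
# Read the whole bulk file as one string, newlines included.
$body = Get-Content -Raw "C:\tmp\bulk.txt"

# POST it to the _bulk endpoint with the NDJSON content type.
Invoke-RestMethod -Method Post `
    -Uri "http://localhost:9200/_bulk" `
    -ContentType "application/x-ndjson" `
    -Body $body
```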