Sorry, but I did not have time to test this out as thoroughly as I would like.
Since you want to query your SOAP server from Logstash, http_poller is definitely the plugin to use. But I'm not familiar with it, and there is no example of a POST request in the documentation, so you are on your own for the moment.
I will keep you posted when I have time to run a solid test, hopefully soon.
@rkhapre, just in case you have not found a solution already:
I admit that the available documentation does not make it obvious how to execute a POST request with a body.
The http_poller docs say it supports all options from Manticore::Client, but I could not find a complete POST request example there either.
Here is my configuration against the first public SOAP endpoint I found online:
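(The original configuration is not reproduced in this thread. A minimal sketch of what an http_poller POST with a SOAP body might look like is below; the endpoint URL, envelope contents, and schedule are placeholders, not the actual service used:)

```
input {
  http_poller {
    urls => {
      soap_request => {
        method => post
        url => "http://example.com/soap-endpoint"   # placeholder URL
        headers => {
          "Content-Type" => "text/xml; charset=utf-8"
        }
        # The key point: pass the SOAP envelope as the request body
        body => '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><!-- request payload --></soap:Body></soap:Envelope>'
      }
    }
    request_timeout => 60
    schedule => { "every" => "60s" }
    codec => "plain"
  }
}
```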
This is working perfectly, as you stated above. I was missing the "body =>" option.
But I see there are two issues:
1. If you keep the Logstash command running, it keeps loading the records into Elasticsearch. These records are duplicates, but they get loaded into ES anyway because of the @timestamp field. Is there any way to do only an incremental load?
2. The data getting loaded into Elasticsearch is the raw XML response; I thought it would split the XML tags and load each XML tag as a field in ES. I think I will have to use the XML filter plugin to add a field for each XML tag.
Hi @wiibaa,
If the XML response holds 10 records, how will I split it into 10 different records?
Currently all 10 records come in as one record in the "message" field in ES, since this is an XML response.
Is there any way I can break the SOAP response, which is XML output, into a JSON format where each XML tag acts as a column in Elasticsearch/Kibana?
We should indeed be able to get the XML into the event with the xml filter; then, if you need to split records, the split filter should help.
Can you post a sample request message and I will have a look?
The duplicate handling between two executions remains a complex issue. One rough idea would be to either use the elasticsearch filter to check whether your records are already indexed, or use the upsert mode of the elasticsearch output with a specific document_id.
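A hedged sketch of the second idea: derive a deterministic document_id from the record content with the fingerprint filter, so re-polling the same records overwrites them instead of duplicating (the index name and source field below are assumptions, not from the thread):

```
filter {
  # Hash the raw payload so identical records always map to the same _id
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "MURMUR3"
  }
}
output {
  elasticsearch {
    index         => "soap-records"                  # assumed index name
    document_id   => "%{[@metadata][fingerprint]}"   # same record => same _id
    action        => "update"
    doc_as_upsert => true
  }
}
```

With this, a second poll of unchanged data results in updates to existing documents rather than new ones, at the cost of losing the per-poll history.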
Here is my quick try at it. I have put your XML on one line for testing with stdin, but it should work the same as-is. Tested with LS 2.4, but it should work as long as you have the latest xml filter in your install.
input {
  stdin {
  }
}
filter {
  # Brutal extraction of the ns1:Value elements from the SOAP message
  xml {
    source            => "message"
    target            => "xmldata"
    store_xml         => false
    remove_namespaces => true
    xpath             => ["//Value", "value"]
    remove_field      => "message"
  }
  # Split the array into several events, one per record
  split {
    field => "value"
  }
  # Parse the remaining XML string to populate the event with fields
  xml {
    source       => "value"
    target       => "json-value"
    force_array  => false
    remove_field => "value"
  }
}
output {
  stdout { codec => rubydebug }
}