No more errors in Logstash, and I managed to see the "one" record populated in the index in Kibana.
However, I am still facing the 401 error in Logstash. The ID, API key, and settings are the same as in Postman, and there are no issues with Postman!
401: Unauthorized access. An issue occurred during authentication. This can indicate an incorrect key, id, or other invalid authentication parameters.
I was wondering whether the authentication parameters (the headers) in Logstash meet the Palo Alto requirements.
Are you using the same headers without the hyphens in Postman?
There is a better solution for the header names: put them in quotes and hyphens will be supported. Check this.
Also try adding ecs_compatibility => "disabled".
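As a minimal sketch (the URL, auth ID, and key are placeholders, and the endpoint path is only illustrative), quoted header names in an http_poller input could look like this:

```
input {
  http_poller {
    urls => {
      xdr_alerts => {
        method => post
        url => "https://api.example.com/public_api/v1/alerts/get_alerts"  # placeholder
        headers => {
          "x-xdr-auth-id" => "1"              # quotes keep the hyphens intact
          "Authorization" => "YOUR_API_KEY"   # placeholder
          "Content-Type" => "application/json"
        }
        body => '{"request_data": {}}'
      }
    }
    schedule => { every => "5m" }
    codec => "json"
    ecs_compatibility => "disabled"
  }
}
```

The quoted keys avoid the hyphen problem in header names, and the body carries the "request_data" object the API expects.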
It requires "headers" and "request_data". Thanks to Rios for discovering this!
The question is: does "http_poller" support "request_data"? I cannot seem to find any examples or documentation on this.
- a string url (which will be issued as an HTTP GET).
- a sub-hash containing many useful keys provided by the Manticore backend:
  - url: the String url
  - method: (optional) the HTTP method to use (defaults to GET)
  - user: (optional) the HTTP Basic Auth user. The user must be under an auth sub-hash for Manticore, but this plugin also accepts it either way.
  - password: (optional) the HTTP Basic Auth password. The password must be under an auth sub-hash for Manticore, but this plugin accepts it either way.
  - headers: a hash containing key-value pairs of headers.
  - body: a string (supported only on POST and PUT requests)
  - possibly other options mentioned in the Manticore docs. Note that Manticore options that are not explicitly documented above are not thoroughly tested and therefore liable to break in unexpected ways if we replace the backend.
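To illustrate the sub-hash form with HTTP Basic Auth (the host, user, and password are placeholders), an entry could look like:

```
urls => {
  my_api => {
    method => get
    url => "https://api.example.com/status"   # placeholder
    user => "elastic"                         # accepted at the top level...
    password => "changeme"                    # ...or under an auth sub-hash
    headers => {
      "Accept" => "application/json"
    }
  }
}
```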
I am able to see data after removing index => "ecs-logstash-xdrusalerts", rebooting, and using the default Logstash index.
Q1: Should a 13-digit timestamp use "UNIX_MS"? What is the difference from "UNIX"?
Q2: I need to get the field "detection_timestamp" into "@timestamp". I am exploring the "target =>" option.
Q3: What is the difference between data streams and indices? In this case of an HTTP Poller API POST request, should I be using a data stream? If yes, how do I specify a target data stream in the Logstash output {}?
There is nothing blocking you from inserting data into any index if you have the proper rights.
UNIX time is epoch time. If you have a human-readable timestamp, you have to convert it with the date plugin.
Your time is 1665629477915, in milliseconds (10/13/2022, 4:51:17 AM), so you should use UNIX_MS.
date {
  match => [ "detection_timestamp", "UNIX_MS" ] # 13-digit epoch milliseconds
  target => "@timestamp"
  timezone => "Europe/Berlin" # this is optional
}
Data streams are more for logs or data in general which will not be updated - append-only. Here is a sample.
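To answer Q3, a sketch of an elasticsearch output targeting a data stream (the host and the dataset name "xdrusalerts" are placeholders; the resulting data stream would be named logs-xdrusalerts-default):

```
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]       # placeholder
    data_stream => "true"
    data_stream_type => "logs"
    data_stream_dataset => "xdrusalerts"      # illustrative name
    data_stream_namespace => "default"
  }
}
```

With data_stream enabled you do not set index =>; the target is built from type-dataset-namespace instead.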
I need some guidance on how to parse the JSON object.
You will access it inside LS as [reply][alerts][0] ...
Create fields as you wish inside LS with mutate add_field
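As a sketch, with hypothetical field names (the "severity" key inside an alert is an assumption), copying a value out of the first array element could look like:

```
filter {
  mutate {
    add_field => { "first_alert_severity" => "%{[reply][alerts][0][severity]}" }
  }
}
```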
Can you share the plain text response you get from your HTTP request?
It is pretty hard to understand your issue from all those screenshots; some of them are hard to read. Avoid sharing screenshots unless your issue is related to a visual feature of the stack.
From what I was able to gather, you are making a request to some API and the response is JSON with a field named reply.alerts, which is an array.
It looks like each element in this array is a different alert. If so, a simple split filter on the reply.alerts field would work, as it would create one new document for each element in the array.
Something like this:
filter {
  split {
    field => "[reply][alerts]"
  }
}
But as I said, it is hard to understand the issue from all those screenshots. If you share an example of the result you are getting as plain text, using the preformatted button, it would be easier.