Logstash http_poller REST API push more than 1000 records

We are using http_poller to execute a REST API call and push the response into an Elasticsearch index in one go, but by default only 1000 records are pushed into Elasticsearch.

How can we increase the limit or push all records into the index?
How can we set the offset & limit in the URL and execute the same API multiple times to push all response items?

You mean the http_poller input?

I don't think there is any limit related to the number of records; there is nothing about this in the documentation. This seems to be a limit on your API endpoint, not on the poller input.

With the http_poller input you can't; it does not support pagination.
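
If you know how the endpoint paginates, one crude workaround is to define one urls entry per page, so the input fetches every page on each schedule run. A minimal sketch, assuming a hypothetical API that takes offset and limit query parameters:

input {
  http_poller {
    urls => {
      # hypothetical paginated requests; parameter names depend on your API
      page_1 => "https://api.example.com/items?offset=0&limit=1000"
      page_2 => "https://api.example.com/items?offset=1000&limit=1000"
      page_3 => "https://api.example.com/items?offset=2000&limit=1000"
    }
    codec => "json"
    schedule => { cron => "*/5 * * * * UTC" }
  }
}

This only works when you know the number of pages up front; the plugin cannot follow a next-page link in the response.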

Thanks for the reply.

I have checked the REST API in the Postman tool and it returns 1000+ items in the response, but the http_poller input only pushes 1000 records. Also, no limit is set in the REST API URL or parameters.

Is it exactly the same request as in Postman?

You need to share your Logstash pipeline and also the request you made in Postman.

I don't see anything in the documentation that would indicate any kind of limit on the number of returned records.

Is it exactly the same request as in Postman?

RE: Yes, the same REST API body & parameters are used in both Postman and the Logstash pipeline.

We are passing some tokens & confidential data in the REST API. That is why I will not send the Logstash pipeline details.

You can redact it; without looking at the pipeline it is not possible to troubleshoot this.

Can we discuss this in a one-to-one chat, or can I share the details by mail?
Actually, the REST API will not work on your end, because it works only in a specific network.

I can't at the moment, sorry.

If you want you can remove any sensitive information and share your Logstash pipeline.

As mentioned, there is nothing in the documentation that would limit the amount of events returned by the http_poller input.

Also, you asked how you can set the offset; this would mean that your API paginates the response.

Logstash pipeline:

input {
  http_poller {
    urls => {
      qradar_rules_url => {
        method => get
        url => "BASE_URL"
        headers => {
          "Accept" => "application/json"
          "Authorization" => "Bearer KEY"
          "source-id" => "CC"
        }
      }
    }
    proxy => "PROXY_URL"
    request_timeout => 60000
    codec => "json"
    schedule => { cron => "*/5 * * * * UTC" }
  }
}
filter {
  split { field => "data" }
  ruby {
    code => '
      [ "data" ].each { |field|
        event.get(field).each { |k, v|
          event.set(k, v)
        }
        event.remove(field)
      }
    '
  }
}

output {
  elasticsearch {
    hosts => HOST_NAME
    document_id => "%{id}"
    index => "INDEX_NAME"
  }
}

So, you have a custom document id with document_id => "%{id}".

When you said that in Postman you had more than 1000 items, did you validate that you had more than 1000 unique ids in a single request?
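
One way to check, assuming the items live under a data array as your split filter suggests, is to save the Postman response to a file (response.json is just a placeholder name) and count the distinct ids:

jq '[.data[].id] | unique | length' response.json

If that prints 1000, the extra items are duplicates and the custom document_id is overwriting them in the index.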

After removing document_id => "%{id}" from the pipeline and restarting Logstash, I am getting the same result in the Elasticsearch index (1000 items).

You need to provide some evidence; it is pretty hard to troubleshoot without it.

Since you don't want to share the endpoints you are using, even after redacting sensitive information, I suggest that you check your API documentation to see whether it paginates.

It seems that you are querying some QRadar API, and if I'm not wrong the QRadar API will paginate the response.
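
If it is the QRadar REST API, pagination is controlled with the Range header (items=x-y), and the server may cap how many items come back in a single response. A sketch of what requesting a larger range could look like in your input (the exact header value and supported range depend on your QRadar version, so check its API documentation):

http_poller {
  urls => {
    qradar_rules_url => {
      method => get
      url => "BASE_URL"
      headers => {
        "Accept" => "application/json"
        "Authorization" => "Bearer KEY"
        # QRadar-style pagination header; items=0-4999 is just an example value
        "Range" => "items=0-4999"
      }
    }
  }
  codec => "json"
}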

The http_poller does not limit the amount of records in the response, so this is not an issue with the input plugin.

You also need to check whether this filter is correct:

filter {
  split { field => "data" }
  ruby {
    code => '
      [ "data" ].each { |field|
        event.get(field).each { |k, v|
          event.set(k, v)
        }
        event.remove(field)
      }
    '
  }
}

I can't validate it because you didn't share any sample data, so it is not possible to know what the output of this will be.

Also, try to replicate the same request using a tool like curl.
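
For example, something along these lines, using the same redacted placeholders as your pipeline (the .data path is an assumption based on your split filter):

curl -s "BASE_URL" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer KEY" \
  -H "source-id: CC" | jq '.data | length'

If curl already returns only 1000 items, the limit is on the API side, not in Logstash.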

It took me a while to understand that. It would be simpler to write it as

code => '
    event.remove("data").each { |k, v|
        event.set(k, v)
    }
'
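
This works because event.remove returns the value that was removed, so the hash that was under data can be flattened to the top level in a single pass, without looping over a one-element field list.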
