Logstash HTTP poller plugin

Hello,
I am using the Logstash http_poller plugin to query the JIRA API, and it works fine. I am trying to fetch tickets whose status was updated in the last 5 minutes. The problem is that when there is no update, the plugin still polls the API and indexes events that contain only fields such as _doc, _version, _score, etc., so storage is filling up. How do I remove these fields, and can they be removed at all? I tried remove_field but it is not working, since these fields are not from JIRA. When there is no update, nothing should be indexed.

If the API responds then the input will create an event, even if it contains no useful data. You can use conditionals to test whether the event contains information you want to retain, and drop the event if it does not.
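For example, a JIRA search response includes a total field with the number of matching issues, so one option (a sketch, assuming the response body is parsed by the json codec and total arrives as a number) is:

```
filter {
  # If the JIRA search matched no issues, discard the event entirely
  # so nothing reaches the output
  if [total] == 0 {
    drop {}
  }
}
```

If total arrives as a string, convert it to an integer with a mutate filter first so the comparison behaves as expected.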

Thanks for the response, Badger. It works.

Hello Badger,
I am facing another issue. While I am dropping the events that I don't want, some events are getting duplicated: when the plugin polls the API, it sends the same ticket to ES 3 or 4 times. I then started using the fingerprint filter to de-duplicate, which works, but it overwrites the previous events.

Below is the config:
input {
  http_poller {
    urls => {
      JIRA => {
        method => get
        user => "test"
        password => "test"
        url => "jiraurl"
        headers => {
          Accept => "application/json"
        }
      }
    }
    request_timeout => 20
    schedule => { every => "1m" }
    codec => "json"
  }
}

filter {
  mutate {
    convert => { "total" => "integer" }
  }
  if [total] > 0 {
    fingerprint {
      add_field => {
        "reporter" => "%{[issues][0][fields][reporter][displayName]}"
        "assignee" => "%{[issues][0][fields][assignee][displayName]}"
        "status"   => "%{[issues][0][fields][status][name]}"
        "ticketId" => "%{[issues][0][key]}"
      }
      source => ["reporter", "assignee", "status", "ticketId"]
      concatenate_sources => true
      target => "[@metadata][fingerprint]"
      method => "SHA256"
      key => "test"
      base64encode => true
    }
    mutate {
      lowercase => [ "%{application}" ]
    }
    ruby {
      code => '
        event.to_hash.each { |k, v|
          if v == "" or v.to_s.start_with?("%{[issues]") or v.to_s.start_with?("%{application}")
            event.remove(k)
          end
        }
      '
    }
  } else {
    drop {}
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "test-%{+YYYY.MM.dd}"
    document_id => "%{[@metadata][fingerprint]}"
  }
}

The add_field option is part of event "decoration", which happens after the filter successfully executes (if it successfully executes). So the fields you are using for the fingerprint will not exist at the time they are used. Split that into a separate mutate filter before the fingerprint.
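That is, something along these lines (a sketch based on the config above):

```
filter {
  # Populate the fields first, in their own mutate filter...
  mutate {
    add_field => {
      "reporter" => "%{[issues][0][fields][reporter][displayName]}"
      "assignee" => "%{[issues][0][fields][assignee][displayName]}"
      "status"   => "%{[issues][0][fields][status][name]}"
      "ticketId" => "%{[issues][0][key]}"
    }
  }
  # ...so they already exist when the fingerprint filter reads them
  fingerprint {
    source => ["reporter", "assignee", "status", "ticketId"]
    concatenate_sources => true
    target => "[@metadata][fingerprint]"
    method => "SHA256"
    key => "test"
    base64encode => true
  }
}
```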

Hello Badger,
I have tried this option before. This approach does the following:

1) Let's say I have a ticket id test-1234 with status Open. The info gets indexed into ES.
2) When I update ticket test-1234 to status InProgress, instead of updating the ticket's status it creates another entry.

So I have two entries in ES:
test-1234 Open
test-1234 InProgress
I only need the status to be updated instead of another entry being created.

If the update has all of the fields for the ticket then just use the ticket id as the document id.
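For example (a sketch, assuming the ticketId field is populated as above), keying the document on the ticket id means each poll overwrites the same document instead of creating a new one:

```
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "test-%{+YYYY.MM.dd}"
    # One document per ticket: later polls for the same ticket
    # overwrite it rather than adding another entry
    document_id => "%{ticketId}"
  }
}
```

Note that with a dated index name, an update arriving on a later day would still land in a different index; if that matters, use a single index instead.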

If it does not, then you can use conditionals to determine whether you are processing a complete ticket or an update, and set the options on the elasticsearch output as needed. I think that would look like

filter {
    if some condition {
        mutate { add_field => { "[@metadata][action]" => "index" "[@metadata][upsert]" => false } }
    } else {
        mutate { add_field => { "[@metadata][action]" => "update" "[@metadata][upsert]" => true } }
    }
}
output {
    elasticsearch {
        action => "%{[@metadata][action]}"
        doc_as_upsert => "%{[@metadata][upsert]}"
        ...
    }
}

Thanks Badger. I updated the fingerprint to use only the ticketId in the source fields instead of all of them, and it is giving the results I expected. I will have to monitor it for some time though.
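For anyone landing here later, that change amounts to something like this (a sketch; field paths as in the earlier config):

```
filter {
  mutate {
    add_field => { "ticketId" => "%{[issues][0][key]}" }
  }
  # Fingerprint only the ticket id, so every update to the same
  # ticket produces the same document_id and overwrites in place
  fingerprint {
    source => ["ticketId"]
    target => "[@metadata][fingerprint]"
    method => "SHA256"
    key => "test"
    base64encode => true
  }
}
```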