Elasticsearch - Could not index event to Elasticsearch status=>400

We are trying to poll data from a device (PDU) through the SNMP input plugin. The device's MIB file has been imported into Logstash, as described in SNMP input plugin | Logstash Reference [8.3] | Elastic.
When executing the config with /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-snmp.conf, we get a warning (Elasticsearch - Could not index event to Elasticsearch status=>400) and Kibana shows 'No Data'.

Logstash-SNMP.conf file:

input {
  snmp {
    hosts => [{host => "udp:xx.xx.xx.xx/161" community => "public" version => "2c" retries => 2 timeout => 1000}]
    tables => [{"name" => "cpiPduTableCount" "columns" => ["1.3.6.1.4.1.30932.1.10.1.2.10"]}]
    mib_paths => [ "/etc/logstash/libsmi/" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    action => "index"
    hosts => ["xx.xx.xx.xx:9200"]
    index => "snmp"
  }
}

The warning log:

[WARN ] 2022-08-01 06:18:47.884 [[main]>worker1] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"snmp", :routing=>nil}, {"host"=>{"ip"=>"xx.xx.xx.xx"}, "@timestamp"=>2022-08-01T10:18:47.376378Z, "cpiPduTableCount"=>[{"index"=>"1.1.12.48.48.48.69.68.51.48.48.70.69.54.48", 
"iso.org.dod.internet.private.enterprises.cpi.products.unity.econnect.systeminfo.cpiPduTable.cpiPduEntry.cpiPduMac.12.48.48.48.69.68.51.48.48.70.69.54.48"=>"00:0E:D3:00:FE:60"}, :response=>
{"index"=>{"_index"=>"snmp", "_id"=>"TWzqWIIBcGeie2fsUvWV", "status"=>400, "error"=>{"type"=>"illegal_argument_exception",
 "reason"=>"Limit of mapping depth [20] has been exceeded due to object field [cpiPduTableCount.iso.org.dod.internet.private.enterprises.cpi.products.unity.econnect.systeminfo.cpiPduTable.cpiPduEntry.cpiPduHasOutletControl.12.48.48.48.69.68.51]"}}}}

Elasticsearch has a limit on the depth to which objects can be nested inside objects. You could increase that by changing index.mapping.depth.limit, or you could add an oid_path_length or oid_root_skip option to determine which parts of the name cpiPduTableCount.iso.org.dod.internet.private.enterprises.cpi.products.unity.econnect.systeminfo.cpiPduTable.cpiPduEntry.cpiPduHasOutletControl.12.48.48.48.69.68.51 are kept.
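As a sketch of the second approach, based on your original input block (the skip count of 12 is only an illustration; count the leading name components you actually want dropped):

```
input {
  snmp {
    hosts => [{host => "udp:xx.xx.xx.xx/161" community => "public" version => "2c"}]
    tables => [{"name" => "cpiPduTableCount" "columns" => ["1.3.6.1.4.1.30932.1.10.1.2.10"]}]
    mib_paths => [ "/etc/logstash/libsmi/" ]
    # skip the leading "iso.org.dod.internet.private.enterprises..." components
    # of resolved names so the event field stays within the mapping depth limit
    oid_root_skip => 12
  }
}
```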


By using oid_path_length I was able to cut down the unwanted parts of the name, and indexing succeeded without error. But when I added more OIDs and multiple targets, I got an error:

[WARN ] 2022-08-09 06:05:19.081 [[main]>worker6] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"snmp", :routing=>nil}, {"cpiPduBranchPower.3.12.48.48.48.69.68.51.48.48.70.69.54.48" => 0
:response=>{"index"=>{"_index"=>"snmp", "_id"=>"nh4QgoIBizjuIP1-21th", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Limit of total fields [1000] has been exceeded while adding new fields [169]"}}}}} 

I can't use the oid_path_length option any further, because that would remove a necessary part of the name:

"cpiPduBranchPower.3.12.48.48.48.69.68.51.48.48.70.69.54.48" => 0

So I am trying to raise the fields limit from 1000 to 2000 with the command below, but it doesn't change the limit: the value remains 1000 and the error continues.

curl -s -XPUT https://elasticsearchIP/snmp/_settings  -H 'Content-Type: application/json' -d '{"index.mapping.total_fields.limit": 2000}'
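One thing worth checking: the cluster is secured (the elasticsearch output uses user/password and a CA cert), so an unauthenticated curl with -s may be failing silently, and the command above also omits the :9200 port that the Logstash output uses. A hedged sketch of an authenticated settings update plus a read-back to verify it took effect (credentials, cert path, and host are assumptions copied from the Logstash output config):

```shell
# assumes the same credentials and CA cert as the Logstash elasticsearch output
curl --cacert /etc/logstash/certs/http_ca.crt -u elastic:password \
  -XPUT "https://elasticsearchIP:9200/snmp/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.mapping.total_fields.limit": 2000}'

# read the setting back to confirm the change actually applied
curl --cacert /etc/logstash/certs/http_ca.crt -u elastic:password \
  "https://elasticsearchIP:9200/snmp/_settings?filter_path=*.settings.index.mapping"
```

If the PUT returns an error body instead of {"acknowledged":true}, that output (hidden by -s in the original command) should explain why the limit never changed.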

Logstash-snmp.conf:

input {
  snmp {
    walk => ["1.3.6.1.4.1.30932.1.10.1.2.10", "1.3.6.1.4.1.30932.1.10.1.3.110", "1.3.6.1.4.1.30932.1.10.1.7.100"]
    hosts => [{host => "udp:x.x.x.x/161" community => "public" version => "2c"}, {host => "udp:x.x.x.x/161" community => "public" version => "2c"}, {host => "udp:x.x.x.x/161" community => "public" version => "2c"}]
    mib_paths => "/etc/logstash/mibs/CPI-PDU-MIB.dic"
    oid_path_length => 15
    interval => 30
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    action => "index"
    hosts => ["https://x.x.x.x:9200"]
    cacert => "/etc/logstash/certs/http_ca.crt"
    index => "snmp"
    user => "elastic"
    password => "password"
  }
}

You should ask about that in the Elasticsearch forum. It is not really a Logstash question.

Thank you, I will post this on the Elasticsearch forum.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.