SNMP Input Plugin Integer Overflow

Hello everyone!

We are gathering our firewall data from our Cisco ASA 5545 via SNMPv3.

Now when we query ciscoMemoryPoolFree, an overflow seems to occur somewhere.
Return value:
CISCO-MEMORY-POOL-MIB::ciscoMemoryPoolFree.1 = Gauge32: 4294967295 bytes

When this value gets saved into Elasticsearch, it is stored as -1.

The mapping in Elasticsearch is:

"ciscoMemoryPoolFree": {
"properties": {
"1": {
"type": "long"
},
"6": {
"type": "long"
},
"7": {
"type": "long"
},
"8": {
"type": "long"
}

It looks like there is an overflow somewhere in the pipeline.

Maybe someone knows more.

Best regards

What else would you expect it to get saved as? 4294967295, or 0xFFFFFFFF, is -1 when interpreted as a signed 32-bit integer.
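To illustrate the reinterpretation (a Ruby sketch, since Logstash itself is Ruby-based; the pack/unpack round trip below is only a demonstration, not something the plugin actually does):

```ruby
# Illustration only: take the unsigned 32-bit pattern 0xFFFFFFFF and
# read the same 32 bits back as a signed integer.
unsigned = 4_294_967_295                     # Gauge32 maximum, 0xFFFFFFFF
signed   = [unsigned].pack("L").unpack1("l") # "L" = unsigned 32-bit, "l" = signed 32-bit
puts signed                                  # => -1
```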

Thanks for your reply.
So if this is the normal behaviour, how can I change the mapping in Logstash so that the correct values are written to Elasticsearch?

As I asked before, what do you consider the "correct value" to be?

I would have expected it to get saved as 4294967295.
That is the amount of free RAM in bytes.

OK, so you want it to be integer, not long. That makes sense, since the RFC defines Gauge32 to be a non-negative integer...

The Gauge32 type represents a non-negative integer, which may
increase or decrease, but shall never exceed a maximum value, nor
fall below a minimum value. The maximum value can not be greater
than 2^32-1 (4294967295 decimal), and the minimum value can not be
smaller than 0. The value of a Gauge32 has its maximum value
whenever the information being modeled is greater than or equal to
its maximum value

But isn't integer defined as having a maximum value of 2^31 - 1?
As stated here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/number.html

So it wouldn't fit into an integer?

Yes it would. It is exactly equal to the largest number you can fit in integer.

Hello!

I tried several conversion methods:

filter {
  mutate {
    convert => {
      "9.ciscoMemoryPoolMIB.ciscoMemoryPoolObjects.ciscoMemoryPoolTable.ciscoMemoryPoolEntry.ciscoMemoryPoolFree.1" => "integer"
    }
  }
}

grok { match => [ "9.ciscoMemoryPoolMIB.ciscoMemoryPoolObjects.ciscoMemoryPoolTable.ciscoMemoryPoolEntry.ciscoMemoryPoolFree.1", "%{NUMBER:testconversion:int}" ] }

Nothing works; I still get only -1.
Any tips?

I retract this. The number you have is the largest 32-bit unsigned integer. However, in Elasticsearch the integer type is a signed 32-bit value, so your value is too large to fit.
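To make the ranges concrete (a quick Ruby check, nothing Elasticsearch-specific):

```ruby
# Signed 32-bit "integer" range vs. the Gauge32 maximum.
int_max  = 2**31 - 1        # 2147483647, largest Elasticsearch "integer"
gauge32  = 2**32 - 1        # 4294967295, the value from the ASA
long_max = 2**63 - 1        # largest Elasticsearch "long"
puts gauge32 > int_max      # => true: too big for integer
puts gauge32 <= long_max    # => true: fits comfortably in a long
```

So the long mapping itself is fine; the sign flip must happen before the value reaches Elasticsearch.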

Well, my solution for the moment is just:

if [9.ciscoMemoryPoolMIB.ciscoMemoryPoolObjects.ciscoMemoryPoolTable.ciscoMemoryPoolEntry.ciscoMemoryPoolFree.1] == -1 {
  mutate {
    replace => {
      "9.ciscoMemoryPoolMIB.ciscoMemoryPoolObjects.ciscoMemoryPoolTable.ciscoMemoryPoolEntry.ciscoMemoryPoolFree.1" => 4294967295
    }
  }
}

Not beautiful, but it is the only thing that works.
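A slightly more general variant (an untested sketch): instead of special-casing -1, the conversion could undo any signed-32-bit wraparound. The helper below is hypothetical and could, for example, be dropped into a Logstash "ruby" filter's code string:

```ruby
# Hypothetical helper: undo a signed-32-bit wraparound by adding 2^32
# to any negative reading. Positive values pass through unchanged.
def to_unsigned32(n)
  n.negative? ? n + 2**32 : n
end

puts to_unsigned32(-1)       # => 4294967295 (the Gauge32 maximum)
puts to_unsigned32(123_456)  # => 123456
```

This also repairs readings other than the maximum, e.g. -2 becomes 4294967294.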

It is indeed a bug in the snmp input plugin.
I wrote a patch and filed an issue on GitHub.

You can find it here: