In the above JSON this is the sub-interface:
"sub-iface-state-name": "GigabitEthernet0/1/0/20.224". The .224 at the end denotes that it is a sub-interface.
All of the fields with "sub" in the field name are then the metrics for that sub-interface. For example, the metrics below belong to the sub-interface GigabitEthernet0/1/0/20.224.
The physical interface GigabitEthernet0/1/0/20 has the following sub-interfaces:
GigabitEthernet0/1/0/20.221
GigabitEthernet0/1/0/20.222
GigabitEthernet0/1/0/20.223
GigabitEthernet0/1/0/20.224
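As a sketch, one of these events might look roughly like this; apart from sub-iface-state-name, the field names and values are hypothetical stand-ins for the actual "sub" metrics on the stream:

{
  "device": "router-1",
  "iface-name": "GigabitEthernet0/1/0/20",
  "sub-iface-state-name": "GigabitEthernet0/1/0/20.224",
  "sub-in-octets": 123456789,
  "sub-out-octets": 987654321
}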
In Kibana, when I search for the specific router and physical interface, I only see the last sub-interface for that physical interface displayed in the output. It seems like whichever sub-interface metric came last on the stream is what gets stored:
Correction to my previous statement: I was only able to get the forked pipeline working, but not multiple aggregate stanzas in one pipeline. I tried using the aggregate stanzas below in a single pipeline but was not able to generate the bit rate fields (in-mbps, out-mbps).
Yes, sub-interface data comes in multiple separate events. I think I need to keep working with forked pipelines until I can create an individual document with metrics for the physical interface and separate documents with metrics for each sub-interface.
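For reference, a minimal sketch of that kind of forked (pipeline-to-pipeline) setup; the pipeline names and config paths here are placeholders, not the actual configuration:

# pipelines.yml: one intake pipeline forking to one pipeline per document type
- pipeline.id: intake
  path.config: "/etc/logstash/conf.d/intake.conf"
- pipeline.id: physical
  path.config: "/etc/logstash/conf.d/physical.conf"
- pipeline.id: subiface
  path.config: "/etc/logstash/conf.d/subiface.conf"

# intake.conf sends a copy of every event to both downstream pipelines
output {
  pipeline { send_to => ["physical", "subiface"] }
}

# physical.conf (and similarly subiface.conf) reads from its virtual address
input {
  pipeline { address => "physical" }
}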
Problems with nil will often mean you are referencing a field's value without checking that it exists.
The first could be caused (just an example) by x = someArray[event.get("someField")] when there is no [someField] on the event. The second could be caused by y = event.get("a") - event.get("b") when there is no [a] field on the event.
You can fix these by testing whether the value is nil before using it:
a = event.get("a")
b = event.get("b")
if a and b
    # both fields exist, so the subtraction is safe
    y = a - b
end
event.to_hash.each { |k, v|
    if k !~ /^sub/    # I don't want any fields beginning with "sub" in the map
        unless map[k]
            map[k] = v
        end
    end
}
event.cancel
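For context, a sketch of the aggregate stanza that code could sit inside; the task_id pattern, timeout, and tag name are assumptions rather than the actual settings:

aggregate {
  task_id => "%{device}-%{iface-name}"
  code => "
    event.to_hash.each { |k, v|
      if k !~ /^sub/
        unless map[k]
          map[k] = v
        end
      end
    }
    event.cancel
  "
  push_map_as_event_on_timeout => true
  timeout => 120
  timeout_tags => ["_aggregatetimeout-phy"]
  timeout_task_id_field => "task_id"
  timeout_timestamp_field => "@timestamp"
}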
I'm getting closer to my end result, which is to store the physical-interface and sub-interface data in separate documents inside the same index.
However, I don't understand why I'm not able to generate the sub-in-mbps/sub-out-mbps bit rates from pipeline20, while the in-mbps/out-mbps bit rates are being generated from pipeline30.
I believe the aggregate filters in pipeline20 and pipeline30 are not working properly, because if I uncomment event.cancel() then no events get stored to elastic. Also, no events are being generated with the _aggregatetimeout-bitrate-sub or _aggregatetimeout-bitrate-phy timeout tags.
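For illustration, one common pattern for producing bit-rate fields from an aggregate timeout is to keep the first and last octet counters plus their timestamps in the map and compute the rate in timeout_code. This is only a sketch under assumed field names and settings, not the actual pipeline20/pipeline30 configuration:

aggregate {
  task_id => "%{device}-%{sub-iface-state-name}"
  code => "
    octets = event.get('sub-in-octets')
    ts = event.get('@timestamp').to_f
    if octets
      map['first-octets'] ||= octets
      map['first-ts'] ||= ts
      map['last-octets'] = octets
      map['last-ts'] = ts
    end
  "
  push_map_as_event_on_timeout => true
  timeout => 120
  timeout_tags => ["_aggregatetimeout-bitrate-sub"]
  timeout_code => "
    first_ts = event.get('first-ts')
    last_ts = event.get('last-ts')
    if first_ts and last_ts and last_ts > first_ts
      delta_octets = event.get('last-octets') - event.get('first-octets')
      event.set('sub-in-mbps', delta_octets * 8 / ((last_ts - first_ts) * 1000000.0))
    end
  "
}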
You may have reached (or even passed) the limit of what it makes sense to do in an aggregate filter. Or four.
It might make sense to go back to the initial data set, and review whether the approach you have managed to piece together actually makes sense given how much you have learned about logstash in the last several weeks.
If you think it does then I would suggest sending the output from each of the three pipelines to files using a file output, which will dump data as JSON. Then review each file and make sure the data looks the way you think it does.
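A minimal sketch of that kind of debugging output, with a placeholder path:

output {
  file {
    path => "/tmp/pipeline20-debug.json"
    codec => json_lines
  }
}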
If there are two aggregate filters in one Logstash pipeline configuration, is it possible to prevent the output of the first aggregate from passing through the second aggregate? I'm noticing that the output from the second aggregate contains the timeout_tags from the first aggregate.
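One way to do that, assuming the first aggregate sets a timeout tag such as _aggregatetimeout-bitrate-phy, is to wrap the second aggregate in a conditional on [tags] so that events created by the first aggregate's timeout skip it:

filter {
  # first aggregate stanza here, with timeout_tags => ["_aggregatetimeout-bitrate-phy"]

  if "_aggregatetimeout-bitrate-phy" not in [tags] {
    # second aggregate stanza here; timeout events from the first aggregate
    # carry the tag above and therefore bypass this block
  }
}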
Currently, each metric (property-name) for a specific component-name comes in as a separate event. As you can see here, there are two separate events for FPC4:NPU1, each containing a different property-name.
I want to use the same code from above to aggregate all of the field values for a specific task_id into one document and store it in elastic, but it does not seem to work. The only way to get it to work is to manually map the fields.
This does not work:
if [component-name] and [property-name] and [device] {
  aggregate {
    task_id => "%{device}-%{component-name}"
    timeout_task_id_field => "task_id"
    timeout_timestamp_field => "@timestamp"
    code => "
      event.to_hash.each { |k, v|
        unless map[k]
          map[k] = v
        end
      }
      event.cancel()
    "
    push_previous_map_as_event => true
  }
}
Also, I don't know why I had to do: map[event.get('property-name')] = event.get('property-value')
Instead of: map['property-name'] = event.get('property-value')
There are a ton more property-names in this document for FPC4:NPU1, but I couldn't screenshot all of them in one image. This is what I was trying to accomplish with the cooler-looking code stanza.
The second will create a field called [property-name] with a value like 27,952. The first will create a field called [mem-util-next-hop-exception-bytes-allocated] with that value.
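Putting that together, a sketch of an aggregate stanza that builds one document per task_id with a field for every property; the timeout value is an assumption:

aggregate {
  task_id => "%{device}-%{component-name}"
  code => "
    # keyed by the property's name, so each property becomes its own field
    map[event.get('property-name')] = event.get('property-value')
    event.cancel
  "
  push_previous_map_as_event => true
  timeout => 60
  timeout_task_id_field => "task_id"
}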