Since opening this topic I think I now understand how to use the metrics Logstash writes. I added an id to each filter plugin and found the gathered data in the monitoring index of Logstash:
{
  "id": "main",
  "reloads": {
    "failures": 0,
    "successes": 0
  },
  "ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
  "hash": "06b99a6f6c1f469652fd3b3a68a4c58ecb820d1b050ece27b8c8b91ec0db2068",
  "queue": {
    "events_count": 0,
    "type": "memory"
  },
  "vertices": [
    {
      "id": "ea6766da744c7c1136356c78f9d62b8b6721003996f0d0e8176efff59b74641f",
      "queue_push_duration_in_millis": 973826,
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_out": 1383424
    },
    {
      "id": "rpcrouter_ruby_multiline",
      "duration_in_millis": 1521,
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_out": 112279,
      "events_in": 1383424
    },
    {
      "id": "rpcrouter_drop",
      "duration_in_millis": 7220170,
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_out": 0,
      "events_in": 1271145
    },
    {
      "id": "rpcrouter_mutate_remove_date",
      "duration_in_millis": 48,
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_out": 112279,
      "events_in": 112279
    },
    {
      "id": "delete_beats_tag_mutate",
      "duration_in_millis": 55,
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_out": 112279,
      "events_in": 112279
    },
    {
      "id": "logstash_processing_end_time_date_1",
      "duration_in_millis": 92,
      "events_out": 112279,
      "long_counters": [
        {
          "name": "matches",
          "value": 112279
        }
      ],
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_in": 112279
    },
    {
      "id": "rpcrouter_mutate_replace",
      "duration_in_millis": 203,
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_out": 112277,
      "events_in": 112277
    },
    {
      "id": "rpcrouter_date",
      "duration_in_millis": 1123,
      "events_out": 1383424,
      "long_counters": [
        {
          "name": "matches",
          "value": 1383422
        }
      ],
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_in": 1383424
    },
    {
      "id": "logstash_processing_start_time_ruby",
      "duration_in_millis": 53938,
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_out": 1383424,
      "events_in": 1383424
    },
    {
      "id": "logstash_processing_end_time_date_2",
      "duration_in_millis": 63,
      "events_out": 112279,
      "long_counters": [
        {
          "name": "matches",
          "value": 112279
        }
      ],
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_in": 112279
    },
    {
      "id": "rpcrouter_grok_main",
      "duration_in_millis": 2971,
      "events_out": 1383424,
      "long_counters": [
        {
          "name": "matches",
          "value": 1383422
        },
        {
          "name": "failures",
          "value": 2
        }
      ],
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_in": 1383424
    },
    {
      "id": "logstash_processing_end_time_ruby",
      "duration_in_millis": 1484,
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_out": 112279,
      "events_in": 112279
    },
    {
      "id": "elasticsearch_output_main",
      "duration_in_millis": 319680,
      "pipeline_ephemeral_id": "417d07f4-6144-4697-8f67-00e08a1d25de",
      "events_out": 112279,
      "events_in": 112279
    }
  ],
  "events": {
    "duration_in_millis": 8522978,
    "queue_push_duration_in_millis": 973826,
    "out": 1383424,
    "in": 1383424,
    "filtered": 1383424
  }
}
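For anyone who wants to repeat the calculation, here is a minimal sketch that derives the per-event cost of each filter from a stats document like the one above. stats_json is a shortened, hypothetical excerpt of the metrics shown; with the real document you would read the full JSON from the monitoring index instead.

```python
import json

# Shortened excerpt of the pipeline stats above (two filter vertices only).
stats_json = """
{
  "vertices": [
    {"id": "rpcrouter_drop", "duration_in_millis": 7220170, "events_in": 1271145},
    {"id": "rpcrouter_grok_main", "duration_in_millis": 2971, "events_in": 1383424}
  ]
}
"""

stats = json.loads(stats_json)
for v in stats["vertices"]:
    # Input vertices report queue_push_duration_in_millis instead of
    # duration_in_millis, so guard for both fields before dividing.
    if "duration_in_millis" in v and v.get("events_in"):
        print(f'{v["id"]}: {v["duration_in_millis"] / v["events_in"]:.3f} ms/event')
```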
Here are my new questions:

- I assumed the average processing time per filter plugin call to be duration_in_millis / events_in, and that the sum of these averages would be somewhere near my measured logstas.processing.filterTime. But my measured filterTime is much higher.
- So where does the rest of my filterTime go?
- Why does the drop filter (id=rpcrouter_drop) take so long? duration_in_millis / events_in works out to roughly 5.7 ms per dropped event, which feels incredibly long to me.
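To make the gap concrete, here is a sketch that redoes this arithmetic with the numbers copied from the filter vertices above (input and output vertices deliberately excluded); the measured filterTime I compare against comes from my own instrumentation and is not part of this document.

```python
# (duration_in_millis, events_in) per filter vertex, copied from the
# pipeline stats above.
filters = {
    "rpcrouter_ruby_multiline": (1521, 1383424),
    "rpcrouter_drop": (7220170, 1271145),
    "rpcrouter_mutate_remove_date": (48, 112279),
    "delete_beats_tag_mutate": (55, 112279),
    "logstash_processing_end_time_date_1": (92, 112279),
    "rpcrouter_mutate_replace": (203, 112277),
    "rpcrouter_date": (1123, 1383424),
    "logstash_processing_start_time_ruby": (53938, 1383424),
    "logstash_processing_end_time_date_2": (63, 112279),
    "rpcrouter_grok_main": (2971, 1383424),
    "logstash_processing_end_time_ruby": (1484, 112279),
}

# Average cost per event for each filter.
for name, (duration_ms, events_in) in filters.items():
    print(f"{name}: {duration_ms / events_in:.4f} ms/event")

# Sum of all filter durations, for comparison with the measured filterTime
# and with the pipeline-level events.duration_in_millis above (8522978 ms,
# which also covers the elasticsearch output vertex).
print("total filter ms:", sum(d for d, _ in filters.values()))
```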