How to determine the bottleneck?

I have the following configuration for Logstash:

input {
  azureblob {
    storage_account_name => ""
    storage_access_key => ""
    container => ""
    registry_path => ""
    codec => "json"
    interval => 20
  }
}

filter {
  split { field => "[value]" }
}

output {
  elasticsearch {
    hosts => [""]
    index => ""
  }
}

I tried to measure the document rate (the 1-minute moving average, rate_1m, from the metrics filter) with the following configuration:

input {
  azureblob {
    storage_account_name => ""
    storage_access_key => ""
    container => ""
    registry_path => ""
    type => "generated"
  }
}

filter {
  if [type] == "generated" {
    metrics {
      meter => "events"
      add_tag => "metric"
    }
  }
}

output {
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "rate: %{[events][rate_1m]}"
      }
    }
  }
}
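
For reference, the same measuring harness could be applied to the full pipeline by putting the metrics filter after the split and keeping the metric events out of Elasticsearch. A sketch, with the same placeholder credentials as above:

input {
  azureblob {
    storage_account_name => ""
    storage_access_key => ""
    container => ""
    registry_path => ""
    codec => "json"
    interval => 20
  }
}

filter {
  split { field => "[value]" }
  # Meter the events that come out of the split
  metrics {
    meter => "events"
    add_tag => "metric"
  }
}

output {
  # Periodic metric events go to stdout; everything else is indexed
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "rate: %{[events][rate_1m]}"
      }
    }
  } else {
    elasticsearch {
      hosts => [""]
      index => ""
    }
  }
}

If the end-to-end rate matches the input-only rate, the elasticsearch output is not the limiting factor.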

This is the rate I'm getting from that input-only configuration:
rate: 0.04980578857764925
rate: 0.04980578857764925
rate: 0.045823537597075144
rate: 0.042159689824745786
rate: 0.038788787145762074
rate: 0.038788787145762074
rate: 0.03568740696370409
rate: 0.03568740696370409
rate: 0.030208737803509546
rate: 0.030208737803509546
rate: 0.025571144482683064
rate: 0.02352658865697199
rate: 0.02164550648912867

But with the generator input plugin I get this:
rate: 47137.93823682722
rate: 48248.462291468255
rate: 49477.902364928195
rate: 50633.044504364756
rate: 51357.98224743107
rate: 52174.88988258032
rate: 52953.85798199835
rate: 53693.7463419169
rate: 54395.856618437334
rate: 54745.9935909323
rate: 55282.01634769808
rate: 55689.00496126213

With those results it seems to me that the issue is with the azureblob input plugin. I also noticed that on the first run with this config file, it took around 30 minutes for the registry file to be created in the blob container. Could this be an issue with the connection between my Logstash and Azure? How can I check where the issue is coming from?
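
To rule the output in or out directly, I could also drive the real elasticsearch output from the generator input and meter it the same way. A sketch, reusing the placeholder hosts and index from above:

input {
  # Synthetic events as fast as Logstash can produce them
  generator {
    type => "generated"
  }
}

filter {
  if [type] == "generated" {
    metrics {
      meter => "events"
      add_tag => "metric"
    }
  }
}

output {
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "rate: %{[events][rate_1m]}"
      }
    }
  } else {
    elasticsearch {
      hosts => [""]
      index => ""
    }
  }
}

If the rate stays high here, the slowdown is on the azureblob side rather than in indexing.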

Some information about my Logstash:
JVM STATS
"jvm" : {
"threads" : {
"count" : 112,
"peak_count" : 113
},
"mem" : {
"heap_used_percent" : 68,
"heap_committed_in_bytes" : 16979263488,
"heap_max_in_bytes" : 16979263488,
"heap_used_in_bytes" : 11617900784,
"non_heap_used_in_bytes" : 161118392,
"non_heap_committed_in_bytes" : 250851328,
"pools" : {
"survivor" : {
"peak_used_in_bytes" : 200605696,
"used_in_bytes" : 200605696,
"peak_max_in_bytes" : 200605696,
"max_in_bytes" : 200605696,
"committed_in_bytes" : 200605696
},
"old" : {
"peak_used_in_bytes" : 11883056344,
"used_in_bytes" : 10936568240,
"peak_max_in_bytes" : 15173353472,
"max_in_bytes" : 15173353472,
"committed_in_bytes" : 15173353472
},
"young" : {
"peak_used_in_bytes" : 1605304320,
"used_in_bytes" : 480726848,
"peak_max_in_bytes" : 1605304320,
"max_in_bytes" : 1605304320,
"committed_in_bytes" : 1605304320
}
}
},
"gc" : {
"collectors" : {
"old" : {
"collection_time_in_millis" : 27531,
"collection_count" : 166
},
"young" : {
"collection_time_in_millis" : 3758202,
"collection_count" : 32540
}
}
},
"uptime_in_millis" : 170491217

PROCESS STATS
"process" : {
"open_file_descriptors" : 98,
"peak_open_file_descriptors" : 100,
"max_file_descriptors" : 65536,
"mem" : {
"total_virtual_in_bytes" : 31645831168
},
"cpu" : {
"total_in_millis" : 174860860,
"percent" : 3,
"load_average" : {
"1m" : 0.52,
"5m" : 0.63,
"15m" : 0.67
}

PIPELINE STATS
"pipelines" : {
"main" : {
"workers" : 32,
"batch_size" : 125,
"batch_delay" : 50,
"config_reload_automatic" : false,
"config_reload_interval" : 3000000000,
"dead_letter_queue_enabled" : false
}

Please let me know if you need more information.
