When I use the logstash-input-cloudwatch plugin, it emits an INFO-level log line for every CloudWatch API query it makes.
This spams the log file.
Could this plugin be enhanced to reduce this output, or to log these per-request lines at a different level (e.g. DEBUG)?
```
[2019-01-15T15:19:45,426][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.053792 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:45 UTC,end_time:2019-01-15 07:19:45 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
[2019-01-15T15:19:45,626][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.190503 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:45 UTC,end_time:2019-01-15 07:19:45 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
[2019-01-15T15:19:45,840][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.199356 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:45 UTC,end_time:2019-01-15 07:19:45 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
[2019-01-15T15:19:45,892][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.048458 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:45 UTC,end_time:2019-01-15 07:19:45 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
[2019-01-15T15:19:46,074][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.179882 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:45 UTC,end_time:2019-01-15 07:19:45 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
[2019-01-15T15:19:46,279][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.203297 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:46 UTC,end_time:2019-01-15 07:19:46 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
[2019-01-15T15:19:46,437][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.153956 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:46 UTC,end_time:2019-01-15 07:19:46 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
[2019-01-15T15:19:46,613][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.173233 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:46 UTC,end_time:2019-01-15 07:19:46 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
[2019-01-15T15:19:46,781][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.1655 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:46 UTC,end_time:2019-01-15 07:19:46 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
[2019-01-15T15:19:47,002][INFO ][logstash.inputs.cloudwatch] [Aws::CloudWatch::Client 200 0.219327 0 retries] get_metric_statistics(namespace:"AWS/EBS",metric_name:"VolumeTotalWriteTime",start_time:2019-01-15 07:09:46 UTC,end_time:2019-01-15 07:19:46 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"VolumeId",value:"[FILTERED]"}])
```
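As a possible workaround (not a fix in the plugin itself), Logstash's per-logger level configuration should let you raise the threshold for just this logger. A minimal sketch, assuming the logger name matches the `logstash.inputs.cloudwatch` shown in the output above, added to `config/log4j2.properties`:

```properties
# Raise only the cloudwatch input's logger to WARN so the per-request
# INFO lines above are suppressed; all other loggers keep their
# default level. "cloudwatch" here is an arbitrary local key.
logger.cloudwatch.name = logstash.inputs.cloudwatch
logger.cloudwatch.level = warn
```

The same change can also be made at runtime without a restart via Logstash's logging API, e.g. `curl -XPUT 'localhost:9600/_node/logging' -H 'Content-Type: application/json' -d '{"logger.logstash.inputs.cloudwatch":"WARN"}'` (assuming the default API port 9600). That said, a plugin-side option to demote these request lines to DEBUG would still be the cleaner fix.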