Problem with indexing Cisco Logfiles

Hello Community,

I am quite new to the ELK Stack and ran into some problems with indexing Cisco logfiles to make them searchable in Kibana.

Background information:
Server A: xx.xxx.xxx.102 (Loghost)
Running syslog-ng and receiving logs from about 400 Cisco devices
Path for Logfiles:
/var/log/remotelogs/switches/[switch-ip-address]/[switch-ipaddress]-[yyyy.mm.dd].log
syslog-ng creates a new logfile every day at 00:00
Using filebeat to send logfiles to Server B.

Filebeat config:

filebeat.inputs:
- type: filestream
  enabled: true
  id: cisco-switch
  paths:
    - /var/log/remotelogs/switches/*/*.log
  tags: [cisco-switch] 
    
- type: filestream
  enabled: true
  id: cisco-asa
  paths:
    - '/var/log/remotelogs/ASA/*/*.log'
  tags: [cisco-asa]

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

output.logstash:
  hosts: ["xx.xx.xx.103:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded

Server B: xx.xx.xx.103 (V-Host with 8 cores @ 2.6GHz and 32GB RAM)
running ELK Stack 8.x and receiving logs from Filebeat via Logstash

Logstash Config: (Input)

input {
  beats {
    port => 5044
  }
}

Logstash Config Filter Switch:

filter {
	if "cisco-switch" in [tags] and "pre-processed" not in [tags] {
		mutate {
			add_tag => [ "pre-processed", "Switch", "Cisco IOS/NXOS" ]
		}
		
		grok {
			patterns_dir => ["/etc/logstash/patterns/*"]
			match => [
				"message", "%{CISCOTIMESTAMP:received} %{SYSLOGHOST:device} %{CISCOTIMESTAMPTZ1:logtime}: %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}",
				"message", "%{CISCOTIMESTAMP:received} %{SYSLOGHOST:device} %{CISCOTIMESTAMPTZ2:logtime}: %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}",
				"message", "%{CISCOTIMESTAMP:received} %{SYSLOGHOST:device} %{TZ:logtime}: %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}",
				"message", "%{CISCOTIMESTAMP:received} %{SYSLOGHOST:device} %{CISCOTIMESTAMPTZ2:logtime}: %{DATA:facility}: %{GREEDYDATA:log_message}",
				"message", "%{CISCOTIMESTAMP:logtime} %{SYSLOGHOST:device} %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}",
				"message", "%{CISCOTIMESTAMP:logtime} %{SYSLOGHOST:device} %{GREEDYDATA:log_message}"
			]
		}
		
		syslog_pri { }
		
		date {
			match => [
				"logtime",
 
				# IOS
				"MMM  d YYYY HH:mm:ss.SSS ZZZ",
				"MMM  d YYYY HH:mm:ss ZZZ",
				"MMM  d YYYY HH:mm:ss.SSS",
				"MMM dd YYYY HH:mm:ss.SSS ZZZ",
				"MMM dd YYYY HH:mm:ss ZZZ",
				"MMM dd YYYY HH:mm:ss.SSS",
				 
				# Nexus
				"YYYY MMM  d HH:mm:ss.SSS ZZZ",
				"YYYY MMM  d HH:mm:ss ZZZ",
				"YYYY MMM  d HH:mm:ss.SSS",
				"YYYY MMM dd HH:mm:ss.SSS ZZZ",
				"YYYY MMM dd HH:mm:ss ZZZ",
				"YYYY MMM dd HH:mm:ss.SSS",
				
				# Logdate without year as timestamp
				"MMM dd HH:mm:ss",
				"MMM  d HH:mm:ss",

				"ISO8601"
			]
			target => "@timestamp"
		}	
	
	}
} 
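Note that the grok patterns above reference custom patterns (CISCOTIMESTAMPTZ1, CISCOTIMESTAMPTZ2) from patterns_dir whose definitions were not shared in the thread. For anyone trying to replicate the pipeline, a plausible reconstruction might look like the following; treat it as a guess at the intent (IOS/NX-OS `service timestamps` formats with timezone), not the poster's actual file:

```
# /etc/logstash/patterns/cisco-timestamps -- hypothetical reconstruction,
# the actual definitions were not shared in the thread

# IOS with "service timestamps log datetime msec show-timezone",
# e.g. "Jul 18 00:00:00.123 UTC"
CISCOTIMESTAMPTZ1 %{MONTH} +%{MONTHDAY} %{TIME} %{TZ}

# NX-OS, year first, e.g. "2022 Jul 18 00:00:00.123 UTC"
CISCOTIMESTAMPTZ2 %{YEAR} %{MONTH} +%{MONTHDAY} %{TIME} %{TZ}
```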

Logstash Filter ASA:

filter {
	if "cisco-asa" in [tags] and "pre-processed" not in [tags] {
		mutate {
			add_tag => [ "pre-processed", "ASA" ]
		}
		
		grok {
			patterns_dir => ["/etc/logstash/patterns/*"]
			match => [
				"message", "%{CISCOTIMESTAMP:logtime} %{SYSLOGHOST:device} %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}"
			]
		}
		
		syslog_pri { }
		
		grok {
				patterns_dir => ["/etc/logstash/patterns/*"]
				match => [
					"log_message", "%{CISCOFW106001}",
					"log_message", "%{CISCOFW106006_106007_106010}",
					"log_message", "%{CISCOFW106014}",
					"log_message", "%{CISCOFW106015}",
					"log_message", "%{CISCOFW106021}",
					"log_message", "%{CISCOFW106023}",
					"log_message", "%{CISCOFW106100}",
					"log_message", "%{CISCOFW110002}",
					"log_message", "%{CISCOFW302010}",
					"log_message", "%{CISCOFW302013_302014_302015_302016}",
					"log_message", "%{CISCOFW302020_302021}",
					"log_message", "%{CISCOFW305011}",
					"log_message", "%{CISCOFW313001_313004_313008}",
					"log_message", "%{CISCOFW313005}",
					"log_message", "%{CISCOFW402117}",
					"log_message", "%{CISCOFW402119}",
					"log_message", "%{CISCOFW419001}",
					"log_message", "%{CISCOFW419002}",
					"log_message", "%{CISCOFW500004}",
					"log_message", "%{CISCOFW602303_602304}",
					"log_message", "%{CISCOFW710001_710002_710003_710005_710006}",
					"log_message", "%{CISCOFW713172}",
					"log_message", "%{CISCOFW733100}",
					"log_message", "%{WORD:action} %{WORD:protocol} %{CISCO_REASON:reason} from %{DATA:src_interface}:%{IP:src_ip}/%{INT:src_port} to %{DATA:dst_interface}:%{IP:dst_ip}/%{INT:dst_port}; %{GREEDYDATA:dnssec_validation}",
					"log_message", "%{CISCO_ACTION:action} %{WORD:protocol} %{CISCO_REASON:reason}.*(%{IP:src_ip}).*%{IP:dst_ip} on interface %{GREEDYDATA:interface}",
					"log_message", "Connection limit exceeded %{INT:inuse_connections}/%{INT:connection_limit} for input packet from %{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port} on interface %{GREEDYDATA:interface}",
					"log_message", "TCP Intercept %{DATA:threat_detection} to %{IP:ext_nat_ip}/%{INT:ext_nat_port}.*(%{IP:int_nat_ip}/%{INT:int_nat_port}).*Average rate of %{INT:syn_avg_rate} SYNs/sec exceeded the threshold of %{INT:syn_threshold}.#%{INT}",
					"log_message", "Embryonic connection limit exceeded %{INT:econns}/%{INT:limit} for %{WORD:direction} packet from %{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port} on interface %{GREEDYDATA:interface}"
				]
			}
	
		date {
			match => [
				"logtime",
 
				# IOS
				"MMM dd HH:mm:ss",
				"MMM  d HH:mm:ss",
				"MMM  d YYYY HH:mm:ss.SSS ZZZ",
				"MMM  d YYYY HH:mm:ss ZZZ",
				"MMM  d YYYY HH:mm:ss.SSS",
				"MMM dd YYYY HH:mm:ss.SSS ZZZ",
				"MMM dd YYYY HH:mm:ss ZZZ",
				"MMM dd YYYY HH:mm:ss.SSS",
				 
				# Nexus
				"YYYY MMM  d HH:mm:ss.SSS ZZZ",
				"YYYY MMM  d HH:mm:ss ZZZ",
				"YYYY MMM  d HH:mm:ss.SSS",
				"YYYY MMM dd HH:mm:ss.SSS ZZZ",
				"YYYY MMM dd HH:mm:ss ZZZ",
				"YYYY MMM dd HH:mm:ss.SSS",
				 
				"ISO8601"
			]
			target => "@timestamp"
		}	
	
	}
}

Logstash Config Output:

output {
	if "cisco-switch" in [tags] {
		if "_grokparsefailure" in [tags] {
			file {
				path => "/var/log/remotelogs/grokfail/grokfail-logparse.log"
			}
		}
		else {
			elasticsearch {
				hosts => ["https://localhost:9200"]
				index => "cisco-switch-%{+yyyy.MM.dd}"
				user => "elastic"
				password => "******************"
				ssl => true
				ssl_certificate_verification => false
			}
		}
	}
	
	if "cisco-asa" in [tags] {
		elasticsearch {
			hosts => ["https://localhost:9200"]
			index => "cisco-asa-%{+yyyy.MM.dd}"
			user => "elastic"
			password => "******************"
			ssl => true
			ssl_certificate_verification => false
		}
	}
	 
}

Now there are multiple problems:

  1. I have a constant data stream of about 3 Mbit/s between the loghost and the ELK stack.

  2. Server B reads logfiles from Server A multiple times and adds them to the index multiple times,
    so they appear multiple times in Discover, even though each line appears only once in the logfile.
    (Remark: yes, a colleague accidentally created a loop.)

  3. Daily indices (cisco-switch-yyyy.mm.dd and cisco-asa-yyyy.mm.dd) are getting very big, up to 10GB, even though the combined size of the available logfiles on the loghost is only 100-200MB.
    I think this is mainly because the logfiles are indexed multiple times.

  4. ASA logs are only indexed for a few hours after the new daily index is created,
    even though the logfile on the loghost covers 24 hours.
    Every day, a few hours after 00:00 when the new logfile is created, no more entries are indexed.

I noticed that these problems began when I changed the timestamp field to use the Cisco timestamps instead of the default timestamp set when Logstash receives the message.

I am also trying to figure out how to delete indices older than 90 days; the Cisco logs on the loghost are deleted after that time as well.
I tried it with ILM but always got errors with the alias and have no more ideas, so any advice would be appreciated.
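For the 90-day cleanup, one alternative to ILM is a small cron script that parses the date out of the daily index name and deletes anything older. A minimal sketch of the selection logic, assuming the index naming shown above (`cisco-switch-yyyy.MM.dd` / `cisco-asa-yyyy.MM.dd`); host, credentials, and TLS handling would have to match your setup:

```python
from datetime import date, datetime, timedelta

def indices_to_delete(index_names, today, retention_days=90):
    """Return daily indices named like prefix-yyyy.MM.dd older than retention_days."""
    cutoff = today - timedelta(days=retention_days)
    stale = []
    for name in index_names:
        prefix, _, datepart = name.rpartition("-")
        if not prefix:
            continue  # no dash, cannot be a daily index
        try:
            day = datetime.strptime(datepart, "%Y.%m.%d").date()
        except ValueError:
            continue  # suffix is not a date, leave the index alone
        if day < cutoff:
            stale.append(name)
    return stale

# Example: with "today" = 2022-07-18 the 90-day cutoff is 2022-04-19
names = ["cisco-switch-2022.04.01", "cisco-asa-2022.07.18", ".monitoring-es-7-2022.07.18"]
print(indices_to_delete(names, date(2022, 7, 18)))  # ['cisco-switch-2022.04.01']
```

Actual deletion would then be one HTTPS DELETE per returned name (curl or the Python Elasticsearch client), using the same credentials as the Logstash output. Alternatively, an ILM policy with only a delete phase, attached through an index template, works for plain daily indices without any rollover alias, which sidesteps the alias errors described above.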

Please share some sample messages of both of your logs so people can try to replicate your pipeline and see what the issue could be.

Hello @leandrojmp, thanks for your reply,

here are two logfiles as examples.

Cisco-IOS Log: Cisco IOS Log - Pastebin.com
Cisco-NXOS Log: Cisco NXOS Log - Pastebin.com
(IP addresses are changed for security reasons)

I can't provide an ASA logfile because it's about 200+ MB a day and there is too much information I would have to redact for security reasons, so I can only provide a few lines here:

Jul 18 00:00:00 10.0.0.1 %ASA-5-713041: IP = xx.xxx.xxx.xxx, IKE Initiator: Rekeying Phase 1, Intf outside, IKE Peer xx.xxx.xxx.xxx  local Proxy Address 0.0.0.0, remote Proxy Address 0.0.0.0,  Crypto map (N/A)
Jul 18 00:00:01 10.0.0.1 %ASA-5-713119: Group = xx.xxx.xx.xxx, IP = xx.xxx.xxx.xxx, PHASE 1 COMPLETED
Jul 18 00:00:02 10.0.0.1 %ASA-2-106001: Inbound TCP connection denied from xx.xx.x.xx/yyy to xx.xx.xx.xx/yyy flags ACK  on interface inside
Jul 18 00:00:10 10.0.0.1 %ASA-4-752012: IKEv1 was unsuccessful at setting up a tunnel.  Map Tag = outside_map.  Map Sequence Number = 39.
Jul 18 00:00:10 10.0.0.1 %ASA-3-752015: Tunnel Manager has failed to establish an L2L SA.  All configured IKE versions failed to establish the tunnel. Map Tag= outside_map.  Map Sequence Number = 39.

Today's Elasticsearch log:

[2022-07-18T00:00:07,985][INFO ][o.e.c.m.MetadataCreateIndexService] [elastic.xxxxxxx.de] [.monitoring-es-7-2022.07.18] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0]
[2022-07-18T00:00:09,175][INFO ][o.e.c.m.MetadataCreateIndexService] [elastic.xxxxxxx.de] [.monitoring-kibana-7-2022.07.18] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0]
[2022-07-18T00:00:21,491][INFO ][o.e.c.m.MetadataCreateIndexService] [elastic.xxxxxxx.de] [cisco-switch-2022.07.18] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2022-07-18T00:00:21,650][INFO ][o.e.c.m.MetadataMappingService] [elastic.xxxxxxx.de] [cisco-switch-2022.07.18/wfXi5mPVTnGjdAumEHMqiQ] create_mapping
[2022-07-18T01:00:00,007][INFO ][o.e.x.m.e.l.LocalExporter] [elastic.xxxxxxx.de] cleaning up [2] old indices
[2022-07-18T01:00:00,024][INFO ][o.e.c.m.MetadataDeleteIndexService] [elastic.xxxxxxx.de] [.monitoring-es-7-2022.07.11/BjIQVmj9QF-Co5IWsvNqsA] deleting index
[2022-07-18T01:00:00,024][INFO ][o.e.c.m.MetadataDeleteIndexService] [elastic.xxxxxxx.de] [.monitoring-kibana-7-2022.07.11/ukHPnNwSS6CfQZDzvQg-ww] deleting index
[2022-07-18T01:30:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] [elastic.xxxxxxx.de] triggering scheduled [ML] maintenance tasks
[2022-07-18T01:30:00,001][INFO ][o.e.x.s.SnapshotRetentionTask] [elastic.xxxxxxx.de] starting SLM retention snapshot cleanup task
[2022-07-18T01:30:00,002][INFO ][o.e.x.s.SnapshotRetentionTask] [elastic.xxxxxxx.de] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2022-07-18T01:30:00,004][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [elastic.xxxxxxx.de] Deleting expired data
[2022-07-18T01:30:00,025][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [elastic.xxxxxxx.de] Successfully deleted [0] unused stats documents
[2022-07-18T01:30:00,026][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [elastic.xxxxxxx.de] Completed deletion of expired ML data
[2022-07-18T01:30:00,027][INFO ][o.e.x.m.MlDailyMaintenanceService] [elastic.xxxxxxx.de] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask

Logstash Log after Restart on Jul 15th

[2022-07-15T09:12:39,363][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2022-07-15T09:12:50,235][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2022-07-15T09:12:50,410][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2022-07-15T09:12:50,531][INFO ][logstash.runner          ] Logstash shut down.
[2022-07-15T09:13:10,341][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2022-07-15T09:13:10,358][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.2.2", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.14.1+1 on 11.0.14.1+1 +indy +jit [linux-x86_64]"}
[2022-07-15T09:13:10,361][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2022-07-15T09:13:12,194][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-07-15T09:13:14,714][INFO ][org.reflections.Reflections] Reflections took 112 ms to scan 1 urls, producing 120 keys and 395 values 
[2022-07-15T09:13:15,867][INFO ][logstash.codecs.jsonlines] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2022-07-15T09:13:16,028][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-07-15T09:13:16,134][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://localhost:9200"]}
[2022-07-15T09:13:16,211][WARN ][logstash.outputs.elasticsearch][main] You have enabled encryption but DISABLED certificate verification, to make sure your data is secure remove `ssl_certificate_verification => false`
[2022-07-15T09:13:16,681][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@localhost:9200/]}}
[2022-07-15T09:13:17,265][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@localhost:9200/"}
[2022-07-15T09:13:17,290][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.2.2) {:es_version=>8}
[2022-07-15T09:13:17,294][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2022-07-15T09:13:17,369][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-15T09:13:17,371][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-15T09:13:17,374][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2022-07-15T09:13:17,375][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://localhost:9200"]}
[2022-07-15T09:13:17,380][WARN ][logstash.outputs.elasticsearch][main] You have enabled encryption but DISABLED certificate verification, to make sure your data is secure remove `ssl_certificate_verification => false`
[2022-07-15T09:13:17,398][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@localhost:9200/]}}
[2022-07-15T09:13:17,444][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2022-07-15T09:13:17,491][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@localhost:9200/"}
[2022-07-15T09:13:17,502][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.2.2) {:es_version=>8}
[2022-07-15T09:13:17,503][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2022-07-15T09:13:17,515][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-15T09:13:17,515][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-15T09:13:17,516][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2022-07-15T09:13:17,532][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2022-07-15T09:13:17,535][WARN ][logstash.filters.grok    ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2022-07-15T09:13:18,313][WARN ][logstash.filters.grok    ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2022-07-15T09:13:18,337][WARN ][logstash.filters.grok    ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2022-07-15T09:13:18,482][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/conf.d/00-source-input.conf", "/etc/logstash/conf.d/10-cisco-switch-filter.conf", "/etc/logstash/conf.d/20-cisco-asa-filter.conf", "/etc/logstash/conf.d/99-target-output.conf"], :thread=>"#<Thread:0x54ad1e65 run>"}
[2022-07-15T09:13:20,259][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.77}
[2022-07-15T09:13:20,293][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-07-15T09:13:20,323][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-07-15T09:13:20,420][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-07-15T09:13:20,534][INFO ][org.logstash.beats.Server][main][2bfc1993377680eccfc6a707102355626f5e12791a0728b92ef4646cb50d7ce0] Starting server on port: 5044

Filebeat doesn't log anything, so I can't provide a Filebeat log here, sorry.

I also noticed that Filebeat keeps reading logfiles that are not updated anymore, see screenshot.

The source logfiles are not updated anymore because syslog-ng creates a new logfile every day at 00:00, but the indices for past days keep growing and the logfiles are read in over and over again.

Here you can see the problem quite well:
original log: (just one line, before the clock was synchronized)

Aug  5 21:45:42 xxx.xxx.60.9 snmpv3 USM user, persisting snmpEngineBoots. Please Wait...

Index: (screenshot)

Discovery: (screenshot)

I hope it helps.

Regards
Martin

@leandrojmp I think I have narrowed down why Elasticsearch is indexing my logfiles multiple times.

The problem seems to be based in Filebeat.
Filebeat creates its registry in /var/lib/filebeat/registry/filebeat with the files
xxxxxxxx.json
active.dat
log.json
meta.json

As far as I can see, log.json is the file that tracks the already-read logfiles and lines, but it is recreated when its size grows beyond 10 to 12 MB,
so Filebeat seems to lose track of the already-read logfiles/lines and starts sending the logs over and over again.

The problem was the open file limit on the server.
Filebeat tried to open all files at once and ran into the open file limit.

The problem was provisionally solved by increasing the open file limit on the server.
--> How to increase the open file limits on Ubuntu
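For reference: when Filebeat runs as a systemd service (the default for the deb/rpm packages), limits from /etc/security/limits.conf do not apply to it; the usual way is a drop-in for the unit. The value below is only an example, not a recommendation:

```ini
# /etc/systemd/system/filebeat.service.d/override.conf
[Service]
LimitNOFILE=65536
```

followed by `systemctl daemon-reload && systemctl restart filebeat`; `cat /proc/$(pgrep -o filebeat)/limits` shows whether the new limit is in effect for the running process.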

OK, I thought the open file limit was the problem, but it was not.

Right now I have about 300 logfile sources being indexed, and it ran fine for about 8-9 days. This morning Filebeat again started to read and send all logs over and over, as if it had lost track of the files it had already read.

I also noticed that Filebeat keeps rotating its registry files log.json and xxxxxxx.json
under /var/lib/filebeat/registry/filebeat/
when the size of log.json exceeds 10MB.

I have to index logfiles from 400+ Cisco devices that are rotated daily, and that is definitely a problem right now.
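Given daily rotation across 400+ devices, it may help to tell the filestream input to let go of files that are no longer being written, which also keeps the registry small. A sketch for the switch input; the option names are real filestream settings in Filebeat 8.x, but the values are guesses that would need tuning for this environment:

```yaml
- type: filestream
  enabled: true
  id: cisco-switch          # keep this id stable: changing it makes
                            # filebeat treat every file as new and re-read it
  paths:
    - /var/log/remotelogs/switches/*/*.log
  tags: [cisco-switch]
  # close the harvester once a file has been idle for 5 minutes
  close.on_state_change.inactive: 5m
  # never pick up files last modified more than 2 days ago
  ignore_older: 48h
  # drop registry state for files untouched for 8 days
  # (must be larger than ignore_older plus the scan interval)
  clean_inactive: 192h
```

If the duplicates persist even with these settings, comparing the registry files shown above before and after a re-read episode should show whether the state for those paths was actually dropped.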